Common Core Reading Benchmark Assessments
The development of national standards for K-12 English Language Arts was initiated and coordinated by the National Governors Association Center for Best Practices (NGA Center, 2010) and the Council of Chief State School Officers (CCSSO, 2010). The Common Core Standards (CCS) are specific academic benchmarks and expectations for all students in public schools in states that have adopted them. The Common Core Standards in English Language Arts (ELA) measure student proficiency in the acquisition of knowledge and skills covered by the curriculum at the specified grade level. Students should not be compared to each other but rather evaluated on how well they individually meet grade-level standards. These standards provide a consistent framework to prepare students for success in college and careers.
The Common Core Standards (CCS) mark a significant change in education and assessment. According to the Smarter Balanced Assessment Consortium (SBAC, 2012) and the Partnership for the Assessment of Readiness for College and Careers (PARCC, 2012), assessments must contain rigorous item types beyond selected-response items, including extended constructed-response items, performance tasks, and technology-enhanced items. Researchers have indicated the importance of a balanced approach to assessment (Black, Harrison, Lee, Marshall, & Wiliam, 2003; Garrison & Ehringhaus, 2007). This approach draws on summative assessments, benchmark or interim assessments, and formative assessments; in a comprehensive, balanced system, all assessments work together to improve achievement. Motivation Reading Benchmark Assessments Common Core Aligned are interim assessments that measure student progress in reading at three different points during the year.
The federal requirements of the No Child Left Behind Act (NCLB, 2001) and the Individuals with Disabilities Education Act (IDEA, 2004) mandate that all students participate in the state assessment program. All students must be tested in reading content at their respective grade levels. The accountability rules of NCLB remain in effect until ESEA is reauthorized by Congress. Under NCLB, the Annual Measurable Objectives (AMOs) increase each year until 2014; by the end of the 2013-14 school year, 100 percent of students are to be proficient in reading. Summative and formative assessments are both necessary for developing an accurate picture of a student’s overall academic achievement. Classroom benchmark assessments correlated to the Common Core Standards provide teachers ongoing interval measurements of student progress, thus the rationale for Motivation Reading Benchmark Assessments Common Core Aligned.
Benchmark assessments should be well aligned with the curriculum to validate improved learning. Utilization of formative assessments and benchmarks provides a continuous, comprehensive flow of information with which to plan and guide instruction. When teachers are given adequate time to analyze assessment data relative to their content, weaknesses in individual students or within the curriculum or instruction can be addressed. Thus, there is a definite need to include benchmark assessments as part of the accountability plan for a campus and district.
These benchmark assessments are designed to measure student acquisition of the knowledge and skills specified in the Common Core Standards at different intervals. The primary purpose of Motivation Reading Benchmark Assessments Common Core Aligned is to provide a valid measure of the quality of reading education in the classroom or across the campus. Research suggests that students score higher on standardized tests when they experience focused, aligned practice.
Motivation Reading Benchmark Assessments Common Core Aligned measure how well students have acquired the knowledge and skills taught during reading instruction. The assessments are designed to ensure students are learning at their grade level. Furthermore, Motivation Reading Benchmark Assessments Common Core Aligned provide data to teachers, schools, and school districts to support improved instructional decisions. The assessments also serve as accountability measures that help gauge or predict future performance on state assessments, which are part of the Adequate Yearly Progress (AYP) requirements of the federal No Child Left Behind Act (NCLB). With these assessment data, educators can pinpoint areas that require additional attention and focus.
Periodic exposure to benchmark assessments provides students with opportunities to experience a variety of assessment items and formats for each standard. These experiences will benefit students facing a common assessment. When assessment is an integral part of reading instruction, the literature indicates that it contributes significantly to students’ learning. Assessment should inform and guide teachers as they make instructional decisions. During the school year, practice tests give students opportunities to evaluate their own work and progress. As a result, Motivation Reading Benchmark Assessments Common Core Aligned arm teachers with essential data that help in the preparation of high-quality instruction.
Results of the Motivation Reading Benchmark Assessments Common Core Aligned provide information about the academic achievement of students. This information is used to identify individual student strengths, determine areas of challenge, and measure the quality of reading across the campus. Utilization of results from various benchmark assessments helps teachers monitor student progress in order to determine future plans for instruction. Students can use the Chart Your Success charts located in the back of the assessment booklet to chart assessment data, self-monitor individual progress over time in reading, and compare their knowledge and skills to previous assessments. The involvement of students in assessment promotes student engagement in individual learning targets. Students need to know what learning targets they are responsible for mastering, and at what level (Stiggins, 2007). Marzano (2005) states, “students who can identify what they are learning significantly outscore those who cannot.” A class diagnostic chart, available at www.mentoringminds.com/ccsela, enables teachers to view and determine students’ strengths and weaknesses. After the analysis of assessment data, findings may indicate students require additional instruction to address deficits in order to achieve skill mastery and close learning gaps. If skill deficits exist, teachers are encouraged to explore different strategies to improve student achievement. Teachers may revise their curricula, develop formative assessments, examine instructional methods of delivery, target specific populations for remediation and enrichment, create student academic assistance interventions, and/or develop individual plans for student improvement.
Available for Levels 2-5, Motivation Reading Benchmark Assessments Common Core Aligned are diagnostic and prescriptive in nature. These practice assessments provide educators with detailed information on student progress and promote flexibility of use in a variety of classroom settings. For each grade level, there are three different versions of the assessments (Forms A, B, C) bound into one student assessment booklet. Each form features two single reading passages and one paired passage. The passages incorporate complex texts that are of appropriate difficulty and fall within the prescribed ranges of text complexity detailed by CCSSO/NGA (2010). Passages are appropriate for student interest and integrate content from science, social studies, and the arts. The single passages are followed by ten selected-response assessment items and two constructed-response assessment items. The paired passages are followed by 10-14 selected-response assessment items and two constructed-response assessment items. The constructed-response items require student responses grounded in evidence from the text, a notable shift under the Common Core Standards (Achieve the Core, 2012). In addition, the paired passage includes a Performance Task, which requires students to read sources, write three constructed-response pieces, and compose a full narrative, informative/explanatory, or opinion piece based on the sources. Together, the traditional selected-response items, the constructed-response items, and the performance tasks challenge students to move beyond factual knowledge as they analyze the meaning of texts and offer evidence in support of their responses. The assessment items in Motivation Reading Benchmark Assessments Common Core Aligned measure the depth, rigor, and complexity of comprehension required by the Common Core Standards.
Students should be familiar with such questions, since formative assessments should entail questions that engage students in dialogue and apprise the teacher of the degree and depth of student understanding prior to benchmark assessments. According to researchers (Black et al., 2003), it is crucial that time be allotted for teachers to form and ask questions that probe for deeper student thinking.
The student book includes a Chart Your Success tool for students to record individual results on the selected-response items, constructed-response items, and the narrative, informative/explanatory, and opinion pieces. Motivation Reading Benchmark Assessments Common Core Aligned Online also features a print-to-digital transition using the same content; campuses have digital access to the benchmarks via Internet-connected computers. Both consortia, the Smarter Balanced Assessment Consortium and PARCC, indicate that future common assessments of the CCS will be online. While the benchmarks do have an interactive online version, the assessment items are not computer-adaptive but are fixed-form and offer randomized assessment items. According to Davis (2012), testing experts state that fixed-form assessments do not allow teachers to see the extent of learning gaps for struggling students or the extensive range of knowledge of higher-achieving students. Thus, computer-adaptive tests appear to be beneficial in revealing reliable information for students at either end of the learning continuum.
Earlier in 2012, the United States Department of Education and the Federal Communications Commission announced a blueprint inviting schools to transition to digital textbooks within the next five years. While not mandated, the initiative encourages schools to make the switch from print to digital materials based on projected cost savings and academic improvement. Some school districts appear to be making the transition to digital materials and tools for students. Both consortia support digital literacy in assessments as a 21st Century skill. Technology use is now commonplace in schools, and, more often than not, it is how communication takes place in college and in the workplace. Today’s students are expected to demonstrate specific digital communication skills in order to show they are digitally literate. Academic excellence is the goal for all students, and achieving that goal within a technological environment prepares students to thrive in the Digital Age. Thus, performance data must yield evidence that students are successfully prepared to live in a technological society.
The Administration Manual and Answer Key include directions for administering all portions of the assessment. As shared by the United States Department of Education (2003), No Child Left Behind noted the importance of assessment items that align with the depth and breadth of the academic content standards. Therefore, all assessment items in the Motivation Reading Benchmark Assessments Common Core Aligned are coded with the Common Core Standards, Depth of Knowledge (DOK) Levels, and Bloom’s Original and Revised Taxonomy Levels. In addition, the Answer Key provides scoring rubrics for the constructed-response pieces and the narrative, informative/explanatory, and opinion pieces. Sample writing for the score points is also included for the constructed-response items.
One clearly defined performance task for each form is presented at the conclusion of the assessment items about the paired selection. This cognitive task relates to real-world application whenever possible and uses the unit’s passages as the sources. The Performance Task requires students to read sources, write three constructed-response pieces, and compose a full narrative, informative/explanatory, or opinion piece based on the sources. The independently completed task results in a written piece that is scored using a rubric. It is recommended that students be provided an introduction to performance tasks (Stiggins & Chappuis, 2005; Khattri, Reeve, & Kane, 1998). The student task is composed of two sessions. Session One includes the following descriptors: Assignment (overview of the performance task), Steps, Session One (amount of time and purpose), Directions for Session One, and Questions for Session One (three questions for completion). Session Two includes the following: Performance Task (overview), Directions for Session Two, Assignment (detailed overview of the performance task), and Scoring Criteria (advance knowledge of scoring). The scoring criteria are identified for student self-assessment and translated into rubrics found in the Teacher Edition for teacher scoring. Depth of understanding, research skills, provision of relevant evidence (Darling-Hammond & Pecheone, 2010), and the ability to integrate knowledge and skills across content standards or within a content area (Khattri & Sweet, 1996) are some of the expectations measured or demonstrated with the performance task. Scoring criteria inform students of the established indicators for success by clarifying desired learning outcomes. Crooks (1988) shared that criterion-referenced feedback provides guidance for improving student understanding. Self-assessment and reflection are an important part of the learning process.
When students monitor their work against preset criteria, they can receive immediate feedback regarding the task (Wiggins, 1993; Trammel, Schloss, & Alper, 1994). Performance tasks provide educators with an understanding of what students have internalized and what still needs support with regard to the CCS, since the performance tasks clearly connect to the specified Common Core Standards. After studying best practices, the CCS, and rubrics developed by the Smarter Balanced Assessment Consortium (Measured Progress/ETS Collaborative, 2012), the ELA Product Development Team designed rubrics, relevant to the skills being assessed, from which to score the written pieces involved in the performance tasks.
The model Depth of Knowledge (DOK) was developed by Norman Webb (Webb, 2002; 2006). Dr. Webb advocates the necessity of assessment items matching the standard. Webb also wanted educators to be aware of the level of demonstration required by a student when a test item was developed, thus the development of his four levels of DOK. Level 1 assessment items have students recall information. Level 2 items ask students to think beyond reproduction of responses. Students use more than one cognitive process or follow more than one step. Students at Level 3 demonstrate higher levels of thought than the previous levels require as these items are more complex. Responses may have multiple answers, but students must choose one and justify the reasoning behind the selection. Assessment items at Level 4 require students to form several connections with ideas. Typically, performance assessments and open-ended responses are written at this level of thought.
Students can be assisted in organizing the content of their thinking to facilitate complex reasoning. According to Sousa (2006), students are not actually taught to think because children are born with the brain organizational structure that originates thinking. Sousa supports Bloom’s Taxonomy as an organizational structure compatible with the manner in which the brain processes information to promote comprehension. Bloom, Engelhart, Furst, Hill, and Krathwohl (1956) developed this classification system for levels of intellectual behavior in learning. Bloom’s Taxonomy contains three domains: cognitive, psychomotor, and affective. Within the cognitive domain, Bloom identified six levels: knowledge, comprehension, application, analysis, synthesis, and evaluation. The taxonomy was revised by Anderson and others (2001) to focus on thinking as an active process. The original and revised taxonomies continue to be useful today in developing and categorizing the critical thinking skills of students. Karin Hess (2009; 2010) designed the Cognitive Rigor English Language Arts Matrix to integrate the Revised Bloom’s Taxonomy with Webb’s Depth of Knowledge. Webb’s DOK framework, Bloom’s Taxonomy, and Hess’ ELA Cognitive Rigor Matrix were all utilized by the ELA Product Development Team to develop assessment items and plan performance tasks that reflect rigor, depth, and complexity of thought.
As previously stated, the national shift toward preparing students to survive in the global market will impact the types of assessments undertaken by students. Assessments that focus on the Common Core Standards will demonstrate whether students can succeed not only in school but also in the real world, indicating whether students are both college and career ready. For the purposes of the Motivation Reading Benchmark Assessments Common Core Aligned, the DOK and Bloom’s coding is utilized to reflect the rigor and depth in levels of thought required of students on the benchmark assessments. Assessment items displaying rigor require students to use higher levels of thought, reflecting a more challenging 21st Century learning environment. Students may be asked to use such processes as examine, create, prioritize, decide, produce, assess, generate, or classify. Assessment items reflecting relevance require students to work with real-world tasks that have more than one solution.
In recent years, changes in accountability and testing have led to data playing a major role in the education of students. The United States Department of Education advocates the importance of using data to guide instruction and improve student learning. Schools are being strongly encouraged to respond to assessment data, using it to identify students’ academic strengths and needs (U.S. Department of Education, 2009). As educators face increasing accountability pressure from federal, state, and local entities to improve student achievement, data should become the central element in how students’ academic progress is monitored and how instructional practices are evaluated. Research seems to indicate there is no single assessment that provides a complete picture of student performance. Motivation Reading Benchmark Assessments Common Core Aligned offer three forms in order to track the progress of student performance over time, rather than relying on a single snapshot assessment. Each assessment plays a prominent role in determining if quality teaching and learning are occurring. As correct and incorrect assessment answers are analyzed, teachers are able to observe the patterns of thought in which students experience difficulty or exhibit success. These data allow teachers to adjust and revise instruction to more appropriately address the diversity of needs within classrooms. Thus, assessments have important implications for teaching and learning. Research indicates it is essential that assessment data be used to make well-informed instructional decisions (Armstrong & Anthes, 2001; Feldman & Tung, 2001; Forman, 2007; Liddle, 2000).
Benchmarks yield student achievement data on the Common Core Standards throughout the school year, including the ability to report student achievement approaching, falling below, or exceeding the standards. With three forms of benchmark assessments per grade, these instruments provide data to measure reading progress and proficiency at three different intervals throughout the year. Motivation Reading Benchmarks Common Core Aligned Forms A, B, and C can be used in different ways: as practice, as a diagnostic instrument, and as a teaching tool. Students need opportunities to practice and develop test-taking skills, and these tests focus on the skills students will be expected to demonstrate on assessments of the Common Core Standards. A diagnostic chart is available on the Mentoring Minds website, mentoringminds.com/ccsela. This chart enables teachers to determine students’ strengths and weaknesses and to identify specific areas where additional practice for mastery of skills is warranted. Although benchmarks are not formative assessment tools, the data inform progress toward annual learning goals. Data from the assessments will guide the teacher in identifying possible areas where adjustments in future instruction may be necessary, thus using the assessments as teaching tools.
Studies support the use of several measures from which to gauge student achievement. The English Language Arts Product Development Team recognized that assessment systems should include a balance of formative and summative data to be most effective in improving outcomes and in making a significant impact on reading education. The development team studied available guidelines released by the assessment consortia Smarter Balanced Assessment Consortium (SBAC, 2012) and Partnership for the Assessment of Readiness for College and Careers (PARCC, 2011a; 2011b; 2012a; 2012b). PARCC and Smarter Balanced have both released a range of sample items and item specifications regarding the assessment of reading. The prototype items, tasks, and other related information were considered by the English Language Arts Product Development Team in order to design assessment items and tasks that measure a deeper understanding and reflect the requirements and expectations of the consortia. Information appears to show that the common assessments will contain a variety of item types: adaptive multiple-choice, extended constructed-response, technology-enhanced constructed-response, and performance tasks. For Motivation Reading Benchmark Assessments Common Core Aligned, a combination of selected-response items, constructed-response items, and performance tasks was developed for all forms, with all items aligned to the English Language Arts-Reading Common Core Standards. Although the common assessment test items are reported to be computer-adaptive, the format for Motivation Reading Benchmarks Common Core Aligned will be paper-pencil or an online version of each form containing no computer-adaptive items.
As the school year progresses, students’ performance on the various benchmarks can indicate how they may perform on the state or common assessments in reading. The three forms offered at each grade level enable the benchmarks to be spread over the year, leaving a window of time for the state or common assessments to be administered.
As the Motivation Reading Benchmark Assessments Common Core Aligned data is examined, teachers can identify students who are performing at the grade-specific standard level, those who are exceeding the standards, and those who are approaching or are functioning below the standard. Teachers can also determine and chart the data for the various subgroups (i.e., ethnicity, disadvantaged, special education, and English Language Learners). All subgroups must make sufficient growth in order for the school to achieve the adequate yearly progress (AYP) status per the No Child Left Behind law.
The developers of Motivation Reading Benchmark Assessments Common Core Aligned reviewed relevant reform efforts on teaching and learning in English Language Arts, studied the Common Core Standards, perused the available and released item specifications shared by the consortia, and employed individual expertise and collective judgment as they designed a resource to lead students into the 21st century.
Motivation Reading Benchmark Assessments Common Core Aligned focus on the grade-level standards for reading. This focus ensures the assessment items align with the assessed standard, resulting in appropriate and effective assessment items based on current information. Webb’s Depth of Knowledge and Bloom’s Taxonomy were the basis for designing items that stimulate students’ higher-order thinking skills, encouraging rigor and depth in thinking. With grade-specific Common Core Standards for Reading as the focus, the Mentoring Minds Product Development Team developed Motivation Reading Benchmark Assessments Common Core Aligned as a resource for assessing and strengthening education in English Language Arts-Reading.
Bibliography for Common Core Motivation Reading Benchmark Assessments
Achieve the Core. (2012). Common Core Shifts for English Language Arts/Literacy. Retrieved Summer 2012 from http://www.achievethecore.org/downloads/E0702_Description_of_the_Common_Core_Shifts.pdf
Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.
Armstrong, J., & Anthes, K. (2001). How data can help: Putting information to work to raise student achievement. American School Board Journal, 188(11), 38–41.
Atkin, J. M., Black, P., & Coffey, J. (2001). Classroom assessment and the national science education standards. Washington, DC: National Academy Press.
Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. Maidenhead, UK: Open University Press.
Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.
Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: Longmans, Green.
Crooks, T. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58(4), 438-481.
CCSSO/NGA. (2010). Common Core State Standards for English Language Arts & Literacy in History/Social Studies, Science, and Technical Subjects/Appendix A. Retrieved June 2012 from http://www.corestandards.org/assets/Appendix_A.pdf
Darling-Hammond, L., & Pecheone, R. (2010). Developing an internationally comparable balanced assessment system that supports high-quality learning. Retrieved August 2012 from http://www.k12center.org/rsc/pdf/Darling-HammondPechoneSystemModel.pdf
Davis, M. (2012a). Shifting to adaptive testing. Education Week Digital Directions. Retrieved October 2012 from http://dd.edweek.org/nxtbooks/epe/dd_2012fall/index.php#/12
Davis, M. (2012b). Tailoring the test to special needs? Education Week Digital Directions. Retrieved October 2012 from http://dd.edweek.org/nxtbooks/epe/dd_2012fall/index.php#/14
Feldman, J., & Tung, R. (2001). Using data-based inquiry and decision making to improve instruction. ERS Spectrum: Journal of School Research and Information, 19(3), 10–19.
Forman, M. L. (2007). Developing an action plan: Two Rivers Public Charter School focuses on instruction. In K. P. Boudett & J. L. Steele (Eds.), Data wise in action: Stories of schools using data to improve teaching and learning (pp. 107–124). Cambridge, MA: Harvard Education Press.
Garrison, C., & Ehringhaus, M. (2007). Formative and summative assessments in the classroom. Retrieved Summer 2012 from http://www.amle.org/Publications/WebExclusive/Assessment/tabid/1120/Default.aspx
Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student achievement data to support instructional decision making (NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education. Retrieved Fall 2012 from http://ies.ed.gov/ncee/wwc/publications/practiceguides/.
Hess, K. (2009). Cognitive rigor matrix for ELA. In Local Assessment Toolkit: Exploring Cognitive Rigor in Curriculum, Instruction, and Assessment. Dover, NH: National Center for Assessment.
Hess, K. (2010). Applying Webb's depth-of-knowledge levels in reading, writing, math, science, and social studies. Dover, NH: National Center for Assessment.
Individuals with Disabilities Education Improvement Act (IDEA) (2004). PL 108-446, 20 U.S.C. §§1400 et seq.
International Reading Association Common Core State Standards (CCSS) Committee. (2012). Literacy implementation guidance for the ELA Common Core State Standards [White paper]. International Reading Association. Retrieved October 2012 from http://www.reading.org/Libraries/association-documents/ira_ccss_guidelines.pdf
Khattri, N., Reeve, A., & Kane, M. (1998). Principles and practices of performance assessment. Mahwah, NJ: Lawrence Erlbaum Associates.
Khattri, N., & Sweet, D. (1996). Assessment reform: Promises and challenges. In M. Kane & R. Mitchell (Eds.), Implementing performance assessment: Promises, problems, and challenges (pp. 1-21). Mahwah, NJ: Lawrence Erlbaum Associates.
Liddle, K. (2000). Data-driven success: How one elementary school mined assessment data to improve instruction. American School Board Journal. Retrieved April 2009, from http://www.asbj.
Marzano, R. (2005). What works in schools (PowerPoint presentation). Retrieved from www.marzanoandassociates.com/pdf/ShortVersion.pdf
Measured Progress/ETS Collaborative. (2012). ELA rubrics (Grades 3-5). Smarter Balanced Assessment Consortium.
National Governors Association Center for Best Practices (NGA Center) and the Council of Chief State School Officers (CCSSO). (2010). Common Core Standards for English Language Arts. Washington, DC. Retrieved Fall 2011 from http://www.corestandards.org/assets/CCSSI_ELA%20Standards.pdf
No Child Left Behind. (2001). Washington, DC: U.S. Department of Education.
Partnerships for Assessment of Readiness for College and Careers (PARCC). (2011a). Overview of the Framework for ELA/Literacy. Retrieved Fall 2011 from http://www.parcconline.org/mcf/english-language-artsliteracy/overview-frameworks-elaliteracy
Partnerships for Assessment of Readiness for College and Careers (PARCC). (2011b). PARCC Model Content Frameworks: ELA/Literacy Grades 3-11. Retrieved December 2011 from http://www.parcconline.org/mcf/english-language-artsliteracy/overview-frameworks-elaliteracy
Partnerships for Assessment of Readiness for College and Careers (PARCC). (2012a). Advances in the PARCC ELA/Literacy Assessment – PowerPoint. Retrieved October 2012 from www.parcconline.org/.../parcc/.../PARCC
Partnerships for Assessment of Readiness for College and Careers (PARCC). (2012b). Item and task prototypes. Retrieved August 2012 from http://www.parcconline.org/samples/item-task-prototypes
Perie, M., Marion, S., & Gong, B. (2007). A framework for considering interim assessments. Dover, NH: National Center for the Improvement of Educational Assessment. Retrieved November 13, 2007 from http://www.nciea.org/publications/ConsideringInterimAssess_MAP07.pdf
Smarter Balanced Assessment Consortium (SBAC). (2012). Smarter Balanced English Language Arts Item and Task Specifications Grades 3-5. Smarter Balanced Assessment Consortium. Retrieved April 2012 from http://www.smarterbalanced.org/wordpress/wp-content/uploads/2012/05/TaskItemSpecifications/EnglishLanguageArtsLiteracy/ELAGeneralItemandTaskSpecifications.pdf
Sousa, D. (2006). How the brain learns. Thousand Oaks, CA: Corwin Press.
Stiggins, R. (2007). Assessment through the student’s eyes. Educational Leadership, 64(8), 22–26. Retrieved Summer 2012 from http://www.ascd.org/publications/educational-leadership/may07/vol64/num08/Assessment-Through-the-Student's-Eyes.aspx
Stiggins, R. (2004). New assessment beliefs for a new school mission. Phi Delta Kappan, 86(1), 22-27.
Trammel, D., Schloss, P., & Alper, S. (1994). Using self-recording and graphing to increase completion of homework assignments. Journal of Learning Disabilities, 27(2), 75-81.
U.S. Department of Education. (1990–2007). National Assessment of Educational Progress. National Center for Education Statistics. Retrieved September 1, 2007 from http://nces.ed.gov/nationsreportcard/
U.S. Department of Education. (2009). Using ARRA funds to drive school reform and improvement. Retrieved Fall 2012, from www.ed.gov/policy/gen/leg/recovery/guidance/uses.doc.
Webb, N. (2006). Depth-of-Knowledge (DOK) levels for reading. Retrieved Spring 2010 from http://www.education.ne.gov/assessment/pdfs/Reading_DOK.pdf
Webb, N. (2002). Depth-of-Knowledge levels for four content areas. Wisconsin Center for Educational Research.
Wiggins, G. (1993). Assessing student performances: Exploring the purpose and limits of testing. San Francisco, CA: Jossey-Bass.
Wylie, C. (2008). Formative assessment: Examples of practice. A paper prepared for the Formative Assessment for Students and Teachers (FAST) State Collaborative on Assessment and Student Standards (SCASS) of the Council of Chief State School Officers (CCSSO). Washington, DC: CCSSO. Retrieved Summer 2012 from http://www.ccsso.org/Documents/2008/Formative_Assessment_Examples_2008.pdf