
STAAR Reading Assessments


State standards for K-12 English Language Arts were adopted by the State Board of Education in 1998 for Texas schools. The Texas Essential Knowledge and Skills (TEKS) for English Language Arts (ELA) identify what students should know and be able to do at every grade. More specifically, the English Language Arts/Reading TEKS measure student proficiency in the knowledge and skills covered by the curriculum at the specified grade level. The literature suggests that students should not be compared to each other but rather evaluated on how well they individually meet grade-level standards. These standards provide a consistent framework to prepare all students for success throughout the K-12 years and as they advance to college and careers. Texas measures how well students are progressing in reading with the statewide assessment, the State of Texas Assessments of Academic Readiness (STAAR™).

The most recent STAAR™ assessments mark a significant change in the State of Texas assessment system. According to the Texas Education Agency (TEA), the assessments contain rigor beyond what has appeared in past state assessments; the rigor of items “will be increased by assessing skills at a greater depth and level of cognitive complexity” (TEA, 2010b). Another new element of the STAAR assessment is a four-hour time limit for each assessment. Researchers stress the importance of a balanced approach to assessment (Black, Harrison, Lee, Marshall, and Wiliam, 2003; Garrison and Ehringhaus, 2007). This approach combines summative assessments, benchmark or interim assessments, and formative assessments. A comprehensive system balances all three, with each assessment related to the others and intended to improve achievement. Motivation Reading Assessments TEKS Aligned are interim assessments that measure student progress in reading at three different points during the year.

The federal requirements of the No Child Left Behind Act (NCLB, 2001) and the Individuals with Disabilities Education Act (IDEA, 2004) mandate that all students participate in the state assessment program. All students must be tested in reading at their respective grade levels. The accountability rules of NCLB remain in effect until ESEA is reauthorized by Congress. Under NCLB, the Annual Measurable Objectives (AMOs) increase each year until 2014; by the end of the 2013-14 school year, 100 percent of students are to be proficient in reading. Summative and formative assessments are both necessary for developing an accurate picture of a student’s overall academic achievement. Herman, Osmundson, and Dietel (2010) attest that benchmark assessments occupy a middle ground yet play an important role in a balanced assessment system. The National Research Council recognizes a comprehensive assessment system as one that is coherent, comprehensive, and continuous (NRC, 2001). Classroom benchmark assessments correlated to the Texas Essential Knowledge and Skills provide teachers ongoing interval measurements of student progress, thus the rationale for Motivation Reading Assessments TEKS Aligned.

Benchmark assessments should be well aligned with the curriculum to validate improved learning. Utilization of formative assessments and benchmarks provides a continuous, comprehensive flow of information with which to plan and guide instruction. When teachers are given time to adequately analyze assessment data relative to their content, weaknesses in individual students or within the curriculum or instruction can be addressed. Thus, there is a definite need to include benchmark assessments in the accountability plan for a campus and district. These assessments are designed to measure student acquisition of the knowledge and skills specified in the Texas Essential Knowledge and Skills at different intervals. The primary purpose of Motivation Reading Assessments TEKS Aligned is to provide a valid measure of the quality of reading education in the classroom or across the campus. Research suggests that students score higher on standardized tests when they experience focused, aligned practice.

Motivation Reading Assessments TEKS Aligned measure how well students have acquired the knowledge and skills taught during reading instruction. The assessments are designed to ensure students are learning at grade level. Furthermore, Motivation Reading Assessments TEKS Aligned provide data to teachers, schools, and school districts to support improved instructional decisions. The assessments serve as accountability measures that help gauge or predict performance on state assessments, which are part of the Adequate Yearly Progress (AYP) requirements of the federal No Child Left Behind Act (NCLB). With these assessment data, educators can pinpoint areas that require additional attention and focus.

Periodic exposure to benchmark assessments provides students with opportunities to experience a variety of assessment items and formats for each standard. These experiences benefit students prior to facing state assessments. When assessment is an integral part of reading instruction, the literature indicates that it contributes significantly to students’ learning. Assessment should inform and guide teachers as they make instructional decisions. During the school year, practice tests give students opportunities to evaluate their own work and progress. As a result, Motivation Reading Assessments TEKS Aligned arm teachers with essential data that support the preparation of high-quality instruction.

Results of the Motivation Reading Assessments TEKS Aligned provide information about the academic achievement of students. This information is used to identify individual student strengths, determine areas of challenge, and measure the quality of reading across the campus. Results from the various benchmark assessments help teachers monitor student progress and plan future instruction. Students can use the Chart Your Success charts located in the back of the student assessment booklets to record assessment data, self-monitor individual progress in reading over time, and compare results with previous assessments. Involving students in assessment promotes engagement with individual learning targets. Students need to know what learning targets they are responsible for mastering and at what level (Stiggins, 2007). Marzano (2005) states, “students who can identify what they are learning significantly outscore those who cannot.” A class diagnostic chart is also available that enables teachers to view and determine students’ strengths and weaknesses. After analysis of assessment data, findings may indicate that students require additional instruction to address deficits, achieve skill mastery, and close learning gaps. If skill deficits exist, teachers are encouraged to explore different strategies to improve student achievement. Teachers may revise their curricula, develop formative assessments, examine instructional delivery methods, target specific populations for remediation and enrichment, create student academic assistance interventions, and/or develop individual plans for student improvement.
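The kind of per-skill diagnosis described above can be sketched in a few lines of code. This is a minimal illustration, not part of the product: the student names, TEKS codes, and the 70% mastery cutoff are all assumptions chosen for the example.

```python
# Hypothetical sketch: summarize class mastery per TEKS student expectation
# from item-level results. Names, codes, and the 70% cutoff are illustrative.
from collections import defaultdict

def skill_mastery(responses, cutoff=0.70):
    """responses: list of (student, teks_code, correct) tuples."""
    totals = defaultdict(lambda: [0, 0])  # teks_code -> [correct, attempted]
    for student, teks, correct in responses:
        totals[teks][0] += int(correct)
        totals[teks][1] += 1
    # A skill is "mastered" by the class when the correct rate meets the cutoff.
    return {teks: c / n >= cutoff for teks, (c, n) in totals.items()}

responses = [
    ("Ana", "3.8A", True), ("Ana", "3.8A", True),
    ("Ben", "3.8A", False), ("Ben", "3.13B", True),
    ("Ana", "3.13B", False), ("Ben", "3.8A", True),
]
print(skill_mastery(responses))  # {'3.8A': True, '3.13B': False}
```

A chart like this flags "3.13B" as a candidate for reteaching while "3.8A" meets the (assumed) mastery threshold.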

Available for Levels 2-5, Motivation Reading Assessments TEKS Aligned are diagnostic and prescriptive in nature. These practice assessments provide educators with detailed information on student progress and offer flexibility of use in a variety of classroom settings. For each grade level, there are three versions of the assessments (Forms A, B, and C) bound into one student assessment booklet. Each form of the fourth and fifth grade assessments features two single reading selections and one paired selection. Each form of the second and third grade assessments has three single selections. The selections incorporate complex texts that are of grade-appropriate difficulty and fall within the prescribed ranges of text complexity as detailed by national best practices. Selections are appropriate for student interest and integrate content from science, social studies, and the arts. Selections include the genres of fiction, poetry, drama, informational, procedural, and persuasive pieces as required by the Texas Education Agency (TEA, 2010a). Single selections for Levels 3-5 are followed by 12 selected-response assessment items. For Level 2, the single selections contain 10 selected-response assessment items. Paired selections are followed by 12-14 selected-response assessment items. The selected-response items challenge students to move beyond factual knowledge as they analyze the meaning of texts and locate evidence to support their chosen responses. The assessment items in Motivation Reading Assessments TEKS Aligned measure the depth, rigor, and complexity of comprehension required by the standards. Students should be familiar with such questions, since classroom formative assessments should contain questions that engage students in dialogue and alert the teacher to the degree and depth of student understanding prior to the administration of benchmark assessments. According to researchers (Black et al., 2003), it is crucial that time be allotted for teachers to design and ask questions that probe for deeper student thinking.

The student book includes a Chart Your Success tool for students to record individual results on the selected-response items. Motivation Reading Assessments TEKS Aligned Online features a print-to-digital transition using the same content. Campuses with Internet-connected computers will have digital access to the benchmark assessments. While these online benchmarks are interactive, the assessment items are not computer-adaptive; each assessment is fixed-form with randomized items. According to Davis (2012), testing experts caution educators that fixed-form assessments do not allow teachers to see the full extent of learning gaps for struggling students or the full range of knowledge for higher-achieving students. Thus, educators cannot rely on a single fixed-form assessment as the sole determiner of learning gaps.

Earlier in 2012, the United States Department of Education and the Federal Communications Commission announced a blueprint inviting schools to transition to digital materials within the next five years. While not mandated, the initiative encourages schools to make the switch from print to digital based on projected cost savings and academic improvement. Some school districts are already making an effort to transition to digital materials and tools for students. Digital literacy in assessments is a much-needed 21st Century skill. Technology use is now commonplace in schools, and more often than not, communication in college and the workplace takes place through technology. Today’s students are expected to demonstrate specific digital communication skills in order to show they are digitally literate. Academic excellence is the goal for all students, and achieving this goal within a technological environment prepares students to thrive in the Digital Age. Thus, performance data must yield evidence that students are successfully prepared to live in a technological society. As students study and learn the core curriculum of the 21st Century, technology skills, digital literacy, and higher-order thinking will be a focus.

The Administration Manual and Answer Key include directions for administering all portions of the assessment. As shared by the United States Department of Education (2003), No Child Left Behind noted the importance of assessment items that align with the depth and breadth of the academic content standards. Therefore, all assessment items in the Motivation Reading Assessments TEKS Aligned are coded with the Texas Essential Knowledge and Skills, the English Language Proficiency Standards (ELPS), the Depth of Knowledge (DOK) levels, and Bloom’s original and revised taxonomy levels. Karin Hess (2009; 2010) designed the Cognitive Rigor English Language Arts Matrix to integrate the revised Bloom’s Taxonomy with Webb’s Depth of Knowledge model. The ELA Product Development Team applied Hess’ matrix to assessment items to promote depth, complexity, and rigor in student thinking.

The Depth of Knowledge (DOK) model was developed by Norman Webb (Webb, 2002; 2006). Dr. Webb advocates the necessity of assessment items matching the standard. Webb also wanted educators to be aware of the level of demonstration a test item requires of a student at the time the item is developed, which led to his four DOK levels. Level 1 assessment items ask students to recall information. Level 2 items ask students to think beyond the reproduction of responses; students use more than one cognitive process or follow more than one step. Level 3 items are more complex and demand higher levels of thought than the previous levels require; responses may have multiple defensible answers, but students must choose one and justify the reasoning behind the selection. Level 4 items require students to form connections among several ideas. Typically, performance assessments and open-ended responses are written at this level of thought.
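The four levels above form a small ordered taxonomy, which can be captured as a simple lookup table. The one-line summaries below are paraphrases of the descriptions in this section, not Webb's own wording:

```python
# Webb's four Depth of Knowledge (DOK) levels as a lookup table.
# Summaries are paraphrased from the section above, not Webb's wording.
DOK_LEVELS = {
    1: "Recall: reproduce facts, terms, or simple procedures.",
    2: "Skill/Concept: use more than one cognitive process or step.",
    3: "Strategic Thinking: choose among plausible responses and justify the choice.",
    4: "Extended Thinking: connect several ideas; performance tasks, open-ended responses.",
}

def describe(level):
    """Return a short summary for a DOK level (1-4)."""
    return f"DOK {level} - {DOK_LEVELS[level]}"

print(describe(3))
```

Coding each assessment item with a level from a table like this is what makes the later analysis of rigor and depth possible.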

Students can be assisted in organizing the content of their thinking to facilitate complex reasoning. According to Sousa (2006), students are not actually taught to think, because children are born with the brain’s organizational structure that originates thinking. Sousa supports Bloom’s Taxonomy as an organizational structure compatible with the manner in which the brain processes information to promote comprehension. Bloom, Englehart, Furst, Hill, and Krathwohl (1956) developed this classification system for levels of intellectual behavior in learning. Bloom’s Taxonomy contains three domains: cognitive, psychomotor, and affective. Within the cognitive domain, Bloom identified six levels: knowledge, comprehension, application, analysis, synthesis, and evaluation. The taxonomy was revised by Anderson and Krathwohl (2001) to focus on thinking as an active process. The original and revised taxonomies continue to be useful today in developing and categorizing the critical thinking skills of students. Webb’s DOK framework, Bloom’s Taxonomy, and Hess’ ELA Cognitive Rigor Matrix were all utilized by the ELA Product Development Team to develop assessment items that reflect rigor, depth, and complexity of thought.

The national and State of Texas shift toward preparing students to compete in the global market impacts the types of assessments students undertake. Assessments that focus precisely on the state standards demonstrate whether students can succeed not only in school but also in the real world, and indicate whether students are college and career ready. For the purposes of the Motivation Reading Assessments TEKS Aligned, the DOK and Bloom’s coding reflects the rigor and depth of thought the benchmark assessments require of students. Assessment items displaying rigor require students to use higher levels of thought, reflecting a more challenging 21st Century learning environment. Students may be asked to use cognitive processes such as examine, prioritize, decide, assess, or classify, as well as provide evidence to support given responses.

Over the past several years, changes in accountability and testing have led to data playing a major role in the education of students. The United States Department of Education advocates the importance of using data to guide instruction and improve student learning. Schools are strongly encouraged to respond to assessment data, using it to identify students’ academic strengths and needs (U.S. Department of Education, 2009). As educators face increasing accountability pressure from federal, state, and local entities to improve student achievement, data have become the central element in how students’ academic progress is monitored and how instructional practices are evaluated. Research indicates that no single assessment provides a complete picture of student performance. Motivation Reading Assessments TEKS Aligned offer three forms in order to keep a pulse on student performance rather than a single snapshot. Each assessment plays a prominent role in determining whether quality teaching and learning are occurring. As correct and incorrect answers are analyzed, teachers can observe the patterns of thought in which students experience difficulty or exhibit success. These data allow teachers to adjust and revise instruction to better address the diversity of needs within classrooms. Thus, assessments have important implications for teaching and learning. Research indicates it is essential that assessment data be used to make well-informed instructional decisions (Armstrong and Anthes, 2001; Feldman and Tung, 2001; Forman, 2007; Liddle, 2000).

Benchmarks yield student achievement data on grade-specific, eligible-for-testing TEKS throughout the school year, including the ability to report student achievement that approaches, falls below, or exceeds the standards. With three forms of benchmark assessments per grade, these instruments can provide data to measure reading progress and proficiency at three different intervals throughout the year. Motivation Reading Assessments TEKS Aligned Forms A, B, and C can be used in different ways: as practice, as a diagnostic instrument, and as a teaching tool. Students need opportunities to practice and develop test-taking skills, and these tests focus on the skills students will be expected to demonstrate on assessments of the Texas Essential Knowledge and Skills. A diagnostic chart available on the Mentoring Minds website enables teachers to glean insight into students’ strengths and weaknesses and to determine specific areas where additional practice may be warranted for skill mastery. Although benchmarks are not formative assessment tools, the data inform annual learning goals. Data from the assessments guide the teacher in identifying areas where adjustments in future instruction may be necessary, thus using the assessments as teaching tools.

Studies support the use of several measures from which to gauge student achievement. The English Language Arts Product Development Team recognized that assessment systems should include a balance of formative and summative data to be most effective in improving outcomes and in making a significant impact on reading education. For Motivation Reading Assessments TEKS Aligned, selected-response items were developed for all forms, with all items aligned to the English Language Arts/Reading (ELA/R) TEKS. Each form is available as a paper-pencil fixed-form assessment or as an online version with randomized items. As the school year progresses, students who are proficient on the various benchmarks can gauge how they might perform on the state assessments in reading. The three benchmark forms offered at each grade provide flexibility in planning by enabling educators to spread the benchmark assessments over the course of a year, leaving a window of time for the state assessments to be administered. As the Motivation Reading Assessments TEKS Aligned data are examined, teachers can identify students who are performing at the grade-specific standard, those who are exceeding the standards, and those who are approaching or functioning below the standard. Teachers can also determine and chart the data for various subgroups (e.g., ethnicity, economically disadvantaged, special education, and English Language Learners). All subgroups must make sufficient growth in order for the school to achieve adequate yearly progress (AYP) status under the No Child Left Behind law.
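The subgroup charting described above amounts to grouping benchmark scores and comparing each group's average against a target. A minimal sketch follows; the subgroup labels, scores, and the 70-point target are assumptions for illustration only:

```python
# Hypothetical sketch: average benchmark scores by subgroup and flag
# groups below a target. Labels and the 70-point target are illustrative.
from statistics import mean

def subgroup_progress(records, target=70):
    """records: list of (subgroup, score); returns {subgroup: (average, on_track)}."""
    groups = {}
    for subgroup, score in records:
        groups.setdefault(subgroup, []).append(score)
    return {g: (mean(s), mean(s) >= target) for g, s in groups.items()}

records = [("ELL", 65), ("ELL", 75), ("SpEd", 60), ("All", 80), ("All", 85)]
print(subgroup_progress(records))
```

Run across Forms A, B, and C, a summary like this shows whether each subgroup is making the growth the campus needs.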

The Texas Education Agency (TEA) released information regarding the assessment of reading. This information was considered by the English Language Arts Product Development Team in order to design assessment items that measure a deeper understanding and reflect the requirements and expectations for STAAR™. The developers of Motivation Reading Assessments TEKS Aligned reviewed English Language Arts/Reading TEKS, studied the released item specifications, examined the range of sample items, and perused the STAAR Blueprint prior to developing the Motivation Reading Assessments TEKS Aligned.

The benchmark assessments address all Readiness and many Supporting student expectations of the English Language Arts/Reading Texas Essential Knowledge and Skills (TEA, 2011b) that are eligible for testing. Motivation Reading Assessments TEKS Aligned are aligned with the Reporting Categories outlined in the Assessed Curriculum Document prepared by the Texas Education Agency (2010a). The benchmarks also align with the STAAR™ Blueprint (TEA, 2011c) in that they include items assessing the Readiness TEKS (60-70% of items), with an additional emphasis on items measuring comprehension of the Supporting TEKS (30-40% of items). While being assessed, students must display an understanding of texts beyond the literal as they use their critical and analytical skills. All students must demonstrate sufficient skill in making connections within texts, using evidence to justify or form conclusions and make real-life applications; fourth and fifth graders must also show sufficient mastery of making connections across texts. Student results indicate whether students are prepared to apply rigor and complexity of thought as they respond to assessments emphasizing the existing reading standards. The development of the benchmark assessment contents was based on the STAAR™ Blueprints (TEA, 2011c), Test Design Schematics (TEA, 2010a), Released Test Questions (TEA, 2012a), the ELA/R TEKS eligible for testing (TEA, 2011a), the reading PowerPoint shared by Young (2011), and other assessment curriculum documents found on the Texas Education Agency website. Updates for reading assessments presented at Texas Elementary Principals and Supervisors Association (TEPSA) and Coalition of Reading and English Supervisors of Texas (CREST) conferences were also considered prior to developing the benchmark assessments.
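The blueprint percentages translate directly into item counts. As a quick worked example (the 40-item form length is an assumption for illustration, not a published STAAR figure), the Readiness/Supporting split works out as follows:

```python
# Convert blueprint percentage ranges into item-count ranges for a form
# of a given length. The 40-item form length is an illustrative assumption.
def item_range(total_items, low_pct, high_pct):
    return (round(total_items * low_pct), round(total_items * high_pct))

total = 40
readiness = item_range(total, 0.60, 0.70)   # Readiness TEKS: 60-70% of items
supporting = item_range(total, 0.30, 0.40)  # Supporting TEKS: 30-40% of items
print(readiness, supporting)  # (24, 28) (12, 16)
```

So on such a form, roughly 24-28 items would assess Readiness standards and 12-16 would assess Supporting standards.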

As previously stated, Motivation Reading Assessments TEKS Aligned reflect test items that align with the assessed standards, resulting in appropriate and effective assessment items based on current information. Webb’s Depth of Knowledge and Bloom’s Taxonomy were the basis for designing items that stimulate students’ higher-order thinking skills, encouraging rigor and depth in thinking. With the grade-specific, eligible-for-testing ELA/R TEKS as the key focus for designing assessment items, the Mentoring Minds Product Development Team developed Motivation Reading Assessments TEKS Aligned as a resource for assessing and strengthening education in English Language Arts/Reading.


Bibliography for STAAR Motivation Reading Assessments

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman.

Armstrong, J., & Anthes, K. (2001). How data can help: Putting information to work to raise student achievement. American School Board Journal, 188(11), 38–41.

Atkin, J. M., Black, P., & Coffey, J. (2001). Classroom assessment and the national science education standards. Washington, DC: National Academy Press.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. Maidenhead, UK: Open University Press.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74.

Bloom, B. S. (Ed.). (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive domain. New York: Longman, Green.

Crooks, T. (1988). The impact of classroom evaluation practices on students. Review of Educational Research, 58(4), 438-481.

Darling-Hammond, L., & Pecheone, R. (2010). Developing an internationally comparable balanced assessment system that supports high-quality learning.

Davis, M. (2012). Shifting to adaptive testing. Education Week Digital Directions.

Feldman, J., & Tung, R. (2001). Using data-based inquiry and decision making to improve instruction. ERS Spectrum: Journal of School Research and Information, 19(3), 10–19.

Forman, M. L. (2007). Developing an action plan: Two Rivers Public Charter School focuses on instruction. In K. P. Boudett & J. L. Steele (Eds.), Data wise in action: Stories of schools using data to improve teaching and learning (pp. 107–124). Cambridge, MA: Harvard Education Press.

Garrison, C., & Ehringhaus, M. (2007). Formative and summative assessments in the classroom.

Hamilton, L., Halverson, R., Jackson, S., Mandinach, E., Supovitz, J., & Wayman, J. (2009). Using student achievement data to support instructional decision making (NCEE 2009-4067). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.

Herman, J. L., Osmundson, E., & Dietel, R. (2010). Benchmark assessments for improved learning (AACC Policy Brief). Los Angeles, CA: University of California.

Hess, K. (2009). Cognitive rigor matrix for ELA. In Local Assessment Toolkit: Exploring Cognitive Rigor in Curriculum, Instruction, and Assessment. Dover, NH: National Center for Assessment.

Hess, K. (2010). Applying Webb's depth-of-knowledge levels in reading, writing, math, science, and social studies. Dover, NH: National Center for Assessment.

Individuals with Disabilities Education Improvement Act (IDEA) (2004). PL 108-446, 20 U.S.C. §§1400 et seq.

Liddle, K. (2000). Data-driven success: How one elementary school mined assessment data to improve instruction. American School Board Journal.

Marzano, R. (2005). What works in schools (PowerPoint presentation).

National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy of Sciences.

No Child Left Behind. (2001). Washington, DC: U.S. Department of Education. 

Perie, M., Marion, S., & Gong, B. (2007). A framework for considering interim assessments. Dover, NH: National Center for the Improvement of Educational Assessment.

Sousa, D. (2006). How the brain learns. Thousand Oaks, CA: Corwin Press.

Stiggins, R. (2007). Assessment through the student’s eyes. Educational Leadership, 64(8), 22–26.

Stiggins, R. (2004). New assessment beliefs for a new school mission. Phi Delta Kappan, 86(1), 22–27.

Texas Education Agency. (2010a). Reading Test Design Schematic Grades 3-5 Reading. Austin, Texas: Texas Education Agency.

Texas Education Agency (TEA) Student Assessment Division. (2010b). STAAR™ Media Toolkit. Austin, Texas: Texas Education Agency.

Texas Education Agency (TEA) Student Assessment Division. (2011a). STAAR™ State of Texas Assessments of Academic Readiness Reading Assessment Eligible Texas Essential Knowledge and Skills. Austin, Texas: Texas Education Agency.

Texas Education Agency (TEA). (2011b). English Language Arts and Reading.

Texas Education Agency (TEA). (2011c). STAAR™ Blueprints Grades 3-5 Reading.

Texas Education Agency. (2012a). STAAR™ Released Test Questions Grades 3-5 Reading. Austin, Texas: Texas Education Agency.

Texas Education Agency. (2012b). State of Texas Assessments of Academic Readiness Summary Report Grades 3-5 Reading. Austin, Texas: Texas Education Agency.

Trammel, D., Schloss, P., & Alper, S. (1994). Using self-recording and graphing to increase completion of homework assignments. Journal of Learning Disabilities, 27(2), 75-81.

University of Texas Center for Reading and Language Arts. (2003). Effective instruction for elementary struggling readers: Research-based practices (Revised edition). Austin, TX: Texas Education Agency.

U.S. Department of Education. (1990–2007). National Assessment of Educational Progress. Washington, DC: National Center for Education Statistics.

U.S. Department of Education. (2009). Using ARRA funds to drive school reform and improvement.

Webb, N. (2006). Depth-of-Knowledge (DOK) levels for reading.

Webb, N. (2002). Depth-of-Knowledge levels for four content areas. Wisconsin Center for Educational Research.

Wiggins, G. (1993). Assessing student performances: Exploring the purpose and limits of testing. San Francisco, CA: Jossey-Bass.

Wylie, C. (2008). Formative assessment: Examples of practice. A paper prepared for the Formative Assessment for Students and Teachers (FAST) State Collaborative on Assessment and Student Standards (SCASS). Washington, DC: Council of Chief State School Officers.

Young, V. (2011). State of Texas Assessments of Academic Readiness (STAAR™) Grade 3-8 Reading PPT. Austin, TX: Texas Education Agency.













