STAAR Science Assessments

State standards for K-12 science were adopted by the State Board of Education in 2010 for Texas schools. The Texas Essential Knowledge and Skills (TEKS) for Science identify what students should know and be able to do at every grade. More specifically, the Science TEKS measure student proficiency in the knowledge and skills covered by the curriculum at the specified grade level. Students should not be compared to one another but rather evaluated on how well they are individually meeting grade-level standards. These standards provide a consistent framework to prepare all students for success throughout their K-12 years and as they advance to college and careers. Texas measures how well students are progressing in science with the statewide assessment, the State of Texas Assessments of Academic Readiness (STAAR™).

The most recent STAAR™ assessments mark a significant change in the State of Texas assessment system. According to the Texas Education Agency (TEA), the assessments will contain rigor beyond what has appeared in past state assessments. The rigor of items “will be increased by assessing skills at a greater depth and level of cognitive complexity” (TEA, 2010d). Another new element of the STAAR™ assessment is a four-hour time limit for each test. We recommend timing the benchmarks but leave that decision to each school or district.

Researchers indicate the importance of a balanced approach to assessment (Black, Harrison, Lee, Marshall, and Wiliam, 2003; Garrison and Ehringhaus, 2007). This approach draws on summative assessments, benchmark or interim assessments, and formative assessments. A comprehensive system balances all three, with each type of assessment related to the others and intended to improve achievement. Motivation Science Assessments TEKS Aligned are summative assessments that measure student progress in science at three different points during the year.

The federal requirements of the No Child Left Behind Act (NCLB, 2001; 2002) and the Individuals with Disabilities Education Act (IDEA, 2004) mandate that all students participate in a state assessment program. All students must be tested in reading/language arts, math, and science at specified grade levels. The accountability rules of NCLB remain in effect until the Elementary and Secondary Education Act is reauthorized by Congress. In 2012, some schools applied for flexibility waivers from NCLB.

According to the National Science Teachers Association, No Child Left Behind requires that students be tested in science (once in elementary school, once at the middle level, and once in high school), but only reading and mathematics assessments count toward a school's Adequate Yearly Progress (AYP). AYP is a statewide accountability system mandated by the No Child Left Behind Act of 2001, requiring each state to ensure that all schools and districts make Adequate Yearly Progress. AYP is a series of annual academic performance goals established for each school, Local Education Agency (LEA), and the state as a whole. Schools, LEAs, and the state meet AYP if they meet or exceed each year’s goals (AYP targets and criteria). AYP is required under Title I of the federal Elementary and Secondary Education Act (ESEA). Under NCLB, the Annual Measurable Objectives (AMOs) increase each year until 2014. By the end of the 2013-14 school year, NCLB requires states, school districts, and schools to ensure that all students are proficient in grade-level math and reading.

Although science must be tested in a minimum of three grades, science achievement results are not required in the calculation of AYP. States have the option of requiring that all public elementary and middle schools designate student achievement in science as one of their indicators for making AYP under the federal law. Because this is not a requirement at this time, the National Science Teachers Association (NSTA, 2011) surveyed its members on the question. Within two days, almost 600 respondents took the survey; 63% of those responding were in favor of counting the science data from testing mandated by NCLB. Responses in support of including the science results for AYP ranged from the practical (“Otherwise why test and gather data?”) to the philosophical (“Science is the language of life.”). Numerous respondents noted that adding science to AYP calculations might elevate the importance of science among administrators, students, and parents and could thereby increase the time and funds allotted for science instruction.

The rationale for inclusion is two-fold. First, interest in and preparation for secondary science must begin at the elementary level, yet teachers and principals seem to de-emphasize science, perhaps because of the distinct accountability requirements and consequences attached to reading and mathematics. Unfortunately, some elementary and middle school teachers lack strong content and pedagogical knowledge in the sciences. All too often, what is tested becomes the focus of instruction; therefore, science is often neglected. If science were an AYP indicator, it might be taught with increased emphasis. As it is, science is not always viewed as being as important as it should be at the elementary levels. Second, when a foundation in science is not built and strengthened in the elementary years, secondary students struggle in science, leading many not to pursue fields in Science, Technology, Engineering, and Mathematics (STEM). Perhaps NCLB should prepare students to pursue careers in a variety of fields (e.g., science, history, health, art, technology) rather than compromising academic areas that are not AYP indicators. Education’s goal is to increase student achievement and prepare students for a diverse and competitive work environment. For this to occur, accenting excellence only in reading and mathematics scores, as measured by AYP, limits our perspective on achievement.

Summative, benchmark, and formative assessments are all necessary for developing an accurate picture of a student’s overall academic achievement. Herman, Osmundson, and Dietel (2010) note that benchmark assessments occupy the middle ground yet play an important role in a balanced assessment system. The National Research Council recognizes a comprehensive assessment system as one that is coherent, comprehensive, and continuous (NRC, 2001). Classroom benchmark assessments correlated to the TEKS provide teachers ongoing interval measurements of student progress, which is the rationale for Motivation Science Assessments TEKS Aligned. Teachers need reliable and ongoing assessment data to determine whether the curriculum is aligned to the existing standards and whether students are on target for achieving mastery of standards.

Motivation Science Assessments TEKS Aligned are STAAR™-formatted practice science assessments based on the currently tested TEKS for levels 5, 8, Biology, Chemistry, and Physics and on the required TEKS for levels 3, 4, 6, and 7. Each book contains three full-length benchmark science assessments. Grades 3, 4, and 5 assessments follow the STAAR™ blueprint for Level 5 Science, and the elementary benchmarks are available in English and Spanish. The Level 5 assessment follows the STAAR™ Level 5 Science Blueprint, with a minimum of 40% of items dual-coded to process skills, readiness standards at 60%-65%, and supporting standards at 35%-40% of each assessment. Levels 3 and 4 include the same minimum of 40% dual-coded items and contain the same number of questions for each reporting category shown on the Level 5 blueprint. Likewise, Grades 6, 7, and 8 follow the STAAR™ blueprint for Level 8 Science: the Level 8 assessment follows the Level 8 blueprint with the same dual-coding, readiness, and supporting proportions, and Levels 6 and 7 contain the same number of questions for each reporting category shown on that blueprint. The assessments for Biology, Chemistry, and Physics follow the Science Blueprints for each content area respectively, again with a minimum of 40% dual-coded items, readiness standards at 60%-65%, and supporting standards at 35%-40%. For Level 5, Level 8, Biology, Chemistry, and Physics, all Readiness and Supporting Standards eligible for testing on the STAAR™ are assessed once students complete all three test forms. Teacher and student directions are provided to simulate a STAAR™ testing environment. Diagnostic and prescriptive in nature, these benchmark assessments provide educators with detailed information on student progress, and their structure allows flexibility of use. The assessment booklets include the following descriptors; a worked example of the blueprint proportions follows the list.

  • Each standard eligible for testing on levels 5, 8, Biology, Chemistry, and Physics, and every required TEKS for levels 3, 4, 6, and 7, is addressed.
  • Three complete benchmark assessments (Forms A, B, and C) are included for pre-, mid-, and post-assessment during the year.
  • Each form contains the following numbers of assessment items:
      - Levels 3, 4, and 5: 44 items (43 selected-response, 1 griddable)
      - Levels 6, 7, and 8: 54 items (51 selected-response, 3 griddable)
      - Biology: 54 items (all selected-response)
      - Chemistry: 52 items (47 selected-response, 5 griddable)
      - Physics: 50 items (45 selected-response, 5 griddable)
  • “Real-world” contexts are utilized when possible.
  • Assessment items are aligned to the TEKS as well as correlated to the Depth of Knowledge (DOK) and Bloom’s Taxonomy levels.
  • A Chart Your Success section is included in order for students to track their progress and set goals.
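
To make the blueprint proportions concrete, the following minimal sketch (in Python) translates the stated percentage bands into approximate whole-item counts for each form. The totals come from the descriptor list above; the rounding convention is our assumption, not a published TEA rule.

    import math

    # Total items per form, from the descriptor list above.
    # Levels 3-5 share the Level 5 total; levels 6-8 share the Level 8 total.
    TOTAL_ITEMS = {"Level 5": 44, "Level 8": 54, "Biology": 54,
                   "Chemistry": 52, "Physics": 50}

    def item_range(total, low_pct, high_pct):
        # Translate a percentage band into a whole-item range (rounding assumed).
        return math.floor(total * low_pct), math.ceil(total * high_pct)

    for level, total in TOTAL_ITEMS.items():
        readiness = item_range(total, 0.60, 0.65)    # readiness standards, 60%-65%
        supporting = item_range(total, 0.35, 0.40)   # supporting standards, 35%-40%
        dual_min = math.ceil(total * 0.40)           # "a minimum of 40% dual-coded"
        print(f"{level}: readiness {readiness[0]}-{readiness[1]}, "
              f"supporting {supporting[0]}-{supporting[1]}, "
              f"dual-coded >= {dual_min} of {total} items")

For a 44-item Level 5 form, for example, this yields roughly 26-29 readiness items, 15-18 supporting items, and at least 18 dual-coded items.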

These assessments could be marketed to district-level administrators or to teachers at each tested grade level. Teachers may desire a product such as this to gather data at various times of the year to determine student achievement. Districts would benefit from such a product as a common assessment administered throughout the year to all students, since most curricula provide unit assessments rather than comprehensive assessments covering all standards. Districts could also use this product as a pre-test, mid-year test, and post-test for all students.

The STAAR™ blueprints were used in the development of these assessments in order to achieve the following goals (a sketch of how such constraints might be checked programmatically follows the list).

  • Have the same number of items on these assessments as on the STAAR™ test for each level as defined in the TEA blueprint (e.g., Grade 5 assessments will have 44 items and Biology assessments will have 54 items as specified in the blueprints for these levels.).
  • Have the same number of items for each domain as STAAR™ as defined in the TEA blueprint.
  • Have the same number of items that are answered on open-ended grids as STAAR™ as defined in the information for each level (e.g., Grades 3, 4, and 5 will each have 1 griddable item, and the remaining test items will be multiple choice).
  • Include items assessing all standards (readiness and supporting), with multiple test items addressing each readiness standard for levels 5 and 8 and for the high school end-of-course assessments.
  • Reflect the state guideline that 40% or more of items be dual-coded to both a content standard and a process standard.
  • Include reference materials for Level 8, Biology, Chemistry, and Physics.
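
The following hypothetical helper illustrates how a test form might be validated against these goals in code; the record shapes and field names ('reporting_category', 'dual_coded', and the blueprint dictionary) are assumptions made for the sketch, not the product's actual data format.

    from collections import Counter

    def meets_blueprint(form_items, blueprint):
        # form_items: list of dicts with 'reporting_category' and 'dual_coded' keys.
        # blueprint: dict with 'total', 'per_category' (a Counter), 'min_dual_pct'.
        if len(form_items) != blueprint["total"]:
            return False  # must match the STAAR item count for the level
        per_category = Counter(i["reporting_category"] for i in form_items)
        if per_category != blueprint["per_category"]:
            return False  # must match the per-domain (reporting category) counts
        dual_coded = sum(1 for i in form_items if i["dual_coded"])
        return dual_coded / blueprint["total"] >= blueprint["min_dual_pct"]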

The Motivation Science Assessments TEKS Aligned are designed to measure student acquisition of the knowledge and skills specified in the Texas Essential Knowledge and Skills at different intervals. Herman (2009) noted that some teachers, wanting students to demonstrate high performance, deliver instruction that accents what will be assessed and incorporate the assessment format into instruction. When teachers do not clearly understand the standards or the learning targets and are unsure how to design instruction, they might resort to teaching to the test, often preparing students merely for multiple-choice formats. It is essential that students not be limited to assessments comprised only of selected-response items. Researchers (Herman, Osmundson, and Dietel, 2010) advocate giving students response items that trigger complex thinking and problem-solving. Constructed-response items allow teachers to observe the thought processes and reasoning abilities of students, and open-ended assessment items reinforce the expectation that curriculum and instruction integrate rigor and depth into daily learning experiences. More specifically, instruction that focuses on memorization and is assessed with multiple-choice items should not replace in-depth learning and critical thinking.

Benchmarks assist campuses in determining how well their science programs are helping students achieve previously set learning goals. The results can depict patterns of performance, noting insufficient performance during the period leading up to the benchmarks, and educators might use the benchmark data to predict whether students are on target to meet specific end-of-year goals. Research seems to indicate that students score higher on standardized tests when they experience focused, aligned practice. Therefore, it is imperative that campuses understand why benchmarks are an integral part of the assessment system. Formative assessments are embedded in instruction and used to make informed, ongoing, and timely decisions about teaching and learning. The learning targets measured by formative assessments relate to the long-term targets assessed by the benchmarks. Multiple benchmarks address long-term targets, yielding data that show how well students are learning at particular intervals, and these data relate to the long-term goals measured by annual assessments (Herman et al., 2010).

At specific points during the year, Motivation Science Assessments TEKS Aligned measure how well students have acquired the knowledge and skills taught during science instruction. The assessments are designed to verify that students are learning at their grade level. Furthermore, Motivation Science Assessments TEKS Aligned provide data to teachers, schools, and school districts to support improved instructional decisions. They also serve as accountability measures that help gauge or predict future performance on state assessments, which are required by the federal No Child Left Behind Act (NCLB). With summative assessment data, educators can pinpoint areas that require additional attention and focus.

Periodic exposure to benchmark assessments provides students with opportunities to experience a variety of assessment items and formats for each standard. These experiences will benefit students facing a common assessment. “When assessment is an integral part of science instruction, it contributes significantly to student learning” (Ferrer, 2008). Atkin, Black, and Coffey (2001) offer suggestions noted by the National Research Council (NRC, 1996) to help teachers meaningfully incorporate assessment during science instruction. Assessment should inform and guide teachers as they make instructional decisions. During the school year, students can take practice tests to evaluate their own work and progress, and teachers can create customized assessments by assigning students only the items that measure a specific standard. These opportunities let students demonstrate what they have learned. After receiving immediate achievement feedback, students may proceed to intervention settings to develop standard mastery and close performance gaps before state or common assessment administrations. As a result, Motivation Science Assessments TEKS Aligned arm teachers with essential data that help in the preparation of future high-quality instruction.

Results of the Motivation Science Assessments TEKS Aligned provide information about the academic achievement of students. This information is used to identify individual student strengths, determine areas of challenge, and measure the quality of science education across the campus. Results from the various benchmark assessments can help teachers monitor student progress and determine future plans for instruction. Students can use the Chart Your Success charts located in the back of the assessment booklet to record assessment data, self-monitor individual progress over time in science, and compare their knowledge and skills across assessments. Involving students in assessment promotes engagement in individual learning targets. Students need to know what learning targets they are responsible for mastering, and at what level (Stiggins, 2007). Marzano (2005) states, “students who can identify what they are learning significantly outscore those who cannot.” A class diagnostic chart, available at www.mentoringminds.com/staar-science, enables teachers to view students’ strengths and weaknesses at that point in time. After analysis of assessment data, findings may indicate that students require additional instruction to address deficits, achieve skill mastery, and close learning gaps as they move toward annual learning goals. If skill deficits exist, teachers are encouraged to explore different strategies to improve student achievement. Teachers may revise their curricula, develop formative assessments, examine instructional methods of delivery, target specific populations for remediation and enrichment, create student academic assistance interventions, and/or develop individual plans for student improvement.
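
As a sketch of the kind of per-standard aggregation such a diagnostic chart performs, the snippet below tallies class results by standard; the record shape and the 70% mastery threshold are assumptions for illustration, not the chart's actual implementation.

    from collections import defaultdict

    def class_diagnostic(responses, mastery_threshold=0.70):
        # responses: iterable of (student, standard, correct) tuples (assumed shape).
        totals = defaultdict(lambda: [0, 0])  # standard -> [correct, attempted]
        for _student, standard, correct in responses:
            totals[standard][0] += int(correct)
            totals[standard][1] += 1
        return {std: {"rate": c / n, "mastered": c / n >= mastery_threshold}
                for std, (c, n) in totals.items()}

    # Example: two students, two standards
    chart = class_diagnostic([
        ("Ana", "5.7A", True), ("Ana", "5.9B", False),
        ("Ben", "5.7A", True), ("Ben", "5.9B", True),
    ])
    # chart["5.9B"]["rate"] == 0.5, flagging a class-wide weakness to reteach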

For a balanced assessment system, formative assessment must play an essential part in classroom instruction. Formative assessment focuses on improving student performance during classroom instruction, whereas summative assessment focuses on accountability and often sorts or classifies students. In formative assessment, both teacher and student share responsibility for assessment. The student and teacher share a common understanding of the standards that define quality work, and both compare performance to these standards as they assess the work task in progress and when it is completed. Following formative assessment, teaching and learning activities are adjusted to close the gap between the student's performance and the standard. The teacher not only assesses the student's performance but also provides feedback to the student; specific, descriptive feedback informs the student of the next steps to take to improve future performance. The teacher also assesses and adjusts instruction based on the assessment. Research on formative assessment suggests that students should be aware of their learning targets, their present status, and the next steps for reaching specified goals or closing any gaps (Atkin, Black, and Coffey, 2001; Black, Harrison, Lee, Marshall, and Wiliam, 2003). Such knowledge helps students keep track of their achievements, know how close they are to their learning targets, and determine future steps to advance their learning. When students are aware of individual achievement gaps and teachers motivate students with continuous feedback linked to the expected outcomes and criteria for success, students are able to steadily close performance gaps in science. Black and Wiliam (1998) note evidence of a strong relationship between interactive feedback and student achievement. Although Motivation Science Assessments TEKS Aligned are summative in nature, the item coding to standards gives teachers optional formative assessment opportunities: they can administer only selected items that relate to the standards being taught. That action would preclude using the same assessment as a benchmark at a later date. However they are used, formative assessments are employed during instruction to advance teaching and learning, while benchmark tests provide accountability in determining student learning after instruction. This entire process provides evidence that assessment and instruction are intertwined.

Motivation Science Assessments TEKS Aligned are diagnostic and prescriptive in nature. These practice assessments provide educators with detailed information on student progress and promote flexibility of use in a variety of classroom settings. For each grade level, three different versions of the assessments (Forms A, B, and C) are bound into one student assessment booklet. Each form contains the number of test items specified in the STAAR™ test blueprints for that level and reflects the state guideline that 40% or more of items be dual-coded to both a content standard and a process standard. Dual-coding of items means students must be more than smart test takers; they must demonstrate scientific investigation and reasoning skills as well as conceptual understanding. The resource is also available in an online version. Test items are presented in a “real-world” context when possible.

As shared by the United States Department of Education (2003), No Child Left Behind noted the importance of assessment items that align with the depth and breadth of the academic content standards. Therefore, every assessment item in the Motivation Science Assessments TEKS Aligned is coded to a content standard, a process standard, a Depth of Knowledge (DOK) level, and a Bloom’s Taxonomy level.
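
As an illustration of this four-way coding, a hypothetical item record might look like the sketch below; the field names and sample codes are assumptions, not the product's actual metadata format.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class AssessmentItem:
        # Illustrative coding record for one item; field names are assumptions.
        content_standard: str            # e.g., a TEKS code such as "5.7A"
        process_standard: Optional[str]  # e.g., "5.2D" when the item is dual-coded
        dok_level: int                   # Webb's Depth of Knowledge, 1-4
        blooms_level: str                # revised-taxonomy verb, e.g., "Analyze"

    item = AssessmentItem("5.7A", "5.2D", dok_level=3, blooms_level="Analyze")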

The Depth of Knowledge (DOK) model was developed by Norman Webb (Webb, 2002a; 2002b). Webb advocates that assessment items align to the standards and argues that educators should know, when a test item is developed, what level of demonstration it will require of a student; hence his four levels of DOK. Level 1 items ask students to recall information. Level 2 items ask students to think beyond the reproduction of responses, using more than one cognitive process or following more than one step. Level 3 items are more complex and demand higher levels of thought than the previous levels; such items may have more than one defensible answer, but students must choose one and justify the reasoning behind the selection. Level 4 items require students to form connections among several ideas; performance assessments and open-ended responses are typically written at this level of thought.

The literature indicates that Bloom’s Taxonomy is a widely accepted organizational structure for helping students organize the content of their thinking to facilitate complex reasoning. According to Sousa (2006), Bloom’s Taxonomy is compatible with the manner in which the brain processes information to promote comprehension. Bloom, Englehart, Furst, Hill, and Krathwohl (1956) developed this classification system for levels of intellectual behavior in learning. Bloom’s Taxonomy contains three domains: cognitive, psychomotor, and affective. Within the cognitive domain, Bloom identified six levels: knowledge, comprehension, application, analysis, synthesis, and evaluation. The taxonomy was revised by Anderson and others (2001) to focus on thinking as an active process: within the cognitive dimension, level names were changed to verbs and the order of the levels was changed to Remember, Understand, Apply, Analyze, Evaluate, and Create. The original and revised taxonomies remain useful today for developing and categorizing students' critical thinking skills. Student performances on measures of higher-order thinking ability continue to reveal a critical need for students to develop the skills and attitudes of quality thinking. Furthermore, educators appear to be in general agreement that it is possible to increase students' creative and critical thinking capacities through instruction and practice. Presseisen (1986) asserts the basic premise that students can learn to think better if schools teach them how to think. Rigorous critical thought is an important issue in education today, hence its emphasis in Motivation Science Assessments TEKS Aligned. Attention is focused on quality thinking as an important element of life success (Huitt, 1998; Thomas and Smoot, 1994).

In the 1950s, Bloom found that 95% of the test questions developed to assess learning required students to think only at the lowest level of learning, the recall of information. Similar findings indicated an overemphasis on lower-level questions and activities with little emphasis on the development of students’ thinking skills (Risner, Skeel, and Nicholson, 1992). “Perhaps most importantly in today’s information age, thinking skills are viewed as crucial for educated persons to cope with a rapidly changing world. Many educators believe that specific knowledge will not be as important to tomorrow’s workers and citizens as the ability to learn and make sense of new information” (Gough, 1991). “Now, a considerable amount of attention is given to students’ abilities to think critically about what they do” (Hobgood, Thibault, and Walbert, 2005). It is imperative for students to communicate their thinking coherently and clearly to peers, teachers, and others.

Critical thinking is crucial in science instruction, as the language of the Texas Essential Knowledge and Skills for Science indicates (TEA, 2011). Critical thinking tasks allow students to explain their thought processes, making thinking visible and offering teachers opportunities to identify misconceptions and misapplications of science skills. The literature notes that when students integrate their critical thinking abilities with content instruction, depth of knowledge can result. Teachers are encouraged to refrain from limiting instruction to lectures or rote-memorization tasks that exercise only lower levels of thought and instead to incorporate tasks that build conceptual understanding (Bransford, Brown, and Cocking, 2000).

The national shift toward preparing students to compete in the global market appears to have influenced the assessments undertaken by students in Texas. Texas does not adhere to the K-12 Next Generation Science Standards (NGSS), whose final draft is expected for adoption in spring 2013; it supports its own state standards, the TEKS. However, the assessment system in Texas does recognize the importance of preparedness for college and the workforce during the K-12 years. Thus, assessments that focus on the TEKS demonstrate whether students can succeed not only in school but also in the real world, and STAAR™ assessments will show which students are meeting the challenge of becoming ready for college and the workforce. In the Motivation Science Assessments TEKS Aligned, the various DOK and Bloom’s Taxonomy levels are used to reflect the rigor and depth of thought required of students on the benchmark assessments. Rigorous assessment items require students to use higher levels of thought, reflecting a more challenging 21st-century learning environment; students may be asked to examine, create, prioritize, decide, produce, assess, generate, or classify. Assessment items reflecting relevance require students to work with real-world tasks.

In recent years, changes in accountability and testing have led to data playing a major role in the education of students. The U.S. Department of Education advocates using data to guide instruction and improve student learning, and schools are strongly encouraged to respond to assessment data by using it to identify students’ academic strengths and needs (U.S. Department of Education, 2009). As educators face increasing accountability pressure from federal, state, and local entities to improve student achievement, data should become the central element in how students’ academic progress is monitored and how instructional practices are evaluated. No single assessment provides a complete picture of student performance. Motivation Science Assessments TEKS Aligned offer three forms in order to keep a pulse on student performance rather than a single snapshot. Each assessment plays a prominent role in determining whether quality teaching and learning are occurring. As correct and incorrect answers are analyzed, teachers can observe the patterns of thought in which students experience difficulty or exhibit success, and they can adjust and revise instruction to more appropriately address the diversity of needs within classrooms. Thus, assessments have important implications for instruction. Research indicates it is essential that assessment data be used to make well-informed instructional decisions (Armstrong and Anthes, 2001; Feldman and Tung, 2001; Forman, 2007; Liddle, 2000).

Benchmarks provide student achievement data on grade-specific Texas Essential Knowledge and Skills throughout the school year, including the ability to report student achievement as approaching, falling below, or exceeding the standards. With three forms of assessment per grade, these instruments can provide data to measure science progress and proficiency throughout the year. The benchmark assessments for science are summative in nature, intended to be administered in their entirety at three different intervals during the year after instruction has occurred. However, assessment items from a benchmark could be used as part of a formative assessment, if desired: the items that align to the specific standard(s) targeted during instruction could be extracted from a benchmark assessment, used during instruction, and followed by timely and descriptive feedback (a sketch of this extraction appears below). When educators take this action, such usage becomes formative assessment. Formative assessments provide rapid and meaningful results that teachers can use to improve or adjust instruction, and positive adjustments to instruction are likely to lead students to master the standard(s) at hand.
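
A minimal sketch of that extraction step, assuming each item carries a content-standard code (the dictionary shape and sample codes are illustrative assumptions):

    def extract_for_standards(items, standards):
        # Select benchmark items aligned to the standard(s) currently being taught.
        wanted = set(standards)
        return [item for item in items if item["content_standard"] in wanted]

    # Example: pull a mini formative check covering two related TEKS
    bank = [{"id": 1, "content_standard": "5.7A"},
            {"id": 2, "content_standard": "5.9B"},
            {"id": 3, "content_standard": "5.7B"}]
    mini_check = extract_for_standards(bank, {"5.7A", "5.7B"})  # items 1 and 3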

Motivation Science Assessments Forms A, B, and C can be used in different ways: as practice, as a diagnostic instrument, and as a teaching tool. Students need opportunities to practice and develop test-taking skills, and these tests focus on the skills students will be expected to demonstrate on STAAR™ assessments. A diagnostic chart, available on the Mentoring Minds website at mentoringminds.com/staar-science, enables teachers to determine students’ strengths and weaknesses and to pinpoint specific areas where additional practice is warranted. Data from the assessments help teachers identify areas where additional instruction is necessary, allowing the assessments to serve as teaching tools.

Studies support the use of several measures to gauge student achievement. The Science Product Development Team recognizes that assessment systems should include a balance of formative and summative data to be most effective in improving outcomes and making a significant impact on science education. The development team studied available guidelines released by the Texas Education Agency Assessment Division along with a range of sample items and item specifications for the assessment of science (TEA, 2010a; TEA, 2010b; TEA, 2010c; TEA, 2010d; TEA, 2012). The team considered this information in order to design assessment items and tasks that measure a deeper understanding. Released information from the TEA indicates that STAAR™ assessments will contain two item types: multiple-choice and open-ended (griddable). Griddable items give students opportunities to formulate responses independently without being influenced by provided answer choices. Multiple-choice items will include reverse-thinking questions using not and except, as well as questions containing the distractors All of the Above, None of the Above, and Not Here. The format for Motivation Science Assessments TEKS Aligned will be paper-pencil, with items following the protocol noted in the STAAR™ Blueprints (TEA, 2010b).

As the school year progresses, students who perform proficiently on the various benchmarks can gauge how they might perform on future STAAR™ assessments in science. The three forms offered at each grade allow the Motivation Science Assessments TEKS Aligned to be spread out over the year, leaving a window of time for the state assessments to be administered. As data from the assessments are examined, teachers can identify students who are performing at the grade-specific standard, those who are exceeding the standards, and those who are approaching or functioning below the standard. Teachers can also chart the data for the various subgroups (i.e., ethnicity, economically disadvantaged, special education, and English Language Learners). All subgroups must make sufficient growth for the school to meet or surpass campus and state assessment goals. If a state has designated science as an indicator of adequate yearly progress (AYP) status under the No Child Left Behind law, this progress also pertains to that designation.

The developers of Motivation Science Assessments TEKS Aligned reviewed relevant reform efforts on teaching and learning in science, studied the Science Standards, perused the item specifications released by the state, and employed individual expertise and collective judgment as they designed assessment resources to lead students into the 21st century. Motivation Science Assessments TEKS Aligned focus on the grade-level standards for science. This focus ensures that test items align with the assessed content and process standards, resulting in appropriately written assessment items based on current information. Webb’s Depth of Knowledge, Bloom’s Taxonomy, and the TEKS form the basis for designing items that stimulate students' higher-order thinking skills and encourage rigor and depth in thinking. With the Science Standards as academic guiding points, the Mentoring Minds Product Development Team for Science developed Motivation Science Assessments TEKS Aligned, a resource for assessing and strengthening science education.

Bibliography for Motivation Science Assessments 

American Association for the Advancement of Science (AAAS). (1989). Science for all Americans. New York: Oxford University Press.

American Association for the Advancement of Science (AAAS). (1993). Benchmarks for science literacy: Project 2061. New York: Oxford University Press. http://www.project2061.org/publications/bsl/online.

Anderson, L. W., & Krathwohl, D. R. (Eds.). (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives. New York: Longman. 

Armstrong, J., & Anthes, K. (2001). How data can help: Putting information to work to raise student achievement. American School Board Journal, 188(11), 38–41.

Atkin, J. M., Black, P., & Coffey, J. (2001). Classroom assessment and the national science education standards. Washington, DC: National Academy Press.

Black, P., Harrison, C., Lee, C., Marshall, B., & Wiliam, D. (2003). Assessment for learning: Putting it into practice. Maidenhead, UK: Open University Press.

Black, P., & Wiliam, D. (1998). Assessment and classroom learning. Assessment in Education, 5(1), 7–74. 

Bloom, B., Englehart, M., Furst, E., Hill, W., & Krathwohl, D. (1956). Taxonomy of educational objectives: The classification of educational goals. Handbook I: Cognitive Domain. New York: Longmans Green.

Boaler, J. (1998). Open and closed mathematics: student experiences and understanding. Journal for Research in Mathematics Education, 29, 41-62.

Bransford, J., Brown, A., & Cocking, R. (2000). How people learn: Brain, mind, experience, and school (Expanded Edition). Washington, DC: National Academy Press.

Buxton, C. (1998). Improving the science education of English language learners: Capitalizing on educational reform. Journal of Women and Minorities in Science and Engineering, 4, 341-363.

Carlo, M., August, D., & Snow, C. (2005). Sustained vocabulary-learning strategies for English language learners. In E.H. Hiebert & M. Kamil (Eds.), Teaching and learning vocabulary: Bringing research to practice, 137-153.  Mahwah, NJ:  Erlbaum.

Farkas, R. (2003). Effects of traditional versus learning-styles instructional methods on middle school students. Journal of Educational Research, (97): 43-81.

Feldman, J., & Tung, R. (2001). Using data-based inquiry and decision making to improve instruction. ERS Spectrum: Journal of School Research and Information, 19(3), 10–19.

Ferrer, L. (2008). B.E.S.T. (Building effective strategies for teaching) of science. Sampaloc, Manila: Rex Book Store, Inc.

Forman, M. L. (2007). Developing an action plan: Two Rivers Public Charter School focuses on instruction. In K. P. Boudett & J. L. Steele (Eds.), Data wise in action: Stories of schools using data to improve teaching and learning (pp. 107–124). Cambridge, MA: Harvard Education Press.

Garrison, C., & Ehringhaus, M. (2007). Formative and summative assessments in the classroom. Retrieved Summer 2012 from http://www.amle.org/Publications/WebExclusive/Assessment/tabid/1120/Default.aspx

Gough, D. (1991). Thinking about thinking. Alexandria, VA: National Association of Elementary School Principals.

Gunning, T. (2003). Creating literacy instruction for all children, Fourth Edition.  Boston, MA: Allyn & Bacon/Pearson Education.

Harlen, W. (2005). Assessing science understanding: A human constructivist view. London: Elsevier Academic Press.

Herman, J. (2009). Moving to the next generation of standards for science: Building on recent practices (CRESST Report 762). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST). Retrieved Fall 2012 from http://www.cse.ucla.edu/products/reports/R762.pdf

Herman, J. L., Osmundson, E., & Dietel, R. (2010). Benchmark assessments for improved learning (AACC Policy Brief). Los Angeles, CA: University of California.

Hess, K. (2010a). Applying Webb's depth-of-knowledge levels in reading, writing, math, science, and social studies. Dover, NH: National Center for Assessment. 

Hess, K. (2010b). Table 1: Detailed descriptors of depth-of-knowledge levels for science. Dover, NH: National Center for Assessment.

Hobgood, B., Thibault, M., & Walbert, D. (2005). Kinetic connections: Bloom’s taxonomy in action. University of North Carolina at Chapel Hill: Learn NC.

Huitt, W. (1998). Critical thinking: An overview. Educational Psychology Interactive. Valdosta, GA: Valdosta State University. Retrieved May 7, 2007 from http://chiron.valdosta.edu/whuitt/col/cogsys/critthnk.html. [Revision of paper presented at the Critical Thinking Conference sponsored by Gordon College, Barnesville, GA, March 1993.]

Individuals with Disabilities Education Improvement Act (IDEA) (2004). PL 108-446, 20 U.S.C. §§1400 et seq.

Lee, O. & Fradd, S. (1998). Science for all, including students from non-English-language backgrounds. Educational Researcher, 27, 12-21.

Liddle, K. (2000). Data-driven success: How one elementary school mined assessment data to improve instruction. American School Board Journal. Retrieved April 2009 from http://www.asbj.

McMurrer, J. (2008). Instructional time in elementary schools: A closer look at changes for specific subjects. Center on Education Policy, p. 2.

Maal, N. (2004).  Learning via multisensory engagement.  Association Management. Washington, DC: American Society of Association Executives.

Marzano, R. (2005). What works in schools (PowerPoint presentation). www.marzanoandassociates.com/pdf/ShortVersion.pdf

Michaels, S., Shouse, A.W., & Schweingruber, H. (2007). Ready, set, science!: Putting research to work in K-8 science classrooms. Washington, DC: National Academies Press.

National Research Council (NRC) & National Committee on Science Education Standards and Assessments (1996). National Science Education Standards: Chapter 4 Standards for Professional Development of Teachers of Science, 55-74.

National Research Council (NRC) & National Committee on Science Education Standards and Assessments (1996). National Science Education Standards: A guide for teaching and learning. Washington, DC: National Academy Press.

National Research Council (NRC). (2007). Taking science to school: Learning and teaching science in grades K-8. Washington, DC: The National Academies Press.

National Research Council (NRC): Committee on Conceptual Framework for the New K-12 Science Education Standards. (2012). A Framework for K-12 Science Education: Practices, crosscutting concepts, and core ideas. Washington, DC: The National Academies Press.

National Research Council. (2001). Knowing what students know: The science and design of educational assessment. Washington, DC: National Academy of Sciences.

National Science Teachers Association. (2011). Should science count toward AYP? Arlington, VA: National Science Teachers Association. Retrieved October 2012 from  http://www.nsta.org/publications/news/story.aspx?id=58205

Next Generation Science Standards (NGSS) Staff. (2012). The Next Generation Science Standards (NGSS). [Developed by The National Research Council, the National Science Teachers Association, and the American Association for the Advancement of Science, managed by Achieve, Inc.]. Washington, DC: Achieve, Inc. Retrieved December 2012 from http://www.nextgenscience.org/next-generation-science-standards

No Child Left Behind Act of 2001, Pub. L No. 107–110, 115 Stat. 1425 (2002).

No Child Left Behind. (2001). Washington, DC: U.S. Department of Education. 

Paul, R.W. (1985). Bloom’s taxonomy and critical thinking instruction. Educational Leadership, 42, 36-39.

Presseisen, B.Z. (1986). Critical Thinking and Thinking Skills: State of the Art Definitions and Practice in Public Schools. Paper presented at the Annual Meeting of the American Educational Research Association, San Francisco, CA.

Rangel, E. (2007). Science education that makes sense. Research Points, 5(1), 1. Retrieved November 20, 2009, from http://www.aera.net//uploadedFiles/Journals_and_Publications/Research_Points/RP_Summer07.pdf

Redfield, D. & Rousseau, E. (1981). A meta-analysis of experimental research on teacher questioning behavior. Review of Educational Research, 51(2), 237-245.

Risner, G., Skeel, D., & Nicholson, J. (1992). A closer look at textbooks. Science and Children, 30(10), 42-45, 73.

Ruby, A. (1999). Hands-on science and student achievement. Report RGSD-159. Santa Monica, CA: Rand Corporation.

Saracaloglu, A. S., & Yenice, N. (2009). Investigating the self-efficacy beliefs of science and elementary teachers with respect to some variables. Journal of Theory and Practice of Education, 5(2), 244-260. Retrieved November 24, 2009, from the Education Research Complete database.

Schmidt, W. H., McKnight, C. C., & Raizen, S. A. (1997). Splintered vision: An investigation of U.S. mathematics and science education. Norwel, MA: Kluwer Academic. 

Skamp, K., & Logan, M. (2005). Students' interest in science across the middle school years. Teaching Science - The Journal of the Australian Science Teachers Association, 51(4), 8-15. Retrieved September 25, 2009, from the Education Research Complete database. 

Skamp, K. (2007). Conceptual Learning in the primary and middle years: the interplay of heads, hearts, and hands-on science. Teaching Science - the Journal of the Australian Science Teachers Association, 53(3), 18-22, 5p. Retrieved September 25, 2009, from the Education Research Complete database.

Sousa, D. (2006). How the brain learns. Thousand Oaks, CA: Corwin Press.

Stahl, S. &  Fairbanks, M. (1986). The effects of vocabulary instruction:  A model-based meta-analysis.  Review of Educational Research, 56, 72-110.

Stiggins, R. & Conklin, N. (1992). In teachers’ hands: Investigating the practice of classroom assessment. Albany, NY: SUNY Press.

Texas Education Agency (TEA). (2012). Science. Retrieved August 2012 from   http://www.tea.state.tx.us/index2.aspx?id=5483

Texas Education Agency (TEA) Student Assessment Division. (2010a). STAAR™ Assessed Curriculum, Science. Austin, Texas: Texas Education Agency. Retrieved Fall 2010 from http://www.tea.state.tx.us/student.assessment/staar/ac/

Texas Education Agency (TEA). (2010b). STAAR™ Blueprints Science. Retrieved Fall 2010 from http://www.tea.state.tx.us/student.assessment/staar/blueprints/

Texas Education Agency (TEA). (2010c). STAAR™ Science Resources. Retrieved Fall 2010/2011, May 2012 from http://www.tea.state.tx.us/student.assessment/staar/science/

Texas Education Agency (TEA). (2010d). STAAR Media Toolkit – STAAR™  vs. TAKS. Retrieved Fall 2010 from  http://www.tea.state.tx.us/index2.aspx?id=2147504081

Texas Education Agency (TEA). (2011). Texas Essential Knowledge and Skills for Science. Austin: Texas Education Agency. Retrieved Fall 2011 from http://ritter.tea.state.tx.us/rules/tac/chapter112/index.html

Texas Education Agency (2012). STAAR™ Released Test Questions Science. Austin, Texas: Texas Education Agency. Retrieved Spring 2012 from http://www.tea.state.tx.us/student.assessment/staar/testquestions

Thomas, G., & Smoot, G. (1994, February/March). Critical thinking: A vital work skill.

U.S. Department of Education. (1990–2007). National Assessment of Educational Progress. National Center for Educational Statistics. Retrieved September 1, 2007 from http://nces.ed.gov/nationsreportcard/

U.S. Department of Education. (2009). Using ARRA funds to drive school re­form and improvement. Retrieved Fall 2012 from www.ed.gov/policy/gen/leg/recovery/guidance/uses.doc.

Webb, N. (2002a). Depth-of-Knowledge levels for four content areas. Wisconsin Center for Educational Research.

Webb, N. (2002b). Depth-of-Knowledge (DOK) levels for science. Retrieved Spring 2010 from http://www.ride.ri.gov/assessment/DOCS/NECAP/Science/DOK_Science.pdf

Wood, T. & Sellers, P. (1996). Assessment of a problem-centered mathematics program: Third grade. Journal for Research in Mathematics Education, 27, 337-353.
