
While formative assessment can provide the day-to-day, minute-by-minute data teachers need to make in-the-moment adjustments to teaching, there is still a need for interim assessments. These assessments are taken at intervals throughout the year – every five to nine weeks or so – to evaluate student knowledge and skills relative to a specific set of academic goals. The results are used by principals, school leaders and teachers to inform instruction and decision-making in the classroom and at the school and district level, as well as to measure student growth over time.

The Benefits of Interim Assessments and Three Mistakes to Avoid in Interpreting Data

As Kim Marshall stated in his 2006 article, "Interim Assessments: Keys to Successful Implementation":

The basic argument for interim assessments is actually quite compelling: let’s fix our students’ learning problems during the year, rather than waiting for high-stakes state tests to make summative judgments on us all at the end of the year, because interim assessments can be aggregated and have external referents (projection to standards, norms, scales).

A good, balanced model of assessments includes interim assessments, which have three primary purposes and responsibilities:

  1. Provide information to help educators guide instruction for all students in a manner that supports growth and achievement.
  2. Project performance on the state assessment in order to help educators identify students who may need intervention to meet standards.
  3. Provide educators and parents with an accurate measure of the student’s growth over time.

When they are properly implemented, interim assessments serve as a time-efficient means of measuring student progress within a general subject area. Typically, interim assessments include 30 to 50 items, take the average student about an hour to complete, and produce both a relatively accurate estimate of student performance in a discipline, as well as an estimate of performance in the primary standards within that discipline.

Of course, as with any assessment, it's all about how the data is used. Michael LoCascio, a district administrator in Illinois, shared some common mistakes that educators make when interpreting assessment data. He pointed out several correctable mistakes, including:

  1. Confusing Correlation with Causation. Educators learn to think fast in the classroom, making split-second decisions about how to adjust instruction during a lesson. Yet this same ingrained habit of forming quick judgments can become a liability when analyzing student performance data. When two seemingly related events occur at the same time, it is easy to assume that one caused the other. A rise in student grades may coincide with the start of an after-school homework club, for example. What appears to be a logical relationship may actually be misleading, and poorly interpreted data can convince schools to perpetuate ineffective practices and programs.
  2. Failing to Understand the Intricacies of Averaged Data. The mean is calculated by adding together all the scores in a given set and dividing by the number of entries; in short, it is the sum divided by the count. Though easy to calculate and compare, the mean has a few drawbacks: it can be easily skewed by abnormally high or low scores, it is strongly influenced by the size of the group, and averages based on very small groups are less reliable than averages based on large groups. Because of these issues, an average score can be misleading and can paint an inaccurately positive or negative picture of student performance.
  3. Creating False Connections between Averaged Data and Real Students. Despite the term, very few students are the living embodiment of averaged data. Most students have instructional needs, both strengths and weaknesses, that vary greatly from the average score. Yet both our informal impressions and our large-scale school analyses tend to place heavy emphasis on averaged data. As a result, we draw broad conclusions about student learning that do not necessarily reflect the strengths and needs of our actual students.
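The second point above is easy to demonstrate. Here is a minimal sketch, using a hypothetical set of class scores, showing how a single abnormally low score drags the mean well below what most students actually earned, while the median stays closer to the typical student:

```python
from statistics import mean, median

# Hypothetical class scores: six students in the 70s-80s, one outlier of 12
class_scores = [72, 75, 78, 80, 74, 77, 12]

# The mean is the sum divided by the count, so the outlier pulls it down sharply
print(round(mean(class_scores), 2))  # 66.86 -- below every non-outlier score

# The median reflects the middle of the distribution and is far less affected
print(median(class_scores))          # 75
```

A report showing a class average of 67 would suggest widespread struggle, when in fact six of the seven students scored in the 70s or 80s. This is why averages based on small groups warrant extra scrutiny before they drive decisions.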

In general, when implemented and used correctly, interim assessments can play a strong role in moving student learning forward. Together with embedded formative assessment, they give educators a powerful set of measurement tools. How are you using interim assessments and interim assessment data in your school or district?

Share your thoughts with us on Facebook or Twitter (@Assess2Learn).