In our last blog post, we laid out the first three of seven steps that go into creating a growth assessment that works. Assessment results and data all too often fail to tell the whole story, but when an assessment truly works, we can actually see student growth, and teachers can set goals and guide progress.
Here are the remaining four steps to building a reliable and working growth assessment:
- Use a deep pool of questions to increase validity
The more questions an assessment presents to the student, the greater the precision we can expect. When many items fall at, above, and below the student’s level, educators gain an increasingly detailed view of the student’s achievement. This granularity is what elevates a proper assessment above a simple “pass or fail” event.
This requires not only many questions at each difficulty level along the scale, but also that appropriate questions be presented to each student. Computer adaptive testing (CAT) makes this process manageable and scalable—meaning that it can be repeated with reliable results with larger or smaller groups of students.
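To make the adaptive idea concrete, here is a deliberately simplified sketch of the core CAT loop. This is not NWEA's algorithm; the item bank, the fixed step size, and the function names are all hypothetical, and real adaptive tests use statistical ability estimation rather than a fixed nudge. The point is only to show how each response steers which question comes next.

```python
# Toy sketch of an adaptive item-selection loop (illustrative only, not a
# real CAT engine). Difficulties sit on a common scale; the next item is
# the unused one closest to the current ability estimate.

def next_item(item_bank, ability, administered):
    """Pick the unused item whose difficulty is closest to the estimate."""
    candidates = [name for name in item_bank if name not in administered]
    return min(candidates, key=lambda name: abs(item_bank[name] - ability))

def update_ability(ability, correct, step=0.5):
    """Nudge the estimate up on a correct answer, down on an incorrect one."""
    return ability + step if correct else ability - step

# Hypothetical item bank: name -> difficulty on a common vertical scale.
bank = {"q1": -1.0, "q2": 0.0, "q3": 1.0, "q4": 2.0}

ability = 0.0
administered = []
for response in [True, True, False]:    # simulated student answers
    item = next_item(bank, ability, administered)
    administered.append(item)
    ability = update_ability(ability, response)

# The test climbs (q2, then q3, then q4) while the student keeps answering
# correctly, then the estimate settles back down after the miss.
```

Because the loop always selects an item near the current estimate, a deep pool at every difficulty level is exactly what keeps the test from running out of appropriate questions for very high- or very low-achieving students.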
- Ensure fairness through empirical bias and sensitivity reviews
You now have a deep pool of items aligned to content standards, arranged along a vertical scale, and spanning the full range of students you want to measure. However, students come to school from myriad backgrounds—cultural, socio-economic, ethnic, religious, etc.
In addition, not all students may have had the opportunity to learn the material being tested, or the material may be presented in a way that privileges a certain background. These factors all contribute to the potential for bias in an assessment.
Practices such as Differential Item Functioning (DIF) analysis and bias and sensitivity reviews help reduce bias in the instruments created. The American Educational Research Association (AERA), the American Psychological Association (APA), and the National Council on Measurement in Education (NCME) jointly publish standards on this for test makers, supporting fairness and providing consistency in approach across developers.
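The intuition behind DIF can be shown with a minimal sketch. Real DIF analyses use statistical procedures such as Mantel-Haenszel; the version below is a crude screen with hypothetical data and a made-up threshold, shown only to illustrate the core idea: compare how two groups of students *at the same overall score level* perform on a single item.

```python
from collections import defaultdict

# Simplified DIF screen (illustrative, not a production procedure): among
# students matched on an overall score band, flag an item whose
# correct-answer rate differs sharply between two groups.

def flag_dif(item_responses, threshold=0.2):
    """item_responses: (group, score_band, answered_correctly) tuples."""
    bands = defaultdict(lambda: {"A": [], "B": []})
    for group, band, correct in item_responses:
        bands[band][group].append(correct)
    gaps = []
    for groups in bands.values():
        if groups["A"] and groups["B"]:
            rate_a = sum(groups["A"]) / len(groups["A"])
            rate_b = sum(groups["B"]) / len(groups["B"])
            gaps.append(abs(rate_a - rate_b))
    return bool(gaps) and max(gaps) > threshold

# Hypothetical responses: at the same score band, group A answers this item
# correctly 2 of 3 times while group B answers it correctly 1 of 3 times.
data = [
    ("A", "mid", True), ("A", "mid", True), ("A", "mid", False),
    ("B", "mid", False), ("B", "mid", False), ("B", "mid", True),
]
flagged = flag_dif(data)   # the ~0.33 gap exceeds the threshold
```

The matching step is what makes this a bias check rather than an achievement comparison: if equally achieving students from different groups perform differently on one item, the item itself, not the students, deserves scrutiny.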
- Define the purpose of the assessment to determine the precision required
Test fatigue and demands on classroom time are widely touted factors in “opt out” discussions. That is why clearly defining the purpose of the assessment and the role of the data educators gather is crucially important. Simply put, a more robust assessment gives a more precise picture of student achievement, but it requires more questions spanning a wider range of grade levels. How precise do educators need to be, and when?
Balancing the need for data against the time required to do the assessment can be tricky. The ability to determine how precise a measure is needed, and to tailor the assessment to provide that precision while minimizing demands on valuable classroom time, is one of the key benefits of some computer adaptive tests.
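The precision/time trade-off has a simple shape under standard measurement assumptions: the standard error of measurement shrinks roughly with the square root of the number of informative items answered. The numbers below are invented for illustration; the takeaway is that quadrupling test length only halves the error, which is why "how precise do we need to be?" matters so much.

```python
import math

# Rough illustration of the precision/length trade-off. Under a simple
# model, SEM = 1 / sqrt(total test information); assume each item
# contributes a fixed amount of information (an invented value here).

def standard_error(num_items, info_per_item=0.2):
    """Approximate standard error of measurement for a test of num_items."""
    return 1 / math.sqrt(num_items * info_per_item)

short_test = standard_error(10)   # fewer items, larger error
long_test = standard_error(40)    # 4x the items, only half the error
```

Diminishing returns cut both ways: a short screener may be precise enough for one purpose, while a placement decision may justify the extra classroom time.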
- Provide context for growth
Once achievement and growth are accurately measured, a world of instructional opportunity opens—as long as there is accompanying information that provides a context. That’s where the standards come into play and the assessment tool becomes the basis for contextualized comparisons. Two important comparisons we can draw from a vertically scaled score and normative data are “growth compared to peers” and “growth trajectories.”
A teacher certainly benefits from knowing what the student’s score is in relation to all the other students in the classroom. A principal benefits from knowing her or his school’s position within a district, and a district supervisor finds it useful to place a school’s performance in the state and national context. This need is met by establishing growth compared to peers.
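A minimal sketch of "growth compared to peers" is a percentile rank: place one student's growth within a norm group of peer growth values. The norm group below is invented, and real norming involves far larger, carefully sampled groups, but the calculation captures the comparison being described.

```python
# Hedged sketch of growth-compared-to-peers: express a student's growth
# as the percent of a (hypothetical) norm group who grew less.

def growth_percentile(student_growth, peer_growths):
    """Percent of the norm group whose growth fell below the student's."""
    below = sum(1 for g in peer_growths if g < student_growth)
    return round(100 * below / len(peer_growths))

norm_group = [2, 4, 5, 5, 6, 7, 8, 9, 10, 12]   # invented growth values
pct = growth_percentile(7, norm_group)
```

The same calculation scales up the organizational ladder: swap individual students for classrooms, schools, or districts, and the percentile answers the teacher's, principal's, and supervisor's versions of the question.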
By providing each student with the right instruction at the right time during the school year, growth data can help teachers instill a “personal navigation system” that transforms all students into lifelong learners. Thoughtful use of accurate and fair assessment data leads directly to the equity and growth that are the future of education in America.
About the Author – Judy Harris:
Judy brings over 17 years of teaching experience to her role as Policy & Advocacy Director at NWEA. She is a National Board Certified Teacher, a Reading Specialist, and a middle school Language Arts teacher. Judy was a member of the Joint Assessment Task Force for Oregon and co-authored “The New Path for Oregon: System of Assessment to Empower Meaningful Student Learning.” Judy has also served on both national and state level ESSA implementation teams, and she was one of just 30 active classroom teachers selected by the NEA from across the nation to help educators with the implementation of ESSA. What excites her most about this work is the opportunity to move initiatives forward that will make a difference for kids as they prepare to chase their own individual American dream.