The Common Core State Standards describe what students in California are expected to know and be able to do at each grade level. Schools, teachers, and textbook developers use these standards to inform what they teach, when they teach it, and what success looks like.
Each year, California public school students in grades 3-8 take standardized tests based on these standards. Parents receive results in the form of an individual Student Score Report. In 2017 the state also began gathering these results together as indicators on the California School Dashboard.
This post is the fourth in our series about the California School Dashboard. To learn more about the Dashboard, the "indicators" it includes, and how "Status" and "Change" combine to determine color-coded levels of "Performance," please read the earlier posts in the series.
The California School Dashboard summarizes students' academic performance in Mathematics and in English Language Arts. Performance is based on individuals' year-end test scores and how those individuals improve their scores from year to year.
The tests for these subjects are known as the California Assessment of Student Performance and Progress (CAASPP). You might hear people refer to them as "Smarter Balanced" tests because they were developed by the multi-state Smarter Balanced Assessment Consortium, and other states use them, too. Don't be confused -- they're the same thing.
The CAASPP tests are computer-based, and they adapt to each student based on whether the student answers questions correctly or incorrectly. The idea is for the test to ask "Goldilocks" questions for each student. Rather than wasting time on questions that are too easy or too hard, the adaptive CAASPP tests try to ask questions that are just right.
For the Dashboard, California has an elegant method for aggregating individual student test scores and individual score improvement. It even works across grade levels and subjects.
Dear reader: If you're content to know that Blue is good, Red is bad, and performance color can vary a lot depending on whether your scores are rising or falling, well, awesome. Just stop here.
If you want to peek under the hood and understand how it works, read on. Our goal for the rest of this post is to equip you well enough that you can explain it to others.
Scaling the test
Let's start by understanding the tests better. By design, some questions on the CAASPP tests are harder than others. Hard questions are worth more than easy ones. Based on the difficulty of the questions that they can answer, students are assigned a four-digit "scaled" score. The score targets rise with each grade level. (Note: most people call them "scale scores" -- without the "d" -- or just "scores." Now that you know, we'll call them scores. Why be fussy, right?)
The Student Score Report for parents shows their student's score, along with a statistical confidence interval meant to indicate the range of scores that the student might get if they took an equivalent test on another day. (Yes, this interval leaves room for wishful thinking.)
Four-digit scores are daunting, so the report simplifies further. Scores fall into one of four "levels" that represent how they compare with grade-level expectations: Level 1 (Standard Not Met), Level 2 (Standard Nearly Met), Level 3 (Standard Met), and Level 4 (Standard Exceeded).
The lowest scale score in level 3 serves an important function on the Dashboard. This score target isn't a fixed number -- it varies by test and by grade level, as summarized in the charts below:
English Language Arts - Score Levels by grade (Grades 3-8)
Mathematics - Score Levels by grade (Grades 3-8)
The Distance from Level 3 (DF3)
To summarize academic status across a group of students, such as a school, the Dashboard compares the score of each member of the group to the level 3 target for their grade. The average scale score point distance from this target is poetically known as (wait for it...) "Distance from Level 3." On some reports this is abbreviated "DF3", to the delight and almost certain confusion of the LA-based pop band by the same name.
Here's how it works. Suppose every student in every grade in your school were to score exactly at the minimum level 3 score target for their grade level. In this case, the average "distance from level 3" status for your school would be zero. Your school's status level, by definition, would be medium.
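If you like to see arithmetic as code, the averaging step above can be sketched in a few lines of Python. Note that the level 3 targets used here are made-up placeholders, not the official CAASPP cut scores (which vary by subject and grade, as the charts above show):

```python
# Sketch of the "Distance from Level 3" (DF3) averaging described above.
# These level 3 targets are hypothetical placeholders, NOT official values.
LEVEL_3_TARGET = {3: 2432, 4: 2473, 5: 2502}  # by grade, illustrative only

def average_df3(students):
    """students: list of (grade, scale_score) pairs for one school."""
    distances = [score - LEVEL_3_TARGET[grade] for grade, score in students]
    return sum(distances) / len(distances)

# If every student scores exactly at their grade's level 3 target,
# the school's average DF3 is zero -- "medium" status by definition.
print(average_df3([(3, 2432), (4, 2473), (5, 2502)]))  # 0.0
```

The key design point: because each student is compared to the target for their own grade, the averages can be combined across grade levels without distortion.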
If every student's scale score were ten points above the target for his or her grade level, the average DF3 would be +10 (and your school's status level would be "high"). Does that mean that the performance color would be green?
No, not necessarily. Remember: the Dashboard measures both status and change.
An average score ten scale points above the Level 3 mark falls in the "high" status range for both English Language Arts and Mathematics. But the performance color for "high" status can be Orange, Yellow, Green or Blue depending on how this year's score compares to last year's score.
As shown in the charts above, the target scores rise with each grade level, but not in a straight line. The following chart summarizes how much the level 3 target score goes up by subject and grade.
From grade 3 to 4, California expects students to improve their English Language Arts score by about 40 points. Over the next two years, the expected increase is about 30 points per year. In math, the expected point increases are a bit larger.
These variations in the expected year-to-year score increase aren't terribly important. Why? Because the Dashboard aggregates change for individual students relative to these expected score increases. If each student exactly keeps pace with the expected change from grade to grade, the Dashboard will show that the "change" is zero. On the other hand, if every student improves their scale score by more than the expected number of points over last year, the net change will be positive.
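The aggregation described above can be sketched the same way: subtract the expected increase for each student's grade from that student's actual gain, then average. The expected increases below are rounded illustrations based on the approximate figures mentioned above, not official values:

```python
# Sketch of the "change" aggregation described above. Each student's gain is
# measured relative to the expected year-to-year increase for their grade.
# These expected increases are illustrative, NOT official values.
EXPECTED_INCREASE = {4: 40, 5: 30, 6: 30}  # keyed by current grade

def average_change(students):
    """students: list of (current_grade, last_year_score, this_year_score)."""
    relative_gains = [
        (this_yr - last_yr) - EXPECTED_INCREASE[grade]
        for grade, last_yr, this_yr in students
    ]
    return sum(relative_gains) / len(relative_gains)

# A student who gains exactly the expected points contributes zero change;
# gains beyond expectation push the school's net change positive.
print(average_change([(4, 2432, 2472), (5, 2473, 2508)]))  # 2.5
```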
It might help to think of the change in relative score as a measure of momentum. It suggests whether students are falling behind, keeping pace, closing gaps, or extending their lead.
Let's use an example to get comfortable using the Five-by-Five Table for Mathematics below. Suppose that your school's average math score increased by 15 points, placing it 10 points above the Level 3 cut point. What is the school's performance color for Mathematics?
Here's how to tackle it. Your school is 10 points above the Level 3 cut point. On the left side of the table, this matches the description of the "High" row ("5 points below to less than 35 points above"). Now look at the top: your school improved by 15 points, which qualifies it for the "Increased Significantly" column. Congratulations: your school's performance color is blue!
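The two-step lookup above amounts to: find the status row from DF3, find the change column from the year-over-year gain, then read the color off the grid. Here is a minimal sketch of that logic. Only two facts come from the text -- the Math "High" band ("5 points below to less than 35 points above") and the 15-point "Increased Significantly" threshold -- so every other cut point and grid entry below is a hypothetical placeholder:

```python
# Sketch of the Five-by-Five lookup for Mathematics described above.
# Cut points marked "hypothetical" are placeholders, NOT official values.
STATUS_BANDS = [          # (lower bound on DF3, status row)
    (35, "Very High"),
    (-5, "High"),         # "5 points below to less than 35 points above"
    (-25, "Medium"),      # hypothetical
    (-60, "Low"),         # hypothetical
    (float("-inf"), "Very Low"),
]

CHANGE_BANDS = [          # (lower bound on change, change column)
    (15, "Increased Significantly"),
    (3, "Increased"),     # hypothetical
    (-3, "Maintained"),   # hypothetical
    (-15, "Declined"),    # hypothetical
    (float("-inf"), "Declined Significantly"),
]

# The text confirms only one cell of the 5x5 grid for Mathematics.
COLORS = {("High", "Increased Significantly"): "Blue"}

def performance_color(df3, change):
    status = next(name for cut, name in STATUS_BANDS if df3 >= cut)
    change_level = next(name for cut, name in CHANGE_BANDS if change >= cut)
    return COLORS.get((status, change_level), "unknown")

# The worked example: 10 points above Level 3, improved by 15 points.
print(performance_color(10, 15))  # Blue
```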
Let's try the same exercise using the English Language Arts/Literacy indicator. We'll use exactly the same numbers: Suppose that your school's average score increased by 15 points, placing it 10 points above the Level 3 cut point. What is the school's performance color?
On this indicator, 10 points above Level 3 places your school somewhere in the "High" status row, which means that performance can be Orange, Yellow, Green or Blue depending on how much it changed over the prior year.
The score went up by only 15 points, so it didn't quite qualify for the "Blue" performance level. It falls in the "Increased" column for change instead, so the performance color in this case is Green.
We just popped the hood to inspect HOW the Dashboard indicators work for Math and English Language Arts, referencing the related 5x5 colored tables. If your head is spinning, relax: the Dashboard automatically calculates the performance colors for you. But we hope the exercise has made one thing very clear: Performance on the Dashboard is highly sensitive to change. A year of improvement can make a huge difference in the performance rating.