In 2017, the California Department of Education introduced a mechanism to interpret schools’ and districts’ vital signs — the California School Dashboard. Unfortunately, the Dashboard is broken.
Although the Dashboard starts with facts, it often interprets them using twisted logic and funky math. It often flags districts as underperforming when they don’t deserve it. And all too often it fails to notice districts with important problems that deserve attention. This matters because flawed data can lead good people to make bad plans.
How do I know this? For over 20 years, I’ve led a company that’s helped over 240 California districts understand their schools’ vital signs and explain them to others in their school accountability report cards. As a veteran of the accountability wars, I know that tarnished reputations are serious collateral damage.
I’m not the only critic of this Dashboard. Some are scholars like Morgan Polikoff (USC Rossier School), who co-authored a review of the Dashboard which was published by PACE in September 2018 as part of the “Getting Down To Facts” research. (Click here to listen to his video summary of its key points.)
Another researcher, Paul Warren, who retired recently from the Public Policy Institute of California, criticized the Dashboard’s illogic in a report published in June 2018. Warren’s comments have special oomph because he led the Assessment and Accountability Division of the California Department of Education (CDE) from 1999 through 2003. (Click here to read my blog post about this report.)
Other critics include the 125 districts and 9 county offices of education that have quietly demoted the Dashboard’s place in their plans, and instead turned to the CORE Data Collaborative for higher quality evidence. Our own client districts have also relied on stronger evidence we built for them, comparing their vital signs to those of districts whose students are very much like their own. One client, Morgan Hill USD, won an award for excellence from the California School Boards Association for a plan that effectively disregarded the official Dashboard’s diagnosis, relying instead on their own evidence base.
Two of the Dashboard’s half-dozen flaws are perhaps the most troubling. As Ed100 explains in Lesson 9.7, California’s Dashboard combines test scores (status) with a comparison to the prior year’s scores (change) and assigns a district a single color (performance) based on both values.
This invites trouble because status and change have no relation to each other. Instead of having two perfectly clear and simple measures, the Dashboard blurs them together, obscuring the meaning of both.
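To see how the blur happens, here’s a minimal sketch in Python of a status-by-change lookup of the kind the Dashboard uses. The five-level bins mirror the Dashboard’s structure, but the cut points and grid values here are hypothetical placeholders, not CDE’s published ones.

```python
# Simplified sketch of the Dashboard's color assignment: status (current
# score) and change (difference from last year) are each binned into five
# levels, then a 5x5 lookup grid yields one of five colors. The cut points
# and grid values below are hypothetical, not CDE's published ones.

COLOR_GRID = [
    # change: Decl.Sig.  Declined  Maintained Increased Incr.Sig.
    ["Red",    "Red",    "Red",    "Orange", "Yellow"],   # Very Low status
    ["Red",    "Orange", "Orange", "Yellow", "Yellow"],   # Low
    ["Orange", "Orange", "Yellow", "Green",  "Green"],    # Medium
    ["Yellow", "Yellow", "Green",  "Green",  "Blue"],     # High
    ["Yellow", "Green",  "Green",  "Blue",   "Blue"],     # Very High
]

def status_level(score):
    """Bin distance from 'standard met' into 5 status levels (made-up cuts)."""
    return sum(score > cut for cut in [-60, -25, 0, 30])

def change_level(delta):
    """Bin year-over-year change into 5 change levels (made-up cuts)."""
    return sum(delta > cut for cut in [-15, -3, 3, 15])

def dashboard_color(score, delta):
    return COLOR_GRID[status_level(score)][change_level(delta)]

# Two districts in very different situations land on the same color:
print(dashboard_color(-50, 20))   # low scores, improving fast  -> Yellow
print(dashboard_color(40, -20))   # high scores, falling fast   -> Yellow
```

A district with low scores that is improving fast and a district with high scores that is sliding both come out “Yellow.” Once the two measures are fused into one color, a reader can no longer tell which story they are looking at.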
The intended mission of the Dashboard is to draw attention to where it is most needed, especially patterns of unequal results, usually called gaps, among groups of students. Unfortunately, the Dashboard’s design doesn’t deliver on this mission. Instead of helping to mind the gaps, it obscures them.
As designed, the Dashboard flags a group of students as needing attention when its performance is two “colors” below the “all students” category. Set aside the evidence lost when precise numbers are reduced to a handful of colored bins. The more astonishing error is including a student subgroup in the larger “all students” group it is compared to. Why not directly compare the two subgroups whose differences are of interest? (The rest of the world follows this convention.) No surprise, that’s exactly what the CAASPP reporting site shows when you click on this link for Performance Trend Reports and then select “ethnicity.”
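A little arithmetic makes the dilution concrete. The numbers in this sketch are made up, but the algebra holds for any district:

```python
# Toy numbers (hypothetical) showing why comparing a subgroup to an
# "all students" average that includes the subgroup understates the gap.

p = 0.5           # subgroup's share of enrollment
sub_mean = 2460   # subgroup's average scale score
rest_mean = 2500  # average scale score for everyone else

# The "all students" mean is a weighted average that already contains
# the subgroup's own scores.
all_mean = p * sub_mean + (1 - p) * rest_mean

true_gap = rest_mean - sub_mean       # 40 points: subgroup vs. everyone else
dashboard_gap = all_mean - sub_mean   # 20 points: diluted by a factor of (1 - p)

print(true_gap, dashboard_gap)        # 40 20.0
```

Because all_mean − sub_mean = (1 − p) × (rest_mean − sub_mean), the larger the subgroup, the more its own scores pull the “all students” average toward itself, which is exactly why big gaps for big groups are the ones the Dashboard most understates.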
As a result, the state Department of Education and county offices of education often focus their scarce resources on the wrong schools. Districts with egregious gaps in graduation rates and test scores between numerically large subgroups (e.g., boys and girls, or ethnic groups) are not getting dinged by the Dashboard. Conversely, many districts are getting flagged for gaps involving students with disabilities and English learners, because those subgroups are numerically small and their scores are understandably lower.
The California Department of Education might begin solving its Dashboard dilemmas by calling on Sean Reardon and his team at the Stanford Education Data Archive and the Stanford Center for Education Policy Analysis. How lucky that, within 2.5 hours of Sacramento, they can find the social scientists who are leaders in the measurement of inequality in education.
Questions & Comments
To comment or reply, please sign in .
Anna Meza June 8, 2020 at 10:45 pm
jroubanis June 8, 2020 at 5:34 pm
What are some ways that status and change can be aggregated on the Dashboard?
Steve Rees June 16, 2020 at 5:54 pm