Some childhood gaps are welcome news, like that gap-tooth smile. Achievement gaps? Not so much.
California's student body is officially organized into groups by grade level, but it can also be seen as consisting of "subgroups" based on race, economic wellbeing, English language readiness and gender. Frustratingly stable patterns predict which subgroups of students tend to do better in school and which tend to do worse. These measurable differences, known as "achievement gaps," are usually quantified using standardized test scores. Gaps in students' academic readiness can also be measured in other ways, such as graduation rates, college-going rates and course grades, but in the education system, test scores are the currency of the realm.
There are some good reasons to measure achievement gaps using standardized test scores. They are widely used and understood. They can reveal small differences or trends. They are comparatively quick and inexpensive. They also seem to correlate powerfully with less convenient measures. Students with higher scores are more likely to persist in school, graduate, go to college and succeed there.
Poverty strongly correlates with test scores. Students who qualify for free or reduced-price lunch (a common proxy for poverty) tend to score lower on tests than the kids who don't. Race correlates, too: black and Latino students tend, on average, to score lower on standardized tests than white and Asian students do.
Describing patterns of difference turns out to be more difficult than it might seem, even using test scores. After a student takes a standardized test like the CAASPP, California's state assessment, a complex process converts the student's answers into a "scaled score." It's more than a simple count of the questions answered right or wrong: harder questions are worth more than easier ones, and some questions are literally worth nothing because they are experimental.
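To make the idea concrete, here's a toy sketch in Python. It is not the real CAASPP algorithm, which relies on far more sophisticated statistical models; every number here is made up. It just illustrates the point that questions carry different weights and experimental questions count for nothing:

```python
# A toy sketch of weighted scoring -- NOT the real CAASPP algorithm,
# which uses far more sophisticated statistical models. Every number
# here is made up for illustration.

# Each question: (points if answered correctly, is_experimental)
questions = [
    (1.0, False),  # an easy question, worth one point
    (2.5, False),  # a harder question, worth more
    (1.5, False),
    (2.0, True),   # experimental: graded, but worth nothing
]

answers = [True, True, False, True]  # which questions the student got right

raw_score = sum(
    weight
    for (weight, experimental), correct in zip(questions, answers)
    if correct and not experimental
)

# A made-up linear transform standing in for the real raw-to-scaled conversion.
scaled_score = round(2400 + raw_score * 40)

print(f"raw weighted score: {raw_score}, scaled score: {scaled_score}")
# raw weighted score: 3.5, scaled score: 2540
```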
A test score without context is meaningless, so test designers identify performance tiers, assigning cut points that indicate whether students have met expectations. California's system separates scores into four tiers: Standard Exceeded, Standard Met, Standard Nearly Met and Standard Not Met.
Let's suppose that you and your twin sister each take a test, say the grade 5 CAASPP test in mathematics. Your sister scores 2,527 scaled points. You score 2,528—one point higher.
Your parents receive two envelopes in the mail with the score reports. They tear open the first envelope and learn, with warmth in their hearts, that your score indicates "Standard Met." Then they open your sister's envelope, and the warmth fades. Alas, her score falls just short, landing in the "Standard Nearly Met" range.
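Mechanically, a tier is just a lookup against fixed cut points. In the sketch below, the 2,528-point "Standard Met" cut for grade 5 math is implied by the twins' story above; the other cut points are hypothetical placeholders, not official values:

```python
# Mapping a scaled score to one of California's four performance tiers.
# The 2,528-point "Standard Met" cut for grade 5 math comes from the
# story above; the other cut points are hypothetical placeholders.
GRADE5_MATH_CUTS = [
    (2604, "Standard Exceeded"),    # hypothetical
    (2528, "Standard Met"),         # implied by the twins' scores
    (2455, "Standard Nearly Met"),  # hypothetical
]

def performance_tier(scaled_score: int) -> str:
    for cut, label in GRADE5_MATH_CUTS:
        if scaled_score >= cut:
            return label
    return "Standard Not Met"

print(performance_tier(2528))  # Standard Met
print(performance_tier(2527))  # Standard Nearly Met -- one point lower
```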
Of course, the one-point "gap" between these hypothetical scores is insignificant from a statistical perspective. If you both retook the test, your scores would probably be similar but not the same. Like free throws in basketball, there's skill involved, but a bit of randomness, too.
Randomness matters less when many individual scores like these are combined into larger groups. The close calls on either side of the line tend to cancel one another out. Unhappily, the group statistics reveal some big gaps.
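A toy simulation makes the point. It assumes test-day randomness behaves like normally distributed noise with a made-up spread, which is a simplification, but it shows why one student's retests wobble while a big group's average barely moves:

```python
# Toy simulation: one student's retest scores wobble, but a large
# group's average barely moves. Assumes normally distributed test-day
# noise with a made-up spread -- a simplification, not a real model.
import random

random.seed(1)
TRUE_SCORE = 2527.5  # a student's "true" ability, in scaled points
NOISE_SD = 25        # made-up spread of test-day randomness

def one_sitting() -> float:
    return random.gauss(TRUE_SCORE, NOISE_SD)

# Five retests by the same student: sometimes above 2528, sometimes below.
print([round(one_sitting()) for _ in range(5)])

# Average over 10,000 students with the same true score: the wobble cancels.
group_mean = sum(one_sitting() for _ in range(10_000)) / 10_000
print(round(group_mean, 1))  # lands very close to 2527.5
```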
Here's a comparison of CAASPP scores by group for English Language Arts:
The gaps are big for math, too:
The gaps remained wide in 2017, a topic examined by the California Legislative Analyst in its brief for the 2018 budget. Selected charts:
Are scores improving over time? Yes. The long-term trajectory of California students' scores on standardized tests has been upward: slowly but significantly, each year's students at a given grade level have out-scored their predecessors.

Scores on the CAASPP tests have risen modestly, but those tests have only been around for a few years. The long-term trend is easier to see in the results of the "CST" tests that preceded them: starting in 2002, CST scores rose for ten years, for all subgroups. The improvement is corroborated by rising graduation rates, and the federal NAEP test shows long-term gains, too.
Are scores rising in a way that reduces achievement gaps? Perhaps not, or at least not lately. According to the Public Policy Institute of California, "achievement gaps persist and have widened in some cases." What can be done? EdSource launched an online conversation about the question in 2015, inviting views on what should change. Spoiler alert: there doesn't seem to be a ready supply of easy answers.
Test scores are going up,
and that is great good news.
But not enough to disrupt
the gaps between the groups.
When looking at any chart that compares "proficiency" rates, squint a little. Remember, individual scores are pesky, detailed things. A little bump can push a score over the cutoff line between one achievement category and the next. Cutoff points are defined by fiat—they are supposed to represent a standard of achievement, not a relative position within a curve.
By the nature of statistics, if a cut-off point stays put while the curve moves, metrics that describe how much of the curve falls above or below that point will exaggerate some changes while understating others. If the cut point lies anywhere near the steep part of the curve, a little movement can cast an exaggerated shadow. Education statistics are loaded with metrics derived from cut-off scores.
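Here's a small illustration of that shadow, using made-up numbers. When the curve's bulge sits near the cut point, a ten-point shift moves a visible chunk of students across the line; the same shift far out in the tail barely registers:

```python
# Made-up numbers showing how a fixed cut point distorts comparisons.
# Near the steep middle of the bell curve, a 10-point shift moves a big
# chunk of students across the line; far out in the tail, it barely shows.
from statistics import NormalDist

CUT = 2528

def pct_above(dist: NormalDist) -> float:
    return 100 * (1 - dist.cdf(CUT))

middle_before = NormalDist(mu=2520, sigma=80)  # bulge sits near the cut
middle_after  = NormalDist(mu=2530, sigma=80)  # whole curve shifts 10 points
print(f"near the cut: {pct_above(middle_before):.1f}% -> {pct_above(middle_after):.1f}% above")

tail_before = NormalDist(mu=2350, sigma=80)    # same 10-point shift, far below the cut
tail_after  = NormalDist(mu=2360, sigma=80)
print(f"in the tail:  {pct_above(tail_before):.1f}% -> {pct_above(tail_after):.1f}% above")
```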
Bell curves have two tails, and some critics argue that reducing gaps by boosting the learning of kids at the bottom of the curve isn't enough. They argue that long-term American competitiveness requires that schools' missions include a focus on gifted kids at the top of the curve, too.
The success of California's schools, districts, and public education system as a whole is often judged by how effectively they address these predictable gaps in achievement. The next lesson discusses how these systems' successes are measured.
Updated September 2017, March 2018
Questions & Comments
Jeff Camp - Founder January 16, 2015 at 10:02 am
There are many perspectives on the purposes of testing students annually. Education statistical curmudgeon Bruce Baker argues that testing all students in search of statistical validity shouldn't be part of the motivation: https://schoolfinance101.wordpress.com/2015/01/16/the-subgroup-scam-testing-everyone-every-year/