Some childhood gaps are welcome news, like that gap-toothed smile. Achievement gaps? Not so much.
California's student body is officially organized into groups by grade level, but it can also be organized into subgroups based on race, economic wellbeing, English language readiness and gender. Frustratingly stable patterns predict which subgroups of students tend to do better in school and which tend to do worse. These measurable differences, known as achievement gaps, are usually quantified using standardized test scores. Gaps in students' academic readiness can also be measured in other ways, such as graduation rates, college-going rates and course grades, but in the education system, test scores are the currency.
There are some good reasons to measure achievement gaps using standardized test scores. They are widely used and understood. They can reveal small differences or trends. They are comparatively quick and inexpensive. They also seem to correlate powerfully with variables that are harder to measure. Students with higher standardized test scores are more likely to persist in school, graduate, go to college and succeed there.
Poverty strongly correlates with test scores. Students who qualify for free or reduced-price lunch (a common measure of poverty) tend to score lower on tests than the kids who don't. Race correlates, too: on average, black and Latino students tend to score lower on standardized tests than white and Asian students do. Achievement gaps are a symptom of a deeper issue: opportunity gaps. A lack of generational wealth in low-income communities, especially communities of color, results in fewer educational opportunities for students. The term "opportunity gap" acknowledges the systemic disadvantages that stand between students and success.
Describing patterns of difference turns out to be more difficult than it might seem, even using test scores. After a student takes a standardized test like the CAASPP (California's state tests), a complex process converts the answers into a scaled score. It's more than just counting the number of questions you get right or wrong. Some questions are harder than others, so they are weighted more heavily than easier ones. Some questions are literally worth nothing because they are experimental.
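The weighted-scoring idea can be sketched in a few lines. This is a toy illustration only, not the CAASPP's actual algorithm (which uses a more sophisticated statistical model); every weight and scale constant below is made up:

```python
# Toy sketch of weighted scoring. Each tuple is (answered correctly?, weight).
# All numbers here are invented for illustration.
answers = [
    (True, 1.0),   # easy question
    (True, 2.5),   # harder question, weighted more heavily
    (False, 2.5),  # harder question, missed
    (True, 0.0),   # experimental question: literally worth nothing
]

BASE = 2400            # hypothetical bottom of the scaled-score range
POINTS_PER_UNIT = 40   # hypothetical conversion from raw units to scale points

raw = sum(weight for correct, weight in answers if correct)
scaled = BASE + round(raw * POINTS_PER_UNIT)
print(scaled)  # 2540
```

Notice that the missed hard question costs more than a missed easy one would, and the experimental question changes nothing either way.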
A test score without context is meaningless, so test designers identify performance tiers, assigning benchmarks that indicate whether students have met expectations. California's system separates scores into four tiers.
Let's suppose that you and your twin sister each take a test, say the grade 5 CAASPP test in mathematics. Your sister scores 2,527 scaled points. You score 2,528—one point higher.
Your parents receive two envelopes in the mail with the score reports. Opening the first envelope, they are warmed to learn that your score indicates "Standard Met." They open your sister's envelope, and the warmth fades. Alas, your sister's score falls short, landing in the "Standard Nearly Met" range. (This is just an example; scores are anonymous.) Looking at the entire group of fifth grade students in the "Standard Met" group for math, we can't distinguish who barely made it into the group from who was one point away from being placed in the "Standard Exceeded" group.
Of course, the one-point "gap" between these hypothetical scores is insignificant from a statistical perspective. If you both retook the test, your scores would probably be similar but not the same. Like free throws in basketball, there's skill involved, but a bit of randomness, too.
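The free-throw analogy can be simulated. In this sketch, both twins share the same hypothetical "true" skill, and each test sitting adds random measurement noise; the noise level (25 scale points) is an assumption for illustration, not a published CAASPP figure:

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

TRUE_SKILL = 2527.5   # hypothetical "true" ability shared by both twins
SCORE_NOISE = 25      # assumed standard error of a single sitting, in scale points

def one_sitting(true_skill):
    """Simulate one test sitting: true skill plus random luck."""
    return round(random.gauss(true_skill, SCORE_NOISE))

# Ten hypothetical retakes by the same student:
retakes = [one_sitting(TRUE_SKILL) for _ in range(10)]
print(retakes)
print(max(retakes) - min(retakes))  # the spread dwarfs the twins' one-point "gap"
```

Run it and the same student's scores bounce around by far more than one point, which is the sense in which the twins' one-point difference is statistical noise.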
Randomness matters less when individual results like these are combined into larger samples. The close calls on either side of the line tend to cancel one another out. Unfortunately, the aggregated statistics show some big gaps.
Here's a comparison of CAASPP scores by subgroup for English Language Arts:
The gaps are big for math, too:
The gaps remained wide in 2017, a topic examined by the California Legislative Analyst in its brief for the 2018 budget. Analysis of achievement gaps is an ongoing topic in the California School Dashboard.
Starting in 2002, the CST showed rising scores for 10 years, for all groups.
Yes. Over time, the long-term trajectory of California students' scores on standardized tests has been upward. Slowly but significantly, successive cohorts of students at each grade level have out-scored their predecessors.
Scores on the CAASPP tests have risen modestly, but the tests have only been around for a few years. The long-term trend is easier to see in the results of the California Standards Test (CST) that preceded them, which showed a steady pattern of rising scores across all subgroups of students, in both English and math, for ten years. The improvement is corroborated by rising graduation rates, and the federal NAEP test shows long-term improvement, too. In recent years, however, the pattern of improvement has been less clear, as discussed in Lesson 1.6. The big-picture answer is that over the decades the gap has shrunk significantly; in the last ten years or so, it hasn't shrunk much, if at all.
Are scores rising in a way that reduces achievement gaps? Test scores for all subgroups have risen, as mentioned above, but the gap has not narrowed. According to EdSource and the Public Policy Institute of California, "achievement gaps persist and have widened in some cases." Looking short term, it's simply too tough to tell; the interaction between test scores and benchmarks confounds the analysis.
Even as scores have risen, achievement gaps have remained substantial, and they don't seem to be going away. Measuring achievement gaps depends so heavily on where you place the goalposts that sizing the gaps using categories alone is reckless analysis. The short answer to this question, then, is: it depends how you measure it, and over what time frame you are looking. Over the very long term, it's clear that more students, for example, have access to college. However, it's undeniable that the achievement gap still exists, and there are valid reasons for schools and districts to keep a careful eye on these patterns of achievement so that they can do something about them. What can be done? EdSource launched an online conversation about it in 2015, inviting views about what should be changed. Spoiler alert: it doesn't seem like there's a ready supply of easy answers.
It depends how you measure them.
However, demographics aren't destiny. One documented success of a school district in the San Gabriel Valley shows promise. Latino students may score the second lowest of all racial groups statewide, but at Covina-Valley Unified School District, Latino students outperform their statewide peers in standardized testing and graduation rates. The district adopted a college mindset, heavily promoting higher education to students as early as kindergarten and providing computer coding classes in elementary school. While the gap between Covina-Valley's Latino students and statewide white and Asian students still exists, the district is taking big strides in modeling how schools can close the gap within their own classrooms. For more on college readiness and college access programs, see Lesson 9.4. On average, students tend to score in patterns, but these patterns conceal deeper truths, and there are always exceptions, proving that with the right tools they can be disrupted.
When looking at any chart that compares "proficiency" rates, squint a little. Remember, individual scores are pesky, detailed things. A little bump can push a score over the cutoff line between one achievement category and the next. Cutoff points are defined by fiat—they are supposed to represent a standard of achievement, not a relative position within a curve.
By the nature of statistics, if a cut-off point stays put while a curve moves, metrics that describe how much of the curve falls above or below the cut-off point will highlight the significance of some changes while understating others. If the cut point is anywhere close to the steep part of the curve, a little bit of change can cast an exaggerated shadow. Education statistics are loaded with metrics derived from cut-off scores.
Bell curves have two tails, and some critics argue that reducing gaps by boosting the learning of kids at the bottom of the curve isn't enough. They argue that long-term American competitiveness requires that schools' mission include a focus on gifted kids at the top of the curve, too.
The success of California's schools, districts, and public education system as a whole is often judged by how effectively they address these predictable gaps in achievement. The next lesson discusses the ways these systems' successes are measured.
Questions & Comments
Carol Kocivar December 6, 2023 at 3:12 pm
About 55% of students were deemed college ready in English in 2022–23, compared to 60% in 2016–17. Declines in math were similar, but levels of preparation are lower.
Demographic groups with historically lower college enrollment—Native American, Black, low-income, and Latino students, and students who have ever been categorized as English Learners (ever ELs)—were deemed ready for college-level math at lower rates than their peers.
Nearly two out of three Asian students (65%) were deemed ready, compared to 47% of Filipino students and 40% of white students.
https://www.ppic.org/publication/college-readiness-in-california/?utm_source=ppic&utm_medium=email&utm_campaign=epub
Carol Kocivar March 24, 2023 at 4:53 pm
https://edworkingpapers.com/ai23-742
Carol Kocivar July 5, 2022 at 2:42 pm
either. University of Pennsylvania graduate school of education.
https://www.gse.upenn.edu/news/rethinking-achievement-gap
There are similar findings on providing enrichment/advanced opportunities, especially for low-income students and students of color.
Research Deep Dive: What we know about gifted education
https://fordhaminstitute.org/national/resources/education-gadfly-show-826-research-deep-dive-what-we-know-about-gifted-education
Carol Kocivar June 14, 2022 at 1:21 pm
(1) examines data on K-12 student achievement gaps,
(2) identifies funding provided for disadvantaged and low-performing students,
(3) assesses existing state efforts to serve these students,
(4) develops options for better supporting these students.
https://lao.ca.gov/reports/2020/4144/narrowing-k12-gaps-013120.pdf
Jeff Camp - Founder January 16, 2015 at 10:02 am
There are many perspectives on the purposes of testing students annually. Education statistical curmudgeon Bruce Baker argues that testing all students in search of statistical validity shouldn't be part of the motivation: https://schoolfinance101.wordpress.com/2015/01/16/the-subgroup-scam-testing-everyone-every-year/