This post, the fifth in a series, explains the "English Learners" indicator on the California School Dashboard. The Dashboard, which made its debut in March of 2017, provides significant information about the academic "performance" of students learning English. If you don't understand the basics of the Dashboard -- especially how it blends "status" and "change" -- you might want to read this blog series from the start.
Over 40% of California's students speak a language other than English at home. About a fifth of California's students are categorized as "English Learners," which means that they aren't fully fluent in English yet. As described in Lesson 6.3, an important role of California's school system is to ensure that each student progresses to fluency in English regardless of their starting point. California's school finance system provides funds to school districts and charter schools specifically with this educational need in mind.
For many years the state of California has used the California English Language Development Test (CELDT) to gauge students' proficiency in English. After an initial assessment, students are tested annually to evaluate their progress. When a student scores high enough, he or she may be reclassified as fluent, in consultation with teachers and parents. The CELDT's days are numbered; it is scheduled to be replaced in 2018-19 by the English Language Proficiency Assessments for California (ELPAC).
For an individual student, current English proficiency is measured by a score on a test. But how do you combine those test results to tell whether a school (or a school district) is effective at teaching students English? To get at this, California has divided the score levels on the test into six (well, seven*) levels of proficiency:
A school that is successful with English Learners (ELs) is one that quickly advances the English language skills of each individual student. You cannot tell whether a school is good at addressing this challenge just by averaging EL student scores (or levels of proficiency). The mix of students in any school can change significantly from year to year based on immigration patterns, economic factors or sheer chance.
To get beyond this problem, the meaning of "status" and "change" for the English Learners indicator on the Dashboard differs importantly from other indicators. Ignoring the fine print for a moment, the "status" measurement for this indicator is calculated as the percentage of individual EL students who improved their English proficiency level last year. (Yes, the status metric in this case is actually a measurement of change.) The "change" measurement can be thought of as a measure of acceleration or deceleration: it compares the percentage of EL students who improved last year with the percentage who improved the year before.
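The arithmetic behind these two measurements can be sketched in a few lines. This is an illustration only: the student records, level numbers, and function names below are invented for the example, and the real Dashboard calculation includes the fine-print rules described in the next paragraph.

```python
def percent_improved(records):
    """Percent of EL students whose proficiency level rose year-over-year.

    Each record is a (prior_level, current_level) pair, with levels
    encoded as numbers (higher = more proficient). Hypothetical data.
    """
    improved = sum(1 for prior, current in records if current > prior)
    return 100.0 * improved / len(records)

# "Status": percent of EL students who improved in the most recent year.
this_year = [(1, 2), (2, 2), (3, 4), (2, 3)]   # 3 of 4 improved
status = percent_improved(this_year)            # 75.0

# "Change": status this year minus the same measure for the prior year.
last_year = [(1, 1), (2, 3), (1, 2), (3, 3)]   # 2 of 4 improved
change = status - percent_improved(last_year)   # 75.0 - 50.0 = 25.0
```

A positive "change" means a larger share of students improved this year than last -- the acceleration the Dashboard is trying to capture.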
Here's the fine print (skip this paragraph if you want). By definition, this measurement is all about change, so it requires a baseline. Accordingly, the first time a student takes this test, his or her proficiency level has no influence on the Dashboard. Also, students who are "promoted" to Early Advanced, Advanced or RFEP may "count" on the Dashboard for up to two years. Financial incentives explain part of this fussiness at the top end of the scale. Under the rules of the Local Control Funding Formula, school districts may receive funding to support students that are learning English. Once they are reclassified as fluent, this support ends. Tests don't automatically determine reclassification of students; districts are supposed to consult with teachers and parents.
At this point you might be wondering how your school measures up when it comes to educating English Learners. When you visit the Dashboard and look up your school you might or might not get a clear answer. To protect privacy, the Dashboard shows data for a group only if it includes more than ten students, and assigns a performance level only to groups of thirty or more.
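The two reporting thresholds amount to a simple rule, which can be sketched as follows. The function name and labels are invented for illustration; the cutoffs (more than ten students for data, thirty or more for a performance level) come from the paragraph above.

```python
def dashboard_display(n_students):
    """What the Dashboard shows for a student group of a given size.

    Assumed cutoffs, per the text: data requires more than ten
    students; a performance level requires thirty or more.
    """
    if n_students <= 10:
        return "hidden"                    # too few students: nothing shown
    elif n_students < 30:
        return "data only"                 # data shown, no performance level
    return "data + performance level"
```

So a group of 25 EL students would have its numbers published but receive no performance color, while a group of 8 would not appear at all.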
For an easier way to get a sense of how your school compares, visit the Department of Education's 5x5 Placement Report for your district. This tool shows the performance of each school using the structure of the 5x5 placement grid that we described in the third post of this series. Below the 5x5 grid you will see a summary of the number of schools in each performance level, something like this (San Francisco Unified 2016 data):
The 5x5 placement grid shows the name of each school in the district in the appropriate box, reflecting its status and change. This view is useful for driving conversation -- you can easily identify schools in your district that are outperforming your own, which invites an obvious question: What are those schools doing differently?
Unfortunately, this grid view is bulky and does not print well. For a view that is easier to use and contains more information, find the "View Detailed Data" button in the upper corner above the grid. You will see a table that looks something like this:
Click the image above to jump to the table and view it live. Sort this table by performance color and you will have the basis for a robust conversation. (Note that for technical reasons charter schools are not displayed in this table. The Dashboard treats each charter school as a separate school district.)
The California School Dashboard is a work in progress, and it shows. In addition to the Dashboard itself, you should be aware of the lightning-fast summary view available at EdSource and the Department of Education's 5x5 Placement Report tool. All three tools use the same data, presented in different ways.