Most of the time, teachers work largely alone. In the absence of a strong Professional Learning Community, they receive only occasional, minimal feedback about their work. In many schools and districts, this feedback is based on a check-box exercise: the principal or a designated evaluator shows up with a clipboard, watches for a few minutes from the back of the room, makes a few marks on a form, and leaves. In most cases, any further observation and evaluation is rare – and similarly perfunctory.
More extensive observation is almost always a reflection of trouble, and may be designed to put pressure on the teacher to improve, or to quit. Some teacher contracts limit the number of times a principal may observe a teacher, or set rules that require the principal to provide advance notice for observation.
In 2009, the New Teacher Project criticized this perfunctory process in a widely-read report titled “The Widget Effect.” The report argues that “school systems treat all teachers as interchangeable parts, not professionals. Excellence goes unrecognized and poor performance goes unaddressed. This indifference to performance disrespects teachers and gambles with students’ lives.” Teachers unions, in turn, criticized the Widget Effect approach for placing too much reliance on the judgment of the school principal, or on over-interpretation of student test scores.
In California, districts are not required to consider student test scores when evaluating teachers.
In 2013, the National Council on Teacher Quality (NCTQ) released a detailed report comparing teacher evaluation practices and policies across the US states. A strong majority of states include student achievement (usually test scores) as an element of a teacher's evaluation. California is an exception.
In California, teacher evaluation is often seen as punitive. Extensive observation and evaluation is costly, so it is rare – and generally seen as a signal of trouble.
In 2015, Students Matter, an education advocacy group, filed a lawsuit (Doe v. Antioch et al.) arguing that school districts are compelled under existing California law (the Stull Act) to evaluate teachers in a way that includes student achievement data. In 2016, Superior Court judge Barry Goode ruled that the Stull Act is not specific: it leaves the manner and consequences of evaluation up to school districts.
An alternative approach, called Peer Assistance and Review (PAR), supports principals in some California districts. (Poway, a district in Southern California, has used PAR for many years.) In this system, districts invest in more frequent observation and evaluation, and try to make it beneficial to the teacher being observed. Underperforming teachers are assigned a coach and evaluated by a teacher panel. There is some evidence that this approach is effective in raising teacher performance. It also may be helpful in “coaching out” some teachers who might do better in a different line of work. If managed carefully, PAR can help provide the required documentation to support a formal dismissal when called for. Critics of PAR express concern that it can have the opposite effect, creating hurdles and obstacles to dealing with performance issues in a clear, effective way.
In California, few districts have implemented PAR or other constructive evaluation systems for teachers in part because they cost money. Districts have had little appetite to divert funds from general operations, and when it comes time to negotiate the contract, most teacher unions have preferred to advocate for education dollars to go toward salaries rather than support for evaluation. Attempts at the state level to change and standardize evaluation procedures for teachers have generally fallen flat. California's 2013 budget act changed the rules for education funding, giving districts greater flexibility over how they use funds. That may reopen the question, in some districts at least, of how to strengthen teacher evaluation systems.
Of course, the people in a school who are best positioned to know a teacher's strengths and weaknesses are the ones carrying backpacks: the students. According to the 2013 NCTQ report, student surveys are a required element of teacher evaluations in eight states. Many schools and districts are experimenting with student feedback in the search for alternatives to test scores as a way to understand teacher strengths and weaknesses. The 2010 National Teacher of the Year, Sarah Brown Wessling, uses a version of the Tripod survey (see image below) to gather student feedback. Her version of the survey also includes space for students to write their own responses.
Teachers can "opt out" of being evaluated by students.
In 2010, the California Association of Student Councils (CASC), a statewide organization of student leaders, argued that teacher evaluations should include feedback from students. The student organization enthusiastically advocated for legislation to facilitate student involvement in teacher evaluation and celebrated passage of SB1422, which appeared to pave the way. No funding was provided for such evaluations, however, and student participation in teacher feedback remains rare. Ironically, the amended legislation that eventually passed appears to prohibit districts from making student evaluations a required practice: instead, it guarantees each teacher an unlimited option to opt out. This is a good example of how advocacy can backfire.
Another alternative proposed but not yet tried (readers, please add a comment if you know otherwise) is for higher-grade teachers to evaluate lower-grade teachers based on the preparedness and work of the students they teach.
The demand for meaningful teacher evaluation systems gained urgency during the Great Recession. When the budget requires laying off teachers, who should be the first to go? The lack of effective evaluation systems for teachers made it difficult for school leaders to argue that they should be able to use judgment in deciding who should go and who should stay. Professor Eric Hanushek, who strenuously promotes the idea of more judgment in teacher retention, argues that targeted layoffs should be a key strategy for school improvement. He says, "If you eliminate the bottom five percent of teachers in terms of effectiveness, or if you replaced five to eight percent of the worst teachers with an average teacher, U.S. achievement would rise to somewhere between Canada and Finland. A small number of teachers has a really big impact on the achievement of kids."
A 2013 PACE/Rossier survey of California voters found significant popular support for this idea: "when asked what would have the most positive impact on public schools, the top answer was 'removing bad teachers from the classroom' (43 percent), followed by 'more involvement from parents' (33 percent), and 'more money for school districts and schools' (25 percent)." Part of what makes this idea difficult is that it implies certainty about who, exactly, falls in that list of worst teachers. Test scores are generally lowest in places where poverty is greatest, and it can be challenging to untangle effectiveness from circumstance.
"Value added" analysis attempts to isolate teacher effects from other influences. It provides a statistical method to examine whether students are scoring as expected, given their circumstances, or differently. Over time, if a teacher's students tend to come out of their classes with more improvement in their scores than the model predicts, the teacher might be doing something worth celebrating. Of course, the reverse is also true. The methodology for this analysis is steadily improving, but it remains statistically subtle. Harvard Professor Raj Chetty studies this topic, and his explanations are useful.
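The basic idea can be illustrated with a toy calculation. The sketch below uses entirely made-up data and a deliberately simplified model: it predicts each student's current score from the prior year's score alone, then averages the prediction errors ("residuals") by teacher. Real value-added models control for many more student, classroom, and school factors, so treat this only as a minimal illustration of the residual-averaging idea.

```python
# Toy illustration of "value added" analysis.
# All data here is simulated; real models are far more elaborate.
import numpy as np

rng = np.random.default_rng(0)

# Simulated students: a prior-year score, a teacher assignment (0 or 1),
# and a current score in which teacher 1's classroom gets a built-in boost.
n = 200
prior = rng.normal(500, 50, n)
teacher = rng.integers(0, 2, n)
current = prior + rng.normal(0, 20, n) + 10 * (teacher == 1)

# Step 1: predict current scores from prior scores with a least-squares line.
slope, intercept = np.polyfit(prior, current, 1)
expected = slope * prior + intercept

# Step 2: a teacher's estimated "value added" is the average residual
# (actual minus expected) across that teacher's students.
residual = current - expected
for t in (0, 1):
    print(f"teacher {t}: mean residual = {residual[teacher == t].mean():+.1f}")
```

Because the regression absorbs the overall average, the simulated teacher 1 ends up with a positive mean residual and teacher 0 with a negative one: the model "detects" the built-in boost. The hard part in practice is ruling out the other explanations for such a gap.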
Surveys of voters, parents and teachers tend to agree that the system is failing to take action when teachers are ineffective; inaction on this issue became one of the core arguments in the 2014 case Vergara v. California.
How can anyone become better at their work in the absence of meaningful feedback?
Teacher evaluation should be about much more than dismissal decisions. For the vast majority of teachers, evaluations should be about professional improvement. After all, how can anyone become better at their work in the absence of meaningful feedback? Greatness By Design, a California task force report on how to support outstanding teachers, stresses that educators be evaluated against professional standards and that evaluation be informed by data from a variety of sources, including measures of educator practice and student learning and growth.
Pivot Learning Partners (a Full Circle Fund grant recipient) is one of many organizations that have explored ways to shift the emphasis of evaluation from rewards and punishment to professional improvement. Their work indicated that teachers seem to value professional feedback when it isn't couched in high-stakes terms. This conclusion has been echoed by large-scale surveys of teachers by the Gates Foundation, which in 2009 began the MET project. The project was a major effort to "build and test measures of effective teaching to find out how evaluation methods could best be used to tell teachers more about the skills that make them most effective and to help districts identify and develop great teaching." In the video below from TED Talks Education with John Legend, Bill Gates argues for significant investment in teacher development.
Student surveys, incidentally, were one of the methods that the MET research found to be valid for measuring teachers’ effectiveness. The Race to the Top program brought significantly increased focus to the question of how to evaluate teacher performance. Many pioneering schools and districts took inspiration from Charlotte Danielson's Professional Practice framework, which defines a rubric for evaluating and coaching teachers in order to make evaluations more consistent and focused. Others have adapted, tweaked and improved on Danielson's work, and many rubrics can be found online at the National Council on Teacher Quality.
Perhaps the most thoroughly developed program for teacher evaluation and improvement is Washington, D.C.'s program (called IMPACT). As testing has become a more important component of school management and accountability, student test results have become a component of teacher evaluation in many states, though the shift to Common Core standards has caused California, Washington D.C. and others to pause in implementing any consequences associated with the testing component of evaluation. The topic continues to be a subject of debate in California.