A critical study of the LA Times teacher ranking project was released today by the National Education Policy Center.
The question of teacher rankings has particular relevance to Georgia, which, under its Race to the Top grant, will begin assigning teachers effectiveness grades based in part on student test scores. There is no indication yet whether those grades will be made public, a decision likely to fall to the state Legislature.
In explaining its controversial teacher ranking system, the LA Times said:
About 6,000 Los Angeles elementary school teachers and 470 elementary schools are included in The Times’ database of “value-added” ratings. Third-, fourth- and fifth-grade teachers who taught at least 60 students from the 2002-03 through 2008-09 academic years were evaluated in the Times analysis. Most of Los Angeles Unified School District’s elementary schools are included. Test scores for charter schools that do not report directly to the district were not available.
A teacher’s value-added rating is based on his or her students’ progress on the California Standards Tests for English and math. The difference between a student’s expected growth and actual performance is the “value” a teacher added or subtracted during the year. A school’s value-added rating is based on the performance of all students tested there during that period.
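The calculation the Times describes above (actual performance minus expected growth, aggregated per teacher) can be sketched in a few lines of code. This is a deliberately minimal illustration of the idea, not the Times' actual statistical model, which controls for many student-level factors; the function name and the numbers are hypothetical.

```python
# Minimal sketch of a value-added score: each student's actual test score
# minus the score a model expected for that student, averaged over the
# teacher's students. A positive result means the teacher "added value"
# under this (highly simplified) definition.

def value_added(students):
    """students: list of (expected_score, actual_score) pairs."""
    diffs = [actual - expected for expected, actual in students]
    return sum(diffs) / len(diffs)

# Hypothetical example: three students
scores = [(300, 310), (280, 275), (320, 335)]
print(round(value_added(scores), 2))
```

The debate described in the report below turns on exactly what this simplification hides: the "expected" score is itself a model estimate, and the averaged differences are noisy from year to year.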
Teacher rankings are also in the news this week in New York, where Gov. Cuomo is trying to carve a middle ground between New York City’s push to make rankings accessible to parents and the teacher unions’ desire to shield the information, which they consider unreliable and inflammatory.
The Wall Street Journal reports (this is an excerpt; please read the full piece):
The governor introduced a bill that would let parents view records about the performance of their children’s currently assigned teachers. The proposal would let the general public view overall school performance data, but it would be scrubbed of teachers’ names.
Mr. Cuomo’s proposal would let the public see how schools are doing across various cross-sections: by grade, poverty level, subject and other characteristics. School districts would also have to disclose how many teachers and principals moved between levels from one year to the next.
But the legislation would prohibit schools from releasing data to the general public that could identify any individual teachers and principals. In February, data was released that ranked about 18,000 New York City teachers and identified them, angering teachers unions.
Teachers unions have been pushing to protect their members’ evaluations under state law, similar to those of police officers and firefighters. They argued that the evaluation system is imprecise and inaccurate. Fighting back hard was New York City Mayor Michael Bloomberg, who has argued that the public has a right to know how teachers are doing, and that parents are smart enough to parse complicated evaluation results. Mr. Cuomo has tried to position himself in the middle, saying he understood privacy concerns but thought parents should have access.
Here is the official release on the NEPC study of the LA Times rankings:
Over the past two weeks, court hearings have been held in a lawsuit intended to force the Los Angeles Unified School District to use student achievement data to rate public school teachers. A bigger question: Do the results of value-added analyses of the school district’s student achievement data really tell us anything useful about a teacher’s performance?
According to a new report, released today by the National Education Policy Center (NEPC), housed at the University of Colorado Boulder, the answer is probably “no.” After carefully reexamining the data used by the Los Angeles Times to generate rankings published by the newspaper in May 2011, the new report concludes that the data cannot be reliably used to distinguish among teachers as the Times attempted to do. According to the NEPC’s findings, the Los Angeles Times’ attempt to use value-added models to generate rankings oversimplifies the science behind value-added analysis, misleading the public. Simplistic use of the rankings would therefore likely have a negative impact on education debate and policy.
Beginning in 2010 and again in 2011, the Times commissioned a social scientist to produce a “value-added” assessment of the test performance of thousands of Los Angeles teachers. Based on his analyses, the Times derived a single numerical score for each teacher on a five-point scale that ranged from “least effective” to “most effective.”
In the resulting articles, names and numerical rankings of thousands of teachers were published. Parents were directed to the Times website to access the online database to discover the effectiveness of their children’s teachers.
“The Los Angeles Times has added no value to the discussion of how best to identify and retain the highest quality teachers for our nation’s children,” stated Alex Molnar, a research professor at the University of Colorado Boulder School of Education and NEPC’s director of publications. “Indeed, it has made things worse. Based on this flawed use of data, parents are enticed into thinking their children’s teachers are either wonderful or terrible.”
Dr. Catherine Durso, a statistical expert in the University of Denver’s Department of Computer Science and author of the NEPC report, “An Analysis of the Use and Validity of the Test-Based Teacher Evaluations Reported by the Los Angeles Times: 2011,” cautioned that “the large variability of the scores must be taken into account in any use of the value-added results.”
For example, the Times’ teacher ratings are not stable from year to year. Durso found in particular that when teachers change schools, their rankings are likely to change. Furthermore, there is strong evidence that the teacher-linked effects derived from the value-added models include contributions to student learning from factors outside the classroom – factors not attributable to the teachers (e.g., those related to family and community).
“The Los Angeles Times editors and reporters either knew or should have known that their reporting was based on a social science tool that cannot validly or reliably measure what they set out to quantify,” Molnar said. “Yet in their ignorance or arrogance they used it anyway, to the detriment of children, teachers, and parents.”
–From Maureen Downey, for the AJC Get Schooled blog