Tennessee is ahead of Georgia in developing a teacher evaluation system that considers student outcomes, a factor in its early receipt of a federal Race to the Top grant. (Georgia won its $400 million grant in round two.)
In response to concerns about the Tennessee Value-Added Assessment System, the state undertook a study that was released this week.
Here are highlights of the Tennessee study, but those of you interested in this issue — and how it may play out in Georgia, which is poised to roll out its new teacher eval system this year — ought to read the full report:
In July 2011, Tennessee became one of the first states in the country to implement a comprehensive, statewide educator evaluation system based on student outcomes. The enabling legislation established the parameters of the new teacher and principal evaluation system and committed the state to implementation during the 2011-12 school year.
The act required 50 percent of the evaluation to be based on student achievement data: 35 percent on student growth as represented by the Tennessee Value-Added Assessment System or a comparable measure, and the other 15 percent on additional measures of student achievement adopted by the state Board of Education and chosen through mutual agreement by the educator and evaluator. The remaining 50 percent of the evaluation is determined through qualitative measures such as teacher observations, personal conferences and review of prior evaluations and work.
Implementation of the evaluation system began at the start of the 2011-12 school year. The department made a concentrated effort to solicit and encourage feedback, meeting with teachers and administrators across the state.
While administrators continued to tout the system’s impact on instruction, the public discussion about teacher evaluation began to detract from the real purpose of the evaluation system: improving student achievement. In response, Gov. Haslam, supported by legislative leadership, tasked the State Collaborative on Reforming Education with conducting an independent review of the system through a statewide listening and feedback process and producing a report to the state Board of Education and department outlining a range of policy considerations.
Through our feedback gathering process, common themes have emerged:
• Administrators and teachers — including both supporters and opponents of the evaluation model — believe the TEAM rubric effectively represents high-quality instruction and facilitates rich conversations about instruction.
• Administrators consistently noted that having school-wide value-added scores has led to increased collaboration among teachers and a higher emphasis on academic standards in all subjects.
• Administrators and teachers both feel too many teachers have treated the rubric like a checklist rather than viewing it as a holistic representation of an effective lesson, and both groups feel additional training is needed on this point.
• Teachers in subjects and grades that do not yield an individual value-added score do not believe it is fair to have 35 percent of their evaluation determined by school-wide scores.
• Implementation of the 15 percent measure has not led to selection of appropriate measures; choices are too often dictated by teachers' and principals' perceptions of which measure would generate the highest score rather than by which would most accurately reflect achievement.
• Administrators consistently noted the large amount of time needed to complete the evaluation process. In particular, administrators want to spend less time observing their highest performing teachers and more time observing lower performing teachers. Additionally, they feel the mechanics of the process (e.g., data entry) need to be more streamlined and efficient.
• Both administrators and teachers consistently felt better about the system as the year progressed, in part due to familiarity with the expectations and because of changes that allowed for fewer classroom visits during the second semester.
• Local capacity to offer high-quality feedback and to facilitate targeted professional development based on evaluation results varies considerably across districts.
The 2011-12 school year saw tremendous progress for public education in Tennessee, as measured by the most significant outcome – student achievement. Test scores improved, in aggregate, at a faster rate than in any previously measured year. Math and science scores, in particular, increased significantly, moving students forward against rigorous, nationally benchmarked standards. To put this into perspective, 55,000 more students are at or above grade level in math than in 2010; 38,000 more students are at or above grade level in science. This growth and achievement represent real change in the academic trajectory and potential life options for Tennessee students and can be the very real difference between long-term success and failure.
Teacher observation results from year one are encouraging and demonstrate more meaningful differentiation than ever before. However, they also indicate that as a state, we must more accurately and consistently reflect the true spectrum of teacher performance. While there was concern among educators in the early stages of training and implementation that few teachers would receive observation scores demonstrating performance exceeding expectations, results show that more than 75 percent of teachers scored a 4 or a 5 (scores demonstrating performance exceeding expectations), while less than 2.5 percent scored a 1 or a 2 (scores demonstrating performance below expectations). These scores dispel the myth that teachers cannot receive high scores on the observation rubric, but considered alongside student achievement results, they demand reflection and thoughtful consideration. For example, while teachers exceeding expectations on observations tended also to receive scores of 4 or 5 based on student achievement growth, this alignment did not hold for teachers performing at the lowest levels on student outcomes.
The report outlines policy considerations in four areas:
• Measurement of the quantitative impact on student performance. This includes an examination of both the 35 percent of evaluation scores driven by TVAAS and the 15 percent achievement measure selected by teachers and principals. In particular, we must ensure that as many teachers as possible have effective means of measuring impact on students, and we must consider what additional weight the quantitative portion of the evaluation should be given for teachers who do not have access to individual metrics.
• Changes to the qualitative rubric. This area focuses on ways to maintain the many pieces of the rubric that allow teachers and administrators to have strong discussions about instruction, while streamlining areas that were redundant or less effective in facilitating conversations.
• Increases in process efficiencies. We want to ensure that administrators are spending their time on observations and on feedback conversations, not on entering data into systems. Additionally, administrators should spend time with the teachers who need the most help.
• Management of district implementation. We must ensure that districts apply the evaluation system fairly, while still allowing for significant local innovation. We must also ensure that districts provide robust feedback and professional development to teachers who currently lack the skills to advance student achievement effectively.
–From Maureen Downey, for the AJC Get Schooled blog