We already have national standards; national tests aren't far behind. Today, grants were awarded to two groups of states to develop them.
Having already adopted the Common Core State Standards, Georgia is a partner state in the group that will develop and pilot a series of assessments administered throughout the year, with results averaged into one score for accountability purposes, a move away from a single high-stakes test given on a single day. I assume these tests would replace the CRCT in Georgia once they are piloted and accepted.
According to the US DOE:
In an effort to provide ongoing feedback to teachers during the course of the school year, measure annual student growth, and move beyond narrowly focused bubble tests, the U.S. Department of Education has awarded two groups of states grants to develop a new generation of tests. The new tests will be aligned to the higher standards that were recently developed by governors and chief state school officers and have been adopted by 36 states. The tests will assess students’ knowledge of mathematics and English language arts from third grade through high school.
The grants, totaling approximately $330 million, are part of the Race to the Top competition and will be awarded to the Partnership for Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium (SBAC) in the amounts of approximately $170 million and $160 million, respectively.
“As I travel around the country the number one complaint I hear from teachers is that state bubble tests pressure teachers to teach to a test that doesn’t measure what really matters,” said U.S. Secretary of Education Arne Duncan. “Both of these winning applicants are planning to develop assessments that will move us far beyond this and measure real student knowledge and skills.”
The Partnership for Assessment of Readiness for College and Careers is a coalition of 26 states including Alabama, Arkansas, Arizona, California, Colorado, the District of Columbia, Delaware, Florida, Georgia, Illinois, Indiana, Kentucky, Louisiana, Massachusetts, Maryland, Mississippi, North Dakota, New Hampshire, New Jersey, New York, Ohio, Oklahoma, Pennsylvania, Rhode Island, South Carolina and Tennessee. The SMARTER Balanced Assessment Consortium is a coalition of 31 states including Alabama, Colorado, Connecticut, Delaware, Georgia, Hawaii, Iowa, Idaho, Kansas, Kentucky, Maine, Michigan, Missouri, Montana, North Carolina, North Dakota, New Hampshire, New Jersey, New Mexico, Nevada, Ohio, Oklahoma, Oregon, Pennsylvania, South Carolina, South Dakota, Utah, Vermont, Washington, Wisconsin, and West Virginia. The assessments will be ready for use by the 2014-15 school year.
“Given that these assessment proposals, designed and developed by the states, were voluntary, it was impressive to see a vast majority of states choose to participate,” said Duncan.
The PARCC coalition will test students’ ability to read complex text, complete research projects, excel at classroom speaking and listening assignments, and work with digital media. PARCC will also replace the one end-of-year high-stakes accountability test with a series of assessments throughout the year that will be averaged into one score for accountability purposes, reducing the weight given to a single test administered on a single day and providing valuable information to students and teachers throughout the year.
The SMARTER coalition will test students using computer adaptive technology that will ask students tailored questions based on their previous answers. SMARTER will continue to use one test at the end of the year for accountability purposes, but will create a series of interim tests used to inform students, parents, and teachers about whether students are on track.
For both consortia, these periodic assessments could replace already existing tests, such as interim assessments that are in common use in many classrooms today. Moreover, both consortia are designing their assessment systems with the substantial involvement of experts and teachers of English learners and students with disabilities to ensure that these students are appropriately assessed.
The parameters of the competition were informed by 10 public and expert input meetings that the Department hosted across the country last winter. Forty-two invited assessment experts joined nearly 1,000 members of the public and officials from 37 states plus Washington D.C. for over 50 hours of public and expert input on critical questions about assessment and assessment design.
The winning applicants were selected by a panel of peer reviewers. Due to the highly technical nature of the Race to the Top Assessment Competition, the Department sent invitations to two groups of individuals to serve as peer reviewers: 1) experts who served as panelists for the Race to the Top Assessment public meetings (nominated by the director of the National Academies of Sciences’ Board on Testing and Assessment, by the chair of the U.S. Department of Education’s National Technical Advisory Council, and/or by Department experts); and 2) persons experienced as peer reviewers in the Title I review of state assessment systems (all recruited on the basis of assessment expertise). The Department specifically solicited individuals with experience and expertise in K-12 assessment design, development, implementation, and use for instructional improvement, as well as those with expertise in complex organizational and project leadership and management.