Georgia DOE: Improvement on 7 of 8 End-Of-Course Tests

From DOE this morning:

The results of the spring 2012 Georgia End-of-Course Tests show student improvement in seven of eight comparable tests. In comparing the results to spring 2011, students demonstrated improvement in U.S. History, Economics, Biology, Physical Science, Ninth Grade Literature, American Literature, and Mathematics I. The percentage of students who met or exceeded standards for Mathematics II decreased by one percentage point.

In April 2011, the State Board of Education approved a plan to phase out the Georgia High School Graduation Tests, beginning with students who entered ninth grade for the first time in the 2011-2012 school year. For these students, the EOCT counts as 20% of the final grade, an increase from the previous 15% level.

“It is encouraging to me to see student performance increase in the large majority of the End-of-Course Tests,” said State School Superintendent Dr. John Barge. “End-of-Course Tests are more rigorous than the Georgia High School Graduation Tests, so increases are further testament to the great job our teachers are doing delivering the Georgia Performance Standards to students in a way that they are grasping.”

Beginning this school year, End-of-Course Tests will be used as a factor of high school success for accountability purposes on the new College and Career Ready Performance Index.

Details from the EOCT Report:

• Sixty-eight percent (68%) of Georgia’s students met or exceeded the standard for U.S. History. This is an increase of two (2) percentage points when compared to spring 2011.

• Seventy-seven percent (77%) of Georgia’s students met or exceeded the standard for Economics/Business/Free Enterprise. This is an increase of five (5) percentage points when compared to spring 2011.

• Seventy-three percent (73%) of Georgia’s students met or exceeded the standard for Biology. This is an increase of three (3) percentage points when compared to spring 2011.

• Seventy-seven percent (77%) of Georgia’s students met or exceeded the standard for Physical Science. This is an increase of one (1) percentage point when compared to spring 2011.

• Eighty-four percent (84%) of Georgia’s students met or exceeded the standard for Ninth Grade Literature & Composition. This is an increase of two (2) percentage points when compared to spring 2011.

• Eighty-nine percent (89%) of Georgia’s students met or exceeded the standard for American Literature & Composition. This is an increase of one (1) percentage point when compared to spring 2011.

• Sixty-five percent (65%) of Georgia’s students met or exceeded the standard for Mathematics I. This is an increase of four (4) percentage points when compared to spring 2011.

• Fifty-four percent (54%) of Georgia’s students met or exceeded the standard for Mathematics II. This is a decrease of one (1) percentage point when compared to spring 2011.

• Sixty-three percent (63%) of Georgia’s students met or exceeded the standard for GPS Algebra. This test is new and was not administered last year.

• Seventy-four percent (74%) of Georgia’s students met or exceeded the standard for GPS Geometry. This test is new and was not administered last year.

More Information About End-of-Course Tests

Any student receiving credit for a course that requires an EOCT must participate. The test serves as the final exam for the course and contributes 15% or 20% to each student’s final course grade as provided for in State Board Rule.

The purposes of the EOCTs are to assess student achievement of the Georgia Performance Standards (GPS) in the designated core courses and to provide data to assist in the improvement of instruction and learning. The EOCTs also provide data to evaluate the effectiveness of instruction at the school, system, and state levels.

The EOCTs assess a sample of the knowledge and skills that educators agree comprise a complete curriculum for each course. Georgia students have opportunities to learn – and are expected to master – much more than the tests address. The Georgia Performance Standards are the source of the knowledge and skills assessed on the End-of-Course Tests in the following courses:

English Language Arts

- Ninth Grade Literature and Composition

- American Literature and Composition


Mathematics

- Mathematics I: Algebra/Geometry/Statistics

- Mathematics II: Geometry/Algebra II/Statistics

- GPS Algebra

- GPS Geometry

Science

- Biology

- Physical Science

Social Studies

- United States History

- Economics/Business/Free Enterprise

Student performance is categorized into three levels: Does Not Meet Standard, Meets Standard, and Exceeds Standard.

–From Maureen Downey, for the AJC Get Schooled blog

30 comments

Atlanta Mom

July 3rd, 2012
11:48 am

What does it mean “does not meet standard”? Is this a raw score of under 60%? Something less?


July 3rd, 2012
11:54 am

Does it mean that I have become too cynical about education in Georgia when I see this headline and immediately assume that the test has been further dumbed down and the cut scores dropped rather than that Georgia’s students have improved?


July 3rd, 2012
12:02 pm

Maureen, not to hijack this blog but have you considered blogging on the article that appeared in the weekend AJC about tightening up the unemployment benefits policy? I did not realize that some private school teachers along with cafeteria workers were getting unemployment over the summer since they were not working.


July 3rd, 2012
12:04 pm

Regarding this topic, I need greater context so I can determine the validity of this information. Do we know the actual ‘grades’ students got on the test, with assurances that what we are looking at is not on a curve? Reading what I have about the use of cut scores has made me a skeptic.


July 3rd, 2012
12:05 pm

But can they make change? If you can’t make change, which is the most basic of math skills, how will they ever get a job?

William Casey

July 3rd, 2012
12:09 pm

When I retired in 2006, the American History EOCT was absurdly easy. Perhaps it’s more rigorous now, but that’s not the way to bet.

usually lurking

July 3rd, 2012
12:11 pm

My children count on the EOCT being easy, and love that it counts for such a large part of their grade. As a parent, not so much. EOCT is the final exam in some AP classes too – what a joke that is.

Proud Teacher

July 3rd, 2012
12:25 pm

This just means that more people are teaching to the test. I have had to cut down on my own curricula in order to accommodate all of the standards and mandates from everywhere else. My classes were more challenging and required more effort by the students until the state and feds handed down their standards and the administration became obsessed with numbers from the state and feds that told them what my kids were learning. Sadly, the EOCT does not really address all that the students are not being taught in order to keep happy numbers for people that I don’t consider important to their education. Oh, well . . . The students are still worth the effort. I just wish I didn’t have to fight for them so hard against the unseen forces in public education.

Ole Guy

July 3rd, 2012
12:27 pm

Once again, we see a celebration of mediocrity…(DOE message of exclamation, accompanied by banners unfurling in the breeze to the accompaniment of trumpets): “In 2011, our studies indicate that Ga schools experienced an X% drop in the number of kids who didn’t ‘fall into the drink’ while visiting the reading room. This achievement signifies great strides in education!”

WHEN are we going to start seeing REAL achievements? An increase in college graduation rates, perhaps even a drop…to zero…in the number of HOPE scholars who require remediation courses, and an increase in those who actually graduate ON TIME.

I am quite certain there are some who grow weary reading these stories of “great strides” in Ga public education. These celebrations of mediocrity are tantamount to heaping praise upon one for simply doing a job which is expected. Let us start to contemplate the possibility of raising the bars of expectation to new levels. The false motivation of celebrating mediocrity only goes so far.

Beverly Fraud

July 3rd, 2012
1:20 pm

Why won’t Matt the Mouth Organ address the FAIR and LEGITIMATE questions posed about the new writing standards that were raised awhile back?

THAT would be a lot more useful than the above sound and fury that signifies NOTHING.

Attentive Parent

July 3rd, 2012
1:26 pm


Have you gotten an official comment on the EdWeek story on Georgia being put on high risk status related to teacher eval?

I read the letter, and though I’m usually good at parsing these things, it left me with a huh?

I guess it was a threat to remind Georgia to do what is wanted or forgo $33 million.


July 3rd, 2012
1:28 pm

Usually lurking –

AP students have (or are supposed to have) just taken an AP exam and they are required to take an EOCT in classes which require it. What is your issue with using the EOCT as a final exam? Would you like them to take ANOTHER test on top of those 2 they’re already taking?

Maureen Downey

July 3rd, 2012
1:34 pm

@Attentive, Story in the works.

Glad To See Improvement

July 3rd, 2012
1:35 pm

Personally, I am glad to see improvement. It beats a drop in scores. Remember, we are working with the same students. Public schools don’t get to pick their students based on ability. They teach whoever comes.


July 3rd, 2012
1:37 pm

I believe that the high EOCT results are deceptive. All Forsyth County 8th graders took the Physical Science EOCT this year. My 8th grader brought home an EOCT score report (I’ve never seen one from my high schooler) showing a score of 92. I was very surprised to discover, upon looking at the score report, that he only got around 79% of the problems correct. It makes me wonder how many problems a student needs to answer correctly in order to “meet” the expectations and pass the test.


July 3rd, 2012
1:46 pm

The EOCT is a joke—overinflated and overvalued as an authentic assessment tool.

Jerry Eads

July 3rd, 2012
2:13 pm

ATL Mom, I’ll try to do a short version of how such tests are built. Those of you who read my tripe will note that this is the 3rd time I’ve done this in the last few days, but this may be more thorough.

The “pass level” for any one test at any one grade is determined by a small collection of people who follow a set of rules for judging the minimum amount that a student should know in a certain subject at a certain grade, using a testing-industry-standard procedure known as “modified Angoff,” named after the guy who came up with the idea (somebody later didn’t like it the way it was, so it was modified). That “amount of knowledge” is determined by the group of “experts” deciding whether this imaginary minimally capable student should know or not know each question on an example test. A decent one-page description is at

Although it’s apparently pretty hard for people to understand, two things (actually more) determine how many questions a student gets correct on a test: (1) how much he or she knows about the subject and (2) how hard the questions are. While we’re not anywhere NEAR as good as people assume we (test developers) are, we’re fairly good at making questions harder or easier. We can pretty easily make a test for this minimally competent student on which they can get, say, ten correct out of 60 or 50 out of 60. The most accurate four-option multiple-choice tests are such that the AVERAGE (middle-knowledge) student gets about 55% of the questions correct. The pass level for the minimally competent student should be a bit lower than that (because by definition they’re below the middle), and indeed the state’s minimum competency tests (CRCT, EOCT, etc.) are built with that in mind (someone noted that the pass % of questions on one test was 48%).
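[Editor’s note: the modified-Angoff arithmetic described above can be sketched in a few lines. All panel ratings below are invented for illustration; real panels use many more raters and items, plus discussion rounds.]

```python
# Illustrative sketch of a modified-Angoff cut score: each panelist rates,
# item by item, the probability that a minimally competent student answers
# correctly; a panelist's ratings sum to a raw-score cut, and the panel's
# cuts are averaged. All numbers here are hypothetical.

def angoff_cut_score(ratings):
    """ratings: one list of per-item probabilities (0..1) per panelist."""
    panelist_cuts = [sum(items) for items in ratings]
    return sum(panelist_cuts) / len(panelist_cuts)

ratings = [
    [0.6, 0.7, 0.5, 0.8],  # panelist 1's estimates for a 4-item test
    [0.5, 0.6, 0.4, 0.7],  # panelist 2
]
cut = angoff_cut_score(ratings)  # ~2.4 raw-score points out of 4
```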

Now, I’ve left a LOT out, and I have huge concerns about the utility of these tests, but the technical testing folks at DOE and their contractors do a pretty decent job of putting together several different tests for each subject at each grade each year, even though they face terrifying production schedules, severe design limitations and (even at $25 million a year if that’s accurate), very limited resources for the task at hand.

The way they’re EQUATED (made so the pass level is the same for each test) (that’s how we make up the “scale scores”) is usually done these days by a fancy statistical procedure known as the Rasch model – again after the guy who came up with it. I won’t even begin to try to explain that one here.
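[Editor’s note: as a rough illustration only, not the state’s actual scaling, the Rasch model Jerry mentions says the chance of a correct answer depends only on the gap between a student’s ability and an item’s difficulty, both placed on one common scale.]

```python
import math

# Toy one-parameter (Rasch) logistic model: probability of a correct
# response as a function of ability (theta) minus item difficulty (b).

def rasch_p(theta, b):
    """Probability a student of ability theta answers an item of difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

rasch_p(0.0, 0.0)  # 0.5: ability matches difficulty, so even odds
rasch_p(1.0, 0.0)  # ~0.73: an abler student on the same item
```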

That doesn’t mean I like the tests. I think minimum competency testing such as this state does – and for fifteen years I helped build for another state – has very little instructional value. I’m ecstatic that the state superintendent and board are going to phase out at least the EOCTs.

Finally, some of the increases in pass rates this year may be meaningful (even if it’s just a function of even further narrowed curriculum), but even with so many students, it’s possible that a 1% change in pass rates could be nothing more than the error in equating last year’s tests with this year’s tests. Like I said earlier, we’re pretty good at this, but we’re nowhere near as good as people seem to think we are, which is one of the main reasons we shouldn’t be depending on student testing to rate teachers.

Good Mother

July 3rd, 2012
4:44 pm

How do GA kids compare to kids in other states? Until we have a standardized national test, we won’t know so all of these “we are improving measurements” are meaningless.

Atlanta Mom

July 3rd, 2012
5:09 pm

Jerry,thanks for the information. Sorry I missed the previous two explanations. Here’s a simple question for simple minds (and I am referring to my own mind).
When these tests are distributed to the schools, has the number of correct answers needed to “pass” or “exceed” already been determined? Based on what you’ve said, that seems to be the case.

Jerry Eads

July 3rd, 2012
5:42 pm

GM, we had perfectly usable comparative tests that provided us that information. The minimum competency “movement” pretty much destroyed the market for those tests. If done well (we’ll wait and see), the PARCC tests for CC could provide us similar information.

AM, hope it helped. It’s complicated stuff – like in any profession, the methods are understood rarely by those outside (think about designing airplanes, for example). Unfortunately, if there’s a flaw in an airplane, people get killed, and we notice pretty quickly. With tests, we often get away with a lot because people figure we’re measuring what they want us to measure, and that changes in performance are simply a function of changes in students. Neither of those assumptions is reasonable without a LOT of evidence, hardly ever provided in understandable terms.

The short answer to your question is kinda :-) . Usually SOME questions from the previous year are put into the test for this year, and the known performance from last year is used to help determine how hard the questions are this year. That gives us the difficulty of the test this year and enables us to set the cut scores this year. There are variations on the theme given the resources and time, but my guess is that they’re still doing it this way for the current tests.
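[Editor’s note: the anchor-item idea above can be sketched as a toy calculation. Real equating uses IRT models rather than raw means, and every proportion below is invented.]

```python
# Toy common-item linking: anchor items appear on both years' forms, so a
# drop in their proportion-correct suggests this year's form played harder,
# which in turn shifts the raw-score cut needed to pass.

def difficulty_shift(last_year, this_year):
    """Each argument: proportion-correct on the same anchor items."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(last_year) - mean(this_year)

shift = difficulty_shift([0.70, 0.60, 0.80], [0.65, 0.55, 0.75])
# shift ~0.05: anchors played about 5 points harder this year
```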

Jerry Eads

July 3rd, 2012
5:46 pm

GM, NAEP also purports to do this (cross-state comparisons).

Jerry Eads

July 3rd, 2012
6:40 pm

Oy. I fear Georg would roll over in his grave now that I re-read that brief explanation, but it’s close enough to get the point across. The model is, indeed, pure genius. It enabled us more accurately than ever before to link performance from one test to another. There’s actually a much more complex model that looked on paper like it would be even better (if you must ask, the 3-parameter latent trait model), but in practice it’s not enough of an improvement (and subject to other errors) to warrant replacing the Rasch single parameter model.


July 3rd, 2012
7:08 pm

Jerry-unless I’m mistaken, the state board is phasing out the graduation tests, not the EOCTs. I’m assuming that’s what you meant though.


July 3rd, 2012
8:05 pm

PROUD TEACHER writes:

12:25 pm

This just means that more people are teaching to the test. I have had to cut down on my own curricula in order to accomodate all of the standards and mandates from everywhere else.

Kinda makes me proud too!! 7/4/2012

N. GA Teacher

July 3rd, 2012
8:16 pm

The EOCT is a pretty stringent measure of a student’s knowledge at one point in time but the REAL story here is NOT the validity of the test. The REAL story lies in students’ very LOW motivation to study for a test that only counts 15 to 20%, particularly when looking forward to Christmas or summer break. Even with a miserable grade on the EOCT, students can still pass the CLASS and move on.


July 3rd, 2012
8:42 pm

” The EOCTs also provide data to evaluate the effectiveness of instruction at the school, system, and state levels. ”

How? What if I have a class of students who just do not understand math II, no matter how hard I try. I guess my pay will be based on this.

Atlanta Mom

July 4th, 2012
12:33 pm

Thank you for your explanations. Indeed, it makes more sense to me now.

Jerry Eads

July 4th, 2012
5:31 pm

MAM, I thought I’d read here just a bit ago that they were going to also trash the EOCTs. That would have been big news. Oh well. It was a nice fantasy for a day!
STS, Yep, you personally see it and your experience is reflected in every decent piece of research out there on the issue. So much for “research-based” decision-making.
NGaT, EOCTs are still nothing but minimum competency tests. They only tell you whether kids passed at a level FAR below “average.” They are not (except for the also arbitrarily assigned “pass+” points) designed to tell you anything about your students’ progress. I cannot even begin to imagine most people are satisfied with knowing only that their kid is above or below, say, the 20th percentile.
TT – Nope, doesn’t help you much, does it.
AM, Happy to be here. Look forward to agreeing with you, arguing with you, and learning with you here.
THANKS Maureen.


July 6th, 2012
10:15 am

@Proud Teacher

This year’s cut score was lower than last year’s cut score, for at least the Math I EOCT. (Yes, we Math teachers have the capability to determine the scoring algorithms used in a one-to-one mapping, invertible system of equations.) No, it does not mean more teachers are teaching to the test, or being more successful in teaching to the test.

Jerry Eads

July 6th, 2012
1:29 pm

P, the RAW score may well have been lower, as the test developers have to equate the cut score difficulty with last year’s test. To the extent they can (it’s not perfect by any means), they have to make the cut score the same difficulty each year. If the questions turn out to be a bit harder this year, they have to lower the number of questions correct in order to pass. The raw score is mapped to the scale score, which should remain the same. If it didn’t, that’s VERY interesting.
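[Editor’s note: the raw-to-scale mapping Jerry describes is, in the simplest case, just a line through two fixed points: the pass cut and the maximum raw score. The sketch below is hypothetical; all anchor numbers are invented, and actual state conversions come from IRT scaling tables.]

```python
# Hypothetical linear raw-to-scale conversion: pin the scale score at the
# raw pass cut and at the maximum raw score, and the line between them maps
# any raw score to the reporting scale. A different raw cut each year then
# still maps to the same scale cut.

def raw_to_scale(raw, raw_cut, scale_cut=70, raw_max=60, scale_max=110):
    slope = (scale_max - scale_cut) / (raw_max - raw_cut)
    return scale_cut + slope * (raw - raw_cut)

raw_to_scale(30, raw_cut=30)  # 70.0: hitting the raw cut yields the scale cut
raw_to_scale(60, raw_cut=30)  # 110.0: a perfect raw score yields the scale max
```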