Questions on the CRCT. Answers from Georgia DOE.

A reader sent me a series of questions about the state’s Criterion-Referenced Competency Tests, which I asked the state Department of Education to answer. I am running this today because the state just released district scores.

The AJC also now has a searchable database of scores from all districts in Georgia.

I appreciate the time that DOE took to draft this detailed response, as I requested that it be as jargon-free as possible for the non-educators on the blog. As DOE spokesman Matt Cardoza said, “While there is a lot of technical information in this answer, they asked a very technical question. It’s as jargon-free as we can get it.”

From the reader:

How does the DOE oversee the CRCT test validity and scoring year to year in order to do the charts and comparisons the DOE released this week? As a former principal and test coordinator before that, I was never told the cut scores on the first test or the retest. I felt that there was wiggle room at the state level in deciding how many questions a child could miss and still pass and that it was decided each year after the tests were all in and scored. I also felt the retest had a different and less demanding cut score. Who makes sure that the level of difficulty of the questions remains constant across years so that there is valid comparison? If the questions and cut scores are manipulated year to year, how can valid comparisons be made?

If I were state superintendent, I would want to show steady progress and improvement, and given that there is no transparency in the cut scores and the test is created in-house and updated year to year, there is room to manipulate the results without changing a single child’s answer. I have no proof; I am just asking the question. Has anyone looked at this?

From DOE:

The process Georgia uses to build state assessments, such as the Criterion-Referenced Competency Tests (CRCT), is an established, time-tested practice that all reputable test developers use. This process follows the professional standards jointly developed by several organizations, including the American Psychological Association (APA), the National Council of Measurement in Education (NCME), and the American Educational Research Association (AERA). While the state contracts for the development, administration, scoring, and reporting of the assessment programs (for example, CTB McGraw-Hill is the current contractor for the CRCT), the Georgia Department of Education (GaDOE) assessment staff provides direct oversight of this work.

Georgia educators make significant contributions to the state’s testing programs by reviewing test items both before and after they are field tested with Georgia students. Field testing involves trying out newly written items with a representative sample of students and is a crucial step towards ensuring the items are appropriate and not confusing for students before holding students and schools accountable for performance on the items. As such, field test items do not contribute to student results.
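For readers who want to see what item review after a field test can involve, here is a minimal sketch of two classical statistics commonly examined for each field-tested item: the proportion of students answering it correctly (the p-value) and its correlation with the rest of the test. The response matrix and function below are hypothetical illustrations, not GaDOE’s or CTB McGraw-Hill’s actual screening code.

import numpy as np

def item_statistics(responses):
    """Classical screening stats from a 0/1 response matrix
    (rows = students, columns = items). Illustrative only."""
    responses = np.asarray(responses, dtype=float)
    p_values = responses.mean(axis=0)            # proportion correct per item
    totals = responses.sum(axis=1)
    stats = []
    for j in range(responses.shape[1]):
        rest = totals - responses[:, j]          # total score excluding item j
        r_pb = np.corrcoef(responses[:, j], rest)[0, 1]  # corrected point-biserial
        stats.append((p_values[j], r_pb))
    return stats

# Toy field-test sample: 6 students, 4 items
sample = [[1, 1, 0, 1],
          [1, 0, 0, 1],
          [0, 1, 0, 0],
          [1, 1, 1, 1],
          [0, 0, 0, 1],
          [1, 1, 0, 1]]
for j, (p, r) in enumerate(item_statistics(sample), start=1):
    print(f"item {j}: p = {p:.2f}, corrected point-biserial = {r:.2f}")

Items that turn out to be far too easy, far too hard, or negatively related to the rest of the test would typically be flagged for the educator review committees described above.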

Multiple steps are taken to ensure the technical quality of each assessment program. For example, Georgia convenes a Technical Advisory Committee (TAC), comprised of six nationally-recognized experts in the field of educational measurement. The purpose of the TAC is to provide the state with impartial, expert advice on the technical qualities of the state’s assessments. TAC meets quarterly and reviews every step of the test development, scoring, and reporting process for each testing program.

Additionally, testing programs such as the CRCT must undergo a comprehensive review process conducted by the U.S. Department of Education (US ED) known as Peer Review. During this review, each state must submit detailed documentation providing evidence of the technical qualities of the program(s). These include, but are not limited to, qualities such as alignment, development and maintenance procedures, and technical reports. A committee of peers (measurement, curriculum, and education policy experts) selected from other states reviews the evidence and evaluates the overall quality and soundness of the instruments.

So, how does the GaDOE know that year to year comparisons of test scores on a test are valid? When test forms are built, careful consideration is given to both the content and statistical features of the items selected to comprise the form. A test blueprint, with both content and statistical targets, guides the form development. Throughout the test form building process, the goal is to develop a form that is as parallel as possible to the blueprint and previous forms. And while the test forms are created to be as parallel as possible in terms of content coverage and difficulty, the fact remains that differences in unique collections of items can result in subtle changes in difficulty. A statistical procedure called equating serves to equalize those differences.
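As a rough illustration of building to a blueprint (the item bank, domains, and difficulty values below are invented, and the real assembly process is far more constrained), one can think of it as choosing items so that each content domain gets its required count while the form’s average difficulty stays near a statistical target:

from itertools import combinations

# Hypothetical item bank: (item_id, content_domain, proportion-correct difficulty)
item_bank = [
    ("A1", "algebra", 0.62), ("A2", "algebra", 0.48), ("A3", "algebra", 0.55),
    ("G1", "geometry", 0.70), ("G2", "geometry", 0.51), ("G3", "geometry", 0.44),
]
blueprint = {"algebra": 2, "geometry": 2}   # items required per domain
target_difficulty = 0.55                    # statistical target for the form

def assemble_form(bank, blueprint, target):
    """Pick, per domain, the subset whose mean difficulty is closest to the target."""
    form = []
    for domain, count in blueprint.items():
        candidates = [item for item in bank if item[1] == domain]
        best = min(combinations(candidates, count),
                   key=lambda subset: abs(sum(i[2] for i in subset) / count - target))
        form.extend(best)
    return form

form = assemble_form(item_bank, blueprint, target_difficulty)
print([i[0] for i in form], "mean difficulty:",
      round(sum(i[2] for i in form) / len(form), 3))

Even with this kind of matching, two forms will never come out exactly equal in difficulty, which is why the equating step described next is needed.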

Equating is not unique to Georgia; equating is a process used by virtually all large-scale assessment programs (including the SAT, ACT, other state assessments, etc.). Behind and undergirding each test program and each administration is a large variety of statistical work that takes place to ensure the assessments are technically sound and equated appropriately. The process of equating a test ensures that students taking a test are always held to the same level of achievement, regardless of any differences in the collection of items that comprise the test form taken. Thus whenever multiple test forms are used in the same administration or when a different form is given in a subsequent administration (e.g., grade 7 science in 2011 and grade 7 science in 2012), they must be equated.
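A small sketch of the idea behind equating, using the simplest classical method (mean-sigma linear equating) and made-up raw scores; the CRCT itself uses IRT-based equating, so treat this as an analogue of the concept rather than the actual procedure:

import statistics

def linear_equate(new_form_scores, ref_form_scores):
    """Return a function mapping raw scores on a new form onto the reference
    form's scale so the two score distributions share a mean and spread."""
    m_new, s_new = statistics.mean(new_form_scores), statistics.pstdev(new_form_scores)
    m_ref, s_ref = statistics.mean(ref_form_scores), statistics.pstdev(ref_form_scores)
    slope = s_ref / s_new
    return lambda x: slope * (x - m_new) + m_ref

# Hypothetical raw scores from equivalent groups of students on the two forms
ref_scores = [38, 42, 45, 47, 50, 52, 55]   # reference form
new_scores = [35, 39, 42, 44, 47, 49, 52]   # slightly harder new form
to_ref_scale = linear_equate(new_scores, ref_scores)
print("a raw 44 on the new form is comparable to about",
      round(to_ref_scale(44), 1), "on the reference form")

The point is simply that a given raw score means different things on forms of different difficulty, and equating adjusts for that before any passing decision is made.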

The technical work for the CRCT involves expressing each test form on a common metric called the theta scale, such that performance on each form can be compared. Working with a common metric means that differences in test performance can be interpreted as a result of changes in student achievement as opposed to changes in test difficulty. It is particularly important to understand that the cut scores determined within large-scale assessment programs like the CRCT are set in this common metric and are held constant for the lifetime of the testing program.

For example, the cut scores for the grade 7 Mathematics test were determined via an extensive standard-setting procedure in the first year of the testing program. However, the cut score for the CRCT is set using the theta scale and not the raw score. The particular raw score required to achieve the cut score of 800 in any subsequent test administration is based on the difficulty of the collection of test items that comprise the form.
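To make that concrete, here is a toy example under an assumed Rasch (one-parameter IRT) model with invented item difficulties. The cut is fixed in the theta metric and labeled 800; the raw score that corresponds to it shifts with the difficulty of the particular form:

import math

def expected_raw(theta, item_difficulties):
    """Rasch test characteristic curve: expected number-correct at ability theta."""
    return sum(1 / (1 + math.exp(-(theta - b))) for b in item_difficulties)

theta_cut = 0.0   # set once at standard setting and held fixed; reported as 800

# Two 10-item forms built to the same blueprint; the second came out a bit harder.
# (Difficulty values are invented for illustration.)
form_2011 = [-1.2, -0.8, -0.5, -0.2, 0.0, 0.1, 0.3, 0.6, 0.9, 1.3]
form_2012 = [-0.8, -0.4, -0.1, 0.2, 0.4, 0.5, 0.7, 1.0, 1.3, 1.7]

for year, form in (("2011", form_2011), ("2012", form_2012)):
    raw_at_cut = expected_raw(theta_cut, form)
    print(f"{year}: about {round(raw_at_cut)} of {len(form)} items correct "
          f"corresponds to theta = {theta_cut} (scale score 800)")

On the harder form, slightly fewer correct answers correspond to the same fixed level of achievement; that movement in the raw cut is the adjustment the DOE describes, not a change in the standard itself.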

Working within a common metric such as the theta scale and implementing statistical procedures such as equating allows us to attribute, with confidence, any changes in student performance to student achievement and not as a by-product of the test form that was administered. The passing score always has the same meaning from administration to administration.

Finally, the technical work underlying the development and administration of the CRCT is well documented and has been extensively and independently reviewed by the TAC as well as the US ED. For many years the raw score (or number correct) achieved by students has been reported on class rosters and individual student reports.

Given this transparency it would be exceedingly difficult, if not impossible, for the state to “manipulate” the cut score to achieve some “desired” result.

Given the stakes associated with the test results and recent events in our state, it is natural to have questions about how tests are developed and validated. Individuals interested in learning more can Google the words “psychometrics” (the advanced study of measurement) and “test equating.”

–From Maureen Downey, for the AJC Get Schooled blog

39 comments

Dr. Craig Spinks/ Georgians for Educational Excellence

June 28th, 2012
6:45 am

Might I suggest that a competent, disinterested, out-of-state entity be retained to evaluate CRCT test-development, -administration, -scoring and -score-reporting processes and to render an unredacted evaluation report on these processes written in voter-comprehensible English.

ScienceTeacher671

June 28th, 2012
7:42 am

Wow, I’ve had the same questions as your reader about the EOCTs and GHSGT. We all (teachers and students) think the summer version of GHSGT is easier than the “regular” version, BTW!

ScienceTeacher671

June 28th, 2012
7:44 am

Speaking of which, I wonder when the state will make the high school results for last semester public? They ought to have them by now, right?

Mountain Man

June 28th, 2012
7:50 am

So how do you compare a high school in Dunwoody, Georgia to one in Saint Louis, Missouri?

John Konop

June 28th, 2012
8:02 am

The countries with the best education systems in the world test way less than us. If we tested less and focused more on educational alternatives like online education, different tracks based on aptitude … you would see results with the money focused on teachers over administrators.

carlosgvv

June 28th, 2012
8:11 am

“oversee the CRCT test”

Maureen, you have a dark sense of humor.

colin schaeffer

June 28th, 2012
8:23 am

The CRCTs are horribly stressful for students, not to mention educators. They impede education and are a huge waste of money. Why people don’t stand up against this folly is beyond me. Another reason we homeschool through a co-op.

Tony

June 28th, 2012
8:33 am

No matter how stringent the procedures for development of the state tests, they are only representative of a small portion of what kids learn. They focus on aspects of the curriculum which are easily converted into multiple-choice items, and can only measure a small slice of the curriculum. Tests can be informative to teachers and schools. Unfortunately, the overemphasis on test results is killing our kids’ opportunities to learn more relevant ideas. Often attributed to Albert Einstein: “Not everything that counts can be measured. Not everything that can be measured counts.”

Steve G.

June 28th, 2012
8:45 am

To Mountain Man: You cannot compare a high school in Dunwoody, Georgia to one in Saint Louis, Missouri unless students took the same assessment, such as the ITBS. These assessments are “norm-referenced”, which makes them comparable. The CRCT and EOCT are “criterion-referenced” and based solely on the curriculum for Georgia. That’s how the DOE is able to compare results from students and schools within our state, but not between states.

However, some of this will change in the future when assessments will be based upon the Common Core State Standards. As most states have adopted this curriculum, more comparisons will be able to be made between students, schools and states from across the U.S.

catlady

June 28th, 2012
8:54 am

Blah, blah, blah. You are too dumb to understand. Blah, Blah, Blah.

What does Jerry Eads think of this answer?

It has been my observation that the CRCT is poorly worded at times, varies in difficulty markedly from spring to summer in terms of passing, and doesn’t really tell us much. And yes, I have read the test. I was frequently chosen to do a reading accommodation.

Jennifer

June 28th, 2012
9:50 am

Who cares about CRCT anymore? We need to start freaking out about PARCC Assessments (the NEW and IMPROVED CRCT…). We start taking it next year, and from the released test questions, it’s going to be a lot more difficult.

William Casey

June 28th, 2012
9:55 am

Interesting and informative article that took me back forty years to my days as graduate assistant to two statistics professors at UWG. During my 31 years in the classroom, I took considerable care in test construction, producing tests that were challenging and comprehensive as well as valid and reliable. Yes, I understand the need for standardized tests. However, I often wondered if we use them properly to facilitate learning. Following administration of my teacher-produced unit tests, we always took time to “debrief” the tests, going over each item. Students were required to determine WHY they missed a particular item. This process often produced heated discussion. LOL I was able to refine my tests from year-to-year based on this feedback. The students became better test-takers and developed more efficient study habits. Lots of learning took place. I was just wondering if anything like this takes place with standardized tests? It seems to me that students simply take standardized tests and then forget about them. In terms of student learning, this seems like less than best use of a very expensive test. I could be wrong.

Fed Up

June 28th, 2012
10:23 am

It doesn’t matter what the questions are.

The state just skews the results to get what they want. 800 is “meets standards” and 850 is “exceeds standards.” There is no set maximum score, nor do they release the maximum scores. They state outright that scores cannot be compared from subject to subject or from year to year. What this means to me is that the State of Georgia can MANIPULATE the scores however they need to in order to say that x% of students met the standards. If you take a basic bell curve and STRRRETCH it wider, then the average will move up and a higher percentage of students will fall above the magic 800. Nothing has changed, except Georgia now looks like it did much better.

Why am I cynical? My children generally score perfect or miss only one or two questions. The Math & Science maximums are generally very high, while Reading and English/Language Arts (ELA) are lower. Math & Science are “harder” so they need to fix up the scores to make the overall population look better.

Here are my son’s 4th grade results from this year:
Math 990 (perfect out of 60)
Science 927 (2 wrong out of 60)
Reading 891 (1 wrong out of 40)
ELA 898 (1 wrong of 50)
Social Studies 891 (3 wrong out of 60)

And I wonder why Georgia is afraid to compare itself against other students on a nationally normed test?!

Double Zero Eight

June 28th, 2012
10:53 am

It appears the Technical Advisory Committee owes Georgia a refund. This panel of experts failed to do their job prior to 2010. In addition, the methodology involving the “theta scale” and “equating” must have skewed the “common metric”. Not one of the six members that comprised the TAC concluded that the APS results were atypical.

Bernie

June 28th, 2012
11:25 am

Congratulations! President Obama for your hard won fight for the American People.
God knows you have endured the wrath and traps of so many who have hated and despised you, so vehemently for no “GOOD” reason and without a cause.

We can only pray that he continues his many blessings for you and this GREAT NATION.

Wondering

June 28th, 2012
11:59 am

To Fed Up: I have always wondered why in the world each test has a different perfect score, not just by content area, but also by grade level. As a teacher, we are under pressure of course to improve our students’ test scores (hello, value-added!). If each year the perfect score goes up or down, a 950 in 5th grade and then a 920 in 6th looks to be a decline when actually both are perfect scores. Wouldn’t it be easier to understand the test scores if they were all worth 1000 points or 900? Why all the different top scores??? It is hard to compare scores even for the same child in the same year, or year to year!

When some concerned colleagues and I called the state to discuss this, as well as to get enlightened on how one question is worth 20-30 points (such as you see in your son’s Reading and ELA scores), we were told that the 1st question missed on the CRCT is weighted heavily but each successive question missed afterward is worth fewer and fewer points. This is so that the DOE can get “data-rich information” around the “meets” vs. “does not meet” scores, they say. Some principals are not inclined to concern themselves with this, however. They only see a decline in points, and even though missing 1 question does not seem statistically valid to show a decline in growth, it sure is not showing “value-added” improvement in their eyes. This is a cause for alarm with the idea of pay being tied into showing student growth. Statistically, with 7 versions of the same CRCT test out there, a student could very well get all the questions right on one version and miss one on another version (or 2). Puts “perfection” in a new light, doesn’t it?

So, I am also wondering how in the world this is all going to be factored into the teacher accountability pay-piece of the RTTT initiative. Don’t get me wrong: I am a Georgia Master Teacher who sees decent growth each year for her students. I am not afraid of accountability; I just want to be fairly assessed. If principals, teachers and parents find CRCT scores murky, so can politicians.

And so… I wonder.
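Wondering’s observation that the first missed question costs the most scale points is consistent with how IRT-based scoring generally behaves: a test provides the least information about students near the top (and bottom) of the raw-score range, so each raw point there maps to a larger jump in the ability estimate and therefore in the scale score. Here is a sketch under an assumed Rasch model; the 40 item difficulties and the scale(theta) = 800 + 25*theta conversion are invented, not the CRCT’s actual conversion table:

import math

def expected_raw(theta, difficulties):
    """Rasch test characteristic curve: expected number-correct at ability theta."""
    return sum(1 / (1 + math.exp(-(theta - b))) for b in difficulties)

def theta_for_raw(raw, difficulties, lo=-6.0, hi=6.0):
    """Invert the test characteristic curve by bisection."""
    for _ in range(60):
        mid = (lo + hi) / 2
        if expected_raw(mid, difficulties) < raw:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def scale(theta):
    # hypothetical scale that puts 800 at theta = 0
    return round(800 + 25 * theta)

# Hypothetical 40-item form with evenly spread difficulties
difficulties = [-2.0 + 4.0 * i / 39 for i in range(40)]

for raw in (20, 21, 38, 39, 40):
    # a perfect raw score has no finite theta estimate; cap it for display
    r = min(raw, 39.5)
    print(f"{raw} correct -> scale score ~{scale(theta_for_raw(r, difficulties))}")

In a conversion built this way, going from a perfect paper to one missed question costs several times as many scale points as one additional miss near the middle of the raw-score range, which matches the pattern the commenters describe.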

Jerry Eads

June 28th, 2012
12:15 pm

I absolutely agree with the DOE tech folk. The current technical mechanics of multiple-choice test development have been in place for decades, and I am absolutely sure that the contractor and DOE staff do their level best to maintain the integrity of the tests, including equating one form of each subject at each grade with the next form, whether it’s an alternate form for a retest or a subsequent form for the following year. The integrity of the DOE technical staff will be beyond reproach. And any test-development contractor would void their contract before manipulating test difficulty. A scandal of that nature would put them out of business.

That said, it is virtually impossible to make one form of a test EXACTLY the same difficulty as another form. There is ALWAYS some error. Sometimes the process simply misses – as happened with a social studies test some years ago. That’s a truth that some folks have a hard time living with, but it’s simply an inescapable fact. We do the best we can at test equating, and it’s awfully good, but we’re not working with electronic scales accurate to the microgram by any stretch of the imagination. We’re working with human performance, not weight, and equating depends on human performance. It varies. A lot. From moment to moment and from day to day.

And THAT said, I remain absolutely firm that the tests measure little more than minimal recognition and simple computation skills that challenge only a small minority of our students – and hence are virtually useless in supporting instruction at average or higher levels (Yeah, I know, there’s “pass+” and “pass++”. And does that help teaching and learning?). My dear hope is that the machinations around the so-called Common Core and associated test development efforts will provide us tools that are useful not only for our low performing students but for every child. And teacher.

And the beat goes on...

June 28th, 2012
12:16 pm

And we continue to wonder why there is a dearth of education funds. I know why. Georgia spends literally millions for testing, and the results of these tests are far less indicative of what a student knows and can do than what most classroom teachers are able to assess on their own. My high school has fewer than 400 students enrolled. We are a small, rural community. We consistently, year after year, have students (approximately 65%) who attend UGA, Georgia Tech, and prestigious out-of-state colleges (Duke, Harvard, UCLA), and these students earn degrees from these colleges. The other 35% attend technical colleges or join the military and some just go to work, not because their choices are limited, but rather because these options are the options they have chosen. Yes, we have some students who have dropped out of school, much to our dismay. However, the teachers in this small school can tell you, with some degree of accuracy, which students will most likely excel in college and which will struggle. We do all we can to prepare all students for what lies ahead of them, but we could do a heck of a lot better if we could focus more on teaching content, how to think critically, and how to be good, productive citizens rather than wasting hours on test preparation, test-taking skills, etc. If we used all the hours we spend on test preparation on more life-enriching experiences, on creating more and better learning situations, and on allowing kids to enjoy their high school experience without the pressures of standardized tests, I believe we could increase student success in more areas than what we are currently assessing. Oh, we have great test scores, but few students get to attend theater productions in Atlanta, trips to museums, or, heck, even cattle shows, because most funds are diverted to test preparation, administration, and remediation. Georgia is producing some great test-takers who know little of the world.

Jerry Eads

June 28th, 2012
12:22 pm

Oh, hi Cat – I didn’t address item development – THAT’s another issue. Do remember that ALL state minimum competency testing programs have one thing in common with the space program ...

Low bid. Some may consider that a cheap shot. While I was trying to be cute, it’s a fact of life for these folks. Again, I absolutely DO NOT question the integrity of the DOE tech staff or the contractors. The reality is, however, that the development process is always a race against time with limited resources. I cannot even begin to tell you how ecstatic I am to never have to be near it again.

Jerry Eads

June 28th, 2012
12:26 pm

Ah, looks like the software pulled my first post for some reason, having to do with equating. We’ll see if M puts it back in.

Jerry Eads

June 28th, 2012
1:06 pm

Gonna be a busy day. Wondering: There was never any intention or budget to develop the tests for the content or the difficulty to be scaled from one grade to the next, just like there was never (to the best of my understanding) actual curriculum development that would attempt to sequence the expectations from one grade to the next in the so-called “standards.” Whether conscious or not, I’m pretty sure one of the reasons states started calling their requirements “standards” instead of CURRICULUM (with proper scope and sequence) is that then those in charge didn’t have to WORRY about proper curriculum development, assuming they even knew enough to ask the question. My personal experience in Virginia was that no one even asked questions such as “given what we want children to know in 6th grade, what is it we should want them to learn in 5th?” (and so forth). Curriculum development is an extremely complex expertise in which few people who come out of the P-12 schools (where almost all state education employees come from) have training.

Given there’s possibly inadequate scope and sequence to the de facto curriculum (upon which the tests are supposed to be based), there’s little reason (and it may be impossible) to scale the tests from one level to the next like the full-range nationally norm-referenced tests we used to use. (There are other issues too, but I won’t start on them.) Hence, there’s no possible way to compare the “scores” from one test to the next.

Fed Up

June 28th, 2012
1:58 pm

I’ve gone through K through 5 with the commercial K12.com CURRICULUM… It is sequential, intense, but also adaptable. Imagine learning about societies from ancient Mesopotamia through 20th century American history in ORDER by the fifth grade. None of this Indians-and-MLK broken record. Every. Single. Year. But since we are also a public charter school, I have to break out my Ouija board to figure out what in the world

Fed Up

June 28th, 2012
1:59 pm

Figure out what in the world Georgia wants us to study from their scanty descriptions of “standards”


catlady

June 28th, 2012
3:56 pm

Jerry, I thought when you compared the development of the CRCT to the space program, you were referring to the old Wernher von Braun joke, “We have to be careful when we send astronauts to the moon. You don’t want it to be a half moon – we’d miss it!”

I threw up my hands years ago when the question about the state bird was on one form for 3rd grade three times.

Many of us wonder how kids scoring 750 in the spring administration can, with 2-3 weeks of summer school (now cut drastically back) magically make an 800 on the second administration. I mean, these are kids two years behind in reading–how did they “gain” that much but don’t seem to gain in 9 months of school? The late jim d used to write about this extensively on this blog.

Kate

June 28th, 2012
4:44 pm

You can get the cut scores annually from GADOE. What they tell you is how many questions there are on each test, and how many a student must answer correctly on each test to be proficient, advanced, etc.

For the last three years, the 8th grade CRCT in reading, for example, has required students to answer 48% of the questions correctly to score “proficient,” which is scored as 800. Since I work at the high school level, it concerns me that these tests suggest that 8th grade students answering fewer than half of the questions correctly on a reading test are considered adequately prepared for the high school curriculum.

CRCT cut scores hover right around 52% for the other subject areas each year. GHSGT cut scores are similar.

Our data shows that in our relatively high-performing North Fulton school, an 8th grader scoring 820 or lower on math, English and/or reading CRCTs is *not* prepared for high school.

Kate

June 28th, 2012
4:45 pm

Forgive my poor wording above. Didn’t proofread. :-\

Jerry Eads

June 28th, 2012
5:00 pm

Cat, honest, don’t even try to compare the state’s mincomp test “scale scores” to anything else in this world. The ONLY place on the scale that will stay the same is whatever they decide the “cut score” will be. The tests are adjusted so THAT score remains the same (even though the raw score – the number of questions required to pass – will change. Sometimes only a question or two, but it’ll change.). NO OTHER SCORE is comparable from one test to the next or even one form to the next. OH – sorry – the scale will be stretched and shuffled so that whatever the “pass+” level is will end up the same. We all want to assume that there’s SOMETHING else that has meaning beyond the pass-fail points, but there’s not. We expect the difference between 250 and 300 is the same amount of “achievement” as between 300 and 350, but it’s NOT. We spend what – $20 million a year on those things? That’s what I saw quite a few years ago – who knows what the bill is today. Adding extra “features” would add (significantly) to the development time and cost.

As someone pointed out above, there are NO data whatsoever suggesting that these tests have ANYTHING to do with anything else – no studies suggesting they have any relationship with elementary, middle, or high school success (other than the arbitrary dart-throwing, called “Modified Angoff,” that sets the pass-fail score). Neither is there any work that relates these tests to any OTHER outcomes like job or college success.

Jerry Eads

June 28th, 2012
5:13 pm

Kate, as you probably know, the number of questions a group of kids gets right is a function of two things: (1) what they know and (2) how hard the questions are. Artifacts like “70% is passing” and so forth are there because you folks in the classroom are expert enough to set the difficulty of the questions to what you need and expect of the kids.

The optimal difficulty level for the multiple-choice norm-referenced tests of days of yore was about 55%. That enabled the most accurate estimate of students’ position (percentile) compared to everyone else.

If you built your own tests so that the average was about 55% of the questions answered correctly, you’d have better (more reliable) tests. One of the biggest problems with the convention of 70, 80 and 90% for C, B and A, for example, is that your ability to differentiate accurately among the A and B students is very significantly diminished, because your tests are substantially less reliable than if you built them with greater difficulty.
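A quick way to see Jerry Eads’s point about difficulty and reliability is to simulate two 40-item tests, one where most students answer most items correctly and one pitched so roughly 55% of answers are correct, and compare KR-20 (an internal-consistency reliability index). The ability distribution, response model, and difficulty values below are assumptions chosen for illustration, not anyone’s real data:

import numpy as np

rng = np.random.default_rng(0)

def simulate_kr20(item_difficulties, n_students=2000):
    """Simulate 0/1 responses under a simple logistic (Rasch-like) model
    and return KR-20. Illustrative only."""
    theta = rng.normal(0.0, 1.0, size=(n_students, 1))
    b = np.asarray(item_difficulties)[None, :]
    p = 1.0 / (1.0 + np.exp(-(theta - b)))
    x = (rng.random(p.shape) < p).astype(float)
    k = x.shape[1]
    item_var = (x.mean(axis=0) * (1 - x.mean(axis=0))).sum()
    total_var = x.sum(axis=1).var()
    return (k / (k - 1)) * (1 - item_var / total_var)

easy_items = np.full(40, -1.5)   # most students get most of these right
mid_items = np.full(40, -0.2)    # roughly 55% of answers correct on average
print("KR-20, easy test:          ", round(simulate_kr20(easy_items), 3))
print("KR-20, mid-difficulty test:", round(simulate_kr20(mid_items), 3))

Under these assumptions the mid-difficulty test comes out noticeably more reliable, which is the same reason he suggests classroom tests built around a 55% average would discriminate better among the stronger students.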

Fed Up

June 28th, 2012
5:14 pm

When I was a kid, summer school was six or eight weeks. I took plenty of classes to get ahead. That is not allowed in Cobb County, probably others too. Everyone must plod along no matter what.

Dr. Craig Spinks/ Georgians for Educational Excellence

June 28th, 2012
6:03 pm

Jerry,

The last three sentences of the first paragraph and the entire last paragraph of your 5:00 PM posting contain information which is thought-provoking and troublesome.

$20M/ year over the course of a decade for a testing program without efficacy documentation: GA educrats boggle the mind.

Craig

catlady

June 28th, 2012
6:55 pm

Dr. Spinks, Jerry has said elegantly what I have been saying for a long time: So 800 is “passing.” Does that mean the student can do the work required at the next level? Has there been a predictive model devised that says the kids need to know x and y before they can accomplish the goals in grade z? If not, in what ways is the test helpful to the student? ‘Cause we KNOW they aren’t being retained.

Jerry Eads

June 28th, 2012
8:48 pm

Craig, Cat, and others, We’re a very small crowd that chooses to deal with the details about minimum competency (not to mention the rest of) testing. Sadly, the reality – railed at for decades by Monty and others at Fairtest, dear Bracey and a number of the most respected testing experts in the world – is that such testing is far worse than that which tells us nothing – it misleads us into believing it actually does tell us something. I am and will be forever awed at how simply and completely the egregiously expensive exercise of minimum competency testing – to paraphrase P.T. – fools almost everyone all the time. Including superintendents, school boards and, worst, legislators. My sense is that Mr. Barge is NOT fooled (unlike his predecessors), but he must choose his battles.

Again, we can hope that the stuff being put together by PARCC for the “common core” will be more useful. Given my experience with the wonderfully well-intentioned New Standards Project of the ’90s and its, um, difficulties, I’ll be (very) cautiously optimistic. Thankfully the ENORMOUS expense will be shared across states.

Jerry Eads

June 28th, 2012
9:42 pm

One of my favorite comics is Kevin and Kell, penned by Bill Holbrook. Yesterday’s was another shot at testing (he tosses ‘em in every now and then; his other half’s in our business :-) . Enjoy. Don’t stop with this one: I think it’s one of the best social commentary strips, yet gentler than Doonesbury or Prickly City.
http://www.kevinandkell.com/2012/kk0627.html

Truth in Moderation

June 28th, 2012
10:06 pm

A test measures retained knowledge and understanding from what has been taught. The CURRICULUM drives a test. An assessment measures property value. A State Assessment measures the value of a student to the State. STATE MANDATED OUTCOMES (OBE, Standards) drive the curriculum. Teachers have become facilitators for aligning students’ attitudes, values and beliefs with the desires of the STATE. We now have a closed loop. The teacher (facilitator) MUST teach to the ASSESSMENT (CRCT) and the student MUST pass the assessment. This is why RTTT aligns teacher pay with CRCT results. Think about what kind of citizen a globalist politician would want, to keep them in power…

Please note that sometime around 2008 (I think) the Item Specifications began to be changed from levels based on Bloom’s Taxonomy of Educational Objectives, Cognitive and Affective Domain levels to:

“Each of Georgia’s test programs contains a range of test items in terms of both difficulty (rigor) and complexity. To gauge the complexity or cognitive demand of the test items, Georgia uses a model called “Depth of Knowledge” (DOK), which was developed by Norman Webb at the University of Wisconsin.”
http://blogs.ajc.com/get-schooled-blog/2010/07/15/you-asked-doe-responded-that-crct-does-measure-higher-order-thinking/
Here’s a good summary of DOK:
http://nde.doe.nv.gov/Assessment/DOK/DOK_OverviewInformation.pdf

In my opinion, after extensively studying Bloom’s Taxonomy and its use as a basis for the original CRCT’s, the “DOK” isn’t much different in ideology, but is less specific in its domain descriptions. Here is what you ask for: Using the FOIA, get the RFP (try 2007-2010) for the CRCT ITEM SPECIFICATIONS to rewrite them to conform with the new DOK level numbers. Get a copy of all the bids, and the winning bids. In the past, there was a document with ETS on it. This information will tell VOLUMES. Also, take the time to read BLOOM’S TAXONOMY OF EDUCATIONAL OBJECTIVES, AFFECTIVE DOMAIN, and ask yourself, who will determine what the correct answers are? Find the CRCT item specifications that measure the “affective domain” and find out what the State answer is. Don’t let the “psychometrics” gobbledygook distract you. It isn’t hard. It is all about power and control. Teachers are hired, supposedly, because they are experts in their subjects and understand the art of teaching. They should be able to make their own tests to measure what they have taught. The classroom grade should carry the most weight in testing what the student has learned. Curriculum is a LOCAL issue. If the local school wants to spend money on a national test such as the ITBS, they should do so as a guide for the GENERAL effectiveness of their teachers and curriculum. The deciding factor should be, are the parents pleased with the results? Parents have the most at stake for their child’s success. The CRCT’s were forced on the states by the Feds (through the UN), originally with Goals 2000. The states implemented them to get the money. In my opinion, each state was manipulated to create a CRCT-type test, where core Item Specifications are the SAME in each state. Someone has access to ALL data. Read this book by Bev Eakman to fully understand:
http://www.sntp.net/education/behavioral_manual.htm
Also:
http://www.seanet.com/~barkonwd/school/DELPHI.HTM

Dr. Craig Spinks/ Georgians for Educational Excellence

June 29th, 2012
2:48 am

Jerry,

How might I join GAERA? The membership link on its website seems to be broken.

Craig

Jerry Eads

June 29th, 2012
12:17 pm

Hi Craig – Hm. Go to http://www.gaera.org, go to contacts, and send an email to the Sec’y-Treasurer. That’ll get it done, but also let her know the link on the GA Southern conference registration site (apparently what you tried?) is busted.


Joseph O'Reilly

July 3rd, 2012
11:02 am

It is very normal for the CRCT analysis process to be very complicated, because parents do not like the whole process and the best way to avoid criticism is to add technical terms and avoid examples.
The theta score is used in the process of converting student raw scores (the value of right and wrong responses) into the form officials want to see, one that looks nicer. If a question was answered by few students, the score calculation changes.
This way students are not penalized for a poor standardized education or a poorly prepared test item.