Test Bias or Real Differences?


A test publisher’s perspective on closing the achievement gap.

By Margaret A. Jorgensen

Even with the years of effort dedicated to equalizing education, the achievement gap in the US persists. The question now facing test developers is whether the gaps are the result of true differences in achievement or of bias in the tests. Test publishers have developed methods of detecting and eliminating bias in their assessment products; however, despite all the attention and resources directed at closing the achievement gap, a disparity in performance still remains. But a new approach to item design would enable assessments not only to measure a student’s academic achievement, but also to provide insight and detailed information about what learning objectives a student has not yet reached. By both serving as a thermometer and offering educational prescriptions, standardized assessments can be a tool for narrowing the achievement gap.

With professional and ethical responsibilities to remove barriers of bias in their assessments, test publishers rely extensively on industry standards to produce high-quality assessment instruments. For a published test to be fair and unbiased, it must measure a student’s achievement without being affected by extraneous factors such as student gender, culture, ethnicity, geography, or socioeconomic status. Deeply underpinning this discussion is that the clear purpose of educational assessments is to measure the differences in student achievement. There is no legitimate reason to build and administer a test that confirms sameness. Only by understanding the differences between students can we customize instruction for each student.

Given best practices in the industry for eliminating bias, what can test publishers do to contribute to and facilitate a deeper understanding of the root causes of these differences? Addressing this requires a shift in the traditional function and design of standardized assessment systems and individual items. What if items were constructed differently so that they would reveal where learning breaks down for individual students?

Multiple-choice items have a long history of helping educators and policymakers understand what students know and can do. Basically, items are created to sort students into two categories: those who know the content and those who do not. They are not written to provide systematic insights into where students are in their thinking, which is valuable information for teachers whose job it is to help move students who do not answer the item correctly toward an understanding of the standard.

Harcourt (harcourtassessment.com) has developed a new item type that helps teachers understand not only which students know the content, but also where the students who do not know the content are in their understanding. Given a typical multiple-choice item, we have the opportunity to sort students into four levels of achievement. By building assessments that provide increasingly precise information about why students choose the incorrect answer, test publishers make a direct contribution to the improvement of classroom instruction. Teachers can know where learning has broken down and where instruction should be targeted.
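To make the idea concrete, here is a minimal sketch of how such a diagnostic item might be scored. The item content, the option-to-level mapping, the level numbers, and the diagnose helper are all illustrative assumptions for this example, not Harcourt’s actual item design. The point is that each distractor is keyed to a known misconception, so a wrong answer locates the student on a learning path rather than simply marking a miss.

```python
# Minimal sketch of a diagnostic multiple-choice item: each answer
# option is keyed to an achievement level and a misconception.
# The item content and mapping below are hypothetical examples,
# not Harcourt's actual item design.

DIAGNOSTIC_ITEM = {
    "stem": "Evaluate: 3 + 4 x 2",
    "options": {
        "A": {"answer": "11", "level": 4,
              "diagnosis": "Correct: applies the order of operations"},
        "B": {"answer": "14", "level": 3,
              "diagnosis": "Computes left to right; order-of-operations error"},
        "C": {"answer": "24", "level": 2,
              "diagnosis": "Multiplies every term; confuses the operators"},
        "D": {"answer": "9", "level": 1,
              "diagnosis": "Adds every term; ignores the multiplication"},
    },
}

def diagnose(item: dict, choice: str) -> tuple[int, str]:
    """Map a student's response to an achievement level and a
    targeted instructional diagnosis."""
    option = item["options"][choice]
    return option["level"], option["diagnosis"]

# A student who chooses "B" is not merely wrong: the response reveals
# a specific breakdown in understanding that the teacher can target.
level, diagnosis = diagnose(DIAGNOSTIC_ITEM, "B")
print(f"Level {level}: {diagnosis}")
```

In a full assessment system, such per-item diagnoses would be aggregated across many items to show a teacher where instruction should be targeted for each student.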

If measured differences in achievement are a function of learning, and if our society hopes to eliminate these differences, it seems necessary to probe deeply into learning. Validity is defined too narrowly if it focuses solely on measuring the right thing in the right way; that is, if it focuses only on the correct answer. Now, validity must also address how a student arrives at a wrong answer and ways that information can be used to assist with learning. Our energy must shift from separating students into groups who have learned and those who have not. It must move to a profound exploration of each student’s individual learning path. Reaching this goal will allow us to fully understand the real differences in student learning that are the root cause of the nation’s achievement gap.

Margaret A. Jorgensen is senior VP of product research and innovation for Harcourt Assessment (harcourtassessment.com), with responsibility for the research and conceptual design of innovative new products.


Forum Touts Cutting-Edge Assessment Practices

Harcourt Assessment (harcourtassessment.com) and T.H.E. Journal are sponsoring the 2005 Midwest Assessment Forum from Oct. 20-21 in Evanston, IL, adjacent to Northwestern University. The forum will provide ways to transform your school’s assessment scores, implement innovative programs, and show improved progress in student learning. Highlights of this year’s forum include roundtable discussions; breakout sessions on such topics as grants and funding and formative assessment strategies; and meetings on data-driven decision-making initiatives, student achievement in low-performing schools, and NCLB-related assessment issues and strategies. Speakers featured at the forum include top industry experts such as CoSN’s Irene Spero, Alan Endicott from the US Department of Education, and the University of North Alabama’s Mark Edwards. Registration for the forum is $199. For more information, call (800) 572-5373 or visit harcourtassessment.com/haiweb/Cultures/en-US/Events/Midwest+Assessment+Forum.htm.

This article originally appeared in the 09/01/2005 issue of THE Journal.
