Research: Improving Test Scores Doesn't Equate to Improving Abstract Reasoning

A team of neuroscientists at MIT and other institutions has found that even when schools take instructional steps that help raise student scores on high-stakes tests, that influence doesn't translate to improvements in learners' abilities to perform abstract reasoning. The research, which took place a couple of years ago, studied 1,367 then-eighth-graders who attended traditional, charter, and exam schools in Boston. (All were public schools.)

The researchers found that while some schools raised their students' scores on the Massachusetts Comprehensive Assessment System (MCAS) — a sign of "crystallized intelligence" — the same efforts did not result in comparable gains in "fluid intelligence." The former refers to the knowledge and skills students acquire in school; the latter describes the ability to analyze abstract problems and think logically.

"It's not always clear what dimensions you have to pay attention to to get the problem correct. That's why we call it fluid, because it's the application of reasoning skills in novel contexts," explained Amy Finn, an MIT postdoc and lead author of a paper on the research, which will soon be published in Psychological Science.

For example, one test of fluid reasoning asked students to choose which of six pictures completed a puzzle. This is a task, according to the researchers, that requires integration of multiple kinds of information: shape, pattern, and orientation.

"Our original question was this: If you have a school that's effectively helping kids from lower socioeconomic environments by moving up their scores and improving their chances to go to college, then are those changes accompanied by gains in additional cognitive skills?" said John Gabrieli, a professor of health sciences and technology and a senior author on the research. "As we started that study, it struck us that there's been surprisingly little evaluation of different kinds of cognitive abilities and how they relate to educational outcomes."

What the researchers found was that instructional practices in schools could account for 24 percent of score variation in English and 34 percent in math, but only 3 percent of the variation in fluid cognitive skills. "These findings suggest that schools that improve standardized achievement tests do so primarily through channels other than cognitive skills," the researchers wrote.

The researchers emphasized that the results of their work shouldn't be used to criticize schools that are succeeding in improving student test scores. After all, Gabrieli noted, "It's valuable to push up the crystallized abilities, because if you can do more math, if you can read a paragraph and answer comprehension questions, all those things are positive."

However, they said they do hope the research will encourage educators and policymakers to take into account instructional practices that enhance cognitive skills. Although programs such as those focused on improving memory, attention, executive function, and inductive reasoning have not yet been proven effective, evaluating them is a logical next stage for research, the study's authors asserted.

The researchers said they plan to continue tracking these students, who are now in 10th grade, to monitor how their academic performance and other life outcomes play out. They have also begun a project involving high school seniors to track how standardized test scores and cognitive abilities influence college attendance and graduation rates.

The project included researchers from the Center for Education Policy Research at Harvard University and Brown University. The work was funded by grants from the Bill & Melinda Gates Foundation and the National Institutes of Health.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
