What Was That Math Student Thinking? Cornell Prof Aims to Find Out

Researchers at Cornell University are working on software that will help math teachers understand the thinking that led their students to incorrect answers.


Erik Andersen, assistant professor of computer science at Cornell, said that teachers spend a lot of time grading math homework because grading is more complicated than just marking an answer as right or wrong.

"What the teachers are spending a lot of time doing is assigning partial credit and working individually to figure out what students are doing wrong," Andersen said in a prepared statement. "We envision a future in which educators spend less time trying to reconstruct what their students are thinking and more time working directly with their students."

To help teachers get through their grading and understand where students need more help, Andersen and his team have been building an algorithm that reverse engineers the way students arrived at their answers.

They began with a dataset of addition and subtraction problems solved — or not — by about 300 students and tried to infer what the students had done right or wrong.

"This was technically challenging, and the solution interesting," said Andersen in a news release. "We worked to come up with an efficient data structure and algorithm that would help the system sort through an enormous space of possible things students could be thinking. We found that 13 percent of these students made clear systematic procedural mistakes, and the researchers' algorithm learned to replicate 53 percent of these mistakes in a way that seemed accurate. The key is that we are not giving the right answer to the computer — we are asking the computer to infer what the student might be doing wrong. This tool can actually show a teacher what the student is misunderstanding, and it can demonstrate procedural misconceptions to an educator as successfully as a human expert."

Eventually, the researchers hope to develop a program that can offer teachers reports on learning outcomes to improve instruction and differentiation. For now, the tool works only with addition and subtraction problems, but the team plans to expand it to algebra and more complicated equations.

For more information, visit cs.cornell.edu.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
