Analysis

AI in Education Shows Most Promise for the Repetitive and Predictable

A new RAND report has concluded that using artificial intelligence in education shows promise, but only when it comes to supporting teachers with repetitive and predictable tasks.

According to author Robert Murphy, senior policy researcher for RAND Education, "the work of teachers and the act of teaching" can't be "completely automated like repetitive tasks taking place on the manufacturing floor." After all, he pointed out, "Good teaching is complex and requires creativity, flexibility, improvisation and spontaneity." Teachers need the ability to "think logically and apply common sense, compassion and empathy to deal with the everyday nonacademic issues and problems that arise in the classroom." These are abilities that even the most advanced AI systems lack.

However, AI has so far found a perch in three applications that address "core challenges" of teaching: intelligent tutoring systems, automated essay scoring and early warning systems that identify struggling students who may be at risk of not graduating.

AI, as Murphy explained, describes applications of software algorithms and methods that allow computers to simulate human perception and decision-making. The field can be divided into two broad categories. "Narrow" or "weak" AI refers to algorithms or code that perform a "single, specific function," such as a driverless vehicle responding to a stop sign differently from a yield sign. Siri and Alexa fall into this category too. "Strong" AI refers to applications that show "general intelligence." These are the ones that demonstrate reasoning capabilities, similar to humans, and a "common sense understanding of how the world works." They can solve novel problems without having preprogrammed knowledge of the task to be performed. These are also mostly "still an aspiration of the AI community," wrote Murphy.

It's the narrow type of AI that has been used so far in education, and it comes in two flavors: rule-based applications that run adaptive instructional programs, and applications that use machine learning to handle activities such as automated scoring of student writing.

Where topics are "amenable" to a rule-based AI architecture, AI can become an effective source for classroom instruction and student support, the report explained. In the first category, intelligent tutoring systems (ITSs) have been around for years to help students master concepts. Because an ITS requires input from experts and extensive programming to encompass the domain knowledge of a given topic, these systems have tended to work best in subjects with finite domains, such as math, the physical sciences, computer science and literacy.
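To give a sense of what "rule-based" means here, below is a minimal sketch in Python of the kind of hand-authored logic such a system encodes. The skill name, thresholds and actions are hypothetical, invented for illustration; they are not drawn from any product the report describes.

```python
# Minimal sketch of a rule-based adaptive tutor (hypothetical rules;
# not from the RAND report or any specific product).

from dataclasses import dataclass, field

@dataclass
class StudentModel:
    """Tracks estimated mastery (0.0 to 1.0) for each skill."""
    mastery: dict = field(default_factory=dict)

MASTERY_THRESHOLD = 0.8   # assumed cutoff for "skill mastered"
STEP = 0.1                # assumed adjustment per response

def update(model: StudentModel, skill: str, correct: bool) -> str:
    """Apply an expert-authored rule: adjust mastery, then pick an action."""
    score = model.mastery.get(skill, 0.5)
    score = min(1.0, score + STEP) if correct else max(0.0, score - STEP)
    model.mastery[skill] = score

    # Rule-based branching: the hand-coded logic an ITS relies on
    if score >= MASTERY_THRESHOLD:
        return "advance"                  # move on to the next skill
    if not correct and score < 0.3:
        return "show_worked_example"      # remediate before retrying
    return "give_hint" if not correct else "next_problem"

model = StudentModel()
for answer_ok in [False, False, True, True, True, True]:
    print(update(model, "fraction_addition", answer_ok))
```

The point of the sketch is that every branch is written by a human expert in advance, which is why building such a system for a broad, open-ended subject is so labor-intensive.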

And even when an ITS is used in the classroom, it's best suited for "independent learning time," Murphy said, such as for remediating skills, covering more advanced topics or finishing homework. Even those areas require the teacher to do "careful monitoring," since the systems vary in the level of support they provide to students. Teachers need to have time in their schedules to monitor student performance and progress and "intervene before these students experience frustration, lose initiative and disengage."

In the area of scoring student essays, AI has found success in MOOCs, powering the scoring of writing from "the thousands of students who may be enrolled in a single course." Current scoring engines can provide "basic feedback, guidance and model writing samples" to help guide students in improving their writing.
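As an illustration of the machine-learning flavor, here is a minimal sketch of one common approach to automated essay scoring: turning essays into TF-IDF text features and fitting a regression model to human-assigned scores. The essays and scores below are toy stand-ins, and the production engines the report describes are far more elaborate.

```python
# Minimal sketch of ML-based essay scoring (one common approach, shown
# here with toy data; not the specific engines the report describes).

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# Toy stand-in data: real systems train on thousands of human-scored essays.
essays = [
    "The experiment shows that plants grow faster with more light.",
    "plants good light grow",
    "Careful measurement over six weeks revealed a clear linear trend.",
    "it grew i think",
]
human_scores = [4.0, 1.0, 5.0, 1.5]  # hypothetical rubric scores

# Pipeline: text -> TF-IDF features -> regularized linear regression
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge(alpha=1.0))
model.fit(essays, human_scores)

new_essay = "The data suggest light exposure strongly affects growth rate."
print(round(float(model.predict([new_essay])[0]), 2))
```

Because the model only learns statistical patterns from its training essays, its judgments inherit whatever biases and blind spots that training set contains, a concern Murphy raises below.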

Likewise, early warning systems have been implemented in most K-12 districts and at even more colleges and universities, according to the report. As Murphy described, these systems use a mix of "fairly simple, rule-based prediction models, monitoring one or more key measures that had been identified in the research literature as important indicators of students straying off track and dropping out." When the indicators hit a threshold, the student is flagged for follow-up intervention.
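The "fairly simple, rule-based prediction models" Murphy describes can be pictured with a short sketch like the one below. The indicators and cutoffs are hypothetical, chosen only to mirror the kinds of measures the report mentions, such as attendance, grades and disciplinary referrals.

```python
# Minimal sketch of a rule-based early warning system (hypothetical
# indicators and thresholds; real districts tune these to local research).

THRESHOLDS = {
    "attendance_rate": ("below", 0.90),  # flag if attendance under 90%
    "gpa": ("below", 2.0),               # flag if GPA under 2.0
    "referrals": ("above", 2),           # flag if more than 2 referrals
}

def flag_student(record: dict) -> list:
    """Return the indicators that crossed their thresholds."""
    tripped = []
    for indicator, (direction, limit) in THRESHOLDS.items():
        value = record[indicator]
        if (direction == "below" and value < limit) or (
            direction == "above" and value > limit
        ):
            tripped.append(indicator)
    return tripped

student = {"attendance_rate": 0.86, "gpa": 2.4, "referrals": 3}
alerts = flag_student(student)
if alerts:
    print("Flag for follow-up:", ", ".join(alerts))
```

A system this simple is transparent, but as the next paragraph notes, its usefulness still depends entirely on the thresholds chosen and the quality of the data behind them.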

These "simple warning systems" aren't failsafe, Murphy warned. They're only as good as the models they're built on and the quality of the data being fed into them. Also, the "training data sets" may have human biases that aren't readily apparent; and there's too often a lack of transparency in how the decisions are being made.

The data questions regarding AI are a big source of its challenges too, Murphy said. Although student information systems for most large school districts and higher education institutions contain "a significant amount of digital data on students' family characteristics, courses taken, teachers, end-of-course grades, disciplinary and special education referrals and standardized achievement scores [for math and reading]," they lack "the fine-grained information on instruction and learning that is required to train a machine learning–based adaptive instruction system." That kind of data is usually only available from existing online instructional platforms, which limit who gets access. At the same time, data access issues are only getting more complicated due to privacy regulations, "even when the data are anonymized."

Murphy recommended that developers continue refining the use of machine learning in the three uses where it has already found some success. Along with that, however, software publishers need to do a better job of giving stakeholders — administrators, educators, parents and students — greater visibility into how their AI applications work.

He also suggested that AI use in teaching and learning be researched far more, to better understand two things: 1) "the effects of the products on teaching and learning and the products' cost-effectiveness relative to existing approaches"; and 2) the "unintended consequences that these systems might have on instructional decisions and opportunities as a result of possible learned bias in the algorithmic models or of inaccuracies in model predictions, recommendations and feedback."

For the time being, wrote Murphy, AI applications will best serve by playing "an assistive role, supporting rather than replacing teachers in their work with students in a limited set of content and topic areas that are most amenable to AI approaches."

The report is openly available on the RAND website.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
