
Study Points to Unaddressed Risks of Using Gen AI in K–12 Education

The nonprofit Center for Democracy & Technology (CDT) has released a study indicating that, although more teachers and students used generative AI in their schoolwork during the 2023-24 school year than in the prior year, few teachers received guidance on how to handle perceived or actual irresponsible and unethical AI use by students. That gap, the study finds, has led to troubling disciplinary action against protected classes of students.

The report, "Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust," suggests that the explosive acceptance and use of AI in schoolwork and lack of training about AI have caused teachers to become overly reliant on AI content detection tools, which are still mostly ineffective.

The report is based on a survey of 460 sixth- through 12th-grade teachers conducted in November/December 2023, and compares the results with a survey conducted in August 2023 covering the 2022-23 school year. In the current year, 60% more schools have allowed the use of generative AI than in the prior year. Results show that while AI use is up for both teachers and students, training in detection and in disciplinary responses to irresponsible use lags behind.

  • More teachers (80%) have received formal training in AI use policies and procedures, but only 28% have received guidance in disciplinary measures for its irresponsible or unethical use by students;
  • More teachers (69%) use AI content detection tools despite their questionable accuracy, and this use disproportionately affects "students who are protected by civil rights laws";
  • More teachers (64%) report students have gotten into trouble at school for using, or being perceived as using, generative AI on school assignments, "a 16 percentage-point increase from last school year"; and
  • More teachers (52%) are distrustful of whether their students' work is their own and not the work of generative AI, with a higher percentage of distrust in schools where AI use in schoolwork is banned.

Examples of "protected classes" of students getting in trouble for using AI include students with disabilities (76% of licensed special education teachers report high use among them) and students with individualized education programs (IEPs) or 504 plans, which provide accommodations for students with disabilities.

The report concludes that the increased use of AI in schoolwork and teachers' reliance on AI detection tools for academic integrity have "significant implications for students' educational experience, privacy, and civil rights," and advises that teachers need better training in how to "manage use (and misuse) on a day-to-day basis."

Visit CDT's release page for more information, including links to the full report, a slide deck on the findings, and other resources.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
