Study Points to Unaddressed Risks of Using Gen AI in K–12 Education

The nonprofit Center for Democracy & Technology (CDT) has released a study indicating that while more teachers and students used generative AI in their schoolwork during the 2023-24 school year than in the prior year, few teachers received guidance on how to handle perceived or actual irresponsible or unethical AI use by students, a gap that has led to troubling disciplinary action against protected classes of students.

The report, "Up in the Air: Educators Juggling the Potential of Generative AI with Detection, Discipline, and Distrust," suggests that the explosive growth of AI use in schoolwork, combined with a lack of training about AI, has caused teachers to become overly reliant on AI content detection tools, which remain largely ineffective.

The report is based on a survey of 460 sixth- through 12th-grade teachers conducted in November and December 2023, and compares the results with a survey conducted in August 2023 covering the 2022-23 school year. During the current year, 60% more schools allowed the use of generative AI than in the prior year. The results show that while AI use is up among both teachers and students, training in detection and in disciplinary responses to irresponsible use lags behind:

  • More teachers (80%) have received formal training in AI use policies and procedures, but only 28% have received guidance on disciplinary measures for its irresponsible or unethical use by students;
  • More teachers (69%) use AI content detection tools despite their questionable accuracy, and this use disproportionately affects "students who are protected by civil rights laws;"
  • More teachers (64%) report students have gotten into trouble at school for using, or being perceived as using, generative AI on school assignments, "a 16 percentage-point increase from last school year;" and
  • More teachers (52%) doubt whether their students' work is their own rather than the product of generative AI, with distrust highest in schools where AI use in schoolwork is banned.

Examples of "protected classes" of students getting in trouble for using AI include students with disabilities (76% of licensed special education teachers report high AI use among them) and students with individualized education programs (IEPs) or 504 plans, which provide specialized instruction and accommodations.

The report concludes that the increased use of AI in schoolwork and teachers' reliance on AI detection tools for academic integrity have "significant implications for students' educational experience, privacy, and civil rights," and advises that teachers need better training in how to "manage use (and misuse) on a day-to-day basis."

Visit CDT's release page for more information, including links to the full report, a slide deck on the findings, and other resources.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
