4 'Key Foundations' for AI in Education

Artificial intelligence (AI) is here to stay. The U.S. Department of Education has released guidance for the education sector, including potential benefits, warnings about misuse, and principles for developing guidelines at the district level.

During hearings in the U.S. Congress in May 2023, AI developers and key stakeholders urged lawmakers to take seriously the need for oversight of this technology and to act now to develop policies and laws that monitor and direct its responsible use. Every sector is being affected by AI, including the education system.

In its report, “Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations,” released in May 2023, the Office of Educational Technology (OET) of the U.S. Department of Education (ED) outlines several core messages. ED makes clear its support for ed tech, including AI, to improve teaching and learning, but emphasizes that knowledge about AI needs to be shared, support given to those using it, and policies developed for its safe and responsible use.

The report recognizes that the main components of AI are being “increasingly embedded in all types of educational systems,” shifting from “capturing data to detecting patterns in data” and from “providing access to instructional resources to automating decisions.”

The report expresses deep concern about the increasing delegation of responsibility to computer systems, as well as about bias in pattern detection and fairness in automated decisions.

The report acknowledges the opportunities AI presents: voice assistants, writing help, and trip planning for everyday users, and, for educators, support for students with disabilities, support for multilingual learners, help finding lesson materials, and assistance with lesson creation.

But there are risks as well: security and privacy breaches, inappropriate or inaccurate information, amplification of cultural or other biases, student plagiarism, and concerns about fairness.

The report also acknowledges the rapid rise and the benefits and risks of generative AI-enabled chatbots, but does not focus specifically on those.

During listening sessions in 2022 and 2023, attended by over 700 participants, OET recognized three main reasons to address the use of AI in education now: First, AI can help educate students at a lower cost, help them make up learning losses due to the pandemic, and customize learning to their specific culture and community.

Second, risks include increased student surveillance, dangers of “algorithmic discrimination” against some student populations, unfair exam monitoring systems, and inaccurate information.

Third, unintended or unexpected consequences may include AI adapting the pace of learning for specific students based on “incomplete data, poor theories, or biased assumptions about learning,” thus affecting achievement rates. Such consequences can also affect the hiring of teachers.

With these issues in mind, the report articulates guiding questions about what a collective vision for the use of AI in education looks like and what the timeline should be for developing guidelines to address it, based on four “key foundations”:

  1. Which AI technologies to use in education to keep people central in decision-making;
  2. How to advance equity and root out bias that could interfere with every student achieving success;
  3. How to ensure safety, ethics, and effectiveness in data privacy and security;
  4. How to ensure transparency and disclosure about how AI models work in terms of educational goals, as well as human control when needed.

Visit this page to read and download a summary handout of the report’s main points. A webinar going into more depth on the report will be held Tuesday, June 13, 2023, at 2:30 p.m. ET; sign up via the QR code on the summary handout.

The full report can be downloaded from this page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher and college English teacher.
