ED Releases Toolkit for Intentional Use of AI in Education

The United States Department of Education's Office of Educational Technology has released a new resource to help education leaders navigate AI adoption while ensuring student protection. Titled "Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration," the guidebook builds on ED's May 2023 report, "Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations," and is "designed to help educational leaders make critical decisions about incorporating AI applications into student learning and the instructional core."

The toolkit covers 10 key topic areas, or "modules": opportunities and risks; privacy and data security; civil rights, accessibility, and digital equity; understanding evidence of impact; considering the instructional core; planning an AI strategy; establishing a task force to guide and support AI efforts; building AI literacy for educators; updating AI policies and advocating for responsible use; and developing an organization-wide AI action plan. The 10 modules are organized into three sections that can be "accessed and revisited in any order depending on an educational leader’s unique needs and priorities," according to the report. Those sections are:

  • Mitigating Risk: Safeguarding Student Privacy, Security, and Non-Discrimination. "Awareness of applicable Federal laws, rules, and regulations is an essential first step when planning for the use of AI in schools and classrooms," the report notes. "This section invites leaders to learn about privacy and data security requirements; how civil rights, accessibility, and digital equity relate to AI; and a close consideration of the opportunities and risks associated with the use of AI."
  • Building a Strategy for AI Integration in the Instructional Core. Designed for education leaders who are engaged in the strategic planning process around the use of AI, this section "provides resources to support educational leaders in considering the evidence supporting AI-enabled tools, and guiding leaders through each of these three essential steps."
  • Maximizing Opportunity: Guiding the Effective Use and Evaluation of AI. This section covers the use of AI for both educator productivity and instruction, and "is appropriate for an educational leader who has a clear strategy in place for the use of AI, and who is ready to focus on guiding, shaping, and continually evaluating the use of AI in their community."

While educators can learn from the report in any order, the authors suggest that all the sections are important to AI success: "Regardless of which path an educational leader initially takes in this AI journey, we recommend navigating to the other modules in due course because the knowledge, questions, and actions in each of these three sections are designed to reinforce the others, together supporting the effective use of AI in education."

"Consider the metaphor of a mountain trek to represent the journey of incorporating AI in education. Like preparing for a challenging climb, achieving AI success requires careful planning, teamwork, and risk management," the report adds. "The trek-themed graphics in the toolkit highlight this proactive approach, reminding educational leaders of the importance of safety, ethics and equity no matter where they are on their AI journeys."

The full report is available on the Office of Educational Technology site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
