ED Releases Toolkit for Intentional Use of AI in Education

The United States Department of Education's (ED) Office of Educational Technology has released a new resource to help education leaders navigate AI adoption while ensuring student protection. Titled "Empowering Education Leaders: A Toolkit for Safe, Ethical, and Equitable AI Integration," the guidebook builds on ED's May 2023 report, "Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations," and is "designed to help educational leaders make critical decisions about incorporating AI applications into student learning and the instructional core."

The toolkit covers 10 key topic areas, or "modules": opportunities and risks; privacy and data security; civil rights, accessibility, and digital equity; understanding evidence of impact; considering the instructional core; planning an AI strategy; establishing a task force to guide and support AI efforts; building AI literacy for educators; updating AI policies and advocating for responsible use; and developing an organization-wide AI action plan. The 10 modules are organized into three sections that can be "accessed and revisited in any order depending on an educational leader's unique needs and priorities," according to the report. Those sections are:

  • Mitigating Risk: Safeguarding Student Privacy, Security, and Non-Discrimination. "Awareness of applicable Federal laws, rules, and regulations is an essential first step when planning for the use of AI in schools and classrooms," the report notes. "This section invites leaders to learn about privacy and data security requirements; how civil rights, accessibility, and digital equity relate to AI; and a close consideration of the opportunities and risks associated with the use of AI."
  • Building a Strategy for AI Integration in the Instructional Core. Designed for education leaders who are engaged in the strategic planning process around the use of AI, this section "provides resources to support educational leaders in considering the evidence supporting AI-enabled tools, and guiding leaders through each of these three essential steps."
  • Maximizing Opportunity: Guiding the Effective Use and Evaluation of AI. This section covers the use of AI for both educator productivity and instruction, and "is appropriate for an educational leader who has a clear strategy in place for the use of AI, and who is ready to focus on guiding, shaping, and continually evaluating the use of AI in their community."

While educators can learn from the report in any order, the authors suggest that all the sections are important to AI success: "Regardless of which path an educational leader initially takes in this AI journey, we recommend navigating to the other modules in due course because the knowledge, questions, and actions in each of these three sections are designed to reinforce the others, together supporting the effective use of AI in education."

"Consider the metaphor of a mountain trek to represent the journey of incorporating AI in education. Like preparing for a challenging climb, achieving AI success requires careful planning, teamwork, and risk management," the report adds. "The trek-themed graphics in the toolkit highlight this proactive approach, reminding educational leaders of the importance of safety, ethics and equity no matter where they are on their AI journeys."

The full report is available on the Office of Educational Technology site.

About the Author

Rhea Kelly is editor in chief for Campus Technology, THE Journal, and Spaces4Learning. She can be reached at [email protected].
