Report: AI R&D Should Align with ED Recommendations and Focus on Context, Partnership, and Public Policy

"AI is sometimes presented as a race to be the first to advance new techniques or scale new applications — innovation is sometimes portrayed as rapidly going to scale with a minimally viable product, failing fast, and only after failure, dealing with context," according to a new report, "Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations," by the Office of Educational Technology (OET) of the U.S. Department of Education (ED).

As far back as 2010, the National Education Technology Plan (NETP) set a research and development (R&D) challenge for ed tech developers to "create personalized learning systems that continuously improve as they are used."

The new AI report suggests further R&D goals and makes recommendations for AI ed tech developers, emphasizing "context sensitivity" as key to success in reaching educational goals.

"We look forward to new meanings of 'adaptive' that broaden outward from what the term has meant in the past decade. For example, 'adaptive' should not always be a synonym of 'individualized' because people are social learners. Researchers therefore are broadening 'adaptivity' to include support for what students do as they learn in groups," the report notes.

"The R&D focus on context must be prioritized early and habitually in R&D; we don't want to win a race to the wrong finish line," it adds.

R&D recommendations are made from these perspectives:

  • Attention to the "long tail of learner variability," that is, the multiple ways in which people engage in teaching and learning according to their "strengths and needs." This replaces the "teaching to the middle" philosophy.

  • Partnership in design-based research, the shift toward co-design with multiple stakeholders — teachers, students, parents, and others. A commitment to this can foster digital inclusion and generate discussions about the need for AI explainability, transparency, and responsibility.

  • Teacher professional development, recognizing that teachers are expected to adopt and embrace emerging ed tech, especially AI, yet receive too little training. Focus should be placed on increasing teachers' AI literacy.

  • Alignment with public policy efforts, including funding, to keep AI algorithmically unbiased, ethical, inclusive, private, and secure.

Based on these considerations, the report concludes with several recommendations for moving forward with the use of AI in ed tech:

  1. Keep "humans in the loop" so that a technology-enhanced future is "more like an electric bike and less like robot vacuums";

  2. Promote AI models that conform to "a shared vision for education," i.e., humans determining goals and evaluating such models, with heavy involvement from local, state, and federal policymakers, who should monitor developers and hold them accountable for overblown promises and unsupported claims;

  3. Design AI ed tech based on modern learning pedagogy;

  4. Strengthen public trust in AI by demonstrating its "safety, usability, and efficacy";

  5. Keep educators informed and involved in AI ed tech at every step and foster respect for their skills and value to society;

  6. Focus R&D on enhancing context, trust, and safety;

  7. Develop "guidelines and guardrails" for the use of AI ed tech.

Throughout the report, reference is made to the "Blueprint for an AI Bill of Rights," released by the White House in fall 2022. Five basic rights are outlined and elaborated: 1. Safe and effective systems; 2. Algorithmic discrimination protections; 3. Data privacy; 4. Notice and explanation; 5. Human alternatives, consideration, and fallback.

Visit this page for a summary handout of the report's main points. A webinar going into more depth on this report will be held Tuesday, June 13, 2023, at 2:30 p.m. ET. Signup is available by QR code on the handout page.

The full report can be downloaded from this page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
