Report: AI R&D Should Align with ED Recommendations and Focus on Context, Partnership, and Public Policy

"AI is sometimes presented as a race to be the first to advance new techniques or scale new applications — innovation is sometimes portrayed as rapidly going to scale with a minimally viable product, failing fast, and only after failure, dealing with context," according to a new report, "Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations," by the Office of Educational Technology (OET) of the U.S. Department of Education (ED).

As far back as 2010, the National Education Technology Plan (NETP) set a research and development (R&D) challenge for ed tech developers to "create personalized learning systems that continuously improve as they are used."

The new AI report suggests further R&D goals and makes recommendations for AI ed tech developers, keeping in mind a focus on "context sensitivity" for success in educational goals.

"We look forward to new meanings of 'adaptive' that broaden outward from what the term has meant in the past decade. For example, 'adaptive' should not always be a synonym of 'individualized' because people are social learners. Researchers therefore are broadening 'adaptivity' to include support for what students do as they learn in groups," the report notes.

"The R&D focus on context must be prioritized early and habitually in R&D; we don't want to win a race to the wrong finish line," it adds.

R&D recommendations are made from these perspectives:

  • Attention to the "long tail of learner variability," that is, the multiple ways in which people engage in teaching and learning according to their "strengths and needs." This replaces the "teaching to the middle" philosophy.

  • Partnership in design-based research, the shift toward co-design from multiple stakeholders — teachers, students, parents, and others. A commitment to this can foster digital inclusion and generate discussions about the need for AI explainability, transparency, and responsibility.

  • Teacher professional development: teachers are expected to adopt and embrace emerging ed tech, especially AI, yet receive too little training. Focus should be placed on increasing teacher literacy about AI.

  • Alignment with public policy efforts, including funding, to keep AI algorithmically unbiased, ethical, inclusive, private, and secure.

Based on these considerations, the report concludes with several recommendations for moving forward with the use of AI in ed tech:

  1. Keep "humans in the loop" so that a technology-enhanced future is "more like an electric bike and less like robot vacuums";

  2. Promote AI models that conform to "a shared vision for education," i.e., humans determining goals and evaluating such models, with local, state, and federal policymakers closely involved in monitoring developers and holding them accountable for overblown promises and unsupported claims;

  3. Design AI ed tech based on modern learning pedagogy;

  4. Strengthen public trust in AI by demonstrating its "safety, usability, and efficacy";

  5. Keep educators informed and involved in AI ed tech at every step and foster respect for their skills and value to society;

  6. Focus R&D on enhancing context, trust, and safety;

  7. Develop "guidelines and guardrails" for the use of AI ed tech.

Throughout the report, reference is made to the "Blueprint for an AI Bill of Rights," released by the White House in fall 2022. Five basic rights are outlined and elaborated:

  1. Safe and effective systems;

  2. Algorithmic discrimination protections;

  3. Data privacy;

  4. Notice and explanation;

  5. Human alternatives, consideration, and fallback.

Visit this page for a summary handout of the report's main points. A webinar going into more depth on this report will be held Tuesday, June 13, 2023, at 2:30 p.m. ET. Signup is available by QR code on the handout page.

The full report can be downloaded from this page.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
