Report: Best Use of AI Is in Augmenting (Not Replacing) Teachers

A new report is out to help teachers and school leaders understand how artificial intelligence can change education. "Artificial Intelligence (AI) in K-12," prepared by the Consortium for School Networking (CoSN), examines how the technology can augment what educators do to help students get "personalized instruction at scale" while also introducing "new challenges and considerations." The project was supported by Microsoft and CoSN's memorial fund, the Charles Blaschke Fund.

The report asserted that AI already exists in applications being used by schools, such as learning analytics platforms, online courseware and voice assistants, as well as within commonly used programs such as Microsoft Office, where AI recommends a PowerPoint layout or suggests a formula in Excel.

A big area of focus is how the education community should consider the use of AI in terms of "privacy, bias and literacy." As the authors noted, most AI technology has been designed and developed for commercial purposes, which means it pays little attention to state or federal privacy legislation meant to protect school-age children. Also, much of AI is driven by "black-box" algorithms, which may introduce "flaws and biases" into the interpretation of data. And educators will need a certain level of "algorithmic literacy" to use AI effectively.

The main message is that AI on its own won't replace the "presence of a high-quality teacher." Its "true promise," the report suggested, will require a "combination of high-tech and high-touch," using AI "to support great teachers and create new learning opportunities for students that take advantage of meaningful human relationships."

"Artificial intelligence has the power to advance education and supplement the learning process of each and every student with personalized instruction," said Keith Krueger, CEO of CoSN. "We are grateful for our partners' support on this project and hope this report gives school districts insight on how this groundbreaking technology can improve existing practices and broadly reshape education moving forward."

The full 14-page report is openly available via a link provided by CoSN.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
