SIIA Releases Guiding Principles for AI in Ed Tech

The Software & Information Industry Association (SIIA) today released a set of guiding principles for artificial intelligence in education technology.

The principles, called the "Education Technology Industry's Principles for the Future of AI in Education," were released at an event on Capitol Hill and were developed in conjunction with several companies involved in education and ed tech, including Pearson, D2L, Instructure, McGraw Hill, GoGuardian, and others.

The seven principles are (taken verbatim from SIIA):

  1. AI technologies in education should address the needs of learners, educators and families.

  2. AI technologies in education should account for educational equity, inclusion and civil rights as key elements of successful learning environments.

  3. AI technologies used in education must protect student privacy and data.

  4. AI technologies used in education should strive for transparency to enable the school community to effectively understand and engage with the AI tools.

  5. Companies building AI tools for education should engage with education institutions and stakeholders to explain and demystify the opportunities and risks of new AI technologies.

  6. Education technology companies and AI developers should adopt best practices for accountability, assurance and ethics, calibrated to mitigate risks and achieve the goals of these Principles.

  7. The education technology industry should work with the greater education community to identify ways to support AI literacy for students and educators.

The full document elaborates on each of these principles.

According to the organization: "SIIA believes that the successful deployment of AI technologies in education must be done in a way that supports those who use it, protects innovation in the field, and addresses the risks associated with the development and use of these new tools. AI should replace neither the educator nor the learning experience. The Education Technology Industry's Principles for the Future of AI in Education builds on experiences with and successes in using these technologies to advance educational objectives. These principles provide a framework for how we can look to the future of implementing AI technologies in a purpose-driven, transparent, and equitable manner."

"With AI being used by many teachers and educational institutions, we determined it was critical to work with the education technology industry to develop a set of principles to guide the future development and deployment of these innovative technologies,” said Chris Mohr, president of SIIA, in a prepared statement. "Partnering with teachers, parents, and students will be critical to improving educational outcomes, protecting privacy and civil rights, and understanding of these technologies. I commend our member companies who embraced this initiative to collaborate and for their commitment to support our children and teachers."

Further details can be found at edtechprinciples.com.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

