3 AI Imperatives for Schools in 2024
How will artificial intelligence impact schools and districts this year? We asked AI and education leaders for their predictions and thoughts on the most important issues to consider as the technology evolves and adoption expands. Here's what they told us.
1) Responsible AI Will Be Critical as Complex Issues Persist
In 2024, AI in education will continue evolving, with new architectures and transformer models like GPT-5 and Gemini 2 leading the way. However, the complexity of developing robust GenAI solutions might slow the adoption of open-source models in ed tech. Institutions will navigate among adopting AI assistants within existing applications, leveraging GUI-based low-code platforms, utilizing API connections to proprietary models, and building custom stacks for enhanced privacy.
The most crucial considerations for education institutions will be the accessibility and ethical implementation of these technologies. Adoption will vary, with AI assistants and low-code platforms likely gaining traction for their ease of integration and user-friendliness.
However, the need for data privacy and custom solutions might drive some toward locally hosted models or API-based connections to sophisticated services. Regardless of the approach, integrating responsible AI practices — like improving source attribution, debiasing datasets, and ensuring privacy — will be vital. These measures are not just ethical imperatives but also crucial for maintaining trust and efficacy in educational environments.
As we move into 2024, education institutions must weigh the promise of AI against these practical and ethical considerations, ensuring that the technology they adopt not only enhances learning but also aligns with the core values of education.
— Noble Ackerson, CTO, American Board of Design and Research
2) Institutions Must Take AI Skills Training Seriously
There's a crying need for faculty and staff professional development about generative AI. The topic is complicated and fast-moving. Already the people I know who are seriously offering such support are massively overscheduled. Digital materials are popular. Books are lagging but will gradually surface. I hope we see more academics leading professional development offerings.
For an academic institution to take emerging AI seriously, it might have to set up a new body. Present organizational nodes are not necessarily a good fit. For example, a computer science department can be of great help in explaining the technology, but might not have much experience supporting non-CS teaching with AI. Campus IT will probably be overwhelmed already, and might not have the academic clout needed to win the attention of some faculty and staff. Perhaps a committee or team is a better idea, with members drawn from a heterogeneous mix of the community. Not to be too alarmist, but we might learn from how some institutions set up emergency committees to handle COVID in 2020, bringing together diverse subject matter experts, stakeholders, and operational leaders. If a campus population comes to see AI as a serious threat, this might be a useful model.
This is the heroic age of generative AI, as it were, with major developments under way and many changes happening quickly. Things will most likely settle down in a bit, as new technologies become production-level services and as big money and governments start corralling AI for their ends, at least until the next wave hits. By this I mean colleges, universities, and individual academics have the opportunity to exert influence on the field while it's still fluid. As customers, as partners, and as intellectuals, we can engage with these AI efforts. The engagement can take various forms, including creating open-source projects, negotiating better service contracts with providers, lobbying for regulations, and issuing public scholarship. I hope campuses can grasp and support such work.
— Bryan Alexander, futurist and convener of the Future Trends Forum (excerpted with permission from "From 2023 to 2024 in AI, part 2")
3) Advancing AI Literacy Will Empower Innovation in Teaching and Learning
Within the next year, education will undergo a pivotal shift from enhancing digital fluency to advancing AI literacy and empowerment. This change is vital to align with the increasing presence of AI, transforming it from a mere tool into an integral part of academic and creative work.
The first phase is fostering AI literacy, where institutions will enrich curricula with AI principles, applications, and ethical considerations. This ensures the campus community is proficient not only in using AI but also in critically evaluating its impact and implications. Educators will recalibrate teaching to emphasize human insights and skills not replicable by AI, while preserving intellectual autonomy.
The progression toward AI empowerment will see institutions enabling innovative uses of AI in personalizing learning, advancing research, and enhancing administrative efficiency. This broader incorporation will transition AI from a complex computational entity to a partner in academia's collaborative fabric.
Realizing this vision necessitates ethical guidelines, strategic educational approaches, and secure, fortified digital infrastructure. The goal for the forthcoming year is comprehensive AI integration that enriches human capability and reflects academic integrity.
— Kim Round, Ph.D., founding partner, Instructioneers LxD
The K-12 world is slowly moving past the paradigm of viewing AI as a vehicle for students to cheat and as an obstacle educators must conquer to assess learning authentically. This year, I expect AI-driven gains in upskilling for educators and students to encourage schools to reimagine pedagogy broadly. Students will have opportunities to be more entrepreneurial in applying discipline-specific knowledge to projects that force them to engage more deeply with the world outside the classroom.
— Louis Tullo, chief technology officer, Ravenscroft School