Education's AI Safety Blind Spot: Only 6% of Organizations Red-Team Their Student-Facing AI

According to a recent global survey, only 6% of education organizations have conducted AI red-teaming. The Kiteworks Data Security and Compliance Risk: 2026 Forecast Report identifies this gap as among the most troubling in the study: AI systems that affect students, including minors, are being deployed without the adversarial testing that would surface vulnerabilities before attackers or unintended behaviors cause harm.

Education's Unique Risks

Education faces a unique combination of sensitive populations and limited security resources. The report surveyed 225 security, IT, and risk leaders across 10 industries and 8 regions. Education's AI security posture reflects chronic underinvestment: 84% lack AI anomaly detection, 74% lack network isolation, and 68% lack kill switches. The sector serves one of the most vulnerable populations — children — while maintaining one of the weakest AI security profiles.

Data Containment Control Failures

The containment control gaps are severe. The report found that 79% of education organizations lack purpose binding — the ability to enforce limitations on what AI agents are authorized to do. This means AI systems deployed for educational purposes can potentially access student data beyond their authorized scope, with no technical mechanism to prevent it. An AI tutoring system without purpose binding might access disciplinary records, health information, or family data if that data is accessible on the network. The report identified a 15- to 20-point gap between governance controls and containment controls across all industries. Education sits at the extreme end of this gap, with monitoring capabilities that allow observation but containment capabilities that cannot prevent harm.
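
To make purpose binding concrete, the sketch below shows a deny-by-default policy check that refuses any data request outside an agent's declared purpose. The purpose names, data categories, and helper functions are illustrative assumptions, not something specified in the report.

```python
from dataclasses import dataclass

# Illustrative only: a hypothetical policy layer showing what "purpose binding"
# could look like in practice. Purposes and data categories are assumptions.
ALLOWED_DATA_BY_PURPOSE = {
    "tutoring": {"coursework", "assignment_history"},
    "attendance_reporting": {"enrollment", "attendance"},
}

@dataclass
class AccessRequest:
    agent_id: str
    declared_purpose: str
    data_category: str  # e.g. "coursework", "health_records", "counselor_notes"

def is_access_allowed(request: AccessRequest) -> bool:
    """Deny by default: an agent may only touch data categories bound to its purpose."""
    allowed = ALLOWED_DATA_BY_PURPOSE.get(request.declared_purpose, set())
    return request.data_category in allowed

# A tutoring agent asking for disciplinary records is refused before any data moves.
req = AccessRequest("tutor-bot-01", "tutoring", "disciplinary_records")
assert is_access_allowed(req) is False
```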

Student data carries particular sensitivity. Beyond standard personally identifiable information, student records often include behavioral assessments, learning disability diagnoses, family situation details, counselor notes, and developmental observations. AI systems with broad access to this data — and without containment controls — create exposure that extends beyond privacy to potential harm. Data about minors in the wrong hands enables targeting, manipulation, and exploitation. The report found that 35% of organizations cite personal data in prompts as a top privacy exposure. In education, this means teachers and administrators may paste student information into AI assistants without technical controls preventing it.
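
One practical control here is a simple screening step in front of the assistant. The sketch below is an illustrative example of such a check; the identifier patterns and function names are assumptions, and a real deployment would rely on a vetted data loss prevention tool rather than ad hoc regular expressions.

```python
import re

# Illustrative pre-prompt screen a district could place in front of an AI assistant.
# The patterns are simplistic placeholders, not a complete or real detection rule set.
STUDENT_ID_PATTERN = re.compile(r"\b(?:STU|SID)-\d{6}\b")  # hypothetical ID format
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def screen_prompt(prompt: str) -> str:
    """Raise if the prompt appears to contain student identifiers; otherwise pass it through."""
    if STUDENT_ID_PATTERN.search(prompt) or SSN_PATTERN.search(prompt):
        raise ValueError("Prompt blocked: possible student personal data detected")
    return prompt
```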

AI Visibility Gaps

Like other sectors, education faces third-party AI exposure but with higher stakes, the report found. Education technology vendors increasingly embed AI in learning management systems, tutoring platforms, assessment tools, and administrative systems. Only 36% of organizations have visibility into how vendors handle data in AI systems. Education organizations deploying vendor AI that touches student data cannot see what those vendors' systems do with that data. They cannot verify whether vendor AI systems train on student information, whether student data crosses borders for processing, or whether vendor containment controls protect against unauthorized access.

The AI anomaly detection gap in education — 84% lacking — is among the highest of any sector. Education organizations cannot detect when AI systems begin behaving unexpectedly. For AI systems used in student assessment, behavioral monitoring, or personalized learning, unexpected behavior might not be obvious until it affects student outcomes. A grading AI that drifts toward bias, a recommendation system that develops problematic patterns, or a monitoring system that generates false positives could operate for extended periods before human review catches the problem. The report found that 60% of organizations globally lack AI anomaly detection; education's 84% rate represents a 24-point deficit even against an already inadequate baseline.
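
Anomaly detection for something like grading drift does not have to be elaborate to be better than nothing. The sketch below compares recent AI-assigned grades to a historical baseline and flags a shift; the threshold and the choice of statistic are assumptions for illustration, not figures from the report.

```python
import statistics

# Illustrative drift check for a grading AI's outputs. A real monitor would use
# the organization's own historical data and a proper statistical test.
def grade_drift_alert(baseline: list[float], recent: list[float], max_shift: float = 0.5) -> bool:
    """Return True if the recent mean grade drifts from the baseline mean
    by more than max_shift baseline standard deviations."""
    base_mean = statistics.mean(baseline)
    base_std = statistics.stdev(baseline) or 1.0  # guard against a zero-variance baseline
    shift = abs(statistics.mean(recent) - base_mean) / base_std
    return shift > max_shift
```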

Importance of Training Data Governance

The training data security dimension affects both AI systems and student privacy. The report found that 78% of organizations cannot validate data before training and 77% cannot trace training data provenance. Education AI systems may be training on student data whose consent status is unclear, whose retention compliance is unknown, and whose security cannot be assured. Training data that includes minors' information carries heightened protection requirements that most education organizations cannot demonstrate they have satisfied. The report also found that 53% of organizations cannot recover AI training data after an incident. When education organizations discover that AI systems were trained inappropriately on student data, they have no mechanism to remediate.
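
A minimal provenance record per training example is what makes that kind of validation and remediation possible. The sketch below shows one illustrative shape for such a record; the field names and eligibility rule are assumptions, not a standard the report prescribes.

```python
from dataclasses import dataclass
from datetime import date

# Illustrative provenance metadata attached to each training record, so consent
# and retention can be checked before training and affected data located afterward.
@dataclass(frozen=True)
class TrainingRecordProvenance:
    record_id: str
    source_system: str        # e.g. the LMS or SIS the record came from
    consent_status: str       # "granted", "withdrawn", "unknown"
    retention_expires: date
    contains_minor_data: bool

def eligible_for_training(p: TrainingRecordProvenance, today: date) -> bool:
    """Only records with documented consent and unexpired retention are used."""
    return p.consent_status == "granted" and today <= p.retention_expires
```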

Incident Response Challenges

The incident response gaps are particularly concerning given education's resource constraints. The report found that 89% of organizations have never practiced incident response with vendors. Education organizations, typically operating with limited security staff, face incidents involving vendor AI systems without playbooks, without practice, and without coordinated response procedures. The report found that 87% lack joint incident response playbooks with vendors. When an ed tech vendor's AI system is compromised, education organizations will improvise their response while student data remains at risk.

Governance and Leadership

The report found that 72% of organizations cannot produce a reliable inventory of their software components. Education organizations deploying AI systems — often acquired through district-level decisions with limited security review — cannot identify what components those systems contain. The AI supply chain visibility in education is likely worse than the already poor global average. When vulnerabilities are discovered in AI dependencies, education organizations will scramble to determine which student-facing systems are affected.
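
Even a basic machine-readable inventory, such as a CycloneDX-style SBOM exported as JSON, would let an organization answer that question quickly. The sketch below shows an illustrative lookup; the file name and package name are placeholders, not real systems.

```python
import json

# Illustrative lookup against a component inventory exported as JSON in a
# CycloneDX-like shape ({"components": [{"name": ..., "version": ...}, ...]}).
def systems_using_component(sbom_path: str, package_name: str) -> list[str]:
    """Return the inventoried components matching a named (e.g. vulnerable) package."""
    with open(sbom_path, encoding="utf-8") as f:
        sbom = json.load(f)
    return [
        f"{c.get('name')}@{c.get('version', 'unknown')}"
        for c in sbom.get("components", [])
        if c.get("name") == package_name
    ]

# Example: which inventoried systems bundle a hypothetical vulnerable inference library?
# affected = systems_using_component("district-sbom.json", "example-inference-lib")
```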

Board and leadership engagement compounds the problem. The report found that 54% of boards globally are not engaged on AI governance, and organizations without board engagement are 26–28 points behind on every AI maturity metric. Education governance structures — school boards, district leadership, university trustees — often lack cybersecurity expertise and may not recognize AI security as requiring specific attention beyond general technology oversight. When leadership does not ask about AI security, organizations do not invest in it.

The report identified audit trails as a keystone capability: Organizations with evidence-quality audit trails show 20- to 32-point advantages across every AI metric. Education organizations with fragmented logs across learning management systems, student information systems, and various ed tech platforms cannot reconstruct what AI systems accessed, what decisions they influenced, or what data they processed. When incidents occur or parents ask questions, education organizations cannot provide evidence-quality answers.
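
For illustration, the sketch below shows one common way to make audit entries tamper-evident by chaining each entry to the hash of the previous one. The field names and design are assumptions for the example, not the report's prescribed format.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative append-only audit entry for AI data access, chained for integrity.
def append_audit_entry(log: list[dict], agent_id: str, action: str, data_ref: str) -> dict:
    """Append a tamper-evident entry that chains to the previous entry's hash."""
    prev_hash = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,          # e.g. "read", "score", "recommend"
        "data_ref": data_ref,      # a reference, never the student data itself
        "prev_hash": prev_hash,
    }
    entry["entry_hash"] = hashlib.sha256(json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)
    return entry
```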

Looking Ahead

The path forward requires education to prioritize student protection in AI deployment decisions. Red-teaming should be mandatory before deploying AI systems that access student data or affect student outcomes — the 6% rate is unacceptable for a sector serving children. Vendor AI contracts should require security attestations that education organizations can verify, not merely trust. Containment controls — kill switches, purpose binding, network isolation — should be prerequisites for any AI system accessing student information. The report's finding that 100% of organizations have AI on their roadmap applies to education as well — deployment will accelerate regardless of security readiness.
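
Of those controls, a kill switch is conceptually the simplest. The sketch below shows the basic idea of a shared halt flag that every agent task must check before acting; it is an illustration of the concept, not any particular product's mechanism.

```python
import threading

# Illustrative kill switch: a shared flag every AI agent task checks before acting.
class KillSwitch:
    def __init__(self) -> None:
        self._halted = threading.Event()

    def trip(self) -> None:
        """Operator action: halt all agent activity immediately."""
        self._halted.set()

    def check(self) -> None:
        """Agents call this before each action; raises once the switch is tripped."""
        if self._halted.is_set():
            raise RuntimeError("AI agent halted by kill switch")
```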

The report projects that AI deployment will continue accelerating across all sectors. Education organizations that deploy AI without security controls will accumulate exposure faster than they build defenses. The gap between AI adoption and AI security is dangerous in any sector; in a sector serving children, it is unconscionable. The report found that organizations just starting their AI journey are 33–42 points behind on containment controls. Many education organizations fall into this category, deploying AI without the governance infrastructure that more mature organizations have built through experience.

The uncomfortable reality is that education organizations are deploying AI systems that access student data — including data about children — without testing whether those systems can be compromised, without the ability to detect when they misbehave, and without the ability to stop them quickly when something goes wrong. The sector protecting some of the most vulnerable populations has some of the weakest AI security controls. The 6% red-teaming rate is not a resource constraint to be addressed when budgets allow — it is a failure to protect children that requires immediate correction.
