Report: 96% of School Apps Send Kids’ Personal Data to Potentially Harmful Third Parties

Internet Safety Labs Randomly Tested Apps Used in Every State by Almost Half a Million Students

In a new K–12 ed tech safety benchmark report, “School Mobile Apps Student Data Sharing Behavior,” Internet Safety Labs finds that 96% of all apps used in schools share children’s personal information with third parties without the knowledge or consent of the users or the schools.

The report, the first of four in the works, is based on research by ISL first published in May 2021. The data was gathered from a random sample of 13 schools in each of the 50 states and the District of Columbia for a total of 663 schools and 455,882 students.

According to ISL, 78% of the student data shared with third parties was shared with advertising and data analytics entities.

ISL tested 1,357 apps recommended or required by schools. Applying four safety score outcomes (high risk, some risk, do not use, and unable to test), it found that 78% of apps were scored “do not use” and 18% were “high risk.”

Key Findings Concerning Student Privacy and Safety

  • 74.9% of the apps tested were scored “very high risk” for having SDK components that were “likely to share data with high-risk entities,” ISL said.
  • Location information was accessed by 79% of the apps, based on permission analysis.
  • 52% of the apps tested accessed the students’ calendar and contacts information.
  • 72% of K–12 schools’ “top 25 recommended apps” and 56% of the top 25 mandatory apps were scored “do not use” by ISL.
  • 28% of apps tested were not specifically made for education, the report noted, “such as The New York Times, YouTube, or Spotify, effectively providing no limits or guardrails for children.”

The report also details child safety concerns about Google, noting that Google “dominates K–12 ed tech as the prime supplier of both hardware and software.” According to the report, 75% of schools provided Google Chromebooks or tablets to students, while 34% provided Apple devices. Of the apps sending out data, 68% sent data to Google and 36% to Apple.

ISL recommended that schools and local education agencies become more cautious about adopting new ed tech — adding that schools need more support and financial resources to understand and implement it.

The organization noted it had seen “no evidence that ed tech developers prioritize safety” and said the industry “needs to join the conversation on software product safety for students.” The report argued that changes are not out of reach because only “a handful of key developers provide most of these apps to schools across the US… [t]hey should be able to readily make the necessary safety improvements.”

Read or download the full ed tech safety report from the Internet Safety Labs website.

Internet Safety Labs is a 501(c)(3) non-profit, independent software product safety organization dedicated to consumer safety. Learn more at InternetSafetyLabs.org.

About the Author

Kate Lucariello is a former newspaper editor, EAST Lab high school teacher, and college English teacher.
