Media Management Tool for Schools Enters U.S. Market, Addresses Student Image Privacy Concerns

Digital media management company Pixevety has made its consent-driven photo, video, and data management platform available to U.S. school districts, the company said in a news release.

Based in Australia, Pixevety protects “millions of assets and nearly 1 million families globally with cutting-edge technology that enables schools to safeguard their photo, video and data management administration and consent” in accordance with strict GDPR policies, according to Pixevety’s website. 

“Recent litigation by American parents against social media companies should be sounding alarm bells for U.S. schools about their own obligation to protect the online privacy and security of their students,” said Pixevety CEO Colin Anson. “Years after the E.U., Great Britain and nations around the world imposed tough legal standards on organizations collecting personal data, American schools can finally access GDPR-compliant technology to safeguard their students’ photos, videos and data.” 

The number of student school photos publicly available online is “staggering,” Pixevety said, “creating major concerns around child tracking that parents may not have considered”: 

  • 20 million student photos have been shared online by U.S. public schools and districts 

  • In about 4.9 million of those images, students are identifiable 

  • 726,000 images also contain the full names and approximate locations of students 

“It’s time for U.S. schools to get onboard with online security standards used around the world,” said Anson. “When we launched Pixevety over a decade ago, our vision was to create an exceptional media management system for schools that not only embraced GDPR and consent protocols but also offered schools efficient, secure and automated privacy tools. Today, we have surpassed those initial benchmarks with technology that is fully encrypted, allowing parental photo consent in real time while addressing student online privacy needs.” 

The company said its platform provides built-in privacy-by-design, AI and photo consent technology, enabling schools to: 

  • Access and implement best practice safeguards for storing, managing and sharing media 

  • Automate the entire photo consent process to ensure schools respect the privacy of all members 

  • Safely capture media “on the go” with the Pixevety mobile app and central storage 

  • Efficiently organize and tag media with Pixevety’s smart Virtual Archivist, built on ethical AI technology 

  • Share media safely to build a lifetime of engagement 

Learn more at Pixevety.com.

About the Author

Kristal Kuykendall is editor of the 1105 Media Education Group. She can be reached at [email protected].

