Nonprofit Launches K–12 Edtech Privacy Evaluation Platform

The new platform from Common Sense tests the privacy of apps commonly used in classrooms.

Schools looking for an easier way to evaluate the privacy and security practices of the thousands of education technology applications on the market now have a new tool for assessing the apps students commonly use.

Common Sense Education, a San Francisco-based nonprofit that works with more than 100,000 schools to help students harness technology for learning, has launched the K–12 Edtech Privacy Evaluation Platform in collaboration with more than 70 schools and districts throughout the United States.

"Evaluating the privacy and security practices of educational software is a daunting task for most schools and districts, but it doesn't have to be," said James P. Steyer, founder and CEO of Common Sense, in a prepared statement. "By working together with educators, Common Sense has developed a comprehensive, centralized, and free resource to help an education community that is spread out across the country learn from each other and make more informed decisions about protecting student privacy."

On the platform, each app undergoes the following four evaluations:

  • A transparency evaluation that identifies the thoroughness of the policy;
  • A qualitative evaluation that clarifies the strengths and weaknesses in the policy;
  • A summary evaluation that, based on the qualitative evaluation, organizes strengths and risks into four categories of safety, privacy, security and compliance; and
  • An app evaluation that provides an overall summary of an app's strengths and potential risks.

The tool grew out of an initiative formed by several school districts, including Fairfax County Public Schools and Houston ISD, to address the complexity and variety of privacy policies. The initiative approached Common Sense in 2014, and the Bill & Melinda Gates Foundation and the Michael & Susan Dell Foundation later provided funding for the platform.

To explore the K–12 Edtech Privacy Evaluation Platform, visit the Common Sense Graphite site.

About the Author

Sri Ravipati is Web producer for THE Journal and Campus Technology. She can be reached at [email protected].
