California DOJ Warns Ed Tech Companies to Only Collect Necessary Student Data

Source: California Department of Justice, Office of the Attorney General.

The California Department of Justice is urging education technology companies to abide by the state’s privacy laws and be cautious with data mining. California Attorney General Kamala Harris earlier this week released “Ready for School: Recommendations for the Ed Tech Industry to Protect the Privacy of Student Data,” a report outlining best practices for ed tech companies operating in the Golden State.

In California, it is illegal for third-party providers to collect and sell student information obtained through services with schools and districts (as outlined in AB 1584), or to use the information to market products to students (SB 1177). Harris encourages companies not to collect student information beyond what is necessary to accomplish the educational goals set forth by the school. Additionally, she strongly recommends that companies not sell student information, except as part of an acquisition or merger.

Other key recommendations offered in the report include:

  • Minimizing data collection and data retention and using data for strictly educational purposes;
  • Keeping contractual commitments to not disclose or sell student information;
  • Establishing policies that enable parents and legal guardians to fully understand how collected student data is used and maintained; and
  • Implementing security measures to protect student information.

To develop these recommendations and others, Harris created the Privacy Enforcement and Protection Unit. The unit consulted “a wide range of stakeholders including the Ed Tech industry, educators and privacy and consumer advocates,” the report states.  

“Organizations that make use of student data must take every step possible to be transparent with parents and schools and to protect student privacy,” Harris wrote in the report. “As the devices we use each day become increasingly connected, it’s critical that we implement robust safeguards for what is collected, how it is used, and with whom it is shared.”

In addition, the state attorney general’s office created an online complaint form for the public to report any online services or mobile applications that violate the California Online Privacy Protection Act, which includes ed tech products and services.

The full report is available on the California Department of Justice site.

About the Author

Sri Ravipati is a Web producer for THE Journal and Campus Technology. She can be reached at [email protected].
