Study Recommends Total Ban on Facial Recognition in Schools

Plots in the popular British TV show MI5 frequently relied on facial recognition at train stations and high-rises to hunt down international terrorists among the crowds passing surveillance cameras. In reality, facial recognition isn't ready for prime time. That's the finding of a research project at the University of Michigan, where a study by researchers in the Ford School of Public Policy specifically cited the technology's heightened risk of racism and potential for privacy erosion.

Among the problems with the technology highlighted in the report, "Cameras in the Classroom: Facial Recognition Technology in Schools," are these:

  • Facial recognition works most accurately with white men and "much less accurately" with people of color, children, women, gender non-conforming people and disabled people. As a result, school security that uses facial recognition has the potential to "take existing racial biases and make them worse, causing more surveillance and humiliation of black and brown students."

  • Facial recognition systems "will make surveillance a part of everyday life for young people"; once it's installed for one purpose, it "will be expanded to other uses," without students knowing or consenting. That includes the use of student personal data by companies in ways that students won't know about or be able to control.

  • The technology "punishes nonconformity." The authors asserted that facial recognition will force students to dress and appear in specific ways so as not to be called out. Its accuracy, they wrote, is higher for "white, male, cisgender and non-disabled people." Students who don't fit those categories may not be counted for attendance or recognized as having a school account for purchasing lunch.

The project was undertaken by researchers in the Ford School's Science, Technology, and Public Policy (STPP) Program and arrives at a time when schools are debating the use of facial recognition products to track students and automate attendance. Previously, the technology gained interest as a possible school security measure, to ensure that people on watchlists couldn't enter school buildings.

"We have focused on facial recognition in schools because it is not yet widespread and because it will impact particularly vulnerable populations," said Shobita Parthasarathy, STPP director and professor of public policy, in a statement. "The research shows that prematurely deploying the technology without understanding its implications would be unethical and dangerous,"

To understand how use of the technology might unfold, the researchers examined the impact of similar security mechanisms, including CCTV cameras, metal detectors and biometric devices.

"Some people say, 'We can't regulate a technology until we see what it can do.' But looking at technology that has already been implemented, we can predict the potential social, economic and political impacts, and surface the unintended consequences," said Molly Kleinman, STPP's program manager.

Though the study recommends a total ban on the technology, the authors also provide a list of 15 policy recommendations for decision-makers at the national, state and school district levels who may be considering it, as well as a set of sample questions for stakeholders such as principals and teachers or parents and students.

Among the guidance, the report advised schools to convene a diverse group of stakeholders to conduct a "thorough evaluation" of facial recognition, including its ethical implications, the transparency of the data and algorithms it uses, and the accuracy of its face-matching for all kinds of people. Sample questions included: How often will data be deleted? Can students opt out? How will you ensure the technology is not used beyond its original intended purpose? And where does the data used to train the algorithm come from?

The full report, executive summary and other materials related to the study are openly available on the university website.

The authors of the study will host a webinar on their findings on Sept. 16, 2020, at 1 p.m. Eastern time. Register through the university website.

About the Author

Dian Schaffhauser is a former senior contributing editor for 1105 Media's education publications THE Journal, Campus Technology and Spaces4Learning.
