Robotic Infant Simulators Actually Promote Teen Pregnancy, Study Finds

Robotic infant simulators — or robot babies — used by schools to help prevent teen pregnancy actually have the opposite effect, according to a study published in the British medical journal The Lancet.

Australian researchers randomly assigned nearly 3,000 teenage girls to one of two groups. Some received an automated doll (programmed to cry, sleep, eat and soil its diapers on a realistic schedule, with sensors to track whether students were caring for it properly), while others received only standard health education. The researchers tracked the girls until they turned 20, using records from hospitals and abortion clinics.

The results were surprising: 8 percent of the girls who received an infant simulator ended up giving birth, compared to just 4 percent of those who received standard health education. And 9 percent of the girls who received a robot baby had an abortion, compared to 6 percent who received standard health education.

“The infant simulator-based VIP program did not achieve its aim of reducing teenage pregnancy,” the researchers determined. “Girls in the intervention group were more likely to experience a birth or an induced abortion than those in the control group before they reached 20 years of age.”

Wisconsin-based Realityworks is the largest provider of infant simulators to schools in the United States and abroad. The company estimates it controls 95 percent of the infant-simulator market.

Shortly after the study was released, the company issued a statement calling it "deeply flawed" and dismissing the researchers' findings as "junk science," despite the study's rigorous methodology. The company complained that the Australian schools in the study did not use the full Realityworks curriculum.

“The study had nothing to do with us, our curriculum or our RealCare Baby infant simulators, nor are its conclusions about us credible,” the company said in its statement.

A Bloomberg Businessweek investigation late last year found that two-thirds of U.S. school districts buy some kind of infant simulator, and that the Realityworks model (which costs about $650 apiece) had become “a staple of American education, reaching more than 6 million students at 17,000 schools.”

In addition to robot babies, Realityworks now produces many other "experiential learning technologies," including a growing number of simulators for career and technical education programs such as nursing, welding and animal science, Education Week reported last week.

About the Author

Richard Chang is associate editor of THE Journal. He can be reached at [email protected].
