Student Device Access Skews Along Income, Racial Lines

A recent study on the "digital divide" among high school students shows improving device access, but persistent barriers for historically underprivileged populations.

Testing nonprofit ACT surveyed thousands of high school students last winter and found that the vast majority of them had access to at least one smartphone (96%) and laptop (87%) at home.

Moreover, the percentage of students with more than one device at home has ticked up since 2018, the last time ACT collected data on the digital divide. Among students from low-income families (those with annual incomes below $36,000), 82% reported having between two and four devices at home, up from 72% in 2018. Among students from moderate-income families ($36,000 to $100,000), the share with two to four devices rose to 93%, compared to 86% six years ago.

Despite these aggregate improvements, however, the survey also revealed disparities that fall stubbornly along income and racial lines.

For instance, only 76% of low-income students said they own a laptop, compared to 92% of high-income (over $100,000) students. Conversely, only 58% of high-income students reported having a monthly smartphone plan, compared to 70% of low-income students — suggesting that low-income students are more likely to rely on smartphones (instead of PCs) as their only internet-connected device.

Black and Hispanic students are also more likely to struggle with internet access compared to their white and Asian counterparts. For instance, though the percentage of students relying on dial-up internet (as opposed to more reliable broadband) is low, Black and Hispanic students are more likely to use it than Asian and white students.

These gaps could have serious ramifications for students, particularly as basic technological proficiency becomes more critical to their success, noted ACT CEO Janet Godwin.

"[D]isparities in access continue to prevent students from engaging in online learning and completing assignments," Godwin said in a prepared statement. "This divide also could affect students' ability to develop digital literacy skills, which are essential to preparing students for the challenges of consuming content in an AI-driven world. We are seeing gains in critical areas of technology access compared to our 2018 findings, but they are not enough to bridge this divide."

The ACT's full report, titled "How High School Students Use and Perceive Technology at Home and School," is available to download here.

About the Author

Gladys Rama (@GladysRama3) is the editorial director of Converge360.
