Chronic Absenteeism Severe, Especially Among Historically Marginalized Groups

An analysis of data from more than 325,000 preK–12 students found that chronic absenteeism reached severe levels between 2022 and 2023 and that disparities between demographic groups are growing.

School Innovations and Achievement (SI&A) analyzed data from students in 30 California districts between March 2022 and March 2023 and found that, over that period, those students missed a total of more than 15 million hours of school (an average of roughly 43.5 hours each) and that one-third of them had missed 10% or more of the school year, the threshold for chronic absenteeism.

Further, according to SI&A, "Historically marginalized student groups continue to have higher rates of absenteeism and the differences in attendance rates by student groups are growing. This has implications for equity when considering academic recovery."

SI&A, which provides tools for tracking and managing student attendance, noted that attendance is a significant predictor of student success and that targeting families with interventions early on can have a positive impact on student attendance. "We know that school attendance is the number one predictor of student success, which underscores the urgency of finding effective interventions for the growing rate of chronic absenteeism in U.S. schools," said Erica Peterson, SI&A national education manager and a co-author of the report, in a prepared statement. "Interventions focused on areas such as school-home communication and relationship building need to be prioritized as districts work to support good attendance habits and get students back on track academically."

SI&A said the key is communication "with targeted, positive messaging to all families and home adults at all levels about the importance of good attendance habits." Addressing language and technology barriers in those communications is also critical.

The complete report, "Chronic Absence Patterns Across California Schools," is freely available via SI&A's website.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

