Information of 700 Students Exposed in Assessment Company Data Breach

Information of nearly 700 students was accessed in a breach of Questar Assessment, according to a statement released by the company.

The company, which administers math and English language arts tests, first became aware of a breach that revealed the information of 52 students in New York. Following that breach, which the company said was perpetrated by a former employee, Questar found that the records of 663 students in Mississippi had also been breached. The students attended Tupelo Middle School, Tupelo High School and Jefferson County Junior High in 2016.

Names, state ID numbers, grade levels, test results and teacher names were all among the information accessed. State education representatives said they do not share addresses or Social Security numbers with the company, according to an Associated Press report about the breach.

"What I was reassured of as far as what would be deemed harmful data, that type of information was not breached," said Gearl Loden, superintendent of Tupelo Public School District, according to a report in the Daily Journal. "I hate that we've had a breach, but I am thankful, though, that it isn't any harmful data based on the information I received."

The Mississippi State Department of Education plans to send letters to all the students affected, and State Superintendent Carey Wright has demanded that Questar reset all passwords, undergo an outside security audit and offer a corrective plan this month.

"We take seriously the protection of student information, and any violation of privacy is absolutely unacceptable. We are actively pursuing immediate corrective action from Questar," Wright told the Clarion Ledger.

Questar was awarded a 10-year, nearly $111 million contract with the state's department of education in 2015. It won a second contract, worth $2.2 million, in June after another vendor incorrectly scored almost 1,000 history tests.

About the Author

Joshua Bolkan is contributing editor for Campus Technology, THE Journal and STEAM Universe. He can be reached at [email protected].
