Turnitin AI Detector Analyzed 38M Submissions in its First 6 Weeks; Updates Answer Educator Feedback

In the first six weeks of educators using Turnitin’s new AI writing detection feature, the platform processed 38.5 million submissions, finding that 3.5% of those submissions contained more than 80% AI-written text, and just under one-tenth of submissions contained at least 20% AI-written text.

In a new blog post, Turnitin Chief Product Officer Annie Chechitelli explains the findings and details a few tweaks to the platform’s AI detection feature, in response to feedback from educators using it since its launch in early April.

Updates to the AI detection feature include:

  • Asterisk Added to Scores Under 20%: An asterisk will now appear next to the indicator “score” — or the percentage of a submission considered to be AI-written text — when the score is less than 20%, since the analysis of submissions thus far shows that false positives are higher when the detector finds less than 20% of a document is AI-written. The asterisk indicates that the score is less reliable, according to the blog post. 

  • Minimum Word Count Raised: The minimum number of words required for the AI detector to work has been raised from 150 to 300, because the detector is more accurate the longer a submission is, Chechitelli said. “Results show that our accuracy increases with a little more text, and our goal is to focus on long-form writing. We may adjust this minimum word requirement over time based on the continuous evaluation of our model.”

  • Changes to Detector Analysis of Opening and Closing Sentences: “We also observed a higher incidence of false positives in the first few or last few sentences of a document,” Chechitelli said. “Many times, these sentences are the introduction or conclusion in a document. As a result, we have changed how we aggregate these specific sentences for detection to reduce false positives.”
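The reporting rules described above can be sketched in a few lines of code. This is purely illustrative and does not reflect Turnitin's actual implementation; the thresholds come from the article, while the function and constant names are invented for this example.

```python
# Hypothetical sketch of the score-display rules described in the article.
# Thresholds (300-word minimum, 20% asterisk cutoff) are from the blog post;
# everything else here is an invented illustration, not Turnitin's code.

MIN_WORDS = 300             # submissions below this length are not analyzed
LOW_CONFIDENCE_CUTOFF = 20  # scores under 20% are marked as less reliable

def format_ai_score(word_count: int, ai_percent: float) -> str:
    """Return a display string for a submission's AI-writing score."""
    if word_count < MIN_WORDS:
        return "Not analyzed (submission under 300 words)"
    if ai_percent < LOW_CONFIDENCE_CUTOFF:
        # The asterisk flags scores where false positives are more likely.
        return f"{ai_percent:.0f}%*"
    return f"{ai_percent:.0f}%"
```

For example, a 500-word essay scored at 15% AI-written would display as "15%*", signaling to the educator that the score is less reliable.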

In their feedback, instructors and administrators cited false positives as their main concern, both for "AI writing detection in general and in specific cases within our writing detection," according to the blog post. Since the release of the detection feature, Turnitin has seen that "real-world use is yielding different results" from the lab tests performed during development, Chechitelli said.

The findings follow Turnitin’s investigation of cases where educators flag a submission for additional scrutiny due to questionable detection results, and an additional study of 800,000 academic writing samples — written before the release of ChatGPT — run through Turnitin’s AI detector.

Other findings from the detector’s first six weeks in use by educators include confusion about how to interpret Turnitin’s scores or AI writing metrics, Chechitelli said. 

She explained that the detector calculates two different statistics: the AI writing metric at the document level and at the sentence level.

As a result of educator feedback, “we’ve updated how we discuss false positive rates for documents and false positive rates for sentences,” she said. 

For documents containing over 20% AI writing, Turnitin's document-level false positive rate is less than 1%, a figure again validated by the new analysis of 800,000 pre-ChatGPT writing samples. This translates into fewer than one human-written document out of every 100 being incorrectly flagged as AI-written, Chechitelli said.

“While 1% is small, behind each false positive instance is a real student who may have put real effort into their original work,” she said. “We cannot mitigate the risk of false positives completely given the nature of AI writing and analysis, so, it is important that educators use the AI score to start a meaningful and impactful dialogue with their students in such instances.”

Turnitin has published a guide for educators on how to handle false positives on its website. 

The sentence-level false positive rate is slightly higher at around 4%, according to the blog post; the company’s analysis of results since the detector’s launch found that the false positive incidence is more common in documents with a mix of human- and AI-written text, “particularly in the transitions between human- and AI-written content,” Chechitelli said. 

Findings on false positives at the sentence-level:

  • 54% of false positive sentences are located right next to actual AI writing

  • 26% of false positive sentences are located two sentences away from actual AI writing

  • 10% of false positive sentences are located three sentences away from actual AI writing

  • The remaining 10% are not near any actual AI writing

The correlation between false positive sentences and their proximity to actual AI writing warrants further research, she added, which is already underway.

Another key finding from educators’ feedback while using the detector is that “teachers feel uncertain about the actions they can take upon discovering AI-generated writing,” Chechitelli said. “We understand that as an education community, we are in uncharted territory.”

Turnitin has published a number of free resources on its website for educators struggling with AI misuse and addressing it with students.

Read the full blog post and learn more at Turnitin.com.
