AI Writing Detection Tool Uses 'Forensic Linguistic' Techniques to Check Authorship

FLINT Systems has released what it describes as the "first linguistic tool designed to detect whether a document was authored by its attributed author." Rather than simply flagging AI-generated text, the system aims to determine whether a document was written by the person claiming authorship at all.

To do this, according to the company, the system "applies forensic linguistic methodologies to create a digital linguistic fingerprint of an individual's writing style. It then creates a linguistic fingerprint of the document at question and compares the two. Testing results showed that when documents were created by anyone other than the individual who submitted the document, FLINT Systems correctly identified in over 80% of the cases."

This "fingerprinting" approach, according to the company, distinguishes it from other AI writing detection tools like GPTZero becaue it eliminates the potential errors in detection that occur when AI-written content is edited by a human.

According to the company: "By applying linguistic fingerprinting technology, the FLINT System can correctly identify when an individual did not author the document, regardless of whether or not there are elements of humanly developed texts interwoven into the AI document."
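FLINT has not published the details of its methodology, but classical forensic stylometry works along similar lines: it builds a frequency profile of style markers (such as function words) from an author's known writing, builds the same profile for the questioned document, and measures how closely the two profiles match. The sketch below is a toy illustration of that general idea, not FLINT's actual system; the word list, feature set, and similarity measure are all illustrative assumptions.

```python
from collections import Counter
import math
import re

# Illustrative feature set: a handful of English function words.
# Real stylometric systems use hundreds of features (word lengths,
# punctuation habits, syntax), not just this short list.
FUNCTION_WORDS = ["the", "of", "and", "to", "a", "in", "that", "is",
                  "was", "it", "for", "on", "with", "as", "but", "not"]

def fingerprint(text: str) -> list[float]:
    """Relative frequency of each function word in the text."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(tokens)
    total = max(len(tokens), 1)
    return [counts[w] / total for w in FUNCTION_WORDS]

def similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two fingerprints (1.0 = identical profile)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

# Compare a questioned document against an author's known writing sample.
known = fingerprint("The report was written in the plain style of the author.")
questioned = fingerprint("It was written in a style that the author would use.")
score = similarity(known, questioned)
```

A production system would then compare the score against a threshold calibrated on many authors, which is presumably where a figure like FLINT's claimed 80%-plus detection rate would come from.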

A free trial of the system is available. In a test case, I compared one of my articles with three other articles I'd written, and it determined that the article in question was 50% to 55% likely to have been written by me. (It was written by me — although, as in the case of this article, it did contain quotes from other people.)

The free trial, which requires registration, is available at free.flintai.com/home. To use it, upload several documents from a single author, then click the "Compare and Analyze" button to upload the document you want to compare against them.

Further details about the product; additional modules, such as threat detection and demographics; and academic and enterprise licenses can be found at flintai.com.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

