AI-Based Plagiarism Detection Comes to Open Source LMS

Open LMS, an open-source learning management system, is adding plagiarism detection capabilities through an integration with Copyleaks, which uses artificial intelligence to flag potential problems in students' work, including plagiarism and text generated by AI platforms like ChatGPT.

According to Open LMS: "Copyleaks uses advanced AI to detect AI-generated content, including outputs from cutting-edge AI tools such as ChatGPT-4. It also detects various forms of plagiarism while accounting for a wide range of common detection-evasion tactics such as hidden characters, paraphrasing, and even image-based text plagiarism. Through these methods, the tool provides institutions and organizations with a deeper understanding of the composition of submitted content while exposing attempts to deceive detection software."

Copyleaks can ingest work in more than 100 languages and detect plagiarized material from sources in 30 languages. It checks submissions against 60 trillion web pages, more than 16,000 open-access journals, and various source code repositories.

Copyleaks' detection tools are now available to Open LMS users.

Both Open LMS and Copyleaks are used in a wide variety of settings, including academic, corporate, and nonprofit environments.

About the Author

David Nagel is the former editorial director of 1105 Media's Education Group and editor-in-chief of THE Journal, STEAM Universe, and Spaces4Learning. A 30-year publishing veteran, Nagel has led or contributed to dozens of technology, art, marketing, media, and business publications.

He can be reached at [email protected]. You can also connect with him on LinkedIn at https://www.linkedin.com/in/davidrnagel/.

