Researchers Provide Taxonomy of Gen AI Misuse

To clarify the potential risks of GenAI and provide "a concrete understanding of how GenAI models are specifically exploited or abused in practice, including the tactics employed to inflict harm," a group of researchers from Google DeepMind, Jigsaw, and Google.org recently published a paper entitled, "Generative AI Misuse: A Taxonomy of Tactics and Insights from Real-World Data."

The authors of the paper, Nahema Marchal, Rachel Xu, Rasmi Elasmar, Iason Gabriel, Beth Goldberg, and William Isaac, emphasized that, as GenAI capabilities continue to advance, understanding the specific ways in which these tools are exploited is critical for developing effective safeguards. Their "taxonomy of GenAI misuse tactics" is meant to provide a framework for identifying and addressing the potential harms associated with these technologies, they wrote, ultimately aiming to ensure their responsible and ethical use.

The researchers based their study on the qualitative analysis of approximately 200 incidents reported between January 2023 and March 2024. That analysis revealed key patterns and motivations behind the misuse of GenAI, including:

  • Manipulation of human likeness. The most prevalent tactics involve the manipulation of human likeness, such as impersonation, "sockpuppeting," and "non-consensual intimate imagery."
  • Low-tech exploitation. Most misuse cases do not involve sophisticated technological attacks, but rather exploit easily accessible GenAI capabilities requiring minimal technical expertise.
  • Emergence of new forms of misuse. The availability and accessibility of GenAI tools have introduced new forms of misuse that, although not overtly malicious or policy-violative, have concerning ethical implications, such as blurring the lines between authenticity and deception in political outreach and self-promotion.

The study also identified two categories of misuse tactics:

Exploitation of GenAI Capabilities

  • Impersonation: Creating AI-generated audio or video to mimic real people.
  • Appropriated likeness: Using or altering a person's likeness without consent.
  • Sockpuppeting: Creating synthetic online personas.
  • NCII: Generating explicit content without consent.
  • Falsification: Fabricating evidence such as reports or documents.
  • IP infringement: Using someone’s intellectual property without permission.
  • Counterfeit: Producing items that imitate original works and pass as real.
  • Scaling and amplification: Automating and amplifying content distribution.
  • Targeting & personalization: Refining outputs for targeted attacks.

Compromise of GenAI Systems

  • Adversarial inputs: Modifying inputs to cause a model to malfunction.
  • Prompt injections: Manipulating text instructions to produce harmful outputs.
  • Jailbreaking: Bypassing model restrictions and safety filters.
  • Model diversion: Repurposing models for unintended uses.
  • Steganography: Hiding messages within model outputs.
  • Data poisoning: Corrupting training datasets to introduce vulnerabilities.
  • Privacy compromise: Revealing sensitive information from training data.
  • Data exfiltration: Illicitly obtaining training data.
  • Model extraction: Stealing model architecture and parameters.

The paper provides insights for policymakers, trust and safety teams, and researchers to help them develop strategies for AI governance and mitigate real-world harms, the authors wrote. In order to protect against the diverse and growing threats posed by GenAI, they called for better technical safeguards, non-technical user-facing interventions, and ongoing monitoring of the evolving misuse landscape.

Read more here.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI and future tech. He's been writing about cutting-edge technologies and culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS.  He can be reached at [email protected].

Featured

  • Abstract geometric pattern with interconnected nodes and lines

    Microsoft 365 Copilot Updates Offer Expanded AI Capabilities, Collaboration Tools

    Microsoft has announced updates to its Microsoft 365 Copilot AI assistant, including expanded AI capabilities in individual apps, the ability to create autonomous agents, and a new AI-powered collaboration workspace.

  • An open book with text transforming into smooth lines represents reading ease

    Fluency Innovator Grants to Award Free Subscriptions to WordFlight Literacy Intervention Solution

    The call for applications is now open for Foundations in Learning's Fall 2024 Fluency Innovator Grants program. Teachers and administrators from schools and districts serving grades 3-8 may apply to receive a free subscription to WordFlight, a literacy assessment and intervention solution for students with deficits in reading fluency and comprehension, for the Fall 2024 semester.

  • AI-themed background with sparse circuit lines and minimal geometric shapes

    Microsoft to Introduce AI Agent Building Tools in Copilot Studio

    In November, Microsoft plans to roll out a public preview of a new feature within Copilot Studio, allowing users to create autonomous AI "agents" designed to handle routine tasks.

  • landscape photo with an AI rubber stamp on top

    California AI Watermarking Bill Supported by OpenAI

    OpenAI, creator of ChatGPT, is backing a California bill that would require tech companies to label AI-generated content in the form of a digital "watermark." The proposed legislation, known as the "California Digital Content Provenance Standards" (AB 3211), aims to ensure transparency in digital media by identifying content created through artificial intelligence. This requirement would apply to a broad range of AI-generated material, from harmless memes to deepfakes that could be used to spread misinformation about political candidates.