1 in 10 AI Prompts Could Expose Sensitive Data

A recent study from data protection startup Harmonic Security found that nearly one in 10 prompts that business users enter into generative AI tools may inadvertently disclose sensitive data.

The study, conducted in the fourth quarter of 2024, analyzed prompts across generative AI platforms such as Microsoft Copilot, OpenAI's ChatGPT, Google Gemini, Claude, and Perplexity. While the majority of AI usage by employees involved mundane tasks like summarizing text or drafting documentation, 8.5% of prompts posed potential security risks.

Sensitive Data at Risk

Among the concerning prompts, 45.8% risked exposing customer data, including billing and authentication information. Another 26.8% involved employee-related data, such as payroll details, personal identifiers, and even requests for AI-assisted employee performance reviews.
The remaining sensitive prompts included:

  • Legal and finance information (14.9%): Sales pipeline data, investment portfolios, and merger and acquisition activity.
  • Security data (6.9%): Penetration test results, network configurations, and incident reports, which could be exploited by attackers.
  • Sensitive code (5.6%): Access keys and proprietary source code.

Harmonic Security's report also flagged concerns about employees using free-tier generative AI services, which often lack robust security measures. Many free-tier services explicitly state that user data may be used to train AI models, creating further risks of unintended disclosure.

Free-Tier Usage Raises Red Flags

The study revealed significant reliance on free-tier AI services, with 63.8% of ChatGPT users, 58.6% of Gemini users, 75% of Claude users, and 50.5% of Perplexity users opting for non-enterprise plans. These services often lack critical safeguards found in enterprise versions, such as the ability to block sensitive prompts or warn users about potential risks.

"Most generative AI use is mundane, but the 8.5% of prompts we analyzed potentially put sensitive personal and company information at risk," said Alastair Paterson, co-founder and CEO of Harmonic Security, in a statement. "Organizations need to address this issue, particularly given the high number of employees using free subscriptions. The adage that 'if the product is free, you are the product' rings especially true here."

Recommendations for Risk Mitigation

Harmonic Security urged companies to implement real-time monitoring systems to track and manage data entered into generative AI tools. The firm also recommended:

  • Ensuring employees use paid or enterprise AI plans that do not train on input data.
  • Gaining visibility into prompts to understand what information is being shared.
  • Blocking or warning users about risky prompts to prevent data leakage (a minimal sketch of such a filter appears below).
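
To make that last recommendation concrete, here is a minimal Python sketch of a prompt-scanning gateway. Everything in it is illustrative: the regex patterns, function names, and hard block-on-match policy are assumptions made for the example, not a description of Harmonic Security's product or any vendor's actual detection logic.

    import re

    # Illustrative patterns only: a production gateway would use a much
    # richer detection set (entity recognition, customer-record matching).
    SENSITIVE_PATTERNS = {
        "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
        "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
        "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
        "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    }

    def scan_prompt(prompt: str) -> list[str]:
        """Return the names of any sensitive patterns found in a prompt."""
        return [name for name, pattern in SENSITIVE_PATTERNS.items()
                if pattern.search(prompt)]

    def gateway(prompt: str) -> str:
        """Block prompts that match a sensitive pattern; pass the rest."""
        hits = scan_prompt(prompt)
        if hits:
            raise ValueError(f"prompt blocked, matched patterns: {hits}")
        return prompt  # a real gateway would forward this to the AI provider

    if __name__ == "__main__":
        try:
            gateway("Summarize: card 4111 1111 1111 1111 was declined")
        except ValueError as err:
            print(err)

Even a toy filter like this catches only the most mechanical leaks, such as access keys and card numbers; flagging customer records, payroll details, or M&A material requires the richer classification and real-time monitoring capabilities the report describes.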

While many organizations have begun implementing such measures, the report highlighted the need for broader adoption of these safeguards as generative AI becomes increasingly integrated into workplace processes.

"Generative AI tools hold immense potential for improving productivity, but without proper safeguards, they can become a liability. Organizations must act now to ensure sensitive data is protected while still leveraging the benefits of AI technology," Paterson said.

For the full report, visit the Harmonic Security site.

About the Author

John K. Waters is the editor in chief of a number of Converge360.com sites, with a focus on high-end development, AI, and future tech. He's been writing about cutting-edge technologies and the culture of Silicon Valley for more than two decades, and he's written more than a dozen books. He also co-scripted the documentary film Silicon Valley: A 100 Year Renaissance, which aired on PBS. He can be reached at [email protected].
