Attackers Exploit Claude Code Tool to Infiltrate Global Targets

San Francisco-based AI developer Anthropic recently reported that attackers linked to China leveraged its Claude Code tool to carry out intrusions against roughly 30 organizations worldwide. According to the company, the campaign occurred in mid-September and primarily targeted tech companies, financial firms, government agencies and chemical manufacturers.

"The threat actor — whom we assess with high confidence was a Chinese state-sponsored group — manipulated our Claude Code tool into attempting infiltration into roughly thirty global targets and succeeded in a small number of cases," said the company in a blog post.

The attackers reportedly began by manually selecting high-value targets, then used a jailbreak technique to circumvent Claude's security guardrails. Once activated, the model autonomously handled much of the operation: conducting reconnaissance, generating exploits, harvesting credentials and facilitating data exfiltration.

Anthropic said it discovered the activity after internal monitoring flagged atypical use patterns. It subsequently disabled the affected accounts, notified relevant parties and worked with authorities to analyze the incident.

The disclosure reflects a growing concern in the cybersecurity community about the potential for advanced AI to accelerate or even automate sophisticated attacks, according to Anthropic.

"These attacks are likely to only grow in their effectiveness. To keep pace with this rapidly-advancing threat, we've expanded our detection capabilities and developed better classifiers to flag malicious activity. We're continually working on new methods of investigating and detecting large-scale, distributed attacks like this one."

In related research, Anthropic recently demonstrated how its Claude Sonnet 4.5 model can assist defenders by identifying vulnerabilities and improving patching workflows. But the company acknowledged that many of the same capabilities — especially AI-driven agency — can also be used for malicious activities.

Their proposed solution: AI service companies and providers should prioritize safety from the outset of development. "While we will continue to invest in detecting and disrupting malicious attackers, we think the most scalable solution is to build AI systems that empower those safeguarding our digital environments — like security teams protecting businesses and governments, cybersecurity researchers and maintainers of critical open-source software."

Anthropic also stressed that safeguarding AI models and sharing threat intelligence across sectors will be critical to mitigating future misuse. For IT teams, the incident underscores the urgency of integrating AI-enabled defense systems into security operations.

For more information, go to the Anthropic blog.

About the Author

Chris Paoli (@ChrisPaoli5) is the associate editor for Converge360.
