The National Institute of Standards and Technology has announced plans to issue a new set of cybersecurity guidelines aimed at safeguarding artificial intelligence systems, citing rising concerns over risks tied to generative models, predictive analytics, and autonomous agents.
According to CrowdStrike's 2025 Threat Hunting Report, adversaries are not just using AI to supercharge attacks; they are actively targeting the AI systems organizations deploy in production. Combined with a surge in cloud exploitation, this marks a significant shift in the enterprise threat landscape.
Registration is free for this fully virtual Sept. 25 event, focused on "Overcoming Roadblocks to Innovation" in K-12 and higher education.
Threat researchers at BforeAI have identified a phishing campaign spoofing the U.S. Department of Education's G5 grant management portal.
A new report from global cybersecurity company Thales reveals that while enterprises are pouring resources into AI-specific protections, only 8% are encrypting the majority of their sensitive cloud data — leaving critical assets exposed even as AI-driven threats escalate and traditional security budgets shrink.
A new report from Backslash Security has identified significant security vulnerabilities in the Model Context Protocol (MCP), a technology introduced by Anthropic in November 2024 to facilitate communication between AI agents and external tools.
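For readers unfamiliar with MCP, the sketch below shows the basic pattern the protocol enables: a server registers a function as a "tool" that an AI agent can discover and invoke. It assumes the official MCP Python SDK's FastMCP helper; the server name and the word_count tool are illustrative placeholders, and this is a minimal sketch rather than any vulnerable configuration described in the Backslash report.

```python
# Minimal MCP server sketch, assuming the official MCP Python SDK ("mcp" package).
from mcp.server.fastmcp import FastMCP

# Server name is illustrative, not taken from the report.
mcp = FastMCP("demo-tools")

@mcp.tool()
def word_count(text: str) -> int:
    """Count whitespace-separated words in the given text."""
    return len(text.split())

if __name__ == "__main__":
    # By default FastMCP serves over stdio, so an MCP-capable agent
    # can discover and call word_count as an external tool.
    mcp.run()
```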
IBM has introduced a new software stack for enterprise IT teams tasked with managing the complex governance and security challenges posed by autonomous AI systems.
According to a recent report from cybersecurity firm Wiz, nearly nine out of 10 organizations are already using AI services in the cloud — but fewer than one in seven have implemented AI-specific security controls.
The Cloud Security Alliance has unveiled a new artificial intelligence-powered system that automates the validation of cloud service provider (CSP) security assessments, aiming to improve transparency and trust across the cloud computing landscape.
A recent report from OpenAI details the misuse of artificial intelligence in cybercrime, social engineering, and influence operations, particularly campaigns targeting or operating through cloud infrastructure. In "Disrupting Malicious Uses of AI: June 2025," the company outlines how threat actors are weaponizing large language models and how OpenAI is pushing back.