
Cisco Introduces Enterprise-Focused Framework to Address AI Security and Safety Risks

Cisco has unveiled a new AI Security and Safety Framework designed to help enterprises anticipate and manage the growing range of risks associated with deploying artificial intelligence at scale. The company says the framework is intended to prepare organizations for an emerging wave of AI-related threats, including adversarial attacks, unsafe content generation, compromised models, supply chain vulnerabilities, and unpredictable behavior from autonomous agents.

According to Cisco, the rapid pace of AI adoption has outstripped the ability of individuals, businesses, and governments to fully understand or control how these systems behave in real-world environments. In a blog post announcing the framework, Amy Chang, who leads threat and security research within Cisco’s AI Software and Platform group, emphasized that modern AI systems are fundamentally different from traditional software. Their behavior can change over time, failure modes are often unclear, and interactions with tools, data, and other agents can produce unexpected outcomes.

Cisco positions the framework as an effort to establish a shared vocabulary and structure for AI risk management before attackers — or regulators — define those standards themselves. The company describes the initiative as one of the first comprehensive attempts to systematically categorize AI security and safety risks in a way that can be operationalized across organizations. Importantly, Cisco notes that the framework is vendor-neutral, aiming to help enterprises understand how AI systems break down, how those weaknesses are exploited, and how defenses can evolve alongside advancing AI capabilities.

At the core of the AI Security and Safety Framework is a unified taxonomy that groups AI risks into five interconnected dimensions reflecting today’s rapidly changing threat landscape:

  • Security threats and content harms, recognizing that technical exploits and unsafe outputs are often linked rather than separate problems.
  • Lifecycle-aware risk, tracking how vulnerabilities shift as AI systems move from development into deployment and real-world operation.
  • Multi-agent coordination risks, which emerge when multiple AI systems interact, share memory, or make joint decisions.
  • Multimodal attack surfaces, covering threats delivered through text, audio, images, video, code, or sensor data.
  • Audience- and context-aware utility, accounting for how AI behavior and impact can vary depending on who interacts with the system and how it is used.

Cisco highlights that attackers increasingly blend security exploits with content manipulation to reach their goals. For example, a prompt injection or poisoned training dataset may begin as a technical attack but ultimately result in harmful outputs, data leakage, or policy violations. By placing these scenarios into a single classification model, the framework encourages organizations to address both the method of attack and its downstream consequences rather than treating them as isolated issues.
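
To make that idea concrete, the short Python sketch below shows how such a unified classification might be recorded in practice. It is a minimal illustration only, not part of Cisco's framework or any published schema; the Dimension and Finding names, fields, and example values are assumptions made for this example.

# Minimal sketch: one finding tagged with both its attack method and its
# downstream content harm, rather than filing the two separately.
# All names and fields are illustrative, not Cisco's published schema.
from dataclasses import dataclass, field
from enum import Enum, auto


class Dimension(Enum):
    SECURITY_AND_CONTENT = auto()   # technical exploits and unsafe outputs together
    LIFECYCLE = auto()              # where in the AI lifecycle the risk bites
    MULTI_AGENT = auto()            # shared memory, joint decisions between agents
    MULTIMODAL = auto()             # text, audio, image, video, code, sensor data
    AUDIENCE_CONTEXT = auto()       # who interacts with the system and how


@dataclass
class Finding:
    title: str
    attack_method: str              # how the attack is delivered
    downstream_harm: str            # what it ultimately produces
    dimensions: set = field(default_factory=set)


finding = Finding(
    title="Indirect prompt injection via poisoned retrieval data",
    attack_method="malicious instructions embedded in ingested documents",
    downstream_harm="data leakage and policy-violating output",
    dimensions={Dimension.SECURITY_AND_CONTENT, Dimension.LIFECYCLE},
)

# Triage can now address both the entry point and the consequence.
print(f"{finding.title}: mitigate '{finding.attack_method}' "
      f"and monitor for '{finding.downstream_harm}'.")

Keeping the entry point and the consequence on the same record is what allows a response process to cover both, instead of routing the technical exploit and the content harm to separate workflows.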

Another key element of the framework is its focus on the entire AI lifecycle. Cisco notes that risks that appear insignificant during early development can become severe once a model is connected to tools, APIs, or other agents. By mapping threats across this progression, the framework supports layered defense strategies that adapt as systems transition from experimentation to production use.
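
As a rough illustration of lifecycle-aware risk, the sketch below tracks how the severity of one hypothetical risk could shift as a system moves toward production. The stage names and severity scores are assumptions made for the example, not values defined by Cisco.

# Minimal sketch: tracking how one risk's severity changes across lifecycle
# stages so that controls can be layered before the riskiest transition.
# Stage names and scores are illustrative assumptions.
LIFECYCLE_STAGES = ["development", "evaluation", "deployment", "operation"]

risk_profile = {
    "over-permissive tool access": {
        "development": 1,   # model runs offline against test fixtures
        "evaluation": 2,    # limited red-team exposure
        "deployment": 4,    # connected to real APIs and data
        "operation": 5,     # autonomous use at scale, drift over time
    },
}

for risk, stages in risk_profile.items():
    trajectory = " -> ".join(f"{s}: {stages[s]}" for s in LIFECYCLE_STAGES)
    worst = max(LIFECYCLE_STAGES, key=lambda s: stages[s])
    print(f"{risk}: {trajectory}; severity peaks at the {worst} stage.")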

The framework also explicitly addresses agent-based AI, where multiple systems collaborate through orchestration layers, communication protocols, and shared decision logic. Cisco argues that these interactions introduce new categories of risk that traditional security models are not equipped to handle.

Finally, Cisco draws attention to the rise of multimodal threats, where attacks may arrive via images, voice commands, manipulated video, or even hidden signals embedded in sensor data. Treating these vectors consistently, the company says, is essential as enterprises deploy AI across robotics, autonomous vehicles, customer engagement platforms, and real-time monitoring systems.

With this framework, Cisco is signaling that AI security must be approached as a continuously evolving discipline — one that blends safety and security, spans the full AI lifecycle, and adapts to increasingly autonomous and interconnected systems.
