Upwind's Gambit: Redefining AI Security from the Inside Out

Upwind's Gambit: Redefining AI Security from the Inside Out

As enterprises rush to adopt AI, they create a massive new attack surface. Upwind claims its runtime-first platform provides the missing visibility and control.

4 days ago

Upwind's Gambit: Redefining AI Security from the Inside Out

SAN FRANCISCO, CA – December 01, 2025 – The enterprise world is in the midst of an unprecedented gold rush, but the precious metal is artificial intelligence. From automating workflows to powering customer-facing applications, businesses are integrating AI at a breakneck pace. Yet, beneath the surface of this innovation lies a burgeoning and poorly understood risk: a vast, dynamic, and largely invisible AI attack surface. Addressing this critical gap, cloud security firm Upwind has launched an integrated AI security suite, aiming to move the conversation from static defense to real-time, evidence-based protection.

The company’s announcement is more than just another product launch; it’s a strategic bet on a philosophy it calls 'inside-out' security. By embedding its new suite within its existing Unified Cloud-Native Application Protection Platform (CNAPP), Upwind is arguing that securing AI cannot be an afterthought or a siloed discipline. It must be woven into the very fabric of an organization's cloud security posture, grounded in what's actually happening at the moment of execution.

The AI Security Blind Spot

For decades, cybersecurity has relied heavily on perimeter defenses and periodic scanning. Security teams would check configurations, scan for known vulnerabilities, and build walls around their critical assets. But this model is fundamentally breaking down in the age of AI and cloud-native development. AI models, agents, and inference endpoints are not static assets; they are ephemeral, constantly changing, and distributed across a complex web of services and infrastructures.

This new reality creates a significant blind spot. Security teams often lack a cohesive way to trace AI behavior, validate the security posture of a model in production, or understand the real-world impact of an AI-driven decision. Traditional tools that rely on static snapshots and assumptions are ill-equipped to police systems that are, by design, non-deterministic. The risk is no longer just a misconfigured server, but a compromised AI agent making autonomous calls to sensitive APIs, or a large language model (LLM) being manipulated through prompt injection to leak proprietary data.

This is the core challenge Upwind aims to solve. Its premise is that you cannot secure what you cannot see, and you cannot see the behavior of a complex AI system by only looking at its configuration files. True visibility requires looking at the system in motion, at runtime.

A New Philosophy: Security from the Inside Out

Upwind’s 'inside-out' approach is a direct response to the limitations of traditional security. Instead of observing from the outside, the platform is designed to gain visibility from within the workload itself. It focuses on real traffic, API calls, data flows, and process behavior as they happen, providing what the company calls 'runtime evidence' as the basis for security decisions.

“AI security should not be a stand-alone security component,” said Amiram Shachar, Founder and CEO of Upwind, in the company's announcement. “It should be part of a larger ecosystem. It just makes perfect sense to go down this route and make sure that AI security benefits from all the data and context that our CNAPP already holds.”

This integration is key. The new suite extends Upwind’s runtime-first architecture directly into the AI layer, bringing several critical capabilities under one roof:

  • AI Security Posture Management (AI-SPM): This goes beyond simple configuration checks to correlate posture issues—like an exposed inference endpoint or an overly permissive identity role—with actual runtime activity, allowing teams to prioritize the risks that are actively being exploited or are most likely to be.

  • AI Detection & Response (AI-DR): By analyzing network activity and prompt payloads in real time, this capability is designed to detect malicious AI behavior, such as jailbreak attempts or anomalous model activity, and enable an immediate response based on live signals.

  • AI Bill of Materials (AI-BOM): Just as a Software Bill of Materials (SBOM) inventories code components, the AI-BOM maps all AI components—models, frameworks, and agent systems—to create a comprehensive, real-time inventory of what AI is running and where.

  • MCP and Agent Security: Perhaps most critically, the platform traces the full sequence of actions taken by AI agents, from the initial prompt to downstream function calls and system changes. This provides an authoritative audit trail of what an agent did, offering a powerful tool for forensics and governance.

Together, these features aim to replace assumption-based security with evidence-based security, giving teams a factual, end-to-end view of how their AI behaves in the real world.

Navigating a Crowded and Critical Market

Upwind is not alone in recognizing the urgency of this problem. The cybersecurity industry's titans are also racing to stake their claim in the AI security market. Competitors like Wiz, Palo Alto Networks, and CrowdStrike have all announced their own AI-SPM solutions, integrating AI visibility into their broader cloud security platforms. These offerings primarily focus on providing agentless visibility into AI pipelines, identifying AI resources, and flagging misconfigurations and sensitive data exposure.

Where Upwind aims to differentiate itself is in its deep-seated focus on runtime. While many competitors excel at providing a wide-angle, agentless view of cloud posture, Upwind's strategy is to provide a high-definition, real-time view from inside the workload. This 'inside-out' approach, which requires instrumentation at the application and process level, is positioned as the only effective way to catch sophisticated, in-progress attacks like prompt injections or malicious agent behavior that a periodic scan would miss. The trade-off is often between the breadth of agentless discovery and the depth of runtime analysis, and the market is still deciding which is more critical for AI.

The Strategy Behind the Launch: Funding, Vision, and Standards

The launch is also a reflection of Upwind's rapid ascent and strategic positioning. Founded in 2022 by Amiram Shachar and the team behind Spot.io (acquired by NetApp for $450 million), the company has amassed an impressive $180 million in funding. Its backers include a roster of top-tier VCs like Greylock and Craft Ventures, as well as Sheva, a venture fund founded by former NBA player Omri Casspi with investment from current NBA star Stephen Curry. This level of investment in such a young company signals strong market confidence in its vision and technology.

Furthermore, Upwind’s new capabilities appear to be closely aligned with emerging industry best practices for managing AI risk. The platform’s features directly address several of the most critical threats identified in the OWASP Top 10 for Large Language Model Applications, including prompt injection, sensitive information disclosure, and excessive agency. Its emphasis on visibility, governance, and real-time monitoring also supports the core functions outlined in the NIST AI Risk Management Framework (AI RMF), which provides a roadmap for building trustworthy and secure AI systems.

This alignment suggests a strategy that goes beyond simply building a product. By anchoring its features in recognized frameworks, Upwind is positioning itself not just as a vendor, but as a partner in helping organizations navigate the complex compliance and ethical landscape of AI. For enterprises, this is a crucial value proposition, as securing AI is as much about building trust and ensuring responsible deployment as it is about blocking attacks. The challenge for Upwind and its competitors will be to prove that their platforms can keep pace not just with threats, but with the sheer speed of AI innovation itself.

📝 This article is still being updated

Are you a relevant expert who could contribute your opinion or insights to this article? We'd love to hear from you. We will give you full credit for your contribution.

Contribute Your Expertise →
UAID: 4844