SCYTHE and Starseer Forge Alliance Against AI-Driven Cyber Threats
- Over half of organizations estimated to have at least one shadow AI application in use
- AI-powered cybersecurity market projected to exceed $86 billion by 2030
- 2025 state-sponsored campaign used agentic AI to infiltrate dozens of global targets
The SCYTHE-Starseer partnership is positioned to address critical gaps in AI-driven cybersecurity, offering a proactive approach to validating defenses against increasingly sophisticated AI-native threats.
MIAMI, FL & KNOXVILLE, TN – January 28, 2026 – In a move to confront the next generation of cyber warfare, adversarial emulation leader SCYTHE and AI security pioneer Starseer have announced a strategic partnership. The collaboration will deliver the industry’s first commercial solution designed to help organizations test their defenses against sophisticated attacks driven by artificial intelligence and autonomous agentic workflows.
This alliance unites SCYTHE’s expertise in simulating real-world adversary behavior with Starseer’s deep visibility into the inner workings of AI models. Together, they aim to address a rapidly expanding and often invisible attack surface that is leaving many enterprises dangerously exposed.
“AI is becoming an attack surface faster than most security programs can adapt,” said Tim Schulz, CEO and co-founder of Starseer. “By combining SCYTHE’s ability to safely emulate real adversary behavior with Starseer’s depth of visibility to control the very decision-making process that AI makes, we’re giving defenders a safe and reliable way to test their security posture against AI-enabled threats.”
The New AI-Powered Battlefield
The urgency for such a solution is underscored by a dramatic evolution in the threat landscape. AI-driven attacks are no longer theoretical; they are an active and growing danger. These attacks leverage machine learning to automate, accelerate, and enhance every stage of a cyberattack, from reconnaissance to data exfiltration. Adversaries are using generative AI to craft hyper-realistic phishing emails that bypass human suspicion, while proof-of-concept malware such as BlackMamba has demonstrated that AI-generated code can rewrite itself at runtime to evade signature-based detection.
One of the most alarming developments is the rise of “agentic workflow attacks.” These involve autonomous AI agents capable of planning and executing multi-step intrusions with minimal human intervention. Once inside a network, these agents can act as a new class of sophisticated insider, with the potential to cause damage at machine speed. A documented campaign in 2025 saw a state-sponsored group use agentic AI to infiltrate dozens of global targets, demonstrating a significant leap in automated cyber espionage.
This trend was also highlighted by the emergence of malware families like “LameHug,” a Python-based tool attributed to the Russian state-sponsored group APT28. LameHug utilized a large language model (LLM) via an API to dynamically generate system commands, allowing it to perform reconnaissance and steal data with greater adaptability and stealth. These real-world examples illustrate a critical need for security tools that can understand and counter AI-native attack techniques.
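One practical consequence for defenders is that LameHug-style tooling leaves a distinctive network footprint: ordinary host processes making calls to public LLM APIs. As a purely illustrative sketch (the endpoint list, allow-list, and log schema below are assumptions for the example, not details of the SCYTHE or Starseer products), such traffic could be flagged in egress logs:

```python
# Illustrative heuristic: flag host processes that contact public LLM APIs.
# The host list and log schema here are hypothetical, not product details.

LLM_API_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "api-inference.huggingface.co",
}

# Processes from which LLM API traffic is expected (e.g. a sanctioned assistant).
ALLOWED_PROCESSES = {"approved_ai_assistant.exe"}

def flag_llm_beacons(egress_log):
    """Return log records where an unapproved process contacts an LLM API.

    egress_log: iterable of dicts with 'process' and 'dest_host' keys.
    """
    return [
        rec for rec in egress_log
        if rec["dest_host"] in LLM_API_HOSTS
        and rec["process"] not in ALLOWED_PROCESSES
    ]

log = [
    {"process": "updater.exe", "dest_host": "api-inference.huggingface.co"},
    {"process": "approved_ai_assistant.exe", "dest_host": "api.openai.com"},
    {"process": "chrome.exe", "dest_host": "example.com"},
]
print(flag_llm_beacons(log))  # only the updater.exe record is flagged
```

A heuristic this simple would need tuning in practice, since legitimate AI adoption produces the same destinations; the signal is the *process* making the call, not the destination alone.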
Unmasking the Invisible Threat of 'Shadow AI'
While external threats are evolving, the partnership also targets a critical internal vulnerability: “Shadow AI.” This term refers to the unauthorized use of generative AI applications by employees, often adopted for productivity gains without the knowledge or oversight of IT and security teams. Research indicates the problem is widespread, with over half of organizations estimated to have at least one shadow AI application in use.
This practice creates significant blind spots and introduces profound risks. Employees may inadvertently feed sensitive corporate data—including intellectual property, customer lists, and financial records—into external AI models. These platforms may retain and use that data in ways that violate compliance mandates like GDPR and expose the company to data breaches. Furthermore, many consumer-grade AI tools lack enterprise-level security controls, creating new vectors for account takeovers and expanding the organization’s attack surface.
Without visibility into which AI tools are being used and what data is being shared, organizations cannot effectively manage risk or demonstrate compliance. The SCYTHE-Starseer collaboration aims to illuminate this shadow environment, providing organizations with the means to discover and assess the risks posed by unmanaged AI assets.
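Discovery typically starts with telemetry security teams already collect. As a minimal, hypothetical sketch (the domain catalog, log fields, and threshold are illustrative assumptions, not part of the joint offering), shadow AI usage can be inventoried from web proxy logs:

```python
from collections import defaultdict

# Hypothetical catalog of consumer generative AI services; a real program
# would maintain a much larger, continuously curated list.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "claude.ai"}

# Flag users uploading more than this many bytes to an unsanctioned AI tool.
UPLOAD_THRESHOLD = 1_000_000

def shadow_ai_report(proxy_log):
    """Aggregate per-user upload volume to generative AI services.

    proxy_log: iterable of dicts with 'user', 'domain', 'bytes_sent' keys.
    Returns {user: total_bytes} for users exceeding UPLOAD_THRESHOLD.
    """
    totals = defaultdict(int)
    for rec in proxy_log:
        if rec["domain"] in GENAI_DOMAINS:
            totals[rec["user"]] += rec["bytes_sent"]
    return {user: sent for user, sent in totals.items() if sent > UPLOAD_THRESHOLD}

log = [
    {"user": "alice", "domain": "claude.ai", "bytes_sent": 2_500_000},
    {"user": "bob", "domain": "chat.openai.com", "bytes_sent": 10_000},
    {"user": "alice", "domain": "intranet.corp", "bytes_sent": 9_000_000},
]
print(shadow_ai_report(log))  # {'alice': 2500000}
```

Upload volume is a crude proxy for data-leakage risk, but even this level of visibility distinguishes casual use from bulk transfers of potentially sensitive data.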
A Unified Front: Combining Emulation and Assurance
The joint solution, named “Shadow AI Readiness Assessments,” is engineered to provide a comprehensive validation of an organization's security posture against these dual threats. It integrates SCYTHE’s Adversarial Exposure Validation (AEV) platform with Starseer’s AI Runtime Assurance and Detection Engineering capabilities.
SCYTHE’s platform allows security teams to safely emulate the tactics of AI-driven adversaries, creating realistic attack scenarios that test whether security controls actually work in practice. Starseer, in turn, provides the technology to inspect, analyze, and control AI models and their behavior at runtime. This allows security teams to understand a model’s origins, detect tampering, and monitor its execution for malicious activity.
The readiness assessments will validate an organization's capabilities across three core areas:
- Detection: Validating the ability to identify unauthorized, tampered, or malicious AI models operating within the enterprise.
- Response: Testing the security team’s capacity to react when an adversary co-opts internal AI infrastructure for agentic operations.
- Forensics: Ensuring the ability to conduct deep forensic analysis of AI model provenance, tampering, and adversarial use to understand and remediate incidents.
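The detection and forensics goals above both hinge on model provenance: knowing which AI artifacts are deployed and whether they match a trusted record. As a simplified sketch (the manifest format and function names are hypothetical, not Starseer's actual mechanism), tampered or unregistered models can be surfaced by comparing artifact hashes against a signed inventory:

```python
import hashlib

def model_digest(model_bytes):
    """SHA-256 digest of a serialized model artifact."""
    return hashlib.sha256(model_bytes).hexdigest()

def check_inventory(deployed, manifest):
    """Classify deployed model artifacts against a trusted manifest.

    deployed: {name: artifact bytes found on a host}
    manifest: {name: expected SHA-256 hex digest}
    Returns {name: 'ok' | 'tampered' | 'unregistered'}.
    """
    report = {}
    for name, blob in deployed.items():
        expected = manifest.get(name)
        if expected is None:
            report[name] = "unregistered"   # shadow / unauthorized model
        elif model_digest(blob) == expected:
            report[name] = "ok"
        else:
            report[name] = "tampered"       # weights differ from provenance record
    return report

trusted = b"trusted-weights-v1"
manifest = {"fraud_model.bin": model_digest(trusted)}
deployed = {
    "fraud_model.bin": b"trusted-weights-v1-BACKDOORED",
    "mystery_model.bin": b"???",
}
print(check_inventory(deployed, manifest))
# {'fraud_model.bin': 'tampered', 'mystery_model.bin': 'unregistered'}
```

Hash comparison catches file-level tampering; runtime assurance of the kind the release describes would additionally have to observe model *behavior*, which a static check cannot.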
“Enterprises already know how to test endpoints, networks, and cloud controls, but very few know how to test AI as an attack surface,” said Bryson Bort, CEO and founder of SCYTHE. “This partnership lets security teams safely emulate AI-native attack paths, validate whether controls actually work, and produce defensible evidence that they can detect adversarial AI models and agents in their enterprise.”
Extending Security Frameworks for the AI Era
Strategically, the partnership positions AI security as the next frontier for Continuous Threat Exposure Management (CTEM), a proactive and continuous approach to cybersecurity risk management. By integrating AI assurance directly into existing red, blue, and purple team workflows, SCYTHE and Starseer are enabling organizations to extend their security validation programs to cover the entire AI lifecycle.
The market for AI-powered cybersecurity solutions is expanding rapidly, with projections estimating it will exceed $86 billion by 2030. This growth is fueled by the increasing complexity of cyberattacks and a persistent shortage of skilled security professionals. While a number of specialized startups and established cybersecurity giants are entering the AI security space, the SCYTHE-Starseer approach of combining adversary emulation with deep AI runtime inspection offers a distinct method for proactive validation.
By moving from unmanaged AI blind spots to an AI-validated security posture, the partnership provides a critical capability for enterprises that are increasingly deploying autonomous agents, edge AI, and internal model ecosystems at scale. This proactive stance is becoming essential in an era where the speed and sophistication of AI-driven threats can easily overwhelm traditional, reactive security measures.
