AI on Trial: Inside the Push to Make Intelligent Systems Accountable

As AI systems grow more powerful, they also become more attractive targets for manipulation. A new German research initiative aims to build the tools to detect and prove AI tampering.

HALLE (SAALE), GERMANY – December 09, 2025 – As artificial intelligence becomes deeply embedded in everything from national security to financial markets, a critical question looms: what happens when these systems are secretly manipulated? Addressing this challenge head-on, digital transformation leader Atos has announced a landmark research program, 'Forensics of Intelligent Systems' (FIS), in collaboration with several of Germany's leading academic and research institutions. The initiative aims to move beyond simply building more robust AI, focusing instead on creating the tools to investigate manipulation, prove it, and ultimately hold the perpetrators accountable in a court of law.

This move signals a pivotal shift in the cybersecurity landscape. For years, the focus has been on defending AI from attack. Now, the industry is preparing for the inevitable day when those defenses fail, asking not just whether an AI can be trusted, but how we can forensically prove it has been compromised.

The Invisible Threat in the Machine

The urgency behind the FIS program is rooted in the sophisticated and often invisible nature of AI attacks. Unlike traditional software hacks that leave clear digital footprints, AI manipulation can be subtle and devastating. The most common methods exploit the very nature of how machine learning models work.

Adversarial attacks, for instance, involve feeding a model an input that has been slightly altered in ways imperceptible to humans. Researchers have demonstrated that adding minor visual 'noise' to a stop sign can cause an autonomous vehicle's AI to read it as a speed limit sign. Data poisoning attacks, by contrast, corrupt the AI during its training phase: by injecting a small amount of malicious data, such as malware samples labeled as safe, attackers can create permanent blind spots or backdoors in security systems. A recent analysis by the EU Agency for Cybersecurity (ENISA) noted that such poisoning attacks are rising sharply, posing a direct threat to critical infrastructure.
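To make the mechanics concrete, the sketch below implements the fast gradient sign method (FGSM), one of the best-known input-perturbation techniques in the class of attacks described above. It is offered purely as an illustration of the general idea, not as a method attributed to the FIS program, and it assumes PyTorch and torchvision, with a generic pretrained classifier standing in for any vision model.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Illustrative only: a generic pretrained classifier stands in for any
# vision model (e.g., a traffic-sign recognizer in the stop-sign example).
# Input preprocessing/normalization is elided for brevity.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.01) -> torch.Tensor:
    """Perturb `image` by epsilon in the direction that maximizes the loss.

    The per-pixel perturbation is bounded by epsilon, which is why the
    altered image can look identical to a human observer.
    """
    image = image.clone().detach().requires_grad_(True)
    logits = model(image.unsqueeze(0))           # shape: [1, num_classes]
    loss = F.cross_entropy(logits, torch.tensor([true_label]))
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of the loss gradient.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

With epsilon at roughly one percent of the pixel range, the model's top prediction frequently flips even though the image looks unchanged to a person, which is precisely why such attacks leave so little for a conventional investigator to find.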

These vulnerabilities extend far beyond autonomous systems. In finance, a poisoned fraud detection model could be trained to ignore a specific type of illicit transaction. In healthcare, a diagnostic AI could be manipulated to produce incorrect results, with life-or-death consequences. The OWASP LLM Top 10, a list of critical security risks for large language models, highlights prompt injection and training data poisoning as top-tier threats, underscoring the broad attack surface that now exists across industries.
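The poisoning scenario is just as simple to express in code. The toy sketch below is entirely synthetic, not drawn from the FIS materials: it flips the labels on one narrow slice of a fraud-detection training set, producing a model that systematically waves through the targeted transaction pattern.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for a fraud dataset: two features, binary fraud label.
X = rng.normal(size=(5000, 2))
y = (X[:, 0] + X[:, 1] > 1.5).astype(int)  # toy "fraud" rule

# Poisoning: for one narrow transaction pattern (here, feature 0 > 2.0),
# the attacker relabels fraudulent rows as legitimate before training.
y_poisoned = y.copy()
y_poisoned[X[:, 0] > 2.0] = 0

clean = DecisionTreeClassifier(random_state=0).fit(X, y)
poisoned = DecisionTreeClassifier(random_state=0).fit(X, y_poisoned)

probe = np.array([[2.5, 0.5]])  # a transaction inside the targeted pattern
print("clean model:   ", clean.predict(probe))     # likely flags it (1)
print("poisoned model:", poisoned.predict(probe))  # likely waves it through (0)
```

Because the poisoned model behaves identically on the vast majority of inputs, ordinary accuracy testing will not reveal the backdoor; only a targeted forensic examination of the training data or the model's decision behavior would.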

Forging a Digital Chain of Custody

The FIS program is designed to create a new discipline: AI forensics. Atos, alongside Germany's cybersecurity innovation agency (Cyberagentur), the Fraunhofer Institutes for Applied and Integrated Security (AISEC) and Intelligent Analysis and Information Systems (IAIS), the Institute for Internet Security, and the University of Cologne, is building a foundation for this emerging field.

The core objective is to develop prototype methods and tools that can provide legally sound evidence of AI manipulation. To achieve this, the consortium will establish a research lab and simulation environment where AI models can be subjected to controlled attacks. The goal is to identify and secure what the project calls "legally usable traces"—the digital equivalent of fingerprints or DNA evidence for AI systems.
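The program has not yet published what such traces will look like, but one plausible building block, sketched here purely as an assumption rather than as a description of FIS tooling, is cryptographic fingerprinting of model artifacts: hash the deployed weights at a known-good point in time and record the digest in an append-only evidence log, so that any later tampering with the file becomes provable.

```python
import hashlib
import json
import time
from pathlib import Path

def fingerprint_model(checkpoint: Path, log: Path) -> str:
    """Hash a model checkpoint and append the digest to an evidence log.

    A digest recorded at deployment time lets an investigator later prove
    whether the weights running in production still match the audited artifact.
    """
    digest = hashlib.sha256(checkpoint.read_bytes()).hexdigest()
    entry = {
        "artifact": checkpoint.name,
        "sha256": digest,
        "recorded_at": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    }
    with log.open("a") as f:
        f.write(json.dumps(entry) + "\n")
    return digest

def verify_model(checkpoint: Path, expected_sha256: str) -> bool:
    """Re-hash the artifact and compare it against the logged digest."""
    return hashlib.sha256(checkpoint.read_bytes()).hexdigest() == expected_sha256
```

A court-ready chain of custody would additionally require signed timestamps and tamper-evident storage for the log itself; establishing which such measures hold up as legal evidence is exactly the kind of question FIS is set up to answer.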

"Attacks on AI systems and their manipulations are often difficult to detect," said Boris Hecker, Co-CEO of Atos Germany, in the announcement. "For German security authorities, it is crucial to have advanced methods and tools that enable both early detection and legally sound analysis." This focus on legal applicability is what sets the FIS program apart. It's not just about detecting an anomaly; it's about building a case that can stand up to legal scrutiny, enabling the criminal prosecution of those who manipulate AI for malicious ends.

This multi-disciplinary approach is the program's greatest strength. It combines Fraunhofer AISEC's expertise in applied security, Fraunhofer IAIS's deep knowledge of AI algorithms, and the Institute for Internet Security's practical experience in digital forensics and incident response.

AI Under Oath: The Dawn of Legal Accountability

The initiative arrives at a critical moment, as governments worldwide grapple with how to regulate artificial intelligence. The European Union's landmark AI Act, which came into force in 2024, imposes strict transparency, robustness, and accountability requirements on high-risk AI systems. However, legislation alone is not enough; a new generation of technology is needed to enforce it.

The FIS program directly addresses this regulatory gap. By developing verifiable methods to audit AI behavior and detect tampering, the project could provide the technical underpinning required for compliance with laws like the EU AI Act. The ability to prove that an AI system was manipulated, and to understand how, will be essential for assigning liability when things go wrong.
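What form such audits will take remains to be defined, but one simple pattern, offered here as an illustration rather than anything specified by the FIS consortium, is behavioral baselining: record a model's outputs on a fixed probe set at certification time, then replay the probes later and flag any drift. Unlike the weight hashing sketched earlier, this can catch a tampered model served behind an unchanged API.

```python
import numpy as np
from typing import Callable

def behavioral_audit(model_fn: Callable[[np.ndarray], np.ndarray],
                     probes: np.ndarray,
                     baseline: np.ndarray,
                     tolerance: float = 1e-3) -> bool:
    """Replay a fixed probe set and compare outputs against a recorded baseline.

    `baseline` holds the outputs captured when the model was certified.
    Drift beyond `tolerance` is treated as potential tampering and should
    trigger a deeper forensic investigation.
    """
    outputs = np.stack([model_fn(p) for p in probes])
    max_drift = float(np.max(np.abs(outputs - baseline)))
    return max_drift <= tolerance
```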

Prof. Dr. Christian Hummert, Research Director of the Cyberagentur, highlighted this broader vision. "Intelligent systems today decide on security, mobility, and societal processes," he stated. "With FIS, we are creating a research basis for the first time that enables AI to be not only powerful but also forensically verifiable, legally secure, and thus trustworthy."

This push for verifiability is essential for building public trust. As AI makes more autonomous decisions in high-stakes environments, society will demand more than just assurances of its reliability; it will require proof. The work being done in Halle could pave the way for new standards in AI auditing and certification, much like financial audits provide assurance for corporate accounting.

A Strategic Play for European Tech Sovereignty

Beyond its technical and legal importance, the FIS program represents a significant strategic move for Atos and for Europe. In an era of intense geopolitical competition over technology, establishing leadership in secure AI is a top priority. The project's emphasis on strengthening state integrity and economic security aligns with a broader European goal of achieving 'digital sovereignty'—reducing reliance on foreign technology for critical infrastructure.

By spearheading this research with German partners, Atos is positioning itself at the forefront of a nascent but vital market for trustworthy AI solutions. The insights and tools generated by FIS will almost certainly be integrated into Atos's portfolio of cybersecurity and secure cloud services, creating a distinct competitive advantage. For clients in regulated industries like banking, healthcare, and government, the ability to deploy forensically sound AI will become a non-negotiable requirement.

This initiative is not merely a research project; it is an investment in the foundational infrastructure of the future digital economy. As AI continues its relentless integration into society, the ability to police these systems, investigate their failures, and ensure their integrity will be as fundamental as the rule of law itself. The work beginning now aims to ensure that when an AI's decision is questioned, there is a clear and reliable way to find the answer.
