Recalibrating Risk: Why AI and Empathy Are Safety's New Playbook
Traditional safety metrics are failing to prevent the worst accidents. A new report reveals why AI and a human-centric approach are now non-negotiable.

CHICAGO, IL – December 02, 2025 – For years, corporate boardrooms have taken comfort in a simple narrative: declining workplace injury rates signal safer operations. But beneath the surface of these encouraging charts lies a troubling paradox. While minor incidents are decreasing, the rate of serious injuries and fatalities (SIFs)—life-altering events that devastate families and shatter corporate reputations—has remained stubbornly flat. This measurement mirage has brought the world of Environment, Health, and Safety (EHS) to a critical crossroads.

A new report from EHS software provider Evotix and the research-to-practice think tank What Works Institute confirms this inflection point. Titled Risk Recalibrated: 2026 Executive Leadership Report on AI, SIF, and Human-Centric EHS, the analysis argues that incremental change is no longer sufficient. To truly prevent catastrophic harm, business leaders must fundamentally recalibrate their entire approach to risk, moving beyond outdated metrics to embrace a strategy built on better data, smarter technology, and a deeper understanding of human behavior.

The Measurement Mirage: When Data Fails to Protect

The foundation of any effective strategy is accurate measurement, and it is here that traditional EHS has begun to falter. The report highlights a stark disconnect, with nearly one in five EHS leaders admitting their current safety metrics have no relation to real risk. The focus on total recordable injury rates often creates a false sense of security, optimizing for the prevention of minor slips and cuts while failing to identify the precursors to fatalities.

The core of the problem lies in the very definition of what is being tracked. While 80% of organizations report having SIF prevention programs, the term itself lacks a universal standard. Different departments, sites, and companies classify these severe events in wildly inconsistent ways, resulting in muddled data, confused priorities, and an inability to benchmark performance or share effective prevention strategies. “Leaders agree on the destination—preventing life-altering harm—but not yet on the common language or tools needed to get there,” noted Jonathan English, CEO of Evotix, in the report. This misalignment, he argues, “slows progress at a time when safety leaders won’t be satisfied with incremental change.”

Without a common language for risk, a company's safety strategy is built on a shaky foundation. It becomes impossible to identify trends, allocate resources effectively, or determine if new initiatives are actually working. The push for a standardized SIF definition is therefore not just a technical exercise for safety professionals; it is a strategic imperative for any organization serious about protecting its people and its brand.

AI in the Balance: Navigating the Digital Guardian

As organizations seek to move beyond flawed legacy metrics, many are looking to artificial intelligence as a potential solution. With 94% of EHS programs already using some form of digital management, AI is the logical next frontier. The Risk Recalibrated report reveals a state of cautious curiosity: 42% of organizations are actively piloting AI in limited scopes, while another 33% are exploring use cases. Applications range from automated reporting dashboards and AI-powered training copilots (both at 44%) to more advanced incident investigations (29%).

The promise is undeniable. AI can analyze vast datasets to identify hidden patterns and predict SIF precursors with a speed and accuracy no human team could match. It can power wearable devices that monitor for fatigue or computer vision systems that spot unsafe behaviors in real time. Yet, this enthusiasm is tempered by significant and valid concerns. The report finds that leaders are most worried about data quality (58%), algorithmic bias (36%), and a lack of governance (27%).

These are not trivial hurdles. An AI model trained on poor or incomplete data—the very problem plaguing SIF tracking today—will produce flawed and dangerous predictions. Similarly, an algorithm that contains hidden biases could unfairly scrutinize certain demographics or fail to recognize risks prevalent among underrepresented groups in the data. This has led to a growing consensus around the need for “human-led AI,” where technology serves to augment, not replace, the domain expertise of seasoned safety professionals. As the International Labour Organization has warned, over-reliance on AI can reduce critical human oversight and introduce new, unforeseen risks. The challenge for brands is not simply to adopt AI, but to implement it within a robust ethical framework that ensures transparency, accountability, and human control.

Beyond Hard Hats: The Human-Centric Frontier

Perhaps the most profound shift highlighted in the report is the growing recognition that safety is not just about processes and equipment; it is deeply intertwined with human psychology. EHS leaders increasingly acknowledge that psychosocial factors such as stress, mental health strain, fatigue, and cognitive overload are significant contributors to risk. The report shows high levels of recognition for the impact of work conditions (71%), mental health strain (66%), and fatigue (60%).

However, recognition has not yet translated into action. In a striking finding, the report reveals that 89% of organizations say these crucial human-centric factors are not, or are only partially, embedded in their EHS and SIF prevention strategies. This represents the next great frontier in workplace safety.

A human-centric approach accepts that workers are not cogs in a machine. Their perception, behavior, and response to risk are shaped by less visible but highly consequential factors. A culture that lacks psychological safety—where employees fear reprisal for reporting near-misses or raising concerns—is a culture that is blind to its most significant risks. Building a truly safe workplace requires designing systems that account for human limitations and fostering an environment that prioritizes holistic well-being.

As John Dony, CEO of the What Works Institute, stated, the confluence of AI advances and a deeper appreciation for human-centric approaches presents a “compelling opportunity to radically improve risk prevention.” To seize it, leaders must move beyond the traditional confines of the safety department. This recalibration requires a unified effort from operations, technology, and human resources to build systems that are not only technologically advanced and data-driven but also fundamentally empathetic. Companies that master this synthesis will not only prevent tragedy but will also cultivate a more resilient, engaged, and productive workforce, solidifying their reputation as leaders who truly value their people.
