Our latest Orca AI webinar entitled “Closing the Perception Gap: Safer Navigation in Busy Waters” convened an expert panel to confront the widening gap between the proven risks of human error in navigation and the slow, voluntary adoption of advanced safety solutions.
The well-attended online event focused on four unique perspectives vital to shaping the maritime future: Pierre Sames, Senior Vice President and Strategic Development Director at class society DNV; Captain Dennis Dude, Marine Operations Manager at gas-carrier manager TMS Cardiff Gas; Captain Gaurav Kapoor, General Manager HSSEQ and Compliance Manager at diversified vessel operator Pacific Carriers Limited (PCL); and our very own Dor Raviv, Orca AI’s CTO and Co-founder. The proceedings were moderated by Edwin Lampert of Riviera Maritime Media.
The stark reality: risk, cognitive overload and the ageing bridge
The foundation of the webinar lay in data underscoring the severity of the navigation challenge. Sames shared DNV’s insights, which revealed that while the total number of maritime casualties has increased in recent years, this trend is overwhelmingly driven by machinery damage and failures, an issue largely attributed to the maintenance burdens of an ageing fleet and high utilisation. Accidents specifically related to navigational failure (collision, contact and grounding) remain “relatively stable” at a high level.
However, the class society’s granular analysis revealed a disconcerting truth: 46% of all collision accidents involving container vessels are directly caused by human failures onboard. The leading factors are a lack of “proper lookout” or simply a “wrong decision”. This human weakness comes at an astronomical price, as Captain Kapoor noted, sharing P&I Club data pointing to USD 1.6 billion in navigation-related insurance claims between 2018 and 2023, with 59% of those claims stemming from human error. Beyond the financial impact, these incidents carry devastating potential for loss of life as well as damage to cargo and the environment.
Raviv traced the origin of the human challenge back to the ship’s command centre. He described the bridge of a large commercial vessel as operating under a technological paradox: while junior officers stepping onboard are digital natives familiar with AI in their daily lives, the bridge environment has remained largely unchanged for 70 years. The result is a chaotic “alert and data-rich environment” where the seafarer’s primary work becomes prioritizing risk and managing sheer complexity. Raviv illustrated this cognitive pressure with a video he personally recorded of a bridge overloaded with alarms, fragmented information and screens, emphasising the immense burden placed on an individual. This challenge is compounded by a looming pan-industry labour crisis, with nearly 100,000 additional seafarers expected to be needed by 2030.
AI as the digital assistant: Augmenting skills, not replacing seafarers
The solution presented was a philosophical shift toward human augmentation. Raviv was adamant that AI-powered situational awareness is not a replacement but a “data consolidation platform”. It functions by filtering the chaos, drawing insights from fragmented sensor data and “highlighting what really requires attention” to minimise cognitive load. This approach is designed to support the bridge crew, not replace them, keeping human discretion central to decision-making. Moreover, the technology enhances safety through anonymous data sharing among vessels – Orca AI’s Co-Captain network – giving crews foresight on events such as severe weather and congestion ahead of their current position.
Captain Dude attested to the operational impact of this approach. He spoke of the frustration felt by captains whose modern vessels have blind spots, sometimes losing 30% of their field of view due to yard design. He said Orca AI’s operational platform featuring the SeaPod digital watchkeeper has been a game-changer, providing visually rich and precise situational awareness that surpasses simple radar plots and effectively replaces the cumbersome reality of searching for infrared binoculars in the dark. Dude credits this technology with reducing stress and enabling “much better decisions and timely decisions” across the TMS Cardiff Gas fleet.
Captain Kapoor extended the argument by stressing that AI must simplify to succeed. He gave the example of a young, 20 to 22-year-old watchkeeper facing the immense pressure of managing 30 to 40 separate pieces of equipment simultaneously. Digitalisation must move beyond simply installing more screens, he insisted; the focus should be on breaking down complex procedures such as the passage plan into role-specific, digestible instructions so that a lookout is not overwhelmed by unnecessary technical jargon.
The roadblocks and responsibility of leadership
Despite the demonstrable benefits of advanced AI-powered systems, the panelists addressed why such systems remain niche rather than standard.
At the heart of the issue lies a strategic question raised during the debate: is navigational safety in congested waters a crisis that demands urgent intervention or an evolutionary challenge being managed adequately within current structures? The distinction matters, because how the industry answers it determines whether navigation support systems remain niche innovations adopted by progressive owners, or lay the foundations of a new safety standard. If the former, investment will be driven by operational necessity; if the latter, most players may wait for regulatory consensus before acting.
Sames confirmed the adoption bottleneck: DNV’s voluntary class notations for nautical safety (NAVI) are held by only about 2,000 vessels under its class, and the advanced coastal waters notation (AW) by only around 200. This signals that adoption requires incentives beyond goodwill. Sames recommended that the industry focus on gathering granular data to quantify the safety gains, which is the only way to “move the dial” at the IMO level.
Captain Kapoor firmly placed the responsibility on shipowners: “I don’t want to put the load on, or the responsibility, on to other industry stakeholders…”; for shipowners, it is “urgent to act”. He likened multi-million-dollar assets (ships) to luxury cars: you don’t rely solely on traffic laws and insurance to protect them. However, a significant barrier remains the unanswered legal question of liability: if a collision occurs despite the presence of an AI system, would the owner be afforded any exemption from claims given their investment?
The fundamental philosophical debate centred on the core purpose of an AI: should it be designed to compensate for human limitation (such as fatigue or inexperience) or to enhance human capability? The consensus favoured the latter. Captain Kapoor insisted that technology must not “bring down the competency of our people” but rather “enhance their competency and training” to meet the technical standard.
Captain Dude highlighted a critical way AI aids this enhancement: by recording events, the technology facilitates real-life, no-blame training. This direct feedback loop is far more appealing to seafarers than traditional simulators, which they actively dislike having to train on, often during their leave time ashore.
Key takeaways
Following the presentations, the panel discussion ultimately provided clear direction for the future of the human plus AI-enabled bridge:
- The priority is data and measurement: The single most impactful step any owner can take immediately is to implement systems that “collect, analyse and understand data” on actual operations. Sames underscored this imperative: “Only what you measure you can improve”. Data forms the foundation for justified improvements and is the necessary currency for eventual regulatory change.
- Training is foundational for trust: AI’s success depends on it being perceived as a trustworthy tool, not a policing mechanism. This requires human-to-human, hands-on training to ensure seafarers can fully utilise the complex systems. Crucially, as Raviv pointed out, organisations must also transparently train crews on the AI’s limitations and caveats, affirming that the final judgment and constant vigilance remain with the human.
- Ensure interfaces and physical layouts are user-friendly: The industry must move beyond simply mounting more screens. Captain Dude noted that the physical layout of the bridge remains a huge challenge, while systems must simplify data presentation and be positioned where they best serve the end user.
- Embrace the role of industry leadership: Since the IMO is not currently treating this as an urgent regulatory priority, progressive owners and technology providers must continue to lead by investing in quality, safe AI and collectively generate the evidence needed to establish the standards of tomorrow. Cybersecurity must be treated with vigilance, going beyond compliance to engage in active penetration testing while maintaining full transparency with clients.
While the session clearly demonstrated why the perception gap must be addressed, it also reflected where the industry currently stands: growing recognition of how cognitive overload affects decision-making on the bridge and increasing interest in technologies that help make sense of complexity rather than add to it. Looking ahead, improving navigation safety in busy waters will rely on continued efforts to support clearer judgment and earlier intervention on the bridge.