Everyone says safety matters. But what if the way we’re managing it is actually making things worse?
That was the quiet provocation behind our recent webinar, “The illusion of safety: what we’re getting wrong about crews, tech and fatigue”. This wasn’t another session of technology optimism or regulatory cheerleading; it was a frank reckoning with what’s really happening at sea – overburdened crews, rising fatigue and a growing reliance on systems that promise safety but mostly deliver distraction.
The panel featured Torbjörn Dimblad, Chief Information Officer at ship manager Anglo-Eastern; Captain Giorgios Asteros, Operations Director at Maran Tankers Management; Dupali Kuchekar, Product Manager at Lloyd’s Register; and Dor Raviv, Co-founder and CTO of Orca AI.
Technology Must Serve the Crew
Across the discussion, a clear theme emerged: safety technology must serve the crew, not the spreadsheet. Tools should lighten the load, not add layers of noise or oversight.
“There are no magic black boxes at the office,” said Torbjörn Dimblad. “The same dashboards we see ashore are available to the crew on board.”
For Anglo-Eastern, that means using Starlink to build real-time collaboration, automating admin tasks, and ensuring every digital layer adds clarity, not confusion. “If we can do it on shore, let’s not ask the ship to do it.”
Bureaucracy Is Not Safety
Captain Asteros agreed but warned that too much of the industry is heading in the wrong direction. “We’re seeing a rise in bureaucracy – not safety,” he said. Crews are often expected to manage multiple reporting systems while navigating some of the world’s most congested waters. Maran Tankers, for example, adopted Orca AI not as another dashboard but as a tool that simplifies and supports. “We wanted a system that doesn’t ask the crew to make more decisions. Just one that helps them see.”
From Surveillance to Support
That shift – from surveillance to support – was echoed by Dor Raviv, who described Orca’s vision as a digital co-pilot that stands beside the human, not in front of them. “Crews today aren’t just tired. They’re overloaded,” he said. “There’s too much data, too many alarms and not enough clarity. AI can help by filtering the noise and surfacing what actually matters.”
He pointed to real-world cases where AI flagged potential risks long before they became incidents, and noted that crew feedback is essential in refining how the system works. “We don’t want automation for the sake of automation. We want to reduce mental load and improve judgment, not replace it.”
From a systems and regulatory perspective, Dupali Kuchekar of Lloyd’s Register pointed out that many current frameworks still focus on hours of rest rather than the true drivers of fatigue. “Most systems have been designed around machines, not people,” she said. “If we want meaningful safety, we need to start with human factors engineering – purpose-based design that begins with how crews actually operate under stress.”
She advocated for deeper cross-industry collaboration, more transparent feedback loops, and a move away from one-size-fits-all compliance. “Autonomy doesn’t mean replacing anyone. It means giving people the tools and time to do their work with greater focus and less friction.”
Taken together, the discussion offered a sobering but hopeful message. The problem isn’t technology – it’s how we use it. If handled well, AI can help restore a kind of practical safety that respects human capacity. But it requires culture change, not just software updates.
As Raviv concluded: “AI isn’t here to take the helm. It’s here to stand beside the human – with clarity, with confidence and with respect for everything they already carry.”
<iframe width="560" height="315" src="https://www.youtube.com/embed/CFNay3G0qwA?si=mfHAECqqwRIRKmSn" title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" referrerpolicy="strict-origin-when-cross-origin" allowfullscreen></iframe>