At Orca AI’s recent Oslo event – Navigating Tomorrow: AI-Powered Safety and Performance in Maritime Operations – hosted at the Eero Rooftop (in a building designed by Finnish-American architect Eero Saarinen that formerly housed the US embassy), the discussion stayed firmly anchored in operational reality.
The panel brought together two shipowners – Nils Aden of Germany’s Harren Group and Trym Otto Sjølie of SFL Management – alongside Kaare Haug, Country Manager Norway at Bureau Veritas, and Frank Relou, Orca AI Regional Sales Manager, standing in for CEO and Co-founder Yarden Gross. Together they combined operational, regulatory and technology perspectives in a focused conversation on what is already changing onboard.
The session was moderated by Elizabeth Solvang of communications company AKP, which manages the Blue Maritime Cluster in Sunnmøre on Norway’s west coast.
Set against a backdrop of the crisis in the Middle East, GNSS disruption and increasingly congested sea lanes, the central question was straightforward: what is actually improving safety and performance at sea – and where are the limits?
Increasingly complex operating environment
If there was one shared observation, it was that the operating environment has become harder to manage.
For globally active owners like Harren, exposure to disruption is almost unavoidable. “Whenever something happens somewhere, it’s pretty likely that we have an exposure,” Aden noted, pointing to a shift away from predictable risk patterns towards more fragmented and fast-moving threats.
Sjølie described a similar reality from an operational standpoint where rerouting is not always an option. “What was okay yesterday is not okay tomorrow,” he said, reflecting a move towards continuous monitoring and reassessment rather than fixed procedures.
In practice, that has meant a shift towards what he described as a more “war room”-style mode of operation, with frequent updates, cross-functional input and near real-time reassessment of risk as situations evolve.
The result is a more dynamic and less predictable operating model – one that places greater pressure on crews to interpret incomplete or conflicting information in real time.
AI as decision support, not substitution
Within that context, AI is being applied in a pragmatic way.
Across the panel, there was clear alignment that AI today functions as decision support, not decision maker. The Orca AI platform enhances situational awareness, processes data at scale and highlights potential risks earlier, but it does not remove responsibility from the bridge.
“As of now, we see AI as a helpful tool to assist decision-making… but ultimate responsibility is clearly with the human on board,” Aden said.
That principle remains central, particularly in degraded environments where traditional inputs such as GNSS may be unreliable. In those situations, crews are still expected to validate, cross-check and ultimately decide.
From a technology perspective, the value lies in complementarity. AI can continuously monitor the environment and process multiple inputs simultaneously. “It never gets tired… it’s constantly 24/7 on watch,” Relou noted.
But he emphasised that final judgment – especially in edge cases – remains human.
Trust is the key adoption barrier
If capability is not the limiting factor, trust often is.
A familiar pattern is that systems are installed but not fully used – not because they don’t work, but because crews don’t yet trust or fully understand them.
“There must be an interface… you need to understand how it arrives at its answer,” Sjølie said, highlighting the importance of transparency in decision support systems.
This is where implementation tends to succeed or fail. As reflected in the event summary, top-down rollouts risk creating “just another screen on the bridge” – technology that exists but adds little operational value.
In practice, trust is built incrementally. Crews begin to rely on systems when they see consistent, tangible value – such as early detection of small or low-visibility targets they might otherwise have missed.
Data quality is foundational
A recurring theme – less visible, but critical – was data.
Shipping companies already generate huge volumes of it. The challenge is not access, but usability.
“You need proper data… otherwise your output can be wrong,” Sjølie said.
This reflects a broader industry gap. Data is often fragmented across systems, inconsistently structured and difficult to interpret in an operational context. Without quality and context, it cannot support decision-making, AI-driven or otherwise.
Value depends not on how much data is available, but on whether it can be trusted and used.
Integration is the real step-change
Looking ahead, the discussion pointed less towards new tools and more towards better integration.
Today’s vessels and organisations operate with multiple systems – navigation, maintenance, performance, reporting – each producing data, but rarely connected in a meaningful way.
“The most impactful change will be when the different tools start speaking to each other,” Aden noted.
This is where AI begins to move beyond incremental gains. Not by adding another layer of technology, but by connecting existing ones, turning fragmented data into coherent, usable insight.
Gradual shift towards distributed operations
The panel also explored how roles may evolve between ship and shore.
Sjølie pointed to a gradual shift rather than a sudden transition. Responsibility remains onboard, but more analytical and data-driven tasks are moving ashore, where additional capacity and oversight can support decision-making.
Haug added that regulation is still catching up with these developments. Today’s systems remain firmly within a decision-support framework, but the industry is moving towards a point where clearer boundaries and responsibilities will need to be defined as autonomy matures.
At the same time, early forms of autonomy are already being tested in practice. As Relou described it, current capabilities (referring to Orca AI’s Co-Captain functionality) are closer to “a super-smart autopilot” – augmenting navigation with perception and decision support, rather than replacing it outright.
Key takeaways
Several themes stood out consistently across the discussion:
- AI is already delivering value, particularly in real-time situational awareness and decision support
- Human responsibility remains central, regardless of the level of automation
- Trust, training and leadership are the main barriers to adoption – not technology
- Data quality and structure are foundational, but still uneven across the industry
- The next phase of progress will come from system integration, not additional standalone tools
- Autonomy will develop gradually, through human–machine collaboration rather than replacement
Ultimately, the discussion made one point very clear: AI is not changing the fundamentals of safe navigation.
What it is changing is how information is processed, how risks are identified and how decisions are supported.
In a maritime environment defined increasingly by complexity and uncertainty, the value of AI lies not in generating more data, but in helping crews act on insights earlier and with greater confidence.