Rules governing ship safety are traditionally prescriptive. This made sense for the structural elements, whose behaviour obeyed Newtonian physics. Digital electronics on the face of it seem highly predictable too. Surely such systems operate in a black and white binary world? They either switch on or off. There is no murky grey area in between the two states.
Indeed, this is true for simple systems, where the logic is rudimentary and all possible outcomes can be tested and proven. However, today’s multipurpose computers – think desktop PCs – have complex operating systems and are liable to have weak spots that programmers overlooked and cyber attackers can exploit.
With complex computers there are simply too many chains of logic to test exhaustively. With too many things to think about, programmers started to abstract certain common functions. In other words, they started assuming – and trusting – these common functions would perform as they should.
The trend began with core input/output and file management tasks, but gradually encompassed more advanced functions until they reached today’s shiny operating systems. So when a program tells a printer to print “hello”, the printer driver does the hard work of translating those characters into graphics, and telling the laser where to burn.
This approach was efficient: it saved repeatedly reinventing the wheel. However, nothing is perfect, and there were always edge cases or exceptions where these modules produced unexpected results or broke down entirely.
Things get exponentially more complicated when, first, these abstractions are arranged in tall stacks, each depending on the level below, and second, computers are connected and start talking to each other. Hackers realised they could exploit the ‘edge cases’ or weaknesses to trick a computer into doing something it should not.
In some respects, the complexity of today's networks resembles the human brain. You can know how an individual neuron functions at a biochemical level, but you cannot predict or mechanically determine the workings of an entire brain. There is simply too much going on. Any hope of using ladder logic – a graphical diagram for industrial digital computers based on circuit diagrams – to understand and detect weaknesses has long gone.
No fortress is impenetrable
So where does this leave today’s vessel owners and operators? After a period of either ignoring the problem or pretending it didn’t affect shipping, attitudes to cyber threats are beginning to change. If there was one positive outcome of the NotPetya ransomware attack on Maersk last year, it was awakening owners and operators to the fact that these threats are not hypothetical dangers from a science-fiction future but are real and have real-world consequences.
This coincided with a collective realisation that absolute prevention – building an impenetrable fortress – is not a feasible solution. Instead, it is dawning on the industry that risk-based management is probably the best way forward.
When the Oil Companies International Marine Forum released a major update to its Tanker Management and Self Assessment (TMSA) programme in April 2017, it introduced a new section, Element 13, on maritime security. In addition to physical security issues, such as piracy, this zeroed in on cyber-security. With the new version taking effect in January 2018, tanker operators had less than a year to prepare themselves or risk losing charters. The TMSA approach is wholly risk-based.
Element 13 states that, as a minimum, operators must introduce procedures for identifying threats applicable to a vessel and shore sites. Gaining a higher ranking requires preparing guidance and/or mitigation measures for these identified threats, as well as promoting cyber-security good practice among vessel personnel. Ensuring that security procedures are regularly updated moves an operator up another notch. To reach the highest ranking, novel or innovative methods for minimising cyber-risk must be demonstrated.
However, it doesn’t stop there. Cyber-risk management exerts a gravitational pull on other elements of the programme. For example, fostering a corporate culture that takes cyber risks seriously will clearly have a leadership dimension (Element 1). Meanwhile, software and system configuration management must be addressed in management of change (Element 7). The ranking obtained in the latter depends on having set procedures and processes for assessing impact and approving and implementing changes.
Timeline to enforcement
Owners and operators of other vessel types have more time to get their house in order. The International Maritime Organization does not require cyber-risk management to be incorporated into a company’s safety management system under the International Safety Management (ISM) Code until 1 January 2021. After that date, a ship could be detained if its cyber-risk policies are found wanting by port state control inspectors. Furthermore, it is probable that port state control will eventually mount a concentrated campaign during which authorities will specifically target cyber-risk compliance.
That said, speculation is rife as to whether port state control has the in-house competence to assess and police compliance, or whether a spot-check is the right way to do so.
Experience with ECDIS certainly gives credence to the view that when technology is involved, it can be difficult for inspectors to gain a true picture of what is happening on board. The threat landscape is evolving faster than ever.
In the 2000s, office IT systems were the predominant target. In other words, the PC on your desk. But these days, attacks are increasingly directed at operational technology (OT): the programmable logic controllers – ruggedised industrial digital computers that control electromechanical processes – and other industrial control systems that drive machinery. So whereas before it was mostly a company’s finances and reputation at risk, the threat has now escalated to safety of life, property, and environment.
Previously, operational technology systems were isolated from the vessel network. And still today a case can be made for keeping certain mission-critical systems, such as a loading computer or engine control system, disconnected. However, ‘air gaps’ are not a reliable form of protection. Inserting a USB memory stick directly into a piece of hardware to transfer files can easily defeat this protection.
Moreover, they are not a long-term solution. As digitalisation accelerates, there will be increasing pressure to connect these systems for the sake of efficiency. Meanwhile, connectivity is increasingly hardwired into PCs. Cabling will be available in the expectation that the machine will be connected at a later date, whether for a major software update or for it to be repurposed. Like so many aspects of life these days, it will become ever harder to stay off the grid.
Some specialised maritime software packages only run on outdated or deprecated operating systems. Once an OS is declared as officially discontinued, the software developer is under no obligation to continue providing security updates, so any security weaknesses discovered after that date will remain open. (The apposite exception that proves the rule is the special deal the UK’s Royal Navy struck with Microsoft giving it a lifeline to maintain Windows XP on its submarines).
Taking advantage of the weaknesses in obsolete systems would require detailed insider knowledge and a long chain of conditions to be met, but they nevertheless remain exploitable. And it is the sort of vulnerability that might be used by a well-resourced state actor, which probably has a stronger motive and more patience than a hoodie-clad teenager operating from a bedroom.
That said, targeted cyber attacks on ships by hackers, while theoretically possible, are by most accounts exceptional. Being caught up in a non-targeted attack poses a much bigger danger. Shipping forms one link in complex logistics and global supply chains, which expands the possible attack surface and makes it hard to determine the boundaries of responsibility. The NotPetya ransomware incident at Maersk, which affected back-office systems but ultimately disrupted vessel schedules, demonstrated the ripple effect of an attack or system failure on an adjacent link in the chain. In such a complex environment, trying to erect rigid defences that protect only the shipping part seems a fruitless endeavour.
A risk-based approach can be applied in addressing other challenges faced by shipowners. It flexes to accommodate the array of different hardware and software variants found on different vessels. Many operators have aspired to standardise across the fleet, but when it sinks in that two externally identical routers may have different firmware or chipsets inside, the never-ending nature of such a task becomes clear.
Instead, risk-based action plans can ensure an owner’s ability to cope with software or hardware procured from third parties. A shipowner has limited leverage over vendors when it comes to seeking assurances on performance. Microsoft might make an exception and bend its rules for the Royal Navy, but it is unlikely to accommodate similar requests from the owner of four bulk carriers based in Piraeus. A risk-based approach also offers a degree of protection against rogue code or vulnerabilities residing in software, burned into firmware, or hardwired into hardware unbeknown even to the developer.
The risk-based approach is also flexible enough to cope with possibly the most unpredictable component in a ship system: the seafarer. While it is premature to give a verdict on the effectiveness of the various industry awareness-raising campaigns and training programmes, anecdotal evidence is not encouraging.
According to a technical superintendent for a major tanker operator speaking at a recent IMarEST expert group meeting, too many crew are still sticking USB sticks where they should not. And the message that simply ‘charging’ a smartphone or similar device via a USB port is a potential malware infection route certainly has not hit home.
A more sensible approach
However, even if these bad habits can be cured, it is unrealistic to expect a handful of crew to possess the competency of a seasoned IT professional in order to contain a serious cyber attack on board a ship in the middle of the Atlantic. Again, the more sensible option would be to give consideration to potential scenarios and failure modes. From this, a set of instructions or guidance can be prepared for seafarers to follow, preventing the situation from worsening so the vessel can limp home.
There is a saying in the army that when the map and terrain don’t match, you are better proceeding according to the terrain. Digital technologies may, deep down, function with binary certainty. But the complexity of today’s systems combined with the fallibility of their creators, poorly trained users, and the existence of malevolent actors means their behaviour is anything but predictable. Let us not pretend otherwise and make sure we prepare accordingly.
Follow Kevin Tester on Twitter @MITE