The recent spread of the WannaCry ransomware around the world has been both tragic and predictable. Tragic because it knocked out organizations doing important work, like the NHS in the UK. Predictable because there is a growing gap between security practices in the information technology (“IT”) and operational technology (“OT”) arenas.
In IT, we learned long ago that software is inherently buggy, and that a reasonable defense is to patch — automatically, and as quickly as possible once a patch is released. This shrinks the window of opportunity for attackers.
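As one concrete illustration of what “patch automatically” looks like in practice — assuming a Debian/Ubuntu system here; other platforms have their own equivalents — the stock unattended-upgrades mechanism is enabled by a two-line APT configuration fragment:

```
// /etc/apt/apt.conf.d/20auto-upgrades
// Refresh package lists daily and install security upgrades automatically.
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
```

With this in place, security updates are applied on a schedule without anyone having to think about them, which is exactly the property OT systems tend to lack.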
We can talk about other causes — the NSA weaponizing zero-day exploits, or hackers stealing and reselling those tools. Both are real problems, but I don’t imagine either is going to end any time soon.
We can point to Microsoft for introducing the bug in the first place, but to be fair, their coding practices have been pretty good over the years and their response to security problems has been exemplary.
What remains is us, the end customers, patching known problems.
Our IT shops generally do a pretty good job, though this bit of ransomware certainly caught out a few who may not have had the skills, mandate or funding to do it right.
What we aren’t talking about is systems that are not managed by any IT organization. Operational systems control the doors and heating and cooling systems in our offices. They run devices ranging from security scanners at airports to camera surveillance at the mall. These are “operational technology” — same basic technology as IT, but used to interface with and manage physical systems.
The trouble with OT is that it is typically installed by people without IT skills. Heating/ventilation/air conditioning (HVAC) vendors install PCs that keep us warm or cool. Physical security vendors install camera and door control systems. The list goes on. Worse, the systems they deploy are installed and forgotten: they keep running, without anyone thinking much about them, for decades.
Here’s a cool example: an old Commodore Amiga system that has been running a school’s HVAC for 30 years.
Historically, these systems have not been connected to any network. Their security basically relied on physical isolation, both from other computers and — behind locked doors — from unauthorized people. It didn’t matter if the code was buggy and exploitable, because only one or two authorized people could physically interact with them.
The world is changing, however. Your HVAC or security vendor wants to be able to assist you without a site visit. You want to be able to monitor who just walked into the building without leaving your desk. These systems are getting connected to (at least) private networks and in some cases to the public Internet.
That’s a problem, because these systems run old code with no one looking after the security basics: firewalls, OS patches, intrusion detection, anti-malware, and so on.
This is the brave new world of “Internet of Things” where old, unpatched devices perform critical functions and also get IP addresses.
We should worry. No matter how good a job we do building IoT systems today, how confident can we be that what we build will still be secure in 10 years? 20? 30?