As the dust just begins to settle on the highest-profile ransomware attack to date – one that has apparently affected systems around the globe and caused particular havoc in the UK’s NHS – the recriminations are just starting.
Is the root cause a bad attachment on an email, opened in error by a busy employee? Or is it systemic under-investment in IT? The knock-on effect of an ideologically-driven government, perhaps? Maybe it’s poor operating system design. Or is it laziness in not applying patches and upgrades? Is it the fault of some foreign government determined to bring the British health system to its knees? Or maybe some socially maladjusted teenager who didn’t realise the potential impact of their behaviour? Perhaps it’s a massive plot by organised crime. Is the NSA culpable for having discovered and kept secret a system vulnerability? Or the NSA, or whoever else, for developing the apparent distribution vector for the malware based on that vulnerability? Or can we blame those who ‘stole’ that information and disclosed it?
The story continues to unfold as I write, and the urge to assign blame will probably pull in many more possible culprits. Surely the least blame should attach to the “someone” who clicked where they shouldn’t: a system brought down by a moment’s inattention from a lowly receptionist is certainly not fit for purpose.
Devotees of particular operating systems are quick to say “this wouldn’t happen with Linux” or to extol the virtues of their shiny Apple Macs. But one of the pernicious features of ransomware is that it doesn’t require any particular operating system smartness: in technical terms, it is a “user-space” program. It doesn’t need to do anything other than open files, edit them (in an unusual way, by encrypting the contents), and overwrite the original. These actions take place on every desktop every day, and all our desktop operating systems are inherently vulnerable through their design.
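To make the “user-space” point concrete, here is a minimal, harmless sketch in Python. It is not any real malware’s code: it uses a trivial XOR scramble as a stand-in for encryption, operates only on a throwaway temporary file, and exists solely to show that “open a file, transform its contents, overwrite the original” requires nothing beyond the ordinary file operations available to any unprivileged program on any desktop operating system.

```python
# Illustrative sketch only: demonstrates that overwriting a file with a
# transformed copy of itself is plain user-space file I/O, with no special
# operating-system privileges involved. XOR is a toy stand-in for encryption.
import os
import tempfile


def xor_overwrite(path: str, key: int) -> None:
    """Read a file, XOR every byte with `key`, and overwrite the original."""
    with open(path, "rb") as f:
        data = f.read()
    scrambled = bytes(b ^ key for b in data)  # stand-in for real encryption
    with open(path, "wb") as f:               # overwrite the original file
        f.write(scrambled)


# Demonstrate on a throwaway temp file, then clean up.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "w") as f:
    f.write("ordinary document contents")

xor_overwrite(path, 0x5A)  # contents are now unreadable
xor_overwrite(path, 0x5A)  # XOR is symmetric: applying it again restores them

with open(path) as f:
    assert f.read() == "ordinary document contents"
os.remove(path)
```

Nothing here asks the kernel for anything unusual, which is why no choice of desktop operating system, on its own, rules this class of attack out.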
Of course, the spread of such malware does rely on features of operating systems and application programs. Many people will remember when Microsoft software was rife with major flaws and vulnerabilities, leading to endless virus problems. Most of those are history now, but numerous design decisions taken in the 1990s, which shaped the details of operating system features, are still with us, and still making their effects felt.
The last decade or so has seen a massive growth in security awareness – from IT professionals and from everyday users of systems. The result is much better management of security incidents. I’d wager that if the latest ransomware attack had been seen a decade ago, the impact would have been even worse because there wouldn’t have been nearly as much planning in place to handle the problem. But for all that awareness, and even substantial investment, security incidents are getting bigger, more spectacular, more far-reaching: and, critically, more damaging for individuals and for society.
We’re getting better. But the cyber security problems are multiplying faster. In the name of the “internet of things” we’re rapidly deploying millions of new devices whose typical security characteristics are rather worse than those of a PC 15 years ago. And no one has a systematic plan for patching those, or turning them off before they become dangerous.
And let’s not be in any doubt: internet of things devices are potentially dangerous in a way that our old-fashioned information systems and file servers are not. These devices control real things, with real kinetic energy. Things that go bang when you mis-direct them. Medical equipment that may be life-or-death for the patient. Self-driving cars that could endanger not just their own passengers, but many others too – or could just bring the economy to a standstill through gridlock. A future malware attack might not just stop a few computers: what if all the dashboards on the M25 suddenly demanded a $300 payment?
Ultimately as a society, we’ll get the level of security we are willing to pay for. So far, for the most part, technology gives us massive benefits, and security incidents are an occasional inconvenience. Maybe that balance will persist: but my money would be on that balance shifting towards the downsides for quite a while to come.
There are plenty of good bits of security technology deployed; many more are in laboratories just waiting for the right funding opportunity. There are many great initiatives to make sure that there are lots of security experts ready for the workforce in a few years’ time – but we also need to ensure that all programmers, engineers and technicians build systems with these concerns in mind. What’s more, we need bankers, politicians, doctors, lawyers, managers, and dozens of other professions similarly to take security seriously in planning systems, processes, and regulations.
Big security challenges are part of the future of technology: the answer is not to reach for simplistic fixes or finger-pointing, but to make progress on many fronts. We can do better.