Responding to the Global Ransomware Melt-down

As the dust just begins to settle on the highest-profile ransomware attack to date – one that has apparently affected systems around the globe and caused particular havoc in the UK’s NHS – the recriminations are just starting.

Is the root cause a bad attachment on an email, opened in error by a busy employee?  Or is it systematic under-investment in IT?  The knock-on effect of an ideologically-driven government, perhaps? Maybe it’s poor operating system design.  Or is it laziness in not applying patches and upgrades?   Is it the fault of some foreign government determined to bring the British health system to its knees? Or maybe some socially maladjusted teenager who didn’t realise the potential impact of their behaviour?  Perhaps it’s a massive plot by organised crime.  Is the NSA culpable for having discovered and kept secret a system vulnerability?  Or maybe the NSA or whoever developed the apparent distribution vector for the malware, based on that vulnerability?  Or can we blame those who ‘stole’ that information and disclosed it?

The story continues to unfold as I write, and the urge to assign blame will probably pull in many more possible culprits.  Surely least blame should attach to the “someone” who clicked where they shouldn’t: a system brought down by a moment’s inattention from a lowly receptionist is certainly not fit for purpose.

Devotees of particular operating systems are quick to say “this wouldn’t happen with Linux” or to extol the virtues of their shiny Apple Macs.  But one of the pernicious features of ransomware is that it doesn’t require any particular operating system smartness: in technical terms, it is a “user-space” program.  It doesn’t need to do anything other than open files, edit them (in an unusual way, by encrypting the contents), and overwrite the original.  These actions take place on every desktop every day, and all our desktop operating systems are inherently vulnerable through their design.
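The point about ransomware being an ordinary user-space program can be made concrete. The sketch below (in Python, purely for illustration) operates on a throwaway temporary file of its own; the toy XOR "cipher" stands in for the strong encryption real ransomware uses. Note that nothing here requires administrator rights or any operating-system exploit: just the same open, read, and write calls every application makes.

```python
import os
import tempfile

def xor_encrypt(data: bytes, key: bytes) -> bytes:
    # Toy XOR cipher standing in for real encryption;
    # actual ransomware would use a strong cipher such as AES.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Create a throwaway "document" to operate on.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"patient records...")

key = b"secret"

# The whole attack is three ordinary user-space file operations:
with open(path, "rb") as f:               # 1. open and read the file
    plaintext = f.read()
ciphertext = xor_encrypt(plaintext, key)  # 2. "edit" it, by encrypting
with open(path, "wb") as f:               # 3. overwrite the original
    f.write(ciphertext)

with open(path, "rb") as f:
    assert f.read() != plaintext  # data now unreadable without the key

os.remove(path)
```

None of the three steps above is privileged, which is why no mainstream desktop operating system, whatever its pedigree, can prevent them by design.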

Of course, the spread of such malware does rely on features of operating systems and application programs.  Many people will remember when Microsoft software was rife with major flaws and vulnerabilities, leading to endless virus problems.  Most of those are history now, but numerous design decisions behind the details of operating system features, taken in the 1990s, are still with us, and still making their effects felt.

The last decade or so has seen a massive growth in security awareness – from IT professionals and from everyday users of systems.  The result is much better management of security incidents.  I’d wager that if the latest ransomware attack had been seen a decade ago, the impact would have been even worse because there wouldn’t have been nearly as much planning in place to handle the problem.  But for all that awareness, and even substantial investment, security incidents are getting bigger, more spectacular, more far-reaching: and, critically, more damaging for individuals and for society.

We’re getting better.  But the cyber security problems are multiplying faster.  In the name of the “internet of things” we’re rapidly deploying millions of new devices whose typical security characteristics are rather worse than those of a PC 15 years ago.  And no one has a systematic plan for patching those, or turning them off before they become dangerous.

And let’s not be in any doubt: internet of things devices are potentially dangerous in a way that our old-fashioned information systems and file servers are not.  These devices control real things, with real kinetic energy.  Things that go bang when you mis-direct them.  Medical equipment that may be life-or-death for the patient.   Self-driving cars that could endanger not just their own passengers, but many others too – or could just bring the economy to a standstill through gridlock.  A future malware attack might not just stop a few computers: what if all the dashboards on the M25 suddenly demanded a $300 payment?

Ultimately as a society, we’ll get the level of security we are willing to pay for.  So far, for the most part, technology gives us massive benefits, and security incidents are an occasional inconvenience.  Maybe that balance will persist: but my money would be on that balance shifting towards the downsides for quite a while to come.

There are plenty of good bits of security technology deployed; many more are in laboratories just waiting for the right funding opportunity.  There are many great initiatives to make sure that there are lots of security experts ready for the workforce in a few years’ time – but we also need to ensure that all programmers, engineers and technicians build systems with these concerns in mind.  What’s more, we need bankers, politicians, doctors, lawyers, managers, and dozens of other professions similarly to take security seriously in planning systems, processes, and regulations.

Big security challenges are part of the future of technology: the solution is not to reach for simplistic solutions or finger-pointing, but to make progress on many fronts.  We can do better.

When is it too obsolete?

A Telegraph report tells of a major UK hospital falling victim to a ransomware attack:

“A source at the trust told Health Service Journal that the attack had affected thousands of files on the trust’s Windows XP operating system, and the trust’s file sharing system between departments has been turned off while an investigation takes place.”

Of course, what stands out there is the mention of Windows XP: this operating system was declared obsolete nearly three years ago – though some customers have been able to purchase extended support from Microsoft.  Was the hospital one of them?  Let’s assume not, for a moment – after all, the fee for that support is unlikely to be small.

On one level, that sounds like a classic case of cutting corners and reaping a bad outcome as a result: the longer software has been in circulation, the more likely it is to fall victim to someone probing it for vulnerabilities.  Software that’s no longer maintained will not be patched when such vulnerabilities are found, and as a result, anyone running it is easy prey.   Without a good backup strategy, ransomware can prove very expensive – and, at that, you need a backup strategy which takes account of the behaviour of ransomware, lest you find that the backup itself is as useless as the main data.
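The point about backups needing to take account of ransomware behaviour deserves unpacking. A backup that simply mirrors the live files will, on its next scheduled run, faithfully copy the encrypted versions over the only good copies; a scheme that keeps dated snapshots survives. The sketch below is a hypothetical illustration (the file names and one-file "trust" are invented), using only temporary directories:

```python
import os
import shutil
import tempfile

# Three hypothetical locations: live data, a mirror-style backup
# (overwritten in place each run), and a snapshot-style backup
# (a new dated copy kept each run).
live = tempfile.mkdtemp()
mirror = tempfile.mkdtemp()
snapshots = tempfile.mkdtemp()

doc = os.path.join(live, "report.txt")
with open(doc, "w") as f:
    f.write("good data")

def mirror_backup():
    # Naive approach: copy the live file over the previous backup.
    shutil.copy(doc, os.path.join(mirror, "report.txt"))

def snapshot_backup(tag):
    # Versioned approach: keep a separate copy per run.
    shutil.copy(doc, os.path.join(snapshots, f"report.{tag}.txt"))

mirror_backup()
snapshot_backup("day1")

# Ransomware strikes: the live file is overwritten with ciphertext.
with open(doc, "w") as f:
    f.write("ENCRYPTED")

# The scheduled backups run again, unaware anything is wrong.
mirror_backup()
snapshot_backup("day2")

with open(os.path.join(mirror, "report.txt")) as f:
    assert f.read() == "ENCRYPTED"    # the mirror is now useless
with open(os.path.join(snapshots, "report.day1.txt")) as f:
    assert f.read() == "good data"    # the old snapshot is recoverable
```

In practice the snapshot store also needs to be somewhere the infected machine cannot write to, or the ransomware will simply encrypt the snapshots as well.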

I don’t know the cost of extended support from Microsoft, but I’d be surprised if it was cheaper than the cost of licences for an up-to-date copy of the operating system: the reason for not upgrading is usually tied up in all the dependencies.  Old software needs old operating systems in order to run reliably. If your system isn’t broken, it’s easier to leave everything exactly as it was.  The world is full of old software.  It doesn’t naturally wear out, and so it may continue to be used indefinitely.  Until, that is, someone forcibly breaks it from outside.  Then you’ve suddenly got a big problem: you may pay the ransom, and get your data back.  But how long before you fall victim to another attack, because you’ve still got an obsolete system?

It’s easy to criticise people and organisations who fall victim to attacks on their out-of-date software: keeping everything updated is, you may say, part of the cost of doing business today.  But that’s easy to say, and less easy to do.  In the management of most other parts of the estate, you can do updates and maintenance at times to suit you and your cash-flow.  In the case of software maintenance, that decision may be taken out of your hands.  You might be forced into buying lots of new software – and then, suddenly, other software, and maybe even bespoke systems, as well as lots of new hardware, with costs suddenly spiralling out of control.

This problem is almost as old as the personal computer (if not older), but recent events make it much worse. First, the scale and impact of cyber attacks is raising the stakes quite alarmingly.  Meanwhile, computer technology has rather stabilised: a desktop PC doesn’t need to do anything markedly different now than it did ten years ago.  So whereas PCs used to get replaced every three years (with hard-up organisations stretching that to five, say), now there’s really no need to buy a new machine until five or six years have elapsed.  If you stretch that a bit, you will easily run past the end of the device’s support lifetime.  And then, you potentially reach a point of great instability.

And – above all – the real problem is that this is getting worse by the day.  Not only are PCs living longer, but we are adding random poorly-supported devices to the internet daily, in the name of tablets, e-readers, TVs, internet-of-things, and a dozen other causes.  Few of those are anywhere near as well-supported as Windows, and many will be expected to operate for a decade or more. It’s not going to be pretty.