InfoSecurity and Revenue Protection

I registered for the InfoSecurity show, though I didn’t manage to attend.  They did send me a badge.

Back in the day, these badges just contained a conventional barcode.  If you scanned the barcode, all you would see was a seemingly meaningless ID.  Clearly that ID was a lookup key into the organisers’ registration database.  For a fee, they would rent exhibitors a barcode scanner.  The deal is, of course, that you scan the badge of each person who visits your stand, with a view to following up the conversation and trying to sell them something (etc.) later on.  You could scan the badges with any barcode scanner (including a phone app), of course – but without access to the database, those scans would have no value.  So the fee also covers a service to let you download the registration details (name, address, …) of those whose IDs you scanned.  The fee is not inconsiderable.

Now, I see that this year’s badges contained a QR code as well as the old-fashioned barcode.  I was a bit surprised to see the following when I scanned my badge:

{"CJe";"BE8FYTR","DO";"Vojwfstjuz pg Pygpse","G";"Boesfx","KU";"Qspg pg Tztufnt Tfdvsjuz","T";"Nbsujo"}

At first, that looks like gibberish.  Some advanced encryption, maybe?  Well, no: you don’t need to be a Bletchley Park veteran to spot that the encryption scheme is in fact about 2000 years old (according to legend, at least).  A bright ten-year-old could sort that one out.  (Look up: Caesar’s cipher, or ROT-25.)
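For the curious, here is a minimal decoding sketch in Python: shift every letter back one place in the alphabet and leave everything else alone.

```python
# Undo the badge's Caesar shift: move each letter back one place;
# digits and punctuation pass through unchanged.
def shift_back_one(text: str) -> str:
    out = []
    for ch in text:
        if "a" <= ch <= "z":
            out.append(chr((ord(ch) - ord("a") - 1) % 26 + ord("a")))
        elif "A" <= ch <= "Z":
            out.append(chr((ord(ch) - ord("A") - 1) % 26 + ord("A")))
        else:
            out.append(ch)
    return "".join(out)

print(shift_back_one("Vojwfstjuz pg Pygpse"))      # -> University of Oxford
print(shift_back_one("Qspg pg Tztufnt Tfdvsjuz"))  # -> Prof of Systems Security
```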

I assume they’re still trying to charge the considerable rental fee for the barcode scanners, but, really, there’s a lot of information on offer without paying.  Maybe the fact that the email address isn’t there would be reason enough to continue handing over the cash.

The choice of Caesar’s cipher is perhaps rather an embarrassment for a trade show dedicated to the latest and greatest in information security, though: one might justifiably say that it looks amateurish.  Either you’re attempting serious revenue (and privacy) protection, in which case modern strong encryption should be used; or you don’t mind that the data is disclosed, in which case, why not use plain text?


Footnote: this issue was reported two years ago (which shows how long it is since I last went to InfoSecurity!) by Scott Helme.  So clearly the show organisers aren’t too bothered.

Responding to the Global Ransomware Melt-down

As the dust just begins to settle on the highest-profile ransomware attack to date – one that has apparently affected systems around the globe and caused particular havoc in the UK’s NHS – the recriminations are just starting.

Is the root cause a bad attachment on an email, opened in error by a busy employee?  Or is it systematic under-investment in IT?  The knock-on effect of an ideologically-driven government, perhaps?  Maybe it’s poor operating system design.  Or is it laziness in not applying patches and upgrades?  Is it the fault of some foreign government determined to bring the British health system to its knees?  Or maybe some socially maladjusted teenager who didn’t realise the potential impact of their behaviour?  Perhaps it’s a massive plot by organised crime.  Is the NSA culpable for having discovered and kept secret a system vulnerability?  Or maybe the NSA, or whoever developed the apparent distribution vector for the malware based on that vulnerability?  Or can we blame those who ‘stole’ that information and disclosed it?

The story continues to unfold as I write, and the urge to assign blame will probably pull in many more possible culprits.  Surely least blame should attach to the “someone” who clicked where they shouldn’t: a system brought down by a moment’s inattention from a lowly receptionist is certainly not fit for purpose.

Devotees of particular operating systems are quick to say “this wouldn’t happen with Linux” or to extol the virtues of their shiny Apple Macs.  But one of the pernicious features of ransomware is that it doesn’t require any particular operating system smartness: in technical terms, it is a “user-space” program.  It doesn’t need to do anything other than open files, edit them (in an unusual way, by encrypting the contents), and overwrite the original.  These actions take place on every desktop every day, and all our desktop operating systems are inherently vulnerable through their design.

Of course, the spread of such malware does rely on features of operating systems and application programs.  Many people will remember when Microsoft software was rife with major flaws and vulnerabilities, leading to endless virus problems.  Most of those are history now, but numerous design decisions shaping the details of operating system features were taken in the 1990s and are still with us, still making their effects felt.

The last decade or so has seen a massive growth in security awareness – from IT professionals and from everyday users of systems.  The result is much better management of security incidents.  I’d wager that if the latest ransomware attack had been seen a decade ago, the impact would have been even worse because there wouldn’t have been nearly as much planning in place to handle the problem.  But for all that awareness, and even substantial investment, security incidents are getting bigger, more spectacular, more far-reaching: and, critically, more damaging for individuals and for society.

We’re getting better.  But the cyber security problems are multiplying faster.  In the name of the “internet of things” we’re rapidly deploying millions of new devices whose typical security characteristics are rather worse than those of a PC 15 years ago.  And no one has a systematic plan for patching those, or turning them off before they become dangerous.

And let’s not be in any doubt: internet of things devices are potentially dangerous in a way that our old-fashioned information systems and file servers are not.  These devices control real things, with real kinetic energy.  Things that go bang when you mis-direct them.  Medical equipment that may be life-or-death for the patient.   Self-driving cars that could endanger not just their own passengers, but many others too – or could just bring the economy to a standstill through gridlock.  A future malware attack might not just stop a few computers: what if all the dashboards on the M25 suddenly demanded a $300 payment?

Ultimately as a society, we’ll get the level of security we are willing to pay for.  So far, for the most part, technology gives us massive benefits, and security incidents are an occasional inconvenience.  Maybe that balance will persist: but my money would be on that balance shifting towards the downsides for quite a while to come.

There are plenty of good bits of security technology deployed; many more are in laboratories just waiting for the right funding opportunity.  There are many great initiatives to make sure that there are lots of security experts ready for the workforce in a few years’ time – but we also need to ensure that all programmers, engineers and technicians build systems with these concerns in mind.  What’s more, we need bankers, politicians, doctors, lawyers, managers, and dozens of other professions similarly to take security seriously in planning systems, processes, and regulations.

Big security challenges are part of the future of technology: the answer is not to reach for simplistic fixes or finger-pointing, but to make progress on many fronts.  We can do better.


When is it too obsolete?

A Telegraph report tells of a major UK hospital falling victim to a ransomware attack.

A source at the trust told Health Service Journal that the attack had affected thousands of files on the trust’s Windows XP operating system, and the trust’s file sharing system between departments has been turned off while an investigation takes place.

Of course, what stands out there is the mention of Windows XP: this operating system was declared obsolete nearly three years ago – though some customers have been able to purchase extended support from Microsoft.  Was the hospital one of them?  Let’s assume not, for a moment – after all, the fee for that support is unlikely to be small.

On one level, that sounds like a classic case of cutting corners and reaping a bad outcome as a result: the longer software has been in circulation, the more likely it is to fall victim to someone probing it for vulnerabilities.  Software that’s no longer maintained will not be patched when such vulnerabilities are found, and as a result, anyone running it is easy prey.   Without a good backup strategy, ransomware can prove very expensive – and, at that, you need a backup strategy which takes account of the behaviour of ransomware, lest you find that the backup itself is as useless as the main data.
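What might a backup strategy that “takes account of the behaviour of ransomware” look like?  At minimum: versioned copies that are never overwritten in place, so that an already-encrypted source tree cannot silently replace the last good backup.  A minimal sketch follows – the paths are hypothetical, and in practice the backup volume should also be offline or write-once:

```python
import shutil
import time
from pathlib import Path

def versioned_backup(source: Path, backup_root: Path) -> Path:
    """Copy the source tree into a fresh, timestamped directory.

    Each run creates a new copy rather than overwriting the previous
    one, so ransomware-encrypted files cannot destroy older backups.
    """
    dest = backup_root / time.strftime("%Y%m%d-%H%M%S")
    shutil.copytree(source, dest)
    return dest

# Hypothetical paths, for illustration only.
versioned_backup(Path("/home/alice/documents"), Path("/mnt/backups"))
```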

I don’t know the cost of extended support from Microsoft, but I’d be surprised if it was cheaper than the cost of licences for an up-to-date copy of the operating system: the reason for not upgrading is usually tied up in all the dependencies.  Old software needs old operating systems in order to run reliably. If your system isn’t broken, it’s easier to leave everything exactly as it was.  The world is full of old software.  It doesn’t naturally wear out, and so it may continue to be used indefinitely.  Until, that is, someone forcibly breaks it from outside.  Then you’ve suddenly got a big problem: you may pay the ransom, and get your data back.  But how long before you fall victim to another attack, because you’ve still got an obsolete system?

It’s easy to criticise people and organisations who fall victim to attacks on their out-of-date software: keeping everything updated is, you may say, part of the cost of doing business today.  But that’s easy to say, and less easy to do.  In the management of most other parts of the estate, you can do updates and maintenance at times to suit you and your cash-flow.  In the case of software maintenance, that decision may be taken out of your hands.  You might be forced into buying lots of new software – and then, suddenly, other software, and maybe even bespoke systems, as well as lots of new hardware, with costs spiralling out of control.

This problem is almost as old as the personal computer (if not older): but recent events make it much worse.  First, the scale and impact of cyber attacks are raising the stakes quite alarmingly.  But meanwhile, computer technology has rather stabilized: a desktop PC doesn’t need to do anything markedly different now from what it did ten years ago.  So whereas PCs used to get replaced every three years (with hard-up organisations stretching that to five, say), now there’s really no need to buy a new machine until five or six years have elapsed.  If you stretch that a bit, you will easily run past the end of the device’s support lifetime.  And then, you potentially reach a point of great instability.

And – above all – the real problem is that this is getting worse by the day.  Not only are PCs living longer, but we are adding random poorly-supported devices to the internet daily, in the name of tablets, e-readers, TVs, internet-of-things, and a dozen other causes.  Few of those are anywhere near as well-supported as Windows, and many will be expected to operate for a decade or more. It’s not going to be pretty.

lock-down or personal control?

A little piece of news (h/t Jonathan Zittrain) caught my eye.

The gist is that Cisco have been upgrading the firmware in Linksys routers so that they can be managed remotely (under the user’s control, normally) from Cisco’s cloud.  The impact of this is hotly debated, and I think Cisco disputes the conclusions reached by the journalists at Extreme Tech.  Certainly there’s a diffusion of control, a likelihood of future automatic updates, various content restrictions (against, for example, using the service – whatever that entails – for “obscene, pornographic, or offensive purposes”, things which are generally lawful), and a general diminution of privacy.

Some commentators have reacted by suggesting buying different products (fair enough, depending on where the market leads), ‘downgrading’ to older firmware (not a long-term solution, given the onward march of vulnerability discovery),  or flashing one’s own firmware from some open source router development project (this is a surprisingly fertile area).  To someone interested in trusted computing and secure boot, the latter is quite interesting – you can do it now, but can/should you be able to in the future?

One of the central themes of some of the stuff around secure boot is that we want cryptographically-strong ways to choose which software is and is not booted, and to keep this choice well away from the users, because they can’t be trusted to make the right decision.  That’s not some patronizing comment about the uneducated or unaware – I’d apply it to myself.  (I speak as someone who hasn’t knowingly encountered a Windows virus/trojan/worm in decades despite using Windows daily: I think my online behaviour is fairly safe, but I am under no illusions about how easy it would be to become a victim.)  Whilst there are good reasons to want to retain a general power to program something like a PC to do whatsoever I wish, I can imagine that the majority of internet-connected devices in the future will be really quite tightly locked-down.  And this, in general, seems a good thing.  I don’t demand to be able to re-program my washing machine, and I shouldn’t expect to be able to re-program my digital photo frame, or even my TV set-top box.

However, whereas my washing machine has no immediate prospect of connection to the internet (though the smart grid could change all that), many other devices are connected – and their mis-design will compromise my privacy or my data’s integrity, or worse.  And, indeed, involuntary remote updates could break them or reduce their functionality (a mess that already exists, and is really not well-addressed by consumer protection legislation).  I might have a reasonably secure household of internet junk until some over-eager product manager decides to improve one of my devices overnight.  I could wake to find my household seized up and all my privacy gone, if an attacker came in along the same channel as the product updates.  This really isn’t particularly far-fetched.

So, I am faced with a tension.  A paradox, almost.  I am not well-placed to make security decisions by myself; none of us is, especially when we don’t even realise the decisions we are making are security-related.  But in this complex advertising-driven world, those to whom I am likely to delegate (perhaps involuntarily) such decisions are (a) themselves imperfect, and much more worryingly (b) highly motivated to monetize my relationship with them in every way possible.  Goodbye privacy.

The market might help to remedy the worst excesses in these relationships, but when it’s dominated by large vendors suing each other over imagined patent infringements, I’m not optimistic about it resolving this tension in a timely manner.

Regulation could be a solution, but is seldom timely and often regulates the wrong thing (as the current mess of cookie regulation in the EU amply demonstrates).

Perhaps there is a role for third-party, not-for-profit agencies which help to regulate the deployment of security and privacy-related technologies – receiving delegated control of a variety of authorization decisions, for example, to save you from wondering if you want App X to be allowed to do Y.  They could in principle represent consumer power and hold the ring against both the vendors who want to make the user into the product and the attackers whose motives are simpler.  But will we build technologies in such a way as to permit the rise of such agencies?  It will take a concerted effort, I think.

Any other ideas?

Guess again

Over in a fenland blog, there is a little discussion going on about passwords.  Evidently, Google has been doing some advertising about what makes a good password, and this has come in for some criticism.

In that blog post, Joseph Bonneau proposes an alternative one-liner:

A really strong password is one that nobody else has ever used.

(One of the commenters (J. Carlio) suggests modifications to add something about memorability.)

This is a seductive idea: it is, broadly, true.  It encapsulates the idea that you are trying to defeat brute force attacks, and that these generally succeed by attempting plausible passwords.

But I don’t think it’s good advice.  That is mainly because most people are poor at estimates involving very large numbers: whether the likelihood of my password being used by someone else is one in a thousand, one in a million, or one in a trillion (the word of the week, thanks to national debts) is something that, I would say, few people have a good intuition about.  In just the same way, people are poor at risk assessment for unlikely events.
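To see why intuition fails here, it helps to turn those odds into attacker effort.  A back-of-the-envelope sketch (the guessing rate below is an illustrative assumption, not a benchmark):

```python
# How long does an offline attacker need to make N guesses?
RATE = 10**10  # guesses per second -- an assumption for illustration

for label, n in [("one in a thousand", 10**3),
                 ("one in a million", 10**6),
                 ("one in a trillion", 10**12)]:
    print(f"{label}: about {n / RATE:g} seconds to enumerate")
```

At that assumed rate, even a “one in a trillion” guessing space is exhausted in under two minutes – exactly the sort of result few people’s intuition would predict.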


secure boot gone viral

After the spread of some information, and some mis-information, the story of secure boot in Windows 8, achieved through security features of UEFI, has gone viral.  Linux forums all over the web are suddenly full of activists angry about some evil corporation wanting to deprive them of free software.  The aforementioned company has issued a long blog post by way of clarification.

The issue

In brief, the point is that there are plans to replace the BIOS – the main body of firmware which sets up a PC to be ready to run an operating system.  A key task of the BIOS in most systems is to start the boot loader program (or ‘pre-OS environment’), which in turn loads the operating system and gets it running.  This process is prone to attack: if someone can run an alternative boot loader (or reconfigure it, or launch a different operating system), they can either steal all the data on your disk, or they can launch something which looks just like your operating system, but is subverted to send copies of all your secrets – such as everything you type – to the bad guys.
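In outline, the verified-boot idea is simply a signature check before control is handed over.  Here is an illustrative sketch using the Python ‘cryptography’ package; real UEFI Secure Boot uses Authenticode-signed PE images and a firmware key database, not this API:

```python
# Sketch of verified boot: the firmware holds a trusted public key
# and refuses to run a boot loader whose signature does not verify.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# At build time: the vendor signs the boot loader image.
vendor_key = Ed25519PrivateKey.generate()
bootloader_image = b"...boot loader machine code..."
signature = vendor_key.sign(bootloader_image)

# At boot time: the firmware verifies before transferring control.
trusted_key = vendor_key.public_key()
try:
    trusted_key.verify(signature, bootloader_image)
    print("signature OK: hand control to the boot loader")
except InvalidSignature:
    print("refuse to boot: image not signed by a trusted key")
```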

seeking evidence

Conventional wisdom says:

  1. Security through obscurity doesn’t work.  You may hide your needle in a haystack, but it’s likely to come back and stick into you (or someone who will sue you) when you least want it to.  Much better to lock your needle in a safe.
  2. You shouldn’t roll your own controls: whether crypto, or software, or architectures, or procedures.  The wisdom of the crowd is great, and the vendor can afford better security expertise than your own project can, because the vendor can amortise the cost over a much broader base than you can ever manage.

And yet, when I want to protect an asset against a fairly run-of-the-mill set of threats, it’s very far from clear to me whether that asset will be safer if I protect it with COTS products or if I build my own, perhaps quirky and not necessarily wonderful, product.

cloud failure modalities

There’s a tale of woe getting some airtime on the interwebs from an angst-ridden New York undergraduate (reading between the lines) who has somehow had an entire, quite substantial, Google account deleted.  The post’s contention is (or includes) the idea that deleting such a profile is tantamount to deleting one’s life, I think.  The facts of the case are murky – I’d link to some Google+ discussions, but I can’t find a way to do that – but regardless of this particular young person’s predicament, the story highlights some bigger questions about trusting cloud services.

Outsourcing undermined

The current headlong rush towards cloud services – outsourcing, in other words – leads to increasingly complex questions about what the service provider is doing with your data.  In classical outsourcing, you’d usually be able to drive to the provider’s data centre and touch the disks and tapes holding your precious bytes (if you paid enough, anyway).  In a service-oriented world, with global IT firms using data centres which follow the cheapest electricity, sometimes themselves buying services from third parties, that becomes a more difficult task.

A while ago, I was at a meeting where someone posed the question “What happens when the EU’s Safe Harbour Provisions meet the Patriot Act?”.  The former is the loophole by which personal data (which normally cannot leave the EU) is allowed to be exported to data processors in third countries, provided they demonstrably meet standards equivalent to those imposed on data processors within the EU.  The latter is a far-reaching piece of legislation giving US law enforcement agencies powers of interception and seizure of data.  The consensus at the meeting was that, of course, the Patriot Act would win: the conclusion being that Safe Harbour is of limited value.  Incidentally, this neatly illustrates the way that information assurance is about far more than just some crypto (or even cloud) technology.

Today, ZDNet reports that the data doesn’t even have to leave the EU for it to be within the reach of the Patriot Act: Microsoft launched their ‘Office 365’ product, and admitted in answer to a question that data belonging to (relating to) someone in the EU, residing on Microsoft’s servers within the EU, would be surrendered by Microsoft – a US company – to US law enforcement upon a Patriot Act-compliant request.  Surely, then, any multinational (at least, those with offices? headquarters? in the US) is in the same position.  Where the subject of such a request includes personal information, they face a potential tension: either they break US law or they break EU law.  I suppose they just have to ask themselves which carries the stiffer penalties.

Now, is this a real problem or just a theoretical one?  Is it a general problem with trusting the cloud, or a special case that need not delay us too long?  On one level, it’s an unusually sharp legal conflict, between two pieces of legislation that were rather deliberately made to be high-minded and far-reaching in their own domains.  But, in general, cloud-type activity is bound to raise jurisdictional conflicts: the data owner, the data processor, and the cloud service provider(s) may all be in different, or multiple, countries, and any particular legal remedy will be pursued in whichever country gives the best chance of success.

Can technology help with this?  Not as much as we might wish.  The best we can hope for, I think, is an elaborate overlay of policy information and metadata so that the data owner can make rational risk-based decisions.  But that’s a big, big piece of standards work, and making it comprehensible and usable will be challenging.  And it looks like there could be at least a niche market for service providers who make a virtue of not being present in multiple jurisdictions.  In terms of trusted computing, and deciding whether the service metadata is accurate, perhaps we will need a new root of trust for location…
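To make that overlay concrete, here is a sketch of what such machine-readable service metadata might look like, with a trivial policy check on the data owner’s side.  Every field name here is invented for illustration:

```python
# Hypothetical metadata a service provider might publish, and which a
# data owner's tooling could check against policy. All fields invented.
service_metadata = {
    "provider": "ExampleCloud Ltd",
    "storage_jurisdictions": ["DE", "IE"],    # where the bytes live
    "parent_company_jurisdiction": "US",      # who can be compelled
    "subprocessors": ["ExampleCDN Inc (US)"],
}

owner_policy = {"forbidden_legal_reach": {"US"}}

def acceptable(meta: dict, policy: dict) -> bool:
    # Data location alone is not enough: the parent company's home
    # jurisdiction matters too, as the Patriot Act example shows.
    reach = {meta["parent_company_jurisdiction"], *meta["storage_jurisdictions"]}
    return not (reach & policy["forbidden_legal_reach"])

print(acceptable(service_metadata, owner_policy))  # -> False
```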