cloud failure modalities

There’s a tale of woe getting some airtime on the interwebs from an angst-ridden New York undergraduate (reading between the lines) who has somehow had an entire, quite substantial, Google account deleted. The post’s contention is (or includes) the idea that deleting such a profile is tantamount to deleting one’s life, I think. The facts of the case are murky – I’d link to some Google+ discussions, but I can’t find a way to do that – but regardless of this particular young person’s predicament, the story highlights some bigger questions about trusting cloud services.

disk erasure

A recent pointer to Peter Gutmann’s new book on security engineering (looks good, by the way) reminds me that Gutmann’s name is associated with the woeful tale of disk erasure norms.

The argument goes this way: ‘normal’ file erases (from your Windows shell, say) merely change pointers in tables, and do not remove any data from the disk.  A fairly unsophisticated process will recover your ‘deleted’ files.  Wiser people ensure that the file is deleted from the media itself – by writing zeros over the sectors on the disk that formerly contained the file.  Gutmann’s argument was that because of minor variations in disk head alignment with the platters, this is insufficient to ensure the complete removal of the residual magnetic field from the former data.  There is a possibility that, with the right equipment, someone could recover the formerly-present files.  So he has an algorithm involving, I think, 35 passes, writing various patterns, calculated to destroy any remaining underlying data.
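
For the ordinary case, the ‘writing zeros’ step amounts to little more than the following sketch (Python, and purely illustrative: journalling filesystems, SSD wear-levelling and the like mean an overwrite may never reach the physical sectors); Gutmann’s scheme essentially repeats this kind of pass 35 times with carefully chosen bit patterns.

    import os

    def zero_fill_and_delete(path, block_size=1 << 20):
        """Overwrite a file in place with zeros, flush it to disk, then unlink it."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            written = 0
            while written < size:
                chunk = min(block_size, size - written)
                f.write(b"\x00" * chunk)
                written += chunk
            f.flush()
            os.fsync(f.fileno())   # ask the OS to push the zeros out to the device
        os.remove(path)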

Now, the problem appears/appeared real enough: various government standards have, for decades now, ruled that magnetic media which has held classified material cannot be declassified but must be destroyed before leaving secure custody.  Whether anyone has ever managed to recover a non-trivial amount of data from a once-zeroed disk is much less clear: as far as I know, there’s not a lot in the open literature to suggest it’s possible, and none of the companies specializing in data recovery will offer it as a service.  Furthermore, since Gutmann did his original work, disk design has evolved (and the ‘size’ of the bits on the disk has become so small that any residual effect is going to be truly minimal), and disk manufacturers have built a ‘secure erase’ into their controllers for quite a few years now.  Even better, the new generation of self-encrypting drives can be rendered harmless by the deletion of just one key (don’t do this by accident!).
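
The crypto-erase idea deserves a tiny illustration. A minimal sketch, in which a Fernet key stands in for the drive’s media encryption key (nothing here reflects any particular vendor’s firmware): if everything written to the media is ciphertext, then forgetting the key is as good as overwriting every sector.

    from cryptography.fernet import Fernet, InvalidToken

    media_key = Fernet.generate_key()        # stand-in for the drive's media encryption key
    cipher = Fernet(media_key)

    # Everything that 'hits the platter' is ciphertext.
    stored_sector = cipher.encrypt(b"contents of some sensitive file")

    media_key = None                         # the 'secure erase': simply discard the key
    try:
        Fernet(Fernet.generate_key()).decrypt(stored_sector)   # any other key is useless
    except InvalidToken:
        print("ciphertext is unrecoverable without the original key")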

Yet the perception persists that the simple solutions are insufficient.  Let us leave aside government security standards and think simply of commercial risk.  Multi-pass erasure is downright time-consuming.  You can buy a disk-shredding service – but this attracts quite a fee.  So it is not uncommon simply to archive used disks in a warehouse somewhere (with or without a single zeroing pass, I suppose).  How long you would keep those disks is unclear: until their data ceases to be valuable, presumably.  But without a detailed inventory of their contents, that cannot easily be determined.  So perhaps you have to keep them forever.

My simple question is: which attracts the lower risk (and/or the lower total predicted cost)? (a) Zeroing a disk and putting it in a skip, or (b) Warehousing it until the end of the lifetime of the data it holds?  You can postulate whatever adversary model you wish.  The answer is not obvious to me.  And if we can’t make a simple determination about risk in this case (because, frankly, the parameters are all lost in the noise), what possible chance do we have of using risk calculations to make decisions in the design of more complex systems?
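
To make the point concrete, here is the back-of-envelope expected-cost comparison, with every number a pure invention for illustration; notice how the answer flips entirely depending on which probabilities you care to guess.

    # Toy expected-cost comparison; all of these figures are made-up assumptions.
    p_recover_from_zeroed_disk = 1e-6        # chance of recovery from a zeroed disk in a skip
    p_warehouse_breach         = 1e-3        # chance the warehoused disk goes missing
    impact_of_disclosure       = 1_000_000   # cost if the data leaks
    cost_of_zeroing            = 5           # labour and energy to zero one disk
    cost_of_warehousing        = 200         # storage, tracking and eventual destruction

    option_a = cost_of_zeroing + p_recover_from_zeroed_disk * impact_of_disclosure
    option_b = cost_of_warehousing + p_warehouse_breach * impact_of_disclosure
    print(f"(a) zero and skip: {option_a:.2f}   (b) warehouse: {option_b:.2f}")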

email retention

Someone in the group suggested a blog post on email retention.  It’s a good topic, because it tracks the co-evolution of technology and process.

The Evolution

Back in the day, storage was expensive – relative to the cost of sending an email, anyway.   To save space, people would routinely delete emails when they were no longer relevant.

Then storage got cheap, and even though attachments got bigger and bigger, storing email ceased to be a big deal.  By the late 1990s, one of my colleagues put it to me that the time you might spend deciding whether or not to delete something already cost more than the cost of storing it forever.  I have an archive copy of every email I’ve sent or received – in professional or personal capacities – since about 1996 (even most or all of the spam).
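
The arithmetic behind that claim runs roughly as follows (all the figures are illustrative guesses, not measurements):

    # Is it worth even deciding whether to delete an email?  Illustrative numbers only.
    email_size_gb       = 100 / 1_000_000    # a 100 KB email, expressed in GB
    storage_per_gb_year = 0.50               # all-in cost of keeping 1 GB for a year
    years_kept          = 20

    cost_to_keep_forever = email_size_gb * storage_per_gb_year * years_kept

    seconds_to_decide = 5
    hourly_rate       = 50.0
    cost_to_decide    = seconds_to_decide / 3600 * hourly_rate

    print(f"keep forever: ~{cost_to_keep_forever:.4f}   decide: ~{cost_to_decide:.4f}")
    # keep forever: ~0.0010   decide: ~0.0694 – two orders of magnitude apart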

Happily, one other technology helped with making email retention worthwhile: indexing.  This, too, is predicated on having enough storage to store the index, and having enough CPU power to build the index.  All of this we now have.
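
And the indexing really is as modest as it sounds; a toy inverted index over an mbox archive (the file name is hypothetical) takes only a few lines:

    import mailbox
    import re
    from collections import defaultdict

    index = defaultdict(set)                           # word -> set of message keys
    archive = mailbox.mbox("archive-since-1996.mbox")  # hypothetical archive file

    for key, msg in archive.items():
        subject = msg.get("Subject", "") or ""
        for word in re.findall(r"[a-z0-9']+", subject.lower()):
            index[word].add(key)

    hits = index["retention"]   # keys of every message whose subject mentions retention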

However, a third force enters the fray: lawyers started discovering the value of emails as evidence (even though email is rubbish as evidence, needing massive amounts of corroboration if it is not to be forged trivially). And many people – including some senior civil servants, it seems – failed to spot this in time, and were very indiscreet in their subsequently-subpoenaed communications.

As a result, another kind of lawyer – the corporate lawyer – issued edicts which first required, and then forced, employees of certain companies to delete any email more than six months old.  That way, the emails cannot be ‘discovered’ in an adverse legal case, because they have already been erased.
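
The reaper itself is trivial to build, which is part of what makes the edict so tempting; a sketch, assuming an mbox store and a six-month horizon (both of them my assumptions, not any particular company’s policy):

    import mailbox
    from datetime import datetime, timedelta, timezone
    from email.utils import parsedate_to_datetime

    CUTOFF = datetime.now(timezone.utc) - timedelta(days=180)

    box = mailbox.mbox("corporate.mbox")     # hypothetical mail store
    box.lock()
    try:
        for key in list(box.keys()):
            try:
                expired = parsedate_to_datetime(box[key]["Date"]) < CUTOFF
            except (TypeError, ValueError):
                continue                     # missing or unparseable date: leave it be
            if expired:
                box.remove(key)              # gone before it can be 'discovered'
        box.flush()
    finally:
        box.unlock()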

Never mind that many people’s entire working lives are mediated by email today: the email is an accurate and (if permitted) complete record of decisions taken and the process by which that happened.  Never mind that emails have effectively replaced the hardbound notebooks that many engineers and scientists would use to retain their every thought and discussion (though in many places good lab practice retains such notebooks).  Never mind that although it is creaking under the present strain of ‘spam’ and ‘nearly spam’ (the stuff that I didn’t want, but was sent by coworkers, not random strangers), we simply haven’t got anything better.

The state of the art

So now, in those companies, there is no email stored for more than six months, yes?  Well, no.  Of course not.  There are lots of emails which are just too valuable to delete.  And so people extract them from the mail system and store them elsewhere.  Or forward them to private email accounts.  There are, and always will be, many ways to defeat the corporate mail reaper.  The difference is that the copies are not filed systematically, are not subject to easy search by the organisation, and will probably not be disclosed to regulators or in other legal discovery processes.  This is the state of the art in every such organisation I’ve encountered (names omitted to protect the … innocent?).

Sooner or later, a quick-witted external lawyer is going to guess that this kind of informal archiving might help their case, and is going to manage to dig deeper into the adversary’s filestores and processes.  When they find some morsels of beef secreted in unlikely places, there will be a rush of corporate panic.

The solution

It’s easy to spot the problem.  It’s much harder to know what to do about it.  Threatening the employees isn’t very productive – especially if the ‘security’ rule is at odds with the goal of getting their work done.  Making it a sacking offence, say, to save an email outside the corporate mail system is just going to make people more creative about what they save, and how they save it.  Unchecked retention, on the other hand, will certainly leave the organisation ‘remembering’ things it would much rather it had ‘forgotten’.

It would at least be better if the ‘punishment’ matched the crime: restricting the retention of email puts the control in the wrong place.  It would be much better to reserve the stiff penalties for those engaged in libel, corporate espionage, anticompetitive behaviour, and the rest.  Undoubtedly, that would remain a corporate risk, and one where prevention seems better than cure: the cost to the organisation may be disproportionately greater than any cost that can be imposed upon the individual.  But surely that is the place to look, because in the other direction lies madness.

Spectacular Fail!

Step 1.  Install “Windows XP Mode” using Microsoft Virtual PC on Microsoft Windows 7.

Step 2. Windows XP warns that there is no anti-virus program installed.  Use the supplied Internet Explorer 6 to try to download Microsoft Security Essentials.  Browsing to the page fails.   I happen to know that this is a manifestation of a browser incompatibility.

Step 3. Use the Bing search box on the default home page of Internet Explorer 6 to search for “Internet Explorer 9”.  You have to scroll down a long way before finding a real Microsoft link: who knows how much malware the earlier links would serve up?!

Words fail me, really.

on an unfortunate tension

It’s frustrating when you’re not allowed to use electronic devices during the first and last fifteen minutes of a flight – sometimes much longer. I rather resent having to carry paper reading material, or to stare at the wall in those periods. On today’s flight, they even told us to switch off e-book readers.

E-book readers! Don’t these people realise that the whole point of e-paper is that you don’t turn it off? It consumes a minimal amount of power, so that the Kindle can survive a month on a single charge. It has no ‘off’ switch per se: its slide switch simply invokes the “screen saver” mode. This doesn’t change the power consumption by much: it just replaces the on-screen text with pictures, and disables the push buttons.

And the answer is that of course they don’t know this stuff. Why would they? Indeed, it would be absurd to expect a busy cabin attendant to be able to distinguish, say, an e-book reader from a tablet device. If we accept for a moment the shaky premise that electronic devices might interfere with flight navigation systems, then we must accept that the airlines need to ensure that as many as possible of these are switched off – even those with no off switch to speak of, whose electromagnetic emissions would be difficult to detect at a distance of millimetres.

Of course, this is a safety argument, but much the same applies to security. Even the best of us would struggle to look at a device, look at an interface, and decide whether it is trustworthy. This, it seems to me, is a profound problem. I’m sure evolutionary psychologists could tell us in some detail about the kind of risks we are adapted to evaluate. Although we augment those talents through nurture and education, cyber threats look different every day. Children who have grown up in a digital age will have developed much keener senses for evaluating cyber-goodness than those coming to these things later in life, but we should not delude ourselves into thinking this is purely a generational thing.

People have studied the development of trust at some length. Although the clues for trusting people seem to be quite well established, we seem to be all over the place in deciding whether to trust an electronic interface – and will tend to do so on the basis of scant evidence. (insert citations here). That doesn’t really bode well for trying to improve the situation. In many ways, the air stewardess’s cautionary approach has much to commend it, but the adoption of computing technology always seems to have been led by a ‘try it and see’ curiosity, and we destroy that at our peril.

About.me: how far we’ve come

about.me offers everyone a Web 2.0 home page.  Besides an interesting trend in narcissism, two things strike me:

  1. Is this the ultimate page for the stalker (or identity thief)?  If ‘yes’, then you presumably find value in the security through obscurity of having a selection of un-linked social networks.  That, in itself, is an interesting discussion to have.
  2. The page links to many leading content providers, without the need to give about.me a single password (at last!).  Of course, the click-through for each of the sites entails giving permission to about.me to do almost anything with your account … but at least you can review and revoke that later (oauth is bullet-proof, right?! 🙂 ); there is a sketch of that underlying dance below.  Many of them, in turn, are happy to use Google or Facebook as authenticators (I noticed today that you can make a whole Yahoo! account just from a Google cross-authentication). It would be interesting to map what depends on what, these days.
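
For the curious, the password-free delegation in item 2 is, roughly, the OAuth 2.0 authorization-code dance. The sketch below is only that – every endpoint, client id and scope in it is a placeholder, not anything about.me or the providers actually use.

    import secrets
    from urllib.parse import urlencode

    import requests

    # Placeholder endpoints and credentials, not any real provider's values.
    AUTHORIZE_URL = "https://provider.example/oauth/authorize"
    TOKEN_URL     = "https://provider.example/oauth/token"
    CLIENT_ID     = "aggregator-demo"
    CLIENT_SECRET = "not-a-real-secret"
    REDIRECT_URI  = "https://aggregator.example/callback"

    # Step 1: send the user to the provider to approve a scoped grant; no password changes hands.
    state = secrets.token_urlsafe(16)
    consent_url = AUTHORIZE_URL + "?" + urlencode({
        "response_type": "code", "client_id": CLIENT_ID,
        "redirect_uri": REDIRECT_URI, "scope": "profile.read", "state": state})

    # Step 2: the provider redirects back with a one-time code; swap it for a revocable token.
    def exchange_code(code):
        resp = requests.post(TOKEN_URL, data={
            "grant_type": "authorization_code", "code": code,
            "redirect_uri": REDIRECT_URI,
            "client_id": CLIENT_ID, "client_secret": CLIENT_SECRET})
        resp.raise_for_status()
        return resp.json()   # includes access_token, which the user can review and revoke later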

All in all, this seems like progress of some sort.  It’s all starting to work, isn’t it?

Is about.me a good source of authoritative information about the named individual?  Hmm, I’m not sure about that; but if ‘identity’ means anything at all, surely it means something about your ongoing and persistent relationships and interactions.

Andrew can’t encrypt, either

A seminal paper once explained why Johnny can’t encrypt.  I thought I could, but the combined forces of mail clients and certification authorities seem to have conspired to confound me.

Although PGP is the grand-daddy of email security, its core security model – of a web-of-trust, each user authenticating their friends’ public keys – was problematic at least from a scalability perspective.  I had a PGP key, once, a long time ago, but barely used it – and barely had anyone to talk to.  S/MIME offered a more corporate-feeling alternative: an email signing and encryption scheme built into most mail clients and based around X.509 certificates backed by major vendors.

S/MIME was something I could use.  I used to have a Thawte ‘Freemail’ certificate: these were issued for free, on the strength of being able to receive email at a specified address. If you wanted your ‘real name’ in the certificate, Thawte had its own ‘web of trust’ through which your identity could be verified.  This kind-of worked, and I used the certificates with Thunderbird quite happily.  I found that quite a number of people were sending me signed emails, and Thunderbird was able to verify the signatures; and from time to time I even found a correspondent who wanted to exchange encrypted messages.  All of my outgoing mail was signed, for a year or two.
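
Under the hood the signature is just a PKCS#7 structure wrapped around the message; roughly what the mail client is doing can be sketched by shelling out to openssl (this assumes openssl is on the path, and the certificate and key file names are placeholders for whatever the CA issued):

    import subprocess

    # Sketch of S/MIME signing via the openssl command line; cert.pem and key.pem
    # are placeholder file names, not real paths.
    def smime_sign(body_path, cert="cert.pem", key="key.pem"):
        result = subprocess.run(
            ["openssl", "smime", "-sign", "-text",
             "-in", body_path, "-signer", cert, "-inkey", key],
            check=True, capture_output=True)
        return result.stdout    # a multipart/signed message, ready to hand to an MTA

    signed = smime_sign("message.txt")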

Well, almost all of my outgoing mail was signed.  Occasionally I would use a client – such as Gmail – which didn’t facilitate that.  In that time, not one email recipient complained of receiving an unsigned email: if signatures were the norm, you’d have thought that someone would have spotted the anomaly and questioned whether the message truly came from me.

But that arrangement came to an end: Thawte stopped issuing Freemail certificates, and Thunderbird 3.0 was so difficult to use that I abandoned it in favour of the mail client everyone loves (?): Outlook.  The Outlook handling of certificates is a little obscure, but in Office 2007, it was quite functional.  When I upgraded to Office 2010, not only did I start receiving increasingly cryptic error messages, but several recipients told me that my signed messages appeared blank to them.

So I stopped signing.  I retained the certificate (and corresponding keys) I had, lest anyone send me an encrypted message.  This, they continue to do occasionally, but now I get further cryptic error messages and no sign of a decrypted email.

Where do the certificates come from?  Well, Comodo continue to offer a free email client certificate.  I have one, so I must have managed to persuade their software to issue one, once upon a time.  But today I am totally failing to manage that: the certificate is issued, but I get a browser error when I try to download and install it.  This is before I attempt the awkward feat of trying to transfer it from browser to mail client (if that step is still needed).  Even trying to retrieve an old certificate runs me up against requests for long-forgotten passwords.

This is a long tale of woe, and I have omitted many of the gory details.  The upshot is that I, who understand the workings of email really quite well, and the principles of cryptography and X.509 certificates, and the broad design of my web browser and email client in this area, am neither able to sign nor encrypt (nor decrypt) email today. I’m using mainstream software and the apparent best efforts of major vendors, but the outcome is quite unusable to me.

I’ve invested quite a few hours in trying to make this work.  I have a hunch that with a few more hours’ effort I might get somewhere – but my confidence in that is ebbing away.  Andrew can’t encrypt.  🙁