seeking evidence

Conventional wisdom says:

  1. Security through obscurity doesn’t work.  You may hide your needle in a haystack, but it’s likely to come back and stick into you (or someone who will sue you) when you least want it to.  Much better to lock your needle in a safe.
  2. You shouldn’t roll your own controls: whether crypto, or software, or architectures, or procedures.  The wisdom of the crowd is great, and the vendor can afford better security expertise than your own project can, because the vendor can amortise the cost over a much broader base than you can ever manage.

And yet, when I want to protect an asset against a fairly run-of-the-mill set of threats, it’s very far from clear to me whether that asset will be safer if I protect it with COTS products or if I build my own, perhaps quirky and not necessarily wonderful, product.

audit in the 21st century

One of the big annoyances in my working life is the procedure for claiming expenses.  Our University seems eager to retain processes which would have made sense in the first half of the 20th century, which involve a large – and increasing – volume of paper.

One of the problems with this is that as a method of holding people to account, it’s very poor.  The big items of expenditure are conference registration fees, airfares, and hotel bills.  In many cases, the receipt for each of these reaches me by email.  The process of claiming the money involves printing out those emails (PDFs, whatever), and stapling them to a claim form.  If the cost is in foreign currency, I also (bizarrely) have to nominate an exchange rate, and print out a page from some currency-conversion website to prove that that rate existed at some nearby point in time.

Of course, any of that evidence could trivially be falsified.

cloud failure modalities

There’s a tale of woe getting some airtime on the interwebs from an angst-ridden New York undergraduate (reading between the lines) who has somehow had an entire, quite substantial, google account deleted. The post’s contention is (or includes) the idea that deleting such a profile is tantamount to deleting one’s life, I think. The facts of the case are murky – I’d link to some Google+ discussions, but I can’t find a way to do that – but regardless of this particular young person’s predicament, the story highlights some bigger questions about trusting cloud services.

Webinos versus Meego

One of the systems security projects we’re working on in Oxford is webinos – a secure, cross-device web application environment.   Webinos will provide a set of standard APIs so that developers who want to use particular device capabilities – such as location services, or media playback – don’t need to customise their mobile web app to work on every platform.  This should help prevent the fragmentation of the web application market and is an opportunity to introduce a common security model for access control to device APIs.  Webinos is aimed at mobile phones, cars, smart TVs and PCs, and will probably be implemented initially as a heavy-weight web browser plugin on Android and other platforms.
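To make the idea of a common security model for device APIs concrete, here is a minimal sketch of how a webinos-style policy layer might mediate an application’s access to a device capability. All the names here – `PolicyManager`, `requestFeature`, the feature strings – are illustrative assumptions, not the actual webinos API.

```typescript
// Hypothetical sketch: a deny-by-default policy layer sitting between
// web applications and device capabilities such as geolocation.

type PolicyDecision = "permit" | "deny" | "prompt";

interface PolicyRule {
  appId: string;
  feature: string; // e.g. a feature URI identifying geolocation
  decision: PolicyDecision;
}

class PolicyManager {
  constructor(private rules: PolicyRule[]) {}

  // Deny by default: only an explicit rule can permit access.
  check(appId: string, feature: string): PolicyDecision {
    const rule = this.rules.find(
      (r) => r.appId === appId && r.feature === feature
    );
    return rule ? rule.decision : "deny";
  }
}

// A device capability is only reachable through the policy check.
function requestFeature(
  pm: PolicyManager,
  appId: string,
  feature: string,
  invoke: () => string
): string {
  const decision = pm.check(appId, feature);
  if (decision !== "permit") {
    throw new Error(`Access to ${feature}: ${decision}`);
  }
  return invoke();
}

const pm = new PolicyManager([
  { appId: "maps-app", feature: "geolocation", decision: "permit" },
]);

// The location service itself is stubbed out for the example.
const pos = requestFeature(pm, "maps-app", "geolocation", () => "51.75,-1.26");
console.log(pos);
```

The point of the design is that the application never touches the capability directly: every call funnels through one mediation point, which is what lets a single security model span otherwise fragmented platforms.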

By a staggering coincidence, the Meego project has a similar idea and a similarly broad range of devices it intends to work on.  However, Meego is aimed at native applications, and is built around the Qt framework.  Meego is also a complete platform rather than a browser plugin, containing a Linux kernel.  Meego requires that all applications be signed, and can enforce mandatory access controls through the SMACK Linux Security Module.

In terms of security, these two projects have some important differences.  Meego can take advantage of all kinds of interesting trusted infrastructure concepts, including Trusted Execution Environments and Trusted Platform Modules, as it can instrument the operating system to support hardware security features.  Meego can claim complete control of the whole platform, and mediate all attempts to run applications, checking that only those with trusted certificates are allowed (whitelisting).  Webinos has neither of these luxuries.  It can’t insist on a certain operating system (in fact, we would rather it didn’t) and can only control access to web applications, not other user-space programs.  This greatly limits the number of security guarantees we can make, as our root of trust is the webinos software itself rather than an operating system kernel or tamper-proof hardware.
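The whitelisting step described above – mediating every launch and allowing only applications signed with a trusted certificate – reduces to a very small check. This sketch is purely illustrative; the certificate representation and the trusted list are stand-ins, not Meego’s actual mechanism.

```typescript
// Illustrative sketch of whitelist-style launch mediation: default deny,
// run only if the package's signing certificate is on the trusted list.

interface AppPackage {
  name: string;
  signerFingerprint: string; // fingerprint of the signing certificate
}

function mayLaunch(app: AppPackage, trusted: Set<string>): boolean {
  return trusted.has(app.signerFingerprint);
}

// Placeholder fingerprint standing in for a real vendor certificate.
const trusted = new Set(["sha256:vendor-store-key"]);

const ok = mayLaunch(
  { name: "player", signerFingerprint: "sha256:vendor-store-key" },
  trusted
);
const blocked = mayLaunch(
  { name: "sideload", signerFingerprint: "sha256:unknown" },
  trusted
);
console.log(ok, blocked);
```

The check itself is trivial; what gives it force on a platform like Meego is that the operating system controls the only code path by which anything gets executed – exactly the luxury a browser-plugin architecture lacks.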

This raises an interesting question.  If I am the developer of a system such as webinos, can I provide security to users – who may entrust my system with private and valuable data – without having full control of the complete software stack?  Is the inclusion of a hardened operating system necessary for me to create a secure application?  Is it reasonable for me to offload this concern to the user and the user’s system administrator (who are likely to be the same person)?

While it seems impractical for developers to ship an entire operating system environment with every application they create, isn’t this exactly what is happening with the rise of virtualization?


Outsourcing undermined

The current headlong rush towards cloud services – outsourcing, in other words – leads to increasingly complex questions about what the service provider is doing with your data.  In classical outsourcing, you’d usually be able to drive to the provider’s data centre, and touch the disks and tapes holding your precious bytes (if you paid enough, anyway).  In a service-oriented world with global IT firms using data centres which follow the cheapest electricity, sometimes maybe themselves buying services from third parties, that becomes a more difficult task.

A while ago, I was at a meeting where someone posed the question “What happens when the EU’s Safe Harbour Provisions meet the Patriot Act?”.  The former is the loophole by which personal data (which normally cannot leave the EU) is allowed to be exported to data processors in third countries, provided they demonstrably meet standards equivalent to those imposed on data processors within the EU.  The latter is a far-reaching piece of legislation allowing US law enforcement agencies powers of interception and seizure of data.  The consensus at the meeting was that, of course, the Patriot Act would win – the conclusion being that Safe Harbour is of limited value.  Incidentally, this neatly illustrates the way that information assurance is about far more than just some crypto (or even cloud) technology.

Today, ZDNet reports that the data doesn’t even have to leave the EU for it to be within the reach of the Patriot Act:  Microsoft launched their ‘Office 365’ product, and admitted in answer to a question that data belonging to (relating to) someone in the EU, residing on Microsoft’s servers within the EU, would be surrendered by Microsoft – a US company – to US law enforcement upon a Patriot Act-compliant request.  Surely, then, any multinational (at least, those with offices? headquarters? in the US) is in the same position.  Where the subject of such a request includes personal information, they face a potential conflict: they either break US law or they break EU law.  I suppose they just have to ask themselves which carries the stiffer penalties.

Now, is this a real problem or just a theoretical one? Is it a general problem with trusting the cloud, or a special case that need not delay us too long?   On one level, it’s an unusually direct legal conflict, arising from two pieces of legislation that were rather deliberately made to be high-minded and far-reaching in their own domains.  But, in general, cloud-type activity is bound to raise jurisdictional conflicts: the data owner, the data processor, and the cloud service provider(s) may all be in different, or multiple, countries, and any particular legal remedy will be pursued in whichever country gives the best chance of success.

Can technology help with this?  Not as much as we might wish, I think.  The best we can hope for is an elaborate overlay of policy information and metadata, so that the data owner can make rational risk-based decisions.  But that’s a big, big piece of standards work, and making it comprehensible and usable will be challenging. And it looks like there could be at least a niche market for service providers who make a virtue of not being present in multiple jurisdictions.  In terms of trusted computing, and deciding whether the service metadata is accurate, perhaps we will need a new root of trust for location…
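To give a flavour of the kind of metadata overlay gestured at above: a data owner might check a provider’s declared jurisdictions against a policy before committing data to it. The schema here is entirely hypothetical – and note that the whole scheme only works if the declared metadata can be trusted, which is precisely the root-of-trust problem.

```typescript
// Hypothetical sketch: jurisdiction-aware service selection based on
// provider-declared metadata (which we must somehow learn to trust).

interface ServiceMetadata {
  provider: string;
  jurisdictions: string[]; // countries where the data may reside, or be
                           // legally reachable, e.g. via a parent company
}

// Accept a provider only if every jurisdiction it touches is allowed.
function acceptable(meta: ServiceMetadata, allowed: Set<string>): boolean {
  return meta.jurisdictions.every((j) => allowed.has(j));
}

const euOnly = new Set(["IE", "DE", "NL"]);

const okProvider = acceptable(
  { provider: "cloudA", jurisdictions: ["IE", "DE"] },
  euOnly
);
// A US parent company makes US jurisdiction reachable, as in the
// Office 365 case above, so this provider fails an EU-only policy:
const usReachable = acceptable(
  { provider: "cloudB", jurisdictions: ["IE", "US"] },
  euOnly
);
console.log(okProvider, usReachable);
```

The hard part is not this check, of course, but standardising the metadata and establishing that a provider’s jurisdictional claims are accurate.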

on an unfortunate tension

It’s frustrating when you’re not allowed to use electronic devices during the first and last fifteen minutes of a flight – sometimes much longer. I rather resent having to carry paper reading material, or to stare at the wall in those periods. On today’s flight, they even told us to switch off e-book readers.

E-book readers! Don’t these people realise that the whole point of epaper is that you don’t turn it off: it consumes a minimal amount of power, so that the Kindle can survive a month on a single charge. It has no ‘off’ switch per se, its slide switch simply invoking the “screen saver” mode. This doesn’t change the power consumption by much: it just replaces the on-screen text with pictures, and disables the push buttons.

And the answer is that of course they don’t know this stuff. Why would they? Indeed, it would be absurd to expect a busy cabin attendant to be able to distinguish, say, an ebook reader from a tablet device. If we accept for a moment the shaky premise that electronic devices might interfere with flight navigation systems, then we must accept that the airlines need to ensure that as many as possible of these are switched off – even those with no off switch to speak of, whose electromagnetic emissions would be difficult to detect at a distance of millimetres.

Of course, this is a safety argument, but much the same applies to security. Even the best of us would struggle to look at a device, look at an interface, and decide whether it is trustworthy. This, it seems to me, is a profound problem. I’m sure evolutionary psychologists could tell us in some detail about the kind of risks we are adapted to evaluate. Although we augment those talents through nurture and education, cyber threats look different every day. Children who have grown up in a digital age will have developed much keener senses for evaluating cyber-goodness than those coming to these things later in life, but we should not delude ourselves into thinking this is purely a generational thing.

People have studied the development of trust at some length. Although the clues for trusting people seem to be quite well established, we seem to be all over the place in deciding whether to trust an electronic interface – and will tend to do so on the basis of scant evidence. (insert citations here). That doesn’t really bode well for trying to improve the situation. In many ways, the air stewardess’s cautionary approach has much to commend it, but the adoption of computing technology always seems to have been led by a ‘try it and see’ curiosity, and we destroy that at our peril.