A new(ish) attack vector?

I just acquired a nice new Android-TV-on-a-stick toy.  £30 makes it an interesting amusement, whether or not it becomes part of daily use.  But that price-point also raises a whole lot of new potential for danger.

Like all Android devices, in order to do anything very interesting with it, I have to supply some Google account credentials when setting it up.  And then, to play music from the Amazon Cloud, I have to give some Amazon credentials.  And then, likewise, for Facebook, or Skype, or whatever apps you might want to run on the thing.  When cheap devices abound, we are going to have dozens of them in our lives, just for convenience, and will need to share accounts and content between them.

But there’s the rub: anyone could build and ship an Android device.  And besides having all the good behaviours baked in, it could also arrange to do bad things with any of those accounts. This is the trusted device problem thrown into sharp relief.  The corporate world has been wrestling with a similar issue under the banner of ‘bring your own device’ for some time now: but plainly it is a consumer problem too.  Whereas I may trust network operators to sell only phones that they can vouch for, the plethora of system-on-a-stick devices which we are surely about to see will occupy a much less well-managed part of the market.

I could use special blank accounts on untrusted devices, but the whole point of having cloud-based services (for want of a better name) is that I can access them from anywhere.  A fresh Google account for the new device would work just fine – but it wouldn’t give me access to all the content, email, and everything else tied to my account.  In fact, the Google account is among the safer things on the new device: thanks to their two-step authentication, stealing the Google password is not enough to provide the keys to the kingdom (though, the authenticator runs on … an Android phone!).

The problem isn’t entirely new, nor purely theoretical: just recently there were news reports of a PC vendor shipping a tainted version of Windows on brand new desktops – and such complexities have concerned the owners of high-value systems for some time.  Management of the supply chain is a big deal when you have real secrets or high-integrity applications.  But in consumer-land, it is quite difficult to explain that you shouldn’t blindly type that all-important password wherever you see a familiar logo – and harder still to explain that there is scope for something akin to phishing via an attack on a subsystem you cannot even see.

Would trusted computing technologies provide a solution?  Well, I suppose they might help: that initial step of registering the new device and the user’s account with the Google servers could contain a remote attestation step, wherein it ‘proves’ that it really is running a good copy of Android.  This might be coupled with some kind of secure boot mandated by a future OS version, and combined with trusted virtualization in order to offer safe compartments for valuable applications.
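
To make that concrete, here is a minimal sketch of what the verifier’s side of such an attestation step might look like.  It is purely illustrative: the nonce-plus-measurement message format, the Ed25519 device key, and the KNOWN_GOOD_MEASUREMENTS list are assumptions of mine for the example, not anything Google’s servers are known to implement.

```python
# Hypothetical verifier-side sketch of remote attestation at registration time.
# The device signs (nonce || measurement) with a key held in trusted hardware;
# the server checks the signature and that the measurement matches a known-good
# Android build. All names and structures here are illustrative assumptions.
import hashlib
import os

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

# Stand-in for a vendor-published list of approved build measurements.
KNOWN_GOOD_MEASUREMENTS = {hashlib.sha256(b"approved-android-build").digest()}

def issue_challenge() -> bytes:
    """A fresh random nonce, so a recorded attestation cannot be replayed."""
    return os.urandom(32)

def verify_attestation(device_key: Ed25519PublicKey, nonce: bytes,
                       measurement: bytes, signature: bytes) -> bool:
    """True only if the claimed device signed this nonce and reports approved software."""
    try:
        device_key.verify(signature, nonce + measurement)
    except InvalidSignature:
        return False                      # not signed by the hardware-held key
    return measurement in KNOWN_GOOD_MEASUREMENTS
```

Registration would proceed only if verify_attestation returned true; all the hard parts (provisioning the device keys, maintaining the known-good list, handling revocation) sit outside the sketch, which is exactly where the complexity comes in.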

But these things are really quite fraught with complexity (in terms of technology, innovation, liability, network effects, and more), so much so that I fear their value would be in doubt.  Nevertheless, I don’t really see an alternative on the table at the moment.

You could invent a ‘genuine product’ seal of some kind – a hologram stuck to the product box, perhaps – but who will run that programme, and how much liability will they assume?  You could assume that more locked-down devices are safer – but, considering the supply chain management problem again, how long before we see devices that have been ‘jailbroken’, tainted, and re-locked before being presented through legitimate vendors?

The model is quite badly broken, and a new one is needed. I’d like to hope that the webinos notion of a personal zone (revocable trust among a private cloud of devices) will help: now that the design has matured, the threat model needs revisiting.  The area’s ripe for some more research.

lock-down or personal control?

A little piece of news (h/t Jonathan Zittrain) caught my eye:

The gist is that Cisco have been upgrading the firmware in Linksys routers so that they can be managed remotely (under the user’s control, normally) from Cisco’s cloud.  The impact of this is hotly debated, and I think Cisco disputes the conclusions reached by the journalists at Extreme Tech.  Certainly there’s a diffusion of control, a likelihood of future automatic updates, various content restrictions (against, for example, using the service – whatever that entails – for “obscene, pornographic, or offensive purposes”, things which are generally lawful), and a general diminution of privacy.

Some commentators have reacted by suggesting buying different products (fair enough, depending on where the market leads), ‘downgrading’ to older firmware (not a long-term solution, given the onward march of vulnerability discovery), or flashing one’s own firmware from some open source router development project (this is a surprisingly fertile area).  To someone interested in trusted computing and secure boot, the latter is quite interesting – you can do it now, but can/should you be able to in the future?

One of the central themes of some of the stuff around secure boot is that we want cryptographically-strong ways to choose which software is and is not booted, and to keep this choice well away from the users, because they can’t be trusted to make the right decision.  That’s not some patronizing comment about the uneducated or unaware – I’d apply it to myself.  (I speak as someone who hasn’t knowingly encountered a Windows virus/trojan/worm in decades despite using Windows daily: I think my online behaviour is fairly safe, but I am under no illusions about how easy it would be to become a victim.)  Whilst there are good reasons to want to retain a general power to program something like a PC to do whatsoever I wish, I can imagine that the majority of internet-connected devices in the future will be really quite tightly locked down.  And this, in general, seems a good thing. I don’t demand to be able to re-program my washing machine, and I shouldn’t demand to be able to re-program my digital photo frame, or even my TV set-top box.

However, whereas my washing machine has no immediate prospect of connection to the internet (though the smart grid could change all that), many other devices are connected – and their mis-design will compromise my privacy or my data’s integrity, or worse.  And, indeed, involuntary remote updates could break them or reduce their functionality (a mess that already exists, and is really not well-addressed by consumer protection legislation).  I might have a reasonably secure household of internet junk until some over-eager product manager decides to improve one of my devices overnight one day. I could wake to find that my household had seized up and all my privacy gone, if the attacker followed the locus of the product updates.  This really isn’t particularly far-fetched.

So, I am faced with a tension.  A paradox, almost.  I am not well-placed to make security decisions by myself; none of us is, especially when we don’t even realise the decisions we are making are security-related.  But in this complex advertising-driven world, those to whom I am likely to delegate (perhaps involuntarily) such decisions are (a) themselves imperfect, and much more worryingly (b) highly motivated to monetize my relationship with them in every way possible.  Goodbye privacy.

The market might help to remedy the worst excesses in these relationships, but when it’s dominated by large vendors suing each other over imagined patent infringements, I’m not optimistic about it resolving this tension in a timely manner.

Regulation could be a solution, but is seldom timely and often regulates the wrong thing (as the current mess of cookie regulation in the EU amply demonstrates).

Perhaps there is a role for third-party, not-for-profit agencies which help to regulate the deployment of security and privacy-related technologies – receiving delegated control of a variety of authorization decisions, for example, to save you from wondering if you want App X to be allowed to do Y.  They could in principle represent consumer power and hold the ring against both the vendors who want to make the user into the product and the attackers whose motives are simpler.  But will we build technologies in such a way as to permit the rise of such agencies?  It will take a concerted effort, I think.

Any other ideas?

Centre of Excellence

Today, the University is recognised by GCHQ as an Academic Centre of Excellence in Cyber Security Research.  We share the title with seven other UK Universities.

The aim of the scheme is to promote original research which helps to advance the state of the art: this means both getting such research done in the UK, and developing a pool of expertise for academia, government, and business.  Wise and talented individuals are clearly crucial, and it’s a privilege to work with such a large group of them in Oxford.

The scheme is modelled after one which has been running in the USA for some time, fronted by the NSA.  Their programme is extensive, and has other components – such as Centres of Excellence in Education, which we might expect to see here in the fullness of time.

Is close association with GCHQ a double-edged sword?  They host the national technical authority for information assurance and so have a good idea of the kind of research which may be useful.  In principle, they also have deep pockets (though not nearly as deep as those of their American counterparts mentioned above).  On the other hand, universities will not stay excellent for very long if they are (or are seen to be) the patsies of the state.  Happily, I think this is recognised by all sides, and we can look forward to some creative tension made possible precisely because our research is “outside the fence”.

There’s an interesting clash of mind-sets to hold in mind, too:  the University is an open and exceptionally international place.  Security research is pursued by people who have not just diverse skills but also a variety of affiliations and loyalties – in contrast to those who work in security for the State, who must generally be citizens or otherwise able to hold an advanced clearance. Having the one inform the other sets up tensions – all the more so if there is to be a two-way flow of ideas.  Perhaps an interesting topic of research for those in social sciences would be to ask what a “clearance” might look like in the cyber domain – if not based on citizenship, what might be the baseline for assuming shared objectives?

That is one of many cases which show that the concerns of Cyber Security clearly spread beyond the technical domain in which they first arise. Inter-disciplinary study is essential for the future of this topic.  The next blog post will reproduce an article I recently wrote on the subject.

Guess again

Over in a fenland blog, there is a little discussion going on about passwords.  Evidently, Google has been doing some advertising about what makes a good password, and this has come in for some criticism.

In that blog post, Joseph Bonneau proposes an alternative one-liner:

A really strong password is one that nobody else has ever used.

(One of the commenters (J. Carlio) suggests modifications to add something about memorability.)

This is a seductive idea: it is, broadly, true.  It encapsulates the idea that you are trying to defeat brute force attacks, and that these generally succeed by attempting plausible passwords.

But I don’t think it’s good advice.  That is mainly because many people reason poorly about very large numbers: whether the likelihood of my password being used by someone else is one in a thousand, one in a million, or one in a trillion (the word of the week, thanks to national debts) is something that, I would say, few people have a good intuition about.  In just the same way, people are poor at risk assessment for unlikely events.
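
As an aside that is not in Bonneau’s post: if you wanted a rough, practical proxy for ‘nobody else has ever used it’, you could ask whether the candidate password already appears in known breach corpora.  The sketch below does this against the public Pwned Passwords range API; only the first five hex characters of the password’s SHA-1 ever leave your machine, and a count of zero is necessary, not sufficient, for uniqueness.

```python
# Rough illustration only: count how often a candidate password appears in
# known breaches, via the k-anonymity 'range' endpoint of Pwned Passwords.
import hashlib
import urllib.request

def times_seen_in_breaches(password: str) -> int:
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    req = urllib.request.Request(url, headers={"User-Agent": "password-check-sketch"})
    with urllib.request.urlopen(req) as resp:      # response is 'SUFFIX:COUNT' lines
        for line in resp.read().decode().splitlines():
            tail, _, count = line.partition(":")
            if tail == suffix:
                return int(count)
    return 0

# times_seen_in_breaches("password123") is enormous;
# a genuinely unique password should come back as 0.
```

Of course, this says nothing about memorability, which is where the advice gets hard.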

Aren’t spammers subtle?

Not having managed a blog with such a public profile before, I’m intrigued by the behaviour of those wanting to spam the comments field.

The blog is set up so that individuals from within the group can post using their Oxford credentials. Others can post comments, but the first time they comment, the comment must be moderated.

Some try posting their adverts for herbal remedies right away. Those are easy to spot and throw away.

There are several, though, who have posted comments like “I like this blog. You make good points.”  I assume the aim is that such an innocuous comment is likely to be approved by a semi-vigilant moderator, after which the commenter becomes a trusted poster.  Presumably, the advertising spam would follow thereafter.

I remark on this

  • (a) because other members of the group may moderate comments, and should be on the lookout for this ‘trojan’ behaviour;
  • (b) because it points to a greater degree of tenacity on the part of the spammers than I had realised;
  • (c) because it seems a particularly hard problem to solve, CAPTCHAs notwithstanding.
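
By way of a toy illustration of why (c) is hard: the obvious counter-measure is to flag short, content-free flattery, but a heuristic like the sketch below (entirely my own, and deliberately naive) is evaded by any spammer willing to mention something specific from the post.

```python
# Toy heuristic (purely illustrative) for flagging 'trojan' first comments:
# short, generic praise with no reference to anything the post actually says.
GENERIC_PHRASES = {"great post", "nice blog", "good points", "thanks for sharing"}

def looks_like_trojan_comment(comment: str, post_words: set) -> bool:
    text = comment.lower()
    generic = any(phrase in text for phrase in GENERIC_PHRASES)
    # Does the comment mention any substantive word from the post itself?
    overlap = sum(1 for w in set(text.split()) if len(w) > 5 and w in post_words)
    return generic and overlap == 0

# Example: looks_like_trojan_comment("You make good points.", {"android", "attestation"})
# returns True; quoting a phrase from the post slips straight past it.
```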

revisiting email retention

I have an archive copy of just about every email I’ve sent or received since about 1996, and certainly haven’t deleted an email since 1998 – not even the spam.  Many people know this – and many colleagues do something similar.

I suppose I have two reasons for doing this:

  • trawling the archives is occasionally useful (for finding information, or confirming what someone said, or being reminded what I said); because just about all of my work is eventually mediated (in and out) by email, the mailbox archive plays the role of a professional journal.
  • the process of filing and deciding what to retain and what to delete is insanely time-consuming, and easily costs more than the now trivially cheap disc storage and associated backups.
This approach actually extends beyond email – I haven’t really deleted a file in a decade or so.

secure boot gone viral

After the spread of some information, and some mis-information, the story of secure boot in Windows 8, achieved through security features of UEFI, has gone viral.  Linux forums all over the web are suddenly full of activists angry about some evil corporation wanting to deprive them of free software.  The aforementioned company has issued a long blog post by way of clarification.

The issue

In brief, the point is that there are plans to replace the BIOS – the main body of firmware which sets up a PC to be ready to run an operating system.  A key task of the BIOS in most systems is to start the boot loader program (or ‘pre-OS environment’), which in turn loads the operating system and gets it running.  This process is prone to attack: if someone can run an alternative boot loader (or reconfigure it, or launch a different operating system), they can either steal all the data on your disk, or they can launch something which looks just like your operating system, but is subverted to send copies of all your secrets – such as everything you type – to the bad guys.
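
To make the mechanism a little more concrete, here is a minimal sketch of the kind of check a secure-boot-style firmware performs before handing over control.  It is an illustration under my own assumptions (the RSA keys, the function names), not the UEFI specification.

```python
# Illustrative sketch: before running the next boot stage, verify that its
# image is signed by a key the platform trusts (think of the firmware's
# enrolled key database). Names here are assumptions for the example.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding

def next_stage_is_trusted(image: bytes, signature: bytes, allowed_keys) -> bool:
    """Return True if any enrolled public key vouches for this boot image."""
    for key in allowed_keys:              # RSA public keys enrolled in the firmware
        try:
            key.verify(signature, image, padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            continue
    return False

# Firmware-style policy: refuse anything unsigned or signed by an unknown key.
# if not next_stage_is_trusted(bootloader_image, bootloader_sig, allowed_keys):
#     halt("untrusted boot loader")       # halt() is hypothetical
```

The contested question is not this check itself, but who controls the list of enrolled keys and whether the owner of the machine can change it.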

seeking evidence

Conventional wisdom says:

  1. Security through obscurity doesn’t work.  You may hide your needle in a haystack, but it’s likely to come back and stick into you (or someone who will sue you) when you least want it to.  Much better to lock your needle in a safe.
  2. You shouldn’t roll your own controls: whether crypto, or software, or architectures, or procedures.  The wisdom of the crowd is great, and the vendor can afford better security expertise than your own project can, because the vendor can amortise the cost over a much broader base than you can ever manage.

And yet, when I want to protect an asset against a fairly run-of-the-mill set of threats, it’s very far from clear to me whether that asset will be safer if I protect it with COTS products or if I build my own, perhaps quirky and not necessarily wonderful, product.

audit in the 21st century

One of the big annoyances in my working life is the procedure for claiming expenses.  Our University seems eager to retain processes which would have made sense in the first half of the 20th century, which involve a large – and increasing – volume of paper.

One of the problems with this is that as a method of holding people to account, it’s very poor.  The big items of expenditure are conference registration fees, airfares, and hotel bills.  In many cases, the receipt for each of these reaches me by email.  The process of claiming the money involves printing out those emails (PDFs, whatever), and stapling them to a claim form.  If the cost is in foreign currency, I also (bizarrely) have to nominate an exchange rate, and print out a page from somewhere like www.xe.com to prove that that rate existed at some nearby point in time.

Of course, any of that evidence could trivially be falsified.