I just acquired a nice new Android-TV-on-a-stick toy. £30 makes it an interesting amusement, whether or not it becomes part of daily use. But that price point also opens up a whole lot of new potential for danger.
Like all Android devices, in order to do anything very interesting with it, I have to supply some Google account credentials when setting it up. And then, to play music from the Amazon Cloud, I have to give some Amazon credentials. And then likewise for Facebook, or Skype, or whatever apps you might want to run on the thing. When cheap devices abound, we are going to have dozens of them in our lives, just for convenience, and will need to share accounts and content between them.
But there’s the rub: anyone could build and ship an Android device. And besides having all the good behaviours baked in, it could also arrange to do bad things with any of those accounts. This is the trusted device problem thrown into sharp relief. The corporate world has been wrestling with a similar issue under the banner of ‘bring your own device’ for some time now: but plainly it is a consumer problem too. Whereas I may trust network operators to sell only phones that they can vouch for, the plethora of system-on-a-stick devices which we are surely about to see will occupy a much less well-managed part of the market.
I could use special blank accounts on untrusted devices, but the whole point of having cloud-based services (for want of a better name) is that I can access them from anywhere. A fresh Google account for the new device would work just fine – but it wouldn’t give me access to all the content, email, and everything else tied to my account. In fact, the Google account is among the safer things on the new device: thanks to Google’s two-step authentication, stealing the password is not enough to provide the keys to the kingdom (though the authenticator runs on … an Android phone!).
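For the curious: that second factor is, in outline, a time-based one-time password of the sort described in RFC 6238 – a short code derived from a secret held on the phone plus the current time, so the password alone reproduces nothing. A minimal sketch in Python, with a made-up secret (a real one is provisioned by the service at enrolment):

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Time-based one-time password, per RFC 6238 (HOTP per RFC 4226)."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Made-up secret, for illustration only.
print(totp("JBSWY3DPEHPK3PXP"))
```

The point is that the code depends on a device-held secret, not on anything the password reveals – which is exactly why stealing the password alone gets an attacker nowhere.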
The problem isn’t entirely new, nor purely theoretical: just recently there were news reports of a PC vendor shipping a tainted version of Windows on brand new desktops – and such complexities have concerned the owners of high-value systems for some time. Management of the supply chain is a big deal when you have real secrets or high-integrity applications. But in consumer-land, it is quite difficult to explain that you shouldn’t blindly type that all-important password wherever you see a familiar logo: harder still to explain that there is scope for something akin to phishing via an attack on a subsystem you cannot even see.
Would trusted computing technologies provide a solution? Well, I suppose they might help: the initial step of registering the new device and the user’s account with the Google servers could contain a remote attestation step, wherein the device ‘proves’ that it really is running a good copy of Android. This might rest on some kind of secure boot mandated by a future OS version, and you could couple that with trusted virtualization in order to offer safe compartments for valuable applications.
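In very rough outline – and glossing over the genuinely hard parts: key provisioning, certificate chains, and deciding what counts as a ‘good’ measurement – the server-side check might look like the sketch below. Everything here is hypothetical, and real schemes (TPM quotes, say) use asymmetric signatures; I’ve used a shared-key MAC purely to keep the example self-contained:

```python
import hashlib
import hmac

# Digests of an approved Android build (entirely made up for illustration).
KNOWN_GOOD_MEASUREMENTS = {
    hashlib.sha256(b"bootloader-v1.2").hexdigest(),
    hashlib.sha256(b"android-4.2-stock").hexdigest(),
}

def verify_quote(device_key: bytes, nonce: bytes,
                 measurement: str, mac: bytes) -> bool:
    """Check that a 'quote' came from the hardware key we enrolled,
    and that the measurement it reports is an approved build."""
    expected = hmac.new(device_key, nonce + measurement.encode(),
                        hashlib.sha256).digest()
    if not hmac.compare_digest(expected, mac):
        return False                   # not signed by the enrolled hardware
    return measurement in KNOWN_GOOD_MEASUREMENTS

# Example: an enrolled device attests after boot.
key = b"device-secret"                 # in reality, rooted in hardware
nonce = b"fresh-server-nonce"          # fresh per challenge, to stop replays
measurement = hashlib.sha256(b"android-4.2-stock").hexdigest()
mac = hmac.new(key, nonce + measurement.encode(), hashlib.sha256).digest()
assert verify_quote(key, nonce, measurement, mac)
```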
But these things are really quite fraught with complexity (in terms of technology, innovation, liability, network effects, and more), so I fear their value would be in doubt. Nevertheless, I don’t really see an alternative on the table at the moment.
You could invent a ‘genuine product’ seal of some kind – a hologram stuck to the product box, perhaps – but who will run that programme, and how much liability will they assume? You could assume that more locked-down devices are safer – but, considering the supply chain management problem again, how long before we see devices that have been ‘jailbroken’, tainted, and re-locked before being presented through legitimate vendors?
The model is quite badly broken, and a new one is needed. I’d like to hope that the webinos notion of a personal zone (revocable trust among a private cloud of devices) will help: now that the design has matured, the threat model needs revisiting. The area’s ripe for some more research.
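To make that notion a little more concrete, here is a toy model – the names and structure are my own invention, not the actual webinos design – of a zone whose owner can withdraw trust from any enrolled device:

```python
class PersonalZone:
    """Toy model of a personal zone: a private cloud of devices whose
    membership can be revoked by the owner. Illustrative only."""

    def __init__(self) -> None:
        self.enrolled = {}             # device id -> credential
        self.revoked = set()

    def enrol(self, device_id: str, credential: str) -> None:
        self.enrolled[device_id] = credential

    def revoke(self, device_id: str) -> None:
        # e.g. the stick is lost, resold, or suspected of being tainted
        self.revoked.add(device_id)

    def is_trusted(self, device_id: str, credential: str) -> bool:
        return (device_id in self.enrolled
                and device_id not in self.revoked
                and self.enrolled[device_id] == credential)

zone = PersonalZone()
zone.enrol("tv-stick", "credential-issued-at-pairing")
zone.revoke("tv-stick")                # trust withdrawn, zone-wide
assert not zone.is_trusted("tv-stick", "credential-issued-at-pairing")
```

The interesting threats, of course, are the ones this toy ignores – a tainted device can misuse its credential before anyone thinks to revoke it – which is exactly why that threat model needs revisiting.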