A little piece of news (h/t Jonathan Zittrain) caught my eye:
The gist is that Cisco have been upgrading the firmware in Linksys routers so that they can be managed remotely (under the user’s control, normally) from Cisco’s cloud. The impact of this is hotly debated, and I think Cisco dispute the conclusions reached by the journalists at Extreme Tech. Certainly there’s a diffusion of control, a likelihood of future automatic updates, and various content restrictions (against, for example, using the service – whatever that entails – for “obscene, pornographic, or offensive purposes”, things which are generally lawful), and a general diminution of privacy.
Some commentators have reacted by suggesting buying different products (fair enough, depending on where the market leads), ‘downgrading’ to older firmware (not a long-term solution, given the onward march of vulnerability discovery), or flashing one’s own firmware from some open source router development project (this is a surprisingly fertile area). To someone interested in trusted computing and secure boot, the latter is quite interesting – you can do it now, but can/should you be able to in the future?
One of the central themes of some of the stuff around secure boot is that we want cryptographically-strong ways to choose which software is and is not booted, and to keep this choice well away from the users, because they can’t be trusted to make the right decision. That’s not some patronizing comment about the uneducated or unaware – I’d apply it to myself. (I speak as someone who hasn’t knowingly encountered a Windows virus/trojan/worm in decades despite using Windows daily: I think my online behaviour is fairly safe, but I am under no illusions about how easy it would be to become a victim.) Whilst there are good reasons to want to retain a general power to program something like a PC to do whatsoever I wish, I can imagine that the majority of internet-connected devices in the future will be really quite tightly locked-down. And this, in general, seems a good thing. I don’t demand to be able to re-program my washing machine, and I shouldn’t expect to be able to re-program my digital photo frame, or even my TV set-top box.
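The mechanics of that lock-down are simple enough to sketch. Here is a minimal, hypothetical illustration of the control point: the boot code hands over only to a firmware image whose digest appears on a vendor-controlled allowlist. (Real secure boot schemes use public-key signatures anchored in hardware rather than a bare hash list, but the essential point is the same – whoever controls the list, not the user, decides what boots.)

```python
import hashlib

# Hypothetical allowlist of digests the vendor has blessed.
# In a real scheme this would be a set of signing keys burned
# into hardware, not a mutable table like this.
TRUSTED_DIGESTS = {
    hashlib.sha256(b"vendor-firmware-v1.2").hexdigest(),
}

def may_boot(image: bytes) -> bool:
    """Return True only if the image's digest is on the allowlist."""
    return hashlib.sha256(image).hexdigest() in TRUSTED_DIGESTS

print(may_boot(b"vendor-firmware-v1.2"))   # vendor image: True
print(may_boot(b"openwrt-custom-build"))   # user's own firmware: False
```

Note that the user’s freedom to flash, say, an open source build lives or dies by whether their digest (or key) can ever get onto that list.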
However, whereas my washing machine has no immediate prospect of connection to the internet (though the smart grid could change all that), many other devices are connected – and their mis-design will compromise my privacy or my data’s integrity, or worse. And, indeed, involuntary remote updates could break them or reduce their functionality (a mess that already exists, and is really not well-addressed by consumer protection legislation). I might have a reasonably secure household of internet junk until some over-eager product manager decides to improve one of my devices overnight one day. I could wake to find that my household had seized up and all my privacy gone, if an attacker piggybacked on the product-update channel. This really isn’t particularly far-fetched.
So, I am faced with a tension. A paradox, almost. I am not well-placed to make security decisions by myself; none of us is, especially when we don’t even realise the decisions we are making are security-related. But in this complex advertising-driven world, those to whom I am likely to delegate (perhaps involuntarily) such decisions are (a) themselves imperfect, and much more worryingly (b) highly motivated to monetize my relationship with them in every way possible. Goodbye privacy.
The market might help to remedy the worst excesses in these relationships, but when it’s dominated by large vendors suing each other over imagined patent infringements, I’m not optimistic about it resolving this tension in a timely manner.
Regulation could be a solution, but is seldom timely and often regulates the wrong thing (as the current mess of cookie regulation in the EU amply demonstrates).
Perhaps there is a role for third-party, not-for-profit agencies which help to regulate the deployment of security and privacy-related technologies – receiving delegated control of a variety of authorization decisions, for example, to save you from wondering if you want App X to be allowed to do Y. They could in principle represent consumer power and hold the ring against both the vendors who want to make the user into the product and the attackers whose motives are simpler. But will we build technologies in such a way as to permit the rise of such agencies? It will take a concerted effort, I think.
Any other ideas?