InfoSecurity and Revenue Protection

I registered for the InfoSecurity show (though I didn’t manage to attend).  They did, however, send me a badge.

Back in the day, these badges just contained a conventional barcode.  If you scanned the barcode, all you would see was a seemingly-meaningless ID.  Clearly that ID was a lookup key into the organisers’ registration database.  For a fee, they would rent exhibitors a barcode scanner.  The deal, of course, is that you scan the badge of each person who visits the stand, with a view to following up the conversation and trying to sell them something (etc.) later on.  You could scan the badges with any barcode scanner (including your phone app), of course – but without access to the database, those scans would have no value.  So the fee also covers a service that lets you download the registration details (name, address, ..) of those whose IDs you scanned.  The fee is not inconsiderable.

Now, I see that this year’s badges contained a QR code as well as the old-fashioned barcode.  I was a bit surprised to see the following when I scanned my badge:

{"CJe";"BE8FYTR","DO";"Vojwfstjuz pg Pygpse","G";"Boesfx","KU";"Qspg pg Tztufnt Tfdvsjuz","T";"Nbsujo"}

At first, that looks like gibberish.  Some advanced encryption, maybe?  Well, no: you don’t need to be a Bletchley Park veteran to spot that the encryption scheme is about 2,000 years old (according to legend, at least).  A bright ten-year-old could sort that one out.  (Look up: Caesar’s Cipher – here each character has been shifted one place, i.e. ROT-1 to encrypt, ROT-25 to undo it.)
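For the avoidance of doubt, here’s a minimal decoder – a few lines of Python (my choice of language; anything would do).  It appears that even the digits and the JSON colons were shifted along with the letters:

```python
# Undo the shift-by-one: letters and digits move back one place
# (wrapping around), and the ';' separators were once ':'.
def shift_back(c: str) -> str:
    if c.isalpha():
        base = ord("A") if c.isupper() else ord("a")
        return chr((ord(c) - base - 1) % 26 + base)
    if c.isdigit():
        return chr((ord(c) - ord("0") - 1) % 10 + ord("0"))
    return ":" if c == ";" else c

payload = ('{"CJe";"BE8FYTR","DO";"Vojwfstjuz pg Pygpse","G";"Boesfx",'
           '"KU";"Qspg pg Tztufnt Tfdvsjuz","T";"Nbsujo"}')
print("".join(shift_back(c) for c in payload))
# {"BId":"AD7EXSQ","CN":"University of Oxford","F":"Andrew",
#  "JT":"Prof of Systems Security","S":"Martin"}
```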

I assume they’re still trying to push the considerable rental fee for the barcode scanners, but, really, there’s a lot of information on offer without paying.  Maybe the fact that the email address isn’t there would be reason enough to continue to hand over the cash.

The choice of Caesar’s Cipher is perhaps rather an embarrassment for a trade show dedicated to the latest and greatest in Information Security, though: one might justifiably say that it looks amateurish.  Either you’re attempting serious revenue (and privacy) protection, in which case modern strong encryption should be used; or you don’t mind that the data is disclosed, in which case, why not use plain text?

Footnote: this issue was reported two years ago (which shows how long it has been since I last went to InfoSecurity!) by Scott Helme.  So clearly the show organisers aren’t too bothered.

A new(ish) attack vector?

I just acquired a nice new Android-TV-on-a-stick toy.  £30 makes it an interesting amusement, whether or not it becomes part of daily use.  But that price-point also raises a whole lot of new potential for danger.

Like all Android devices, in order to do anything very interesting with it, I have to supply some Google account credentials when setting it up.  And then to play music from the Amazon Cloud, I have to give some Amazon credentials.  And then, likewise, for Facebook, or Skype, or whatever apps you might want to run on the thing.  When cheap devices abound, we are going to have dozens of them in our lives, just for convenience, and need to share accounts and content between them.

But there’s the rub: anyone could build and ship an Android device.  And besides having all the good behaviours baked in, it could also arrange to do bad things with any of those accounts. This is the trusted device problem thrown into sharp relief.  The corporate world has been wrestling with a similar issue under the banner of ‘bring your own device’ for some time now: but plainly it is a consumer problem too.  Whereas I may trust network operators to sell only phones that they can vouch for, the plethora of system-on-a-stick devices which we are surely about to see will occupy a much less well-managed part of the market.

I could use special blank accounts on untrusted devices, but the whole point of having cloud-based services (for want of a better name) is that I can access them from anywhere.  A fresh Google account for the new device would work just fine – but it wouldn’t give me access to all the content, email, and everything else tied to my main account.  In fact, the Google account is among the safer things on the new device: thanks to their two-step authentication, stealing the Google password is not enough to provide the keys to the kingdom (though the authenticator runs on … an Android phone!).
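For the curious, those one-time codes come from the TOTP scheme (RFC 6238).  Here is a minimal sketch, in Python, of what the authenticator computes – the base32 secret is made up for illustration:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Current one-time code: HMAC-SHA1 over a 30-second time counter."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))   # made-up secret, for illustration only
```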

The problem isn’t entirely new, nor purely theoretical: just recently there were news reports of a PC vendor shipping a tainted version of Windows on brand new desktops – and such complexities have concerned the owners of high-value systems for some time.  Management of the supply chain is a big deal when you have real secrets or high-integrity applications.  But in consumer-land, it is quite difficult to explain that you shouldn’t blindly type that all-important password wherever you see a familiar logo – that there is scope for something akin to phishing via an attack on a subsystem you cannot even see.

Would trusted computing technologies provide a solution?  Well, I suppose they might help: the initial step of registering the new device and the user’s account with the Google servers could include a remote attestation step, wherein the device ‘proves’ that it really is running a good copy of Android.  This might be coupled with some kind of secure boot mandated by a future OS version, and with trusted virtualization, in order to offer safe compartments for valuable applications.
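Schematically, such an attestation exchange might look like the sketch below.  This is only an outline under generous assumptions: the function names are hypothetical, and a real TPM-style quote involves rather more machinery than a bare signature.

```python
import hashlib, os

def server_challenge() -> bytes:
    return os.urandom(16)        # a fresh nonce, so old quotes can't be replayed

def device_quote(nonce: bytes, measurements: list[bytes], sign) -> bytes:
    # The device signs its boot-time software measurements together with the
    # nonce, using a key held in (and never leaving) trusted hardware.
    digest = hashlib.sha256(b"".join(measurements) + nonce).digest()
    return sign(digest)

def server_accepts(nonce, measurements, quote, verify, known_good) -> bool:
    # Accept the device only if the signature checks out AND the reported
    # measurements correspond to a known-good build of the OS.
    digest = hashlib.sha256(b"".join(measurements) + nonce).digest()
    return verify(digest, quote) and measurements == known_good
```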

But these things are really quite fraught with complexity (in terms of technology, innovation, liability, network effects, and more) so that I fear their value would be in doubt.  Nevertheless, I don’t really see an alternative on the table at the moment.

You could invent a ‘genuine product’ seal of some kind – a hologram stuck to the product box, perhaps – but who will run that programme, and how much liability will they assume?  You could assume that more locked-down devices are safer – but, considering the supply-chain management problem again, how long before we see devices that have been ‘jailbroken’, tainted, and re-locked before being presented through legitimate vendors?

The model is quite badly broken, and a new one is needed. I’d like to hope that the webinos notion of a personal zone (revocable trust among a private cloud of devices) will help: now that the design has matured, the threat model needs revisiting.  The area is ripe for some more research.

Guess again

Over in a fenland blog, there is a little discussion going on about passwords.  Evidently, Google has been doing some advertising about what makes a good password, and this has come in for some criticism.

In that blog post, Joseph Bonneau proposes an alternative one-liner:

A really strong password is one that nobody else has ever used.

(One of the commenters (J. Carlio) suggests modifications to add something about memorability.)

This is a seductive idea: it is, broadly, true.  It encapsulates the idea that you are trying to defeat brute force attacks, and that these generally succeed by attempting plausible passwords.

But I don’t think it’s good advice.  That is mainly because people have poor intuition for very large numbers: whether the likelihood of my password being used by someone else is one in a thousand, one in a million, or one in a trillion (the word of the week, thanks to national debts) is something few people can estimate well.  In just the same way, people are poor at risk assessment for unlikely events.
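To put some numbers on that intuition, here is a back-of-envelope calculation; the guessing rate is an assumption for the sake of illustration, not a measured figure:

```python
# Time to exhaust the keyspace of random alphanumeric passwords,
# at an assumed offline rate of a billion guesses per second.
ALPHABET = 26 + 26 + 10                 # a-z, A-Z, 0-9
RATE = 1e9                              # assumed guesses per second

for length in (6, 8, 10, 12):
    keyspace = ALPHABET ** length
    years = keyspace / RATE / (3600 * 24 * 365)
    print(f"{length} chars: {keyspace:.1e} possibilities, ~{years:,.2f} years")
```

Six random characters fall in under a minute; twelve take on the order of a hundred thousand years – yet few of us could say, unaided, where the crossover from hopeless to safe lies.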


Aren’t spammers subtle?

Not having managed a blog with such a public profile before, I’m intrigued by the behaviour of those wanting to spam the comments field.

The blog is set up so that individuals from within the group can post using their Oxford credentials. Others can post comments, but the first time they comment, the comment must be moderated.

Some try posting their adverts for herbal remedies right away. Those are easy to spot and throw away.

There are several, though, who have posted comments like “I like this blog. You make good points.”  I assume the aim is that such comments are likely to be approved by a semi-vigilant moderator, whereupon the commenter becomes a trusted poster.  Presumably, the advertising spam would follow thereafter.

I remark on this

  • (a) because other members of the group may moderate comments, and should be on the lookout for this ‘trojan’ behaviour;
  • (b) because it points to a greater degree of tenacity on the part of the spammers than I would have realised existed;
  • (c) because it seems a particularly hard problem to solve, CAPTCHAs notwithstanding (a naive filter is sketched below).
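By way of illustration only, a naive first pass at flagging such trust-building comments might look like this; the phrase list and length threshold are invented for the example, and of course determined spammers will adapt – which is rather the point:

```python
# Flag short, generic praise that makes no reference to the post itself.
GENERIC_PRAISE = ("i like this blog", "you make good points", "nice post",
                  "great blog", "very informative")

def looks_like_trojan(comment: str, post_text: str) -> bool:
    text = comment.lower()
    if len(text.split()) > 25:            # substantial comments: less suspect
        return False
    generic = any(phrase in text for phrase in GENERIC_PRAISE)
    # Does the comment echo any distinctive (longish) word from the post?
    distinctive = {w for w in post_text.lower().split() if len(w) > 6}
    echoes_post = bool(distinctive & set(text.split()))
    return generic and not echoes_post

print(looks_like_trojan("I like this blog. You make good points.", "..."))  # True
```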

Revisiting email retention

I have an archive copy of just about every email I’ve sent or received since about 1996, and certainly haven’t deleted an email since 1998 – not even the spam.  Many people know this – and many colleagues do something similar.

I suppose I have two reasons for doing this:

  • trawling the archives is occasionally useful (for finding information, or confirming what someone said, or being reminded what I said – a sketch of such a trawl appears below); because just about all of my work is eventually mediated (in and out) by email, the mailbox archive plays the role of a professional journal.
  • the process of filing and deciding what to retain and what to delete is insanely time-consuming, and easily costs more than the now insanely cheap disc storage and associated backups.

This approach actually extends beyond email – I haven’t really deleted a file in a decade or so.
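As a concrete example of the sort of trawl the first bullet describes – assuming the archive lives in mbox files; the path and query below are placeholders:

```python
import mailbox

def search_archive(path: str, query: str) -> None:
    """Print date, sender and subject of messages matching the query."""
    q = query.lower()
    for msg in mailbox.mbox(path):
        subject = msg.get("Subject", "") or ""
        sender = msg.get("From", "") or ""
        if q in subject.lower() or q in sender.lower():
            print(msg.get("Date", "?"), "|", sender, "|", subject)

search_archive("archive-1996.mbox", "badge")   # placeholder path and query
```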