InfoSecurity and Revenue Protection

I registered for the InfoSecurity show, though I didn't manage to attend.  They did send me a badge, however.

Back in the day, these badges just contained a conventional barcode.  If you scanned the barcode, all you would see was a seemingly meaningless ID.  Clearly that ID was a lookup key into the organisers' registration database.  For a fee, the organisers would rent exhibitors a barcode scanner.  The deal, of course, is that you scan the badge of each person who visits the stand, with a view to following up the conversation and trying to sell them something (etc.) later on.  You could scan the badges with any barcode scanner (including a phone app), of course – but without access to the database, those scans would have no value.  So the fee also covers a service that lets you download the registration details (name, address, and so on) of those whose IDs you scanned.  The fee is not inconsiderable.

Now, I see that this year’s badges contained a QR code as well as the old-fashioned barcode.  I was a bit surprised to see the following when I scanned my badge:

{“CJe”;”BE8FYTR”,”DO”;”Vojwfstjuz pg Pygpse”,”G”;”Boesfx”,”KU”;”Qspg pg Tztufnt Tfdvsjuz”,”T”;”Nbsujo”}

At first, that looks like gibberish.  Some advanced encryption, maybe?  Well, no: you don't need to be a Bletchley Park veteran to spot that the encryption scheme is in fact about 2000 years old (according to legend, at least).  A bright ten-year-old could sort that one out.  (Look up: Caesar cipher, or ROT-25.)
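
For the curious, undoing it takes only a few lines.  Here's a minimal sketch in JavaScript (my own, purely for illustration) that shifts each letter back one place; punctuation and digits are left alone:

// Decode a Caesar shift of 1 (i.e. apply ROT-25): map B->A, c->b, and so on.
// Only letters are shifted; everything else passes through untouched.
function rot25(text) {
  return text.replace(/[A-Za-z]/g, function (c) {
    var base = (c === c.toUpperCase()) ? 65 : 97;           // 'A' or 'a'
    return String.fromCharCode(base + (c.charCodeAt(0) - base + 25) % 26);
  });
}

console.log(rot25('Vojwfstjuz pg Pygpse'));      // University of Oxford
console.log(rot25('Qspg pg Tztufnt Tfdvsjuz'));  // Prof of Systems Security

That is quite enough to recover the name, job title and organisation printed on the badge.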

I assume they're still trying to charge the considerable rental fee for the barcode scanners, but really there's a lot of information on offer without paying.  Maybe the fact that the email address isn't included would be reason enough to continue to hand over the cash.

The choice of Caesar's cipher is perhaps rather an embarrassment for a trade show dedicated to the latest and greatest in information security, though: one might justifiably say that it looks amateurish.  Either you're attempting serious revenue (and privacy) protection, in which case modern strong encryption should be used; or you don't mind that the data is disclosed, in which case why not use plain text?


Footnote: this issue was reported two years ago (which shows how long it is since I last went to InfoSecurity!) by Scott Helme.  So clearly the show organisers aren't too bothered.

Responding to the Global Ransomware Melt-down

As the dust just begins to settle on the highest-profile ransomware attack to date – one that has apparently affected systems around the globe and caused particular havoc in the UK’s NHS – the recriminations are just starting.

Is the root cause a bad attachment on an email, opened in error by a busy employee?  Or is it systematic under-investment in IT?  The knock-on effect of an ideologically-driven government, perhaps?  Maybe it's poor operating system design.  Or is it laziness in not applying patches and upgrades?  Is it the fault of some foreign government determined to bring the British health system to its knees?  Or maybe some socially maladjusted teenager who didn't realise the potential impact of their behaviour?  Perhaps it's a massive plot by organised crime.  Is the NSA culpable for having discovered and kept secret a system vulnerability?  Or maybe the NSA – or whoever developed the apparent distribution vector for the malware based on that vulnerability?  Or can we blame those who 'stole' that information and disclosed it?

The story continues to unfold as I write, and the urge to assign blame will probably pull in many more possible culprits.  Surely the least blame should attach to the "someone" who clicked where they shouldn't: a system brought down by a moment's inattention from a lowly receptionist is certainly not fit for purpose.

Devotees of particular operating systems are quick to say “this wouldn’t happen with Linux” or to extol the virtues of their shiny Apple Macs.  But one of the pernicious features of ransomware is that it doesn’t require any particular operating system smartness: in technical terms, it is a “user-space” program.  It doesn’t need to do anything other than open files, edit them (in an unusual way, by encrypting the contents), and overwrite the original.  These actions take place on every desktop every day, and all our desktop operating systems are inherently vulnerable through their design.

Of course, the spread of such malware does rely on features of operating systems and application programs.  Many people will remember when Microsoft software was rife with major flaws and vulnerabilities, leading to endless virus problems.  Most of those are history now, but numerous design decisions that shaped operating system features in the 1990s are still with us, and still making their effects felt.

The last decade or so has seen a massive growth in security awareness – from IT professionals and from everyday users of systems.  The result is much better management of security incidents.  I’d wager that if the latest ransomware attack had been seen a decade ago, the impact would have been even worse because there wouldn’t have been nearly as much planning in place to handle the problem.  But for all that awareness, and even substantial investment, security incidents are getting bigger, more spectacular, more far-reaching: and, critically, more damaging for individuals and for society.

We’re getting better.  But the cyber security problems are multiplying faster.  In the name of the “internet of things” we’re rapidly deploying millions of new devices whose typical security characteristics are rather worse than those of a PC 15 years ago.  And no one has a systematic plan for patching those, or turning them off before they become dangerous.

And let’s not be in any doubt: internet of things devices are potentially dangerous in a way that our old-fashioned information systems and file servers are not.  These devices control real things, with real kinetic energy.  Things that go bang when you mis-direct them.  Medical equipment that may be life-or-death for the patient.   Self-driving cars that could endanger not just their own passengers, but many others too – or could just bring the economy to a standstill through gridlock.  A future malware attack might not just stop a few computers: what if all the dashboards on the M25 suddenly demanded a $300 payment?

Ultimately as a society, we’ll get the level of security we are willing to pay for.  So far, for the most part, technology gives us massive benefits, and security incidents are an occasional inconvenience.  Maybe that balance will persist: but my money would be on that balance shifting towards the downsides for quite a while to come.

There are plenty of good bits of security technology deployed; many more are in laboratories just waiting for the right funding opportunity.  There are many great initiatives to make sure that there are lots of security experts ready for the workforce in a few years' time – but we also need to ensure that all programmers, engineers and technicians build systems with these concerns in mind.  What's more, we need bankers, politicians, doctors, lawyers, managers, and dozens of other professions similarly to take security seriously in planning systems, processes, and regulations.

Big security challenges are part of the future of technology: the answer is not to reach for simplistic fixes or finger-pointing, but to make progress on many fronts.  We can do better.


Where is all the nodejs malware?

We're using nodejs extensively in our current research project – webinos – and I have personally enjoyed programming with it and the myriad useful third-party modules available online.

However, I've always been concerned about the ease with which new modules are made available and may be integrated into bigger systems.  If I want to create a website that supports QR code generation, for example, as a developer I might do the following (a short usage sketch follows the list):

  1. Visit Google and search for “nodejs qrcode”.  The first result that comes up is this module – https://github.com/soldair/node-qrcode .  From a brief look at the GitHub page, it seems to do exactly what I want.
  2. Download the module locally using ‘npm install qrcode’.  This fetches the module from the npmjs registry and then installs it using the process defined in the package.json file bundled with this module.
  3. Test the module to see how it works, probably using the test cases included in the module download.
  4. Integrate the module into webinos and then add it to webinos’ package.json file.
  5. When the changes make their way into the main project, everyone who downloads and installs webinos will also download and install the qrcode module.

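For step 3, the kind of test I have in mind looks something like the following.  This is only a sketch: the call shown (toDataURL) is what the module's documentation suggests, but the exact API may differ between versions.

// test-qrcode.js – a quick sanity check of the freshly-installed module.
var QRCode = require('qrcode');

QRCode.toDataURL('http://webinos.org', function (err, url) {
  if (err) {
    console.error('QR generation failed:', err);
    return;
  }
  // url is a base64-encoded PNG data URL, ready to drop into an <img> tag.
  console.log(url.substring(0, 40) + '...');
});
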
I’m going to go out on a limb and suggest that this is a common way of behaving.  So what risk am I (and anyone developing with nodejs modules) taking?

If you're using nodejs in production, you're putting it on a web server.  By necessity, you are also giving it access to your domain certificates and private keys.  You may also be running nodejs as root (even though this is a bad idea).  As such, a nodejs module (which has full access to your operating system) can steal those keys, access your user database, install further malware and take complete control of your web server.  It can also take control of the PCs you and your fellow developers use every day.
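
To make the point concrete, here is a deliberately benign sketch (the module name is made up for illustration) of why merely requiring a module is an act of trust: anything in its top-level code – or in its package.json install scripts – runs with the privileges of the process that loads it.

// inquisitive-module.js – a hypothetical third-party module.
// Everything below runs as soon as someone calls require('inquisitive-module'),
// with whatever privileges the calling process has (possibly root).
console.log('Loaded by uid:', typeof process.getuid === 'function' ? process.getuid() : 'n/a');
console.log('Environment variables visible:', Object.keys(process.env).length);
// A hostile module could just as easily read ~/.ssh or your TLS private keys here.

module.exports = {
  doSomethingUseful: function () { return 42; }
};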

The targets are juicy and the protection is minimal.

And yet, so far, I have encountered no malware (or at least none that I know about).  Some modules have been reported, apparently, but not many.   Why is that?

It could partly be because the npmjs repository offers a way for malware to be reported and efficiently removed.  Except that it doesn’t.  It may do so informally, but it’s not obvious how one might report malware, and there’s no automatic revocation mechanism or update system for already-deployed modules.

It could be that the source code for most modules is open, and malware authors therefore dare not submit malicious modules for fear of being exposed – and that those who do are rapidly found out.  Indeed, in the case of the qrcode module (and most nodejs modules) I can inspect the source code to my heart's content.  However, the “many eyes” theory of open source security is known to be unreliable, and it is unreasonable to suppose that this would provide any level of protection for anything but the simplest of modules.

I can only assume, therefore, that there is little known nodejs malware because the nodejs community are all well-intentioned people.  It may also be because developers who use nodejs modules form a relationship with the developer of the module and therefore establish enough trust to rely on their software.

However, another way of putting it is: nobody has written any yet.

The problem isn't unique – any third-party software could be malicious, not just nodejs modules – but the growing popularity of nodejs makes it a particularly interesting case.  The ease with which modules can be downloaded and used, combined with the fact that their intended targets are highly privileged, is cause for concern.

Disagree?  Think that I’ve missed something?  Send me an email – details here.

Update – I’ve blogged again about this subject over at webinos.org

Do Garfinkel’s design patterns apply to the web?

A widely cited publication in usable security research is Simson L. Garfinkel’s thesis: “Design Principles and Patterns for Computer Systems That Are Simultaneously Secure and Usable”.  In Chapter 10 he describes six principles and about twenty patterns which can be followed in order to align security and usability in system design.

We’ve been referring to these patterns throughout the webinos project when designing the system and security architecture.  However, it’s interesting to note that the web (and web applications) actually directly contradict many of them.  Does this make the web insecure?  Or does it suggest that the patterns and principles are inadequate?  Either way, in this blog post I’m going to explore the relationship between some of these principles and the web.


Guess again

Over in a fenland blog, there is a little discussion going on about passwords.  Evidently, Google has been doing some advertising about what makes a good password, and this has come in for some criticism.

In that blog post, Joseph Bonneau proposes an alternative one-liner:

A really strong password is one that nobody else has ever used.

(One of the commenters (J. Carlio) suggests modifications to add something about memorability.)

This is a seductive idea: it is, broadly, true.  It encapsulates the idea that you are trying to defeat brute force attacks, and that these generally succeed by attempting plausible passwords.

But I don't think it's good advice.  That is mainly because most people have poor intuition for very large numbers: whether the likelihood of my password being used by someone else is one in a thousand, one in a million, or one in a trillion (the word of the week, thanks to national debts) is something that, I would say, few people can judge well.  In just the same way, people are poor at risk assessment for unlikely events.
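
To illustrate the scale of the problem, a back-of-the-envelope calculation helps.  The guess rate below (a billion guesses per second, for an offline attack) is my assumption, purely for illustration:

// How long does it take to exhaust password spaces of different sizes?
var guessesPerSecond = 1e9;   // assumed offline attack rate, for illustration

[1e6, 1e12, 1e18].forEach(function (candidates) {
  var seconds = candidates / guessesPerSecond;
  console.log(candidates.toExponential() + ' candidates: ~' + seconds + ' seconds to try them all');
});
// 1e+6  candidates: ~0.001 seconds
// 1e+12 candidates: ~1000 seconds (under twenty minutes)
// 1e+18 candidates: ~1000000000 seconds (roughly thirty years)

A factor of a million in such an estimate is the difference between an attack that succeeds in minutes and one that takes decades – exactly the kind of distinction that intuition glosses over.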


Aren’t spammers subtle?

Not having managed a blog with such a public profile before, I’m intrigued by the behaviour of those wanting to spam the comments field.

The blog is set up so that individuals from within the group can post using their Oxford credentials. Others can post comments, but the first time they comment, the comment must be moderated.

Some try posting their adverts for herbal remedies right away. Those are easy to spot and throw away.

There are several, though, who have posted comments like “I like this blog. You make good points.”  I assume the aim is for these to be approved by a semi-vigilant moderator, so that the commenter becomes a trusted poster.  Presumably, the advertising spam would follow thereafter.

I remark on this

  • (a) because other members of the group may moderate comments, and should be on the lookout for this ‘trojan’ behaviour;
  • (b) because it points to a greater degree of tenacity on the part of the spammers than I would have realised existed;
  • (c) because it seems a particularly hard problem to solve, CAPTCHAs notwithstanding.

revisiting email retention

I have an archive copy of just about every email I’ve sent or received since about 1996, and certainly haven’t deleted an email since 1998 – not even the spam.  Many people know this – and many colleagues do something similar.

I suppose I have two reasons for doing this:

  • trawling the archives is occasionally useful (for finding information, or confirming what someone said, or being reminded what I said); because just about all of my work is eventually mediated (in and out) by email, the mailbox archive plays the role of a professional journal.
  • the process of filing and deciding what to retain and what to delete is insanely time-consuming, and easily costs more than the now trivially cheap disc storage and associated backups.

This approach actually extends beyond email – I haven’t really deleted a file in a decade or so.

secure boot gone viral

After the spread of some information, and some misinformation, the story of secure boot in Windows 8, achieved through security features of UEFI, has gone viral.  Linux forums all over the web are suddenly full of activists angry about some evil corporation wanting to deprive them of free software.  The aforementioned company has issued a long blog post by way of clarification.

The issue

In brief, the point is that there are plans to replace the BIOS – the main body of firmware which sets up a PC to be ready to run an operating system.  A key task of the BIOS in most systems is to start the boot loader program (or ‘pre-OS environment’), which in turn loads the operating system and gets it running.  This process is prone to attack: if someone can run an alternative boot loader (or reconfigure it, or launch a different operating system), they can either steal all the data on your disk, or they can launch something which looks just like your operating system, but is subverted to send copies of all your secrets – such as everything you type – to the bad guys.

seeking evidence

Conventional wisdom says:

  1. Security through obscurity doesn’t work.  You may hide your needle in a haystack, but it’s likely to come back and stick into you (or someone who will sue you) when you least want it to.  Much better to lock your needle in a safe.
  2. You shouldn’t roll your own controls: whether crypto, or software, or architectures, or procedures.  The wisdom of the crowd is great, and the vendor can afford better security expertise than your own project can, because the vendor can amortise the cost over a much broader base than you can ever manage.

And yet, when I want to protect an asset against a fairly run-of-the-mill set of threats, it’s very far from clear to me whether that asset will  be safer if I protect it with COTS products or if I build my own, perhaps quirky and not necessarily wonderful, product.

audit in the 21st century

One of the big annoyances in my working life is the procedure for claiming expenses.  Our University seems eager to retain processes which would have made sense in the first half of the 20th century, and which involve a large – and increasing – volume of paper.

One of the problems with this is that as a method of holding people to account, it’s very poor.  The big items of expenditure are conference registration fees, airfares, and hotel bills.  In many cases, the receipt for each of these reaches me by email.  The process of claiming the money involves printing out those emails (PDFs, whatever), and stapling them to a claim form.  If the cost is in foreign currency, I also (bizarrely) have to nominate  an exchange rate, and print out a page from somewhere like www.xe.com to prove that that rate existed at some nearby point in time.

Of course, any of that evidence could trivially be falsified.