# Lost Treasures

Some say computer science rediscovers old ideas every twenty years or so. Justin mentioned it last week in the context of explicit vs. implicit information flows. I was reminded again today when I saw a call for papers from IEEE Security & Privacy titled ‘Lost Treasures of Computer Security & Privacy’ [http://www.computer.org/portal/web/computingnow/spcfp6] for a special issue next year. The list of topics the editors seek makes for fascinating reading, but I wish to note a different, practical reason for revisiting old work.

When tracking down a reference a few months ago, I ran into an example of what librarians call a ‘black hole’ or ‘dark age’: a period of history inaccessible due to changing technology. The document I was looking for contains hearings before the U.S. Senate Select Committee on Small Business, 85th Congress, in 1957. But when I went to that room in the regional depository library, all I found were pieces of shelving on the floor. The microform collection is being digitised and decades of microfilm are ‘temporarily unavailable’, where ‘temporary’ may mean a year or more.

What other instances of forgotten lore have you personally encountered?

# secure boot gone viral

After the spread of some information, and some misinformation, the story of secure boot in Windows 8, achieved through security features of UEFI, has gone viral.  Linux forums all over the web are suddenly full of activists angry about an evil corporation wanting to deprive them of free software.  Microsoft has since issued a long blog post by way of clarification.

## The issue

In brief, the point is that there are plans to replace the BIOS – the main body of firmware which sets up a PC to be ready to run an operating system.  A key task of the BIOS in most systems is to start the boot loader program (or ‘pre-OS environment’), which in turn loads the operating system and gets it running.  This process is prone to attack: if someone can run an alternative boot loader (or reconfigure it, or launch a different operating system), they can either steal all the data on your disk, or they can launch something which looks just like your operating system, but is subverted to send copies of all your secrets – such as everything you type – to the bad guys.
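The chain-of-trust idea behind secure boot can be sketched in a few lines. This is purely an illustration of the principle, not the real UEFI API: the key, the signature scheme (an HMAC standing in for a proper public-key signature), and the image names are all invented for the example.

```python
# Illustrative sketch of a secure-boot chain of trust (not the real UEFI API):
# each stage verifies a signature over the next stage before handing over control.

import hashlib
import hmac

# Hypothetical platform key baked into the firmware
# (in reality an RSA or ECDSA public key, not a shared secret).
PLATFORM_KEY = b"platform-key-burned-into-firmware"

def sign(image: bytes, key: bytes) -> bytes:
    """Stand-in for a real digital signature (HMAC used purely for illustration)."""
    return hmac.new(key, image, hashlib.sha256).digest()

def verify_and_launch(image: bytes, signature: bytes, key: bytes) -> bool:
    """Refuse to run any stage whose signature does not check out."""
    return hmac.compare_digest(sign(image, key), signature)

bootloader = b"GRUB or the Windows Boot Manager, say"
good_sig = sign(bootloader, PLATFORM_KEY)
tampered = bootloader + b" + rootkit"

print(verify_and_launch(bootloader, good_sig, PLATFORM_KEY))  # True: boots
print(verify_and_launch(tampered, good_sig, PLATFORM_KEY))    # False: halt
```

The whole controversy is about who controls the key at the root of that chain: if only signatures chaining back to keys the firmware ships with will boot, an unsigned alternative OS is locked out.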

# How not to look like a spearphishing attack

This is not a protip on how to make your spearphishing attacks more effective.

Today I received an email on my work account, which happens to be at a large defence contractor – and that’s relevant, because spear-phishing attacks are a primary threat in my environment, and they look just like this:

From: [redacted] SPAWARSYSCEN-ATLANTIC, 987654 [[redacted]@navy.mil]
To:
Cc:
Attachment: Newsletter_1.docx

All,

Attached is our latest Newsletter.  Please review.

r/
[name redacted]
ASSO/Security Specialist
SSC-Atlantic SSO
[telephone redacted]
[fax redacted]
https://iweb.spawar.navy.mil/depts/[redacted]
For Official Use Only - Privacy Sensitive - Any misuse or unauthorized disclosure may result in both civil or criminal penalties.

A few things stand out in that email: the empty To: and Cc: fields; the extremely generic filename of the attachment; the fact that the attachment is, or at least appears to be, a Word document; and, in the body of the message, the odd capitalisation of ‘Newsletter’ and the imperative ‘Please review.’

I took the precaution of examining the mail headers in detail.  Thanks, Microsoft, for making that difficult to do.  The Received: header chain looked reassuring; it came from the expected place.  Interestingly, I only now noticed that the email was digitally signed.  The icon is so tiny I overlooked it.  Thank you again, Microsoft, for hiding that important piece of information from me.
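The header check described above can be done outside the mail client. A minimal sketch using Python’s standard `email` module, with a made-up message (the hostnames and addresses are invented examples, not the real ones from the email in question):

```python
# A minimal sketch of the check described above: walk the Received: chain of a
# raw message and list the hosts it claims to have passed through.
# The message text here is entirely made up for illustration.

from email import message_from_string

raw = """\
Received: from mail.example.org (mail.example.org [192.0.2.1]) by mx.corp.example
Received: from workstation.navy.example (unknown [198.51.100.7]) by mail.example.org
From: sender@navy.example
To: me@corp.example
Subject: Newsletter

Attached is our latest Newsletter.  Please review.
"""

msg = message_from_string(raw)

# Received: headers appear newest-first; reverse to follow the message's path.
for hop in reversed(msg.get_all("Received", [])):
    print(hop.split(" by ")[0].removeprefix("from "))
```

Of course, Received: headers added before the message reached your own infrastructure can themselves be forged, which is why the digital signature is the more interesting piece of evidence.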

I was still wary about the attachment, though.  After a suitable period of contemplation, I clicked on it.  The expected warning message from the OS appeared: “you should not open files received from unknown senders”.  Why show me that warning message when it knows that the message is digitally signed?  Instead of saying it’s from an unknown sender, why not show me the certificate path of the digital signature?  My future career prospects flashing before my eyes, I hesitated.  Instead of opening the attachment at once, I decided to try scanning it first with my computer’s anti-virus programme.

And promptly received a demand for the Administrator password—which I don’t have—because apparently that’s not something users are allowed to do.

So my question for the community is, how can this problem be solved?  Crippling suspicion can’t be good for the efficiency of organisations.

P.S. It was not a spear-phishing attack.  I had a nice conversation with the sender later and we commiserated over the state of trust on the internet.

# seeking evidence

Conventional wisdom says:

1. Security through obscurity doesn’t work.  You may hide your needle in a haystack, but it’s likely to come back and stick into you (or someone who will sue you) when you least want it to.  Much better to lock your needle in a safe.
2. You shouldn’t roll your own controls: whether crypto, or software, or architectures, or procedures.  The wisdom of the crowd is great, and the vendor can afford better security expertise than your own project can, because the vendor can amortise the cost over a much broader base than you can ever manage.

And yet, when I want to protect an asset against a fairly run-of-the-mill set of threats, it’s very far from clear to me whether that asset will be safer if I protect it with COTS products or if I build my own, perhaps quirky and not necessarily wonderful, product.

# audit in the 21st century

One of the big annoyances in my working life is the procedure for claiming expenses.  Our University seems eager to retain processes which would have made sense in the first half of the 20th century, which involve a large – and increasing – volume of paper.

One of the problems with this is that as a method of holding people to account, it’s very poor.  The big items of expenditure are conference registration fees, airfares, and hotel bills.  In many cases, the receipt for each of these reaches me by email.  The process of claiming the money involves printing out those emails (PDFs, whatever), and stapling them to a claim form.  If the cost is in foreign currency, I also (bizarrely) have to nominate an exchange rate, and print out a page from somewhere like www.xe.com to prove that that rate existed at some nearby point in time.

Of course, any of those pieces of evidence could trivially be falsified.

# cloud failure modalities

There’s a tale of woe getting some airtime on the interwebs from an angst-ridden New York undergraduate (reading between the lines) who has somehow had an entire, quite substantial, Google account deleted. The post’s contention is (or includes) the idea that deleting such a profile is tantamount to deleting one’s life, I think. The facts of the case are murky – I’d link to some Google+ discussions, but I can’t find a way to do that – but regardless of this particular young person’s predicament, the story highlights some bigger questions about trusting cloud services.

# Privacy in the social world – the Google+ way

Facebook has faced criticism with regard to privacy issues. But how is Google+ attempting to address that? Well, users can put friends in ‘circles’. These circles are more like partitions than access groups. Default circles include ‘family’, ‘acquaintances’, and ‘friends’. You can share posts, pictures etc. with only selected circles.
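The sharing model can be sketched as a simple data structure. This is my own illustrative model, not Google’s implementation: a post’s audience is just the union of the circles it was shared with.

```python
# An illustrative model (my own sketch, not Google's implementation) of circles
# as per-post audiences: a post is visible only to members of the circles it
# was explicitly shared with.

from dataclasses import dataclass, field

@dataclass
class Profile:
    name: str
    circles: dict = field(default_factory=dict)  # circle name -> set of friends

    def share(self, post: str, with_circles: list) -> set:
        """Return the set of people who can see this post."""
        audience = set()
        for c in with_circles:
            audience |= self.circles.get(c, set())
        return audience

me = Profile("me", {
    "family": {"mum", "dad"},
    "friends": {"alice", "bob"},
    "acquaintances": {"carol"},
})

audience = me.share("holiday photos", ["family", "friends"])
print(audience)  # carol, who is in 'acquaintances' only, cannot see the post
```

The contrast with early Facebook is that the audience is chosen per post at sharing time, rather than one global friends list seeing (nearly) everything.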

From my personal experience in the last few days that I have played with Google+, it seems straightforward enough to drag and drop friends into different circles and to create new circles if necessary. If it works the way I understand it is supposed to, then I think many of the Facebook privacy issues will be addressed. This, of course, ignores the fact that Google+ is integrated with Gmail, and that Google search uses Gmail and Google+ data for its own competitive advantage.

What doesn’t surprise me, though, is that in the 26 days that Google+ has been operational, the majority of the 18+ million users who have adopted it (well, most likely just tried it out) are tech users. There may be a number of reasons for this, but in my view tech users understand issues around privacy and have some idea of how to achieve privacy in their various social interactions (please note how loosely I use the term ‘privacy’ in this article). When a product promises and attempts to address privacy concerns, tech users are likely to adopt it early on and experiment with it.

But if my assumption here is correct, how long will it take for the almost 1 billion Facebook users to realise and jump ship? Oh, wrong question. Are Facebook users actually concerned about their privacy? If they are, will they know when a better product comes on the market (not that Google+ is that product)? And most importantly, will they jump ship once the product all privacy campaigners are yearning for arrives? In attempting to answer these questions, let’s ignore the effect of externalities. I would argue that since Facebook users DO NOT understand or care about privacy (I would like to see a study that refutes this claim), a robust, privacy-enhanced social networking tool is unlikely to take off, at least among the general population.

# local news

I’m delighted to say that Professor Sadie Creese will be joining the Department of Computer Science – hopefully in October, but perhaps later – to become Professor of Cyber Security and bring leadership to our activity in that area.

Prof. Creese studied for her DPhil in Oxford, with Bill Roscoe as supervisor.  She then worked at QinetiQ before moving to Warwick University.  Coming with her will be Professor Michael Goldsmith and about eight other research staff.  The objective of this move is to create a large centre of expertise in Oxford, able to take an internationally-leading role in research around cyber security, information assurance, and related fields.  This is of course a major step forward in the vision I have been touting for some time (my ‘world domination plan’ as Ivan put it), and has every prospect of making Oxford an even more attractive partner for funders and other projects.  We will be looking for ways to enhance cross-disciplinary working in order that we can make genuine steps forward in this area.

# disk erasure

A recent pointer to Peter Gutmann’s new book on security engineering (it looks good, by the way) reminds me that Gutmann’s name is associated with the woeful tale of disk-erasure norms.

The argument goes this way: ‘normal’ file erases (from your Windows shell, say) merely change pointers in tables, and do not remove any data from the disk.  A fairly unsophisticated process will recover your ‘deleted’ files.  Wiser people ensure that the file is deleted from the media itself – by writing zeros over the sectors on the disk that formerly contained the file.  Gutmann’s argument was that, because of minor variations in disk-head alignment with the platters, this is insufficient to ensure the complete removal of the residual magnetic field left by the former data.  There is a possibility that, with the right equipment, someone could recover the formerly-present files.  So he devised an algorithm involving, I think, 35 passes, writing various patterns calculated to destroy any remaining underlying data.
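The multi-pass idea can be sketched in a few lines. This toy is emphatically not a substitute for a real sanitisation tool (and is ineffective on SSDs and journalling filesystems, where the drive or OS may remap writes elsewhere); the pattern list here is a simplification, not Gutmann’s actual 35-pass sequence.

```python
# A toy sketch of multi-pass overwriting: overwrite a file's contents in place
# with a few fixed patterns, then delete it.  NOT a real secure-erase tool,
# and useless on SSDs/journalling filesystems where writes may be remapped.

import os

def overwrite_and_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    patterns = [b"\x00", b"\xff", b"\xaa"]  # simplified; Gutmann used 35 passes
    with open(path, "r+b") as f:
        for i in range(passes):
            f.seek(0)
            f.write(patterns[i % len(patterns)] * size)
            f.flush()
            os.fsync(f.fileno())  # push each pass out to the device
    os.remove(path)

# Demo on a throwaway file.
with open("secret.txt", "wb") as f:
    f.write(b"launch codes")
overwrite_and_delete("secret.txt")
print(os.path.exists("secret.txt"))  # False
```

Each extra pass multiplies the time taken, which is exactly why, as noted below, multi-pass erasure of whole disks is so unattractive in practice.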

Now, the problem appears/appeared real enough: various government standards have, for decades now, ruled that magnetic media which has held classified material cannot be declassified but must be destroyed before leaving secure custody.  Whether anyone has ever managed to recover a non-trivial amount of data from a once-zeroed disk is much less clear: as far as I know, there’s not a lot in the open literature to suggest it’s possible, and none of the companies specialising in data recovery will offer it as a service.  Furthermore, since Gutmann did his original work, disk design has evolved (and the ‘size’ of the bits on the disk has become so small that any residual effect is going to be truly minimal), and disk manufacturers have built a ‘secure erase’ into their controllers for quite a few years now.  Even better, the new generation of self-encrypting drives can be rendered harmless by the deletion of just one key (don’t do this by accident!).
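The self-encrypting-drive trick is worth a small illustration. This is my own toy sketch, not a real drive’s mechanism (real drives use AES in hardware; the XOR keystream here is for demonstration only): everything written to the media is ciphertext under a media key held in the controller, so destroying that one key renders the whole disk unreadable at a stroke.

```python
# A toy illustration (not a real drive's mechanism) of crypto-erase: the media
# holds only ciphertext under a controller-held key, so destroying the key
# "erases" the disk without touching the platters.

import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Toy XOR stream cipher keyed by SHA-256(key || counter); illustration only."""
    out = bytearray()
    counter = 0
    while len(out) < len(data):
        out.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ k for b, k in zip(data, out))

media_key = secrets.token_bytes(32)       # held inside the drive controller
stored = keystream_xor(media_key, b"all the data on the disk")

print(keystream_xor(media_key, stored))   # recoverable while the key exists
media_key = None                          # 'secure erase': destroy the one key
# The ciphertext remains on the media, but is now indistinguishable from noise.
```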

Yet the perception persists that the simple solutions are insufficient.  Let us leave aside government security standards and think simply of commercial risk.  Multi-pass erasure is downright time-consuming.  You can buy a disk-shredding service – but this attracts quite a fee.  So it is not uncommon simply to archive used disks in a warehouse somewhere (with or without a single zeroing pass, I suppose).  How long you would keep those disks for is unclear: until their data ceases to be valuable, I suppose.  But without a detailed inventory of their contents, that cannot easily be determined.  So perhaps you have to keep them forever.

My simple question is: which attracts the lower risk (and/or the lower total predicted cost)? (a) Zeroing a disk and putting it in a skip, or (b) warehousing it until the end of the lifetime of the data it holds?  You can postulate whatever adversary model you wish.  The answer is not obvious to me.  And if we can’t make a simple determination about risk in this case (because, frankly, the parameters are all lost in the noise), what possible chance do we have of using risk calculations to make decisions in the design of more complex systems?
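To make the difficulty concrete, here is the back-of-envelope expected-cost calculation one would like to do, with every number invented for illustration. The point is not the answer but how completely it depends on parameters (breach probabilities, the value of old data to an adversary) that nobody can estimate to within an order of magnitude.

```python
# A back-of-envelope expected-cost comparison of the two options.
# Every figure below is an invented placeholder, purely for illustration.

def expected_cost(handling_cost: float, p_breach: float, breach_loss: float) -> float:
    """Handling cost plus probability-weighted loss from a data breach."""
    return handling_cost + p_breach * breach_loss

breach_loss = 1_000_000  # guessed value of the data to an adversary

# (a) zero the disk and skip it: cheap, small residual recovery risk (guessed)
zero_and_skip = expected_cost(handling_cost=10, p_breach=1e-5, breach_loss=breach_loss)

# (b) warehouse it: storage costs plus theft/loss risk over the years (guessed)
warehouse = expected_cost(handling_cost=50, p_breach=1e-4, breach_loss=breach_loss)

print(f"zero-and-skip: {zero_and_skip:.2f}")
print(f"warehouse:     {warehouse:.2f}")
```

Nudge either probability by a factor of ten (well within anyone’s honest error bars) and the ranking flips, which is rather the point of the question above.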