seeking evidence

Conventional wisdom says:

  1. Security through obscurity doesn’t work.  You may hide your needle in a haystack, but it’s likely to come back and stick into you (or someone who will sue you) when you least want it to.  Much better to lock your needle in a safe.
  2. You shouldn’t roll your own controls: whether crypto, or software, or architectures, or procedures.  The wisdom of the crowd is great, and the vendor can afford better security expertise than your own project can, because the vendor can amortise the cost over a much broader base than you can ever manage.

And yet, when I want to protect an asset against a fairly run-of-the-mill set of threats, it’s very far from clear to me whether that asset will be safer if I protect it with COTS products or if I build my own, perhaps quirky and not necessarily wonderful, product.

Plainly, there are lots of edge cases – the world is full of highly specialized systems with ‘inadequate’ security controls, but by and large this doesn’t matter because the work factor (in total cost) for the attacker is rendered too great by the obscurity of the design.  This is very different from the margin of error we enjoy with strong cryptographic controls, yet it makes for a meaningful risk balance. The problem comes from not knowing what the real parameters to the risk model are.
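To make that risk balance concrete, here is a toy sketch of the trade-off: probability of compromise as a function of attacker capability, for a well-hardened but fully documented COTS product versus a weaker but obscure bespoke one. Every function, parameter, and number below is a made-up illustration of the shape of the argument, not real data – which is precisely the problem.

```python
# Toy model, for illustration only: all parameters are hypothetical.

def p_compromise(attacker_skill: float, baseline: float, obscurity_bonus: float) -> float:
    """Probability of compromise for one attack campaign.

    attacker_skill: 0.0 (script kiddie) .. 1.0 (expert, well-resourced).
    baseline: inherent weakness of the design (COTS low, bespoke higher).
    obscurity_bonus: extra work factor imposed by an unfamiliar design;
    its value decays as attacker skill rises, since a determined expert
    can reverse-engineer a quirky system.
    """
    effective = baseline * (1.0 - obscurity_bonus * (1.0 - attacker_skill))
    return min(1.0, max(0.0, effective * (0.5 + attacker_skill)))

# Hypothetical parameters: COTS is well hardened (baseline 0.3) but fully
# documented (no obscurity); the bespoke system is weaker (baseline 0.6)
# but unfamiliar (obscurity bonus 0.8).
for skill in (0.1, 0.5, 0.9):
    cots = p_compromise(skill, baseline=0.3, obscurity_bonus=0.0)
    bespoke = p_compromise(skill, baseline=0.6, obscurity_bonus=0.8)
    print(f"skill={skill:.1f}  COTS={cots:.2f}  bespoke={bespoke:.2f}")
```

Under these invented parameters the bespoke system comes out safer against low-skill attackers and markedly worse against experts; the trouble is that we have no grounded way to choose the parameters, which is the point of the question below.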

If that sounds esoteric, consider a simplistic comment I read about Stuxnet: “Anyone using Windows to control critical industrial processes should be sectioned immediately.”  That would be conventional wisdom among many technical people – but what are the alternatives?  Microsoft does employ some exceptionally smart security people.  Would a Linux installation actually have been more robust in the presence of a determined attacker?  Would a specialized OS have delivered the necessary functionality and usability?  Would compromising on the latter have introduced its own risks?

In the more general case (a medium-value network-attached database, say), the shape of the argument ought to be better worked out.  Shall I run commodity RDBMS A on commodity operating system B, or shall I roll my own?  It seems likely to me that the home-grown solution will be much more robust in the face of ‘script kiddies’ (and, as is now out in the open, tabloid journalists), but will fail quickly and catastrophically in the face of a smart, well-informed attacker. But that is little more than a hunch.

How could we assemble some data to help in taking such decisions?
