Mon Jul 25 21:31:53 EDT 2011

computer security is hard, let's give up

Attention conservation notice: 1140 words about a blog post that's more than a year old, written by a guy who doesn't know very much about computer security.

"Digital Security, the Red Queen, and Sexual Computing"

There is a technology trend which even the determinedly non-technical should care about. The bad guys are winning. And even though I am only talking about the bad guys in computing — writers of viruses, malware and the like — they are actually the bad guys of all technology, since computing is now central to every aspect of technology. They might even be the bad guys of civilization in general, since computing-driven technology is central to our attacks on all sorts of other global problems ranging from global poverty to AIDS, cancer, renewable energy and Al Qaeda.

[...]

What we want is an architectural paradigm that can churn the gene pool of computing design at a controllable rate, independently of advances in functionality. In other words, if you have a Windows PC, and I have one, we should be able to have our computers date, mix things up, and replace themselves with two new progeny, every so many weeks, while leaving the functional interface of the systems essentially unchanged. Malware threat levels go up? Reproduce faster.

EDIT: The plot thickens! The book Venkat refers to was written in 1993 by noted dirtbag Matt Ridley; as in, the guy who, as chairman, ran Northern Rock into the ground in 2007, putting twenty-six billion pounds of taxpayer money on the line in the process. So not only is the idea bad, but it was based on a book written by a man with a demonstrated gift for mismanaging risk. Neat.

Non sequitur. Venkat has extended a metaphor too far.

I've got a whole cloud of objections to this idea. There's just so much wrong with this post that I'm staring at a blank page, wondering where to even begin.

Hell, to start with the shallowest objection: I can't imagine that the solution to "software is poorly designed" is "randomly designed software."

Just how the hell is this supposed to work, anyway? What's the implementation path? Do you randomly permute API names? That would certainly complicate remote exploits, but how do you communicate the new names to benign programs without also giving malicious programs access to the trusted computing path?
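
For concreteness, here's a toy sketch, in Python, of what a per-install permutation of API names might look like. Everything in it is hypothetical, and a real OS would do this in the loader, not in a script; the point it illustrates is the distribution problem, not a design:

    import hashlib
    import hmac

    API_NAMES = ["open_file", "write_file", "spawn_process", "open_socket"]

    def randomized_name(secret: bytes, name: str) -> str:
        # Derive a per-install alias for each API from a machine-local secret.
        digest = hmac.new(secret, name.encode(), hashlib.sha256).hexdigest()
        return "api_" + digest[:12]

    machine_secret = b"per-install secret, rotated every 'generation'"
    table = {name: randomized_name(machine_secret, name) for name in API_NAMES}
    print(table)

    # The hole in the scheme: a benign program needs this table to run at
    # all, so the OS must hand it out on request. Whatever channel does
    # that is exactly the channel malware will use.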

You can use cryptographic signing of executables, which has worked great at keeping unauthorized code from running, but has also been great at inciting widespread anger, since step one of getting a certificate from the OS vendor tends to be "give us a hundred bucks". When the Feds are explicitly allowing hackers to break your security schemes, you've got a problem.
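
The mechanical part of signing is genuinely simple, for the record. A minimal sketch, assuming the third-party Python cryptography package and a stand-in vendor key; the hundred bucks pays for the certificate bureaucracy around this, not for this:

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import (
        Ed25519PrivateKey,
    )

    # Vendor side: generate a key pair and sign the executable's bytes.
    vendor_key = Ed25519PrivateKey.generate()
    executable = b"\x7fELF...program bytes..."
    signature = vendor_key.sign(executable)

    # OS side: refuse to run anything that doesn't verify against the
    # vendor's public key.
    public_key = vendor_key.public_key()
    try:
        public_key.verify(signature, executable)
        print("signature OK, loader may proceed")
    except InvalidSignature:
        print("tampered or unsigned, refuse to execute")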

Even if this scheme were technically possible without a vast, pointless expenditure of money, the big problem with evolution is that it only works by death. It advances via the death of unfit organisms, which means that, in practice, users will see a lot of broken computers. How is an evolutionary computer that constantly crashes any better than a conventional computer that constantly crashes?
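
To put numbers on that intuition, here's a toy simulation of "advances via the death of the unfit" as seen from the user's side. The harm rate is made up, but the shape of the result isn't:

    import random

    POPULATION = 1000
    P_MUTATION_BREAKS = 0.9  # assumption: most random design changes are harmful

    offspring = [
        "broken" if random.random() < P_MUTATION_BREAKS else "working"
        for _ in range(POPULATION)
    ]
    print(offspring.count("broken"), "of", POPULATION,
          "users just got a computer that no longer boots")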

Additionally, while I can at least hypothetically visualize a computer that randomizes API calls, there's a very important interface you can't touch: the user interface. Randomizing the locations and captions of buttons would make a program completely impossible to use, or to document, so that part of the program would have to remain static.

And so the attacker would just programmatically simulate mouse clicks, and completely circumvent the randomized APIs.
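
The attack is embarrassingly short. A sketch, assuming the third-party pyautogui package; the button and its coordinates are hypothetical:

    import pyautogui

    # The randomized APIs are irrelevant: the "Grant admin access" button
    # can't move, so click where it always is and type what you like.
    pyautogui.click(x=412, y=305)
    pyautogui.typewrite("attacker-chosen input", interval=0.05)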

Also, the underlying hypothesis is that sexual reproduction prevents parasite infection, and as anyone who has spent a hypochondriac hour browsing the "parasites that infect human beings" category on Wikipedia knows, parasites still exist. Now, strictly speaking, this is unfair, since the Red Queen hypothesis only claims that sexual reproduction discourages parasitism; but in his essay Venkat goes on to pitch this as some kind of revolution in computer security, one that will entirely prevent infection by computer viruses, for everyone, everywhere. That isn't true of the original system, and I doubt it would be true of a computer system.

It's also unnecessary.

One of the (many, many) differences between computer viruses and biological ones is that, in the real world, physics doesn't privilege the defender. Chemical reactions run at the same rate in the bacterium as they do in the bacteriophage. The defender can discourage infection by spamming the local environment with oxygen, by poking holes in the attacker's cell membrane, or by changing the pH, but it cannot deny the attacker the passage of time.

The same cannot be said of a computer.

On a computer, the processor has to be ordered to execute code. You can download a virus, and it'll sit on your hard drive until the end of time, and will never do any harm unless you execute it.
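
The whole point in miniature, with a simulated download standing in for the real thing:

    # Simulate a "downloaded virus": it's just bytes on disk.
    with open("virus.exe", "wb") as f:
        f.write(b"\x90" * 64 + b"evil payload")

    # Reading, hashing, or copying those bytes executes nothing. Harm
    # requires an explicit decision, by someone, to run them.
    with open("virus.exe", "rb") as f:
        payload = f.read()
    print(len(payload), "bytes of perfectly inert data")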

Unlike organic life, computers are secure by default. This is an absolutely fundamental difference, and it makes a lot of comparisons with organic life invalid. Calling malicious programs "viruses" was a comparison more clever than informative, since the popular conception of a virus is "invisible thing that makes you sick", when in point of fact parvovirus has more in common with a catchy song than it does with an infectious organism such as E. coli: both the virus and the song are inert information until a host reproduces them.

Indeed this false parallel has resulted in real harm, since Rao has now wasted some of his precious life writing a fundamentally incorrect essay, rather than writing amusing newsletters on how to be a jerk.

A Linux machine with Nginx configured to serve static, unencrypted HTML, like the machine that runs bbot.org, cannot be hacked. There's nothing to hack. There hasn't been a remote exploit in Nginx in years. It's a goddamn rock, and if someone discovered a remote exploit in static Nginx today, I'd eat my hat. You can hammer a web server with traffic until it falls over, but that isn't a hack.[1]
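
For the record, the entire attack surface under discussion looks something like this illustrative Nginx server block, with made-up paths. No CGI, no proxying, no application code; there's nowhere for an attacker to inject anything:

    server {
        listen 80;
        server_name bbot.org;

        root /var/www/bbot.org;
        index index.html;

        location / {
            # Static files only: the server reads bytes and sends bytes.
            try_files $uri $uri/ =404;
        }
    }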

So how is randomly designed software supposed to improve on that?

Software sucks, yeah. But it's not doomed to suck. It can be improved, and it can even be perfected, by using automated theorem proving to show that a program is formally correct. As in a hardcore mathematical proof, without doubt or flaw.

Automated theorem proving is in its infancy, however. It's pretty easy to prove trivial programs correct, but for any program that does something interesting, you tend to run into various amusing NP-hard (or outright undecidable) problems. It's entirely possible that we'll never see a formal proof for Firefox.
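
To make "formally correct" concrete at toy scale, here's the kind of thing a proof assistant checks, sketched in Lean 4 and assuming the omega arithmetic tactic from Std/Mathlib. The function is the program, the theorems are its specification, and the file won't compile unless the proofs actually hold:

    -- A machine-checked spec for a trivial "program".
    def myMax (a b : Nat) : Nat :=
      if a ≥ b then a else b

    theorem myMax_ge_left (a b : Nat) : myMax a b ≥ a := by
      unfold myMax; split <;> omega

    theorem myMax_ge_right (a b : Nat) : myMax a b ≥ b := by
      unfold myMax; split <;> omega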

But that's no reason to not try.


1: This sounds like a specious cop-out, but there is a valid distinction. A single hacker can spend an hour taking advantage of an exploit, then sit back and watch the internet explode; but it would probably take a hundred computers with consumer-grade internet connections to take bbot.org offline, or ten times that number if you didn't want the owners of those computers to start wondering why their machines had abruptly become so slow; and it would only stay offline as long as those computers were dedicated to keeping it down. Or exactly one computer. Whoops.

I still stand by my original statement, though. Denial of service is not remote code execution.


Posted by | Permanent link | File under: important, nerdery