Sun Jan 8 14:58:11 EST 2012

when "pretty secure" isn't secure enough

"Richard Stallman Was Right All Along"

"As a member of the Walkman generation, I have made peace with the fact that I will require a hearing aid long before I die, and of course, it won't be a hearing aid, it will be a computer I put in my body," Doctorow explains, "So when I get into a car - a computer I put my body into - with my hearing aid - a computer I put inside my body - I want to know that these technologies are not designed to keep secrets from me, and to prevent me from terminating processes on them that work against my interests."

Something I've been thinking about off and on for the last seven years or so is what the security model for an em would look like.

Background info, for non-transhumanists: "Em" is a short, pithy word coined by Robin Hanson to refer to a person running on a computer. The basic idea behind Whole Brain Emulation is to scan a human brain with an electron microscope, build a model of all the scanned atoms, and run that model in a physics simulator, which computes all the chemical interactions between neurons as if it were a physical brain. This model will have all the memories of the person who was scanned, but also all the advantages of software: functional immortality, easy copying, the ability to run millions of times faster than real time...
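To make "run that model in a physics simulator" slightly more concrete, here is a toy sketch in Python: a single leaky integrate-and-fire neuron stepped through simulated time. This is a cartoon, not WBE (which would model chemistry at vastly finer resolution), and every constant here is invented for illustration.

    # Toy stand-in for the simulation loop: one leaky integrate-and-fire
    # neuron. All constants are made up for illustration.
    V_REST = -70.0       # resting membrane potential (mV)
    V_THRESHOLD = -55.0  # spike threshold (mV)
    TAU = 10.0           # membrane time constant (ms)
    DT = 0.1             # timestep (ms)

    def step(v, input_current):
        """Advance the membrane potential by one timestep."""
        dv = (-(v - V_REST) + input_current) / TAU
        return v + dv * DT

    v = V_REST
    for t in range(10000):  # one second of simulated time
        v = step(v, input_current=20.0)
        if v >= V_THRESHOLD:
            print("spike at t = %.1f ms" % (t * DT))
            v = V_REST  # reset after firing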

The problem arises when you start to think about what kind of computer you're going to run this simulation on. It must be completely, flawlessly secure. It absolutely cannot be hacked, because once you lose control of that computer, that's the ballgame. A copy of your brain-state is you. It's got all your memories; it knows all your passwords.

That's bad. It gets worse: a brain-state is software, so it can't "die" in the organic sense of the word. You could torture it to death, over and over, for a thousand years, if you felt like it. As Charles Stross put it in Accelerando: "They populate the simulation spaces of its mind, exploring all the possible alternative endings to their life."

So it's pretty clear that the operating system for an em is going to have to be very special indeed. Qubes isn't paranoid enough. OpenBSD isn't paranoid enough. seL4 isn't paranoid enough. You will need a degree of paranoia hitherto unseen outside of nuclear weapons safety protocols and space shuttle flight control systems. Multiple, concentric, airgapped systems. ASICs that refuse to export their contents. Physical safety interlocks. Power draw monitoring (used here in a somewhat unusual sense: watching the power consumption of a secure processor from outside to verify that it hasn't been compromised; sketched below). Provably secure code. Self-destruct charges!
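To sketch what power draw monitoring might look like in its simplest form (the numbers and threshold are mine, and a real implementation would live in dedicated hardware, not a script):

    import statistics

    # Sketch of power draw monitoring: learn a baseline for the secure
    # processor's power consumption, then flag samples that stray too far
    # from it. A hidden process doing extra work shows up as anomalous
    # draw. All readings here are hypothetical.

    def build_baseline(samples):
        """Mean and standard deviation of known-good readings."""
        return statistics.mean(samples), statistics.stdev(samples)

    def is_anomalous(reading, mean, stdev, n_sigma=4.0):
        """Flag any reading more than n_sigma deviations from baseline."""
        return abs(reading - mean) > n_sigma * stdev

    # Known-good readings in watts, gathered while the system runs its
    # normal workload.
    mean, stdev = build_baseline([41.8, 42.1, 41.9, 42.3, 42.0, 41.7, 42.2])

    for reading in [42.0, 41.9, 47.5]:  # the last one should trip the alarm
        if is_anomalous(reading, mean, stdev):
            print("ALERT: anomalous power draw: %.1f W" % reading)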

Some of the sting of "killing yourself rather than being captured by the enemy" is taken out by having a couple dozen copies as backups, however.

This is a bar set amazingly, impossibly high, and it goes absolutely without saying that no general-purpose commercial OS clears it. However, many of the freedom-destroying technologies cut both ways. The Xbox 360, which has been out for six years now, uses code signing to enforce a closed platform. Downside: no third-party software, at all. Upside: there has never been a virus on the 360. (apt-get uses a weaker form of code signing, verifying signatures on repository metadata without any hardware enforcement, and to the best of my knowledge it has never distributed a virus either.)
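For a flavor of what code signing amounts to mechanically, here is a minimal sketch of signature verification using the third-party Python "cryptography" package. The file names, and the choice of RSA with SHA-256, are my assumptions; apt and the 360 each use their own schemes.

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding
    from cryptography.hazmat.primitives.serialization import load_pem_public_key

    # Refuse to run any binary whose detached signature doesn't verify
    # against a trusted vendor key. File names are hypothetical.
    def verify_binary(binary_path, signature_path, pubkey_path):
        with open(pubkey_path, "rb") as f:
            public_key = load_pem_public_key(f.read())
        with open(binary_path, "rb") as f:
            binary = f.read()
        with open(signature_path, "rb") as f:
            signature = f.read()
        try:
            public_key.verify(signature, binary,
                              padding.PKCS1v15(), hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    if not verify_binary("game.bin", "game.bin.sig", "vendor_key.pem"):
        raise SystemExit("refusing to run unsigned or tampered binary")

The whole scheme reduces to protecting the private half of that vendor key, which is part of why a closed platform can make it work: only one party ever signs anything.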

The Trusted Platform Module can be used to build a computer you can only install Windows on, but it can also be used by Linux to protect against certain attacks.
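One of those attacks is boot-chain tampering. With measured boot, each stage hashes the next before handing over control, "extending" the result into a TPM Platform Configuration Register; the TPM can then refuse to unseal secrets (a disk encryption key, say) unless the PCRs match known-good values. The extend operation itself is just a hash chain, sketched here in plain Python (real code would talk to the TPM chip, and the component names below are hypothetical):

    import hashlib

    # The TPM PCR "extend" operation behind measured boot:
    # new_pcr = SHA256(old_pcr || SHA256(measurement)). Because each value
    # folds in everything before it, no later stage can erase evidence
    # that an earlier stage was modified.
    def extend(pcr, data):
        measurement = hashlib.sha256(data).digest()
        return hashlib.sha256(pcr + measurement).digest()

    pcr = b"\x00" * 32  # PCRs start zeroed at power-on

    # Hypothetical boot chain: each component is measured before it runs.
    for component in [b"firmware image", b"bootloader", b"kernel"]:
        pcr = extend(pcr, component)

    print("final PCR value:", pcr.hex())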

This blog post doesn't really have a point; I just wanted to talk about some stuff. Sorry.

I'm certainly not saying that there's some kind of tradeoff between open source and security. That would just be utter, blithering nonsense. I guess if there's any point here that I'm flailing in the direction of, it's that there are certain dual-use technologies in danger of being misused by people looking to make money at the expense of their users, also known as the Facebook strategy.


Posted by | Permanent link | File under: important, Linux