I guess there are lots of things dust theory could be about: household dust and its ability to find hidey-holes resistant to all attempts at cleaning; galactic theories of stellar dust clouds and planetary accretion; the so-called ‘smart dust’ of military science-fiction, an extended, self-organising sensor-computing network. But Dust Theory is none of these things.
There are some concepts which seem strange beyond imagining, yet are difficult-to-impossible to refute.
The idea that the whole universe began one second ago, with everyone’s “memories” pre-built in.
The idea that time doesn’t flow at all, that all times simply pre-exist and that each of our conscious thoughts of “now” is simply a cross-section of that greater space-time block-universe.
The ontological argument, which purports to prove that a God-like being must exist.
The Doomsday Argument, which uses statistical reasoning to argue that the long age of human civilisation will draw to an end quite soon (e.g. within 10,000 years).

The Dust Theory we are going to talk about is like one of those.
Computer simulation of brain functioning is coming along apace. IBM’s “Blue Brain” project (website here) is modelling brain neurons on its Blue Gene supercomputer. Let’s project this work only a few years into the future. IBM have programmed a complex computer object which represents Blue Brain’s chief programmer, called Brian, plus a mini-simulated world, a woodland glade, for simulated-Brian to walk around in.
When we run the simulation on IBM’s supercomputer, simulated-Brian has just the same experiences ‘walking’ around the simulated woodland glade as real-Brian would have walking around a similar, but real, glade. I think we can trust IBM’s programmers to get the simulated neurons working exactly the same as real ones.
Simulated-Brian doesn’t know he’s simulated - after all, his thinking exactly duplicates that of the real-Brian. Their two thought processes would only diverge if they had different experiences. For example, simulated-Brian might try to exit the glade and find there is no reality beyond it. That would surely give him much food for thought!
A computer works by loading a data structure, or pattern, into memory and then updating it in discrete steps. What seems like a continuous flow of thinking in simulated-Brian’s “brain” is, at the microsecond-scale, a series of distinct updates to individual simulated neurons. Of course, the same thing is true in our brains as well. Neurons either fire or they don’t and all thinking is the result of billions of these discrete neuron-events.
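To make the discrete-update picture concrete, here is a toy sketch in Python. It is nothing like Blue Brain’s real code - the three ‘neurons’ and the threshold rule are pure invention - but it shows the essential shape: a state pattern in memory, advanced in fixed discrete steps.

```python
# A toy brain: three neurons, each either firing (1) or quiet (0).
# The update rule is invented for illustration; only the shape of
# the loop matters - a discrete state, advanced in discrete steps.

def step(state):
    """Compute the next state from the current one."""
    # A neuron fires if at least one of the *other* neurons fired.
    return tuple(1 if (sum(state) - s) >= 1 else 0 for s in state)

state = (1, 0, 0)                 # the pattern loaded into memory
for tick in range(10):            # the real run: ~10 million ticks
    state = step(state)
    print(tick, state)
```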
Let’s run the computer simulation of Brian in his woodland glade for a minute. Simulated-Brian wanders around, thinking how pretty it all is. He picks up a flower and smells it, appreciates the scent. Life is good, he feels. There. 60 seconds and we ... stop.
The simulation in the computer advances one step every 6 microseconds. This is easily fast enough to simulate biological neurons correctly, since real neurons operate much more slowly. As the simulation advances during that minute, we write out each discrete state onto a vast disk store. How many states did we save? How many 6-microsecond slices are there in a minute? The answer is ten million. Each simulation-slice is a complex sequence of binary ones and zeros, like all computer data. Each slice represents all the neurons in simulated-Brian’s brain, plus the woodland glade objects, plus information about light and sound and so on. That’s just what a slice of a computer simulation actually is.
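The slice count is just arithmetic, worked out explicitly below; the figures come straight from the text.

```python
# How many 6-microsecond slices fit into one minute?
seconds = 60
step_microseconds = 6
slices = seconds * 1_000_000 // step_microseconds
print(f"{slices:,}")              # 10,000,000 - ten million slices
```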
Now that we have those 10 million slices, we don’t have to use the complex program which computed each slice from the previous one. Our 10 million slice database is like a reel of movie film. If we simply load each slice into the computer every 6 microseconds, the simulation runs as before - Brian wanders around the glade, thinks how pretty it is, picks up and smells a flower: life is good.
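Here is the ‘movie reel’ version as a sketch, reusing the toy step() from above. The first loop computes and records the slices; the second merely replays them - note that step() is never called again.

```python
# Record every slice once, then replay from the 'disk store'.
recorded = []
state = (1, 0, 0)
for tick in range(10):
    state = step(state)           # computing: the expensive part
    recorded.append(state)        # saving the slice

for state in recorded:            # replaying: no computation at all
    print(state)
```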
Still with us? Still happy with the argument? Informed opinion says: so far, so good. But now it starts to get seriously weird.
By running the simulation in a computer, we have decoupled the 'reality' of simulated-Brian and the simulated woodland glade from the laws of physics. We can now do things we could never do in our own reality.
If we run the simulation faster or slower (one slice every second?), a little thought will show that it makes no difference to the experience of simulated-Brian: nothing inside any slice records how long the computer waited before loading the next one.
What if we run the slices backwards, or out-of-order? Since each slice is a self-contained entity, structurally independent of every other slice, it will not matter in what order the slices are run: simulated-Brian has the same delightful walk in the wood regardless.
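Made concrete with the recorded toy slices from above: shuffle (or reverse) the replay order and every slice is still exactly the slice it was. Only our viewing order changes; nothing inside any slice does.

```python
import random

shuffled = recorded[:]            # the same slices, untouched
random.shuffle(shuffled)          # or shuffled.reverse() for backwards
for state in shuffled:            # simulated-Brian's walk, out of order
    print(state)
```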
OK, now a big one. What ‘value’ do we add by running the slices at all? After all, they already exist on the computer disk - all of them. Simply pulling slices into a computer, one after another, may help us make sense of the simulation: it brings the simulation into time-congruence with our own linear experience. But it can make no difference to the simulation itself. Just by having all ten million slices on the disk, we have somehow smeared a minute of simulated-Brian’s time out into pure space. It’s hard for us to imagine that on that disk simulated-Brian is - ‘in parallel’ - having that one-minute experience, but he must be.
Stay with it, it gets even weirder as we finally get to the promised Dust Theory.
What’s special about any particular simulation-slice on the disk? It’s just a pattern of magnetism on the disk surface. Although we didn’t labour the point, when a slice gets transferred to the computer its physical form changes several times: first into a sequence of electromagnetic pulses on the connecting cables, then into some physical structure in computer memory. Geographical position and physical encoding change wildly, yet the pattern stays the same. If we had run the simulation on a global cluster of computers, with one slice loaded in England and the next onto a computer in California, the simulation would have worked just the same.
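A small sketch of that point, using standard-library encodings as stand-ins for disk, cable and memory - the carriers differ, the pattern survives.

```python
import base64, pickle

on_disk = pickle.dumps((1, 0, 0))             # 'magnetism on a platter'
on_the_wire = base64.b64encode(on_disk)       # 'pulses on a cable'
in_memory = pickle.loads(base64.b64decode(on_the_wire))

print(in_memory == (1, 0, 0))                 # True: the pattern is the same
```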
So why do we need a computer at all? The universe is a big place with a lot of material in it, often structured in complex patterns. Suppose that all over the universe there were patterns of material which, just by chance, were precise encodings of the ten million slices of simulated-Brian in his simulated woodland glade. Then by their very existence, simulated-Brian would have his woodland glade experience. You and I would never know that - to us it all just looks like random piles of dust - but Brian would nevertheless be there, having that experience.
The universe is a truly big place, and complex. Probably every pattern of any complexity is out there in the dust somewhere. There are collections of patterns which exactly mirror the pattern of your neurons over all the lives you could ever lead. Even as you read this, there are many, many versions of you in the universe, encoded as simulations in the dust, and unaware that they are simulations. Perhaps you are one of those simulations - you could never disprove a sufficiently accurate one.
That’s Dust Theory.
Greg Egan used this as the basis of his book Permutation City and has written an essay on Dust Theory here.