Showing posts with label Dust Theory. Show all posts

Sunday, July 11, 2010

Entropy and Memory

It’s overwhelmingly probable that all of your memories are false.

Consider:

Entropy is a measure of the disorder of a system. The higher the entropy, the higher the disorder.

If a deck of cards is ordered by suit, and within each suit by ascending rank, then that's a low entropy state. This is because out of the 8.06 * 10^67 (i.e., 52!) possible unique arrangements of the cards in a standard 52 card deck, there are only 24 that fit that particular description (one for each of the 4! = 24 possible orderings of the four suits).

A “random looking” arrangement of the deck is a high entropy state, because there are trillions of unique arrangements of a standard 52 card deck that will fit the description of looking “randomly shuffled”.
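To make the counting concrete, here's a quick Python check of the numbers above, treating the log of the number of matching arrangements as a Boltzmann-style entropy (measured in bits):

```python
from math import factorial, log2

total = factorial(52)      # all distinct orderings of a 52-card deck
ordered = factorial(4)     # 24: one fully sorted deck per ordering of the suits

# Entropy of a macrostate ~ log of the number of microstates that fit it
print(f"total arrangements: {total:.3e}")          # ~8.066e+67
print(f"'sorted' entropy:   {log2(ordered):.1f} bits")
print(f"'shuffled' entropy: {log2(total):.1f} bits")
```

The "sorted by suit and rank" macrostate has an entropy of about 4.6 bits, versus roughly 226 bits for the deck as a whole, which is why a sorted deck counts as a low entropy state.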

The same goes for an egg. There are (relatively) few ways to arrange the molecules of an egg that will result in it looking unbroken, compared to the huge number of ways that will result in it looking broken. SO, unbroken egg…low entropy. Broken egg…high entropy.

AND the same with the universe…there are (again, relatively) few ways to arrange the atoms of the universe in a way that makes it resemble what we see with people and trees and planets and stars and galaxies, compared with the gargantuan number of ways to arrange things so that it resembles a generic looking cloud of dust.

OKAY. Now.

Of the relatively few ways that the elementary particles of the universe can be arranged so as to resemble what we see around us today, only a tiny fraction of those particle arrangements will have values for momentum and position that are consistent with them having arrived at that state 13.7 billion years after something like the Big Bang.

The vast majority of the particle arrangements that macroscopically resemble the world around us will *instead* have particles in states (e.g., with positions and velocities) that are consistent with the particles having previously been in something more like a giant dust cloud.

By which I mean: If we take their current positions and velocities, and work backwards to see where they came from, and go back far enough in time, eventually we will not arrive at the Big Bang. Instead we will arrive at a state resembling a giant dust cloud (probably a very thin, spread-out dust cloud).
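Here's a toy sketch of that "work backwards" idea, using force-free particles whose dynamics are trivially reversible. (This is a stand-in only; real retrodiction would involve the full laws of physics and vastly more state.) The point is that current positions and velocities fully determine where the particles came from:

```python
# Three free particles drifting in one dimension
positions = [0.0, 10.0, -3.5]
velocities = [1.0, -2.0, 0.5]
dt, steps = 0.1, 1000

def evolve(pos, vel, dt, steps):
    """Advance positions under constant velocities for `steps` timesteps."""
    for _ in range(steps):
        pos = [x + v * dt for x, v in zip(pos, vel)]
    return pos

later = evolve(positions, velocities, dt, steps)                 # run forward
recovered = evolve(later, [-v for v in velocities], dt, steps)   # run "backward"

# Negating the velocities and re-running the same rule retrodicts
# the earlier state (up to floating-point roundoff)
assert all(abs(a - b) < 1e-9 for a, b in zip(recovered, positions))
```

For the real universe the same logic applies: given exact positions and velocities, the microscopic laws let you trace the state back, and for most configurations that trace leads to a diffuse cloud rather than a Big Bang.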

SO, bottom line:

Out of all the possible configurations that the universe could be in, ones that have people, and planets, and stars, and galaxies are extremely rare.

Further, even if we then only consider those extremely rare possible configurations that have people, and planets, and stars, and galaxies – the ones with particles in states (e.g., with positions and velocities) that are consistent with having arrived at this configuration 13.7 billion years after something like the Big Bang are STILL rare.

We don’t know the exact state of our universe’s particles, but in statistical mechanics the Principle of Indifference requires us to treat all possible microscopic states that are consistent with our current macroscopic state as equally likely.

So given all of the above, and our current knowledge of the laws of physics, the most likely explanation is that all of your current memories are false and that yesterday the universe was in a HIGHER state of entropy, not a lower state (as would be required by any variation of the Big Bang theory).

Physical systems with low states of entropy are very rare, by definition. So it’s very improbable (but not impossible) that the unlikely low entropy state of the universe of today is the result of having evolved from an EVEN MORE UNLIKELY lower entropy universe that existed yesterday.

Instead, statistically it’s overwhelmingly more probable that the unlikely low entropy state of the universe today is the result of a random fluctuation from a HIGHER entropy universe that existed yesterday.

And thus your memories of a lower entropy yesterday are most likely due to this random fluctuation, not due to yesterday actually having had a lower entropy than today.

[Based on my reading of Sean Carroll's book "From Eternity to Here"]

Sunday, February 15, 2009

The irrelevance of causality

Preserving some older stuff here:

In my opinion, causality is a physical implementation detail whose specifics vary from system to system, and even from possible universe to possible universe, but which is ultimately not important to the experience of consciousness.

So in the previous post my goal was to show that mappings from a dust cloud to a brain are as valid as mappings from a computer simulation to a brain. And I'm making the assumption that an accurate computer simulation of a brain would produce consciousness just as an actual brain would.


It's difficult to say much about dust cloud dynamics, whereas it's relatively easy to talk about how computers work. So assuming that there is an equivalence between computers and dust clouds, from here forward I'll mainly talk about computers.

So, returning to the previously mentioned computer simulation, the simulation consists of two parts: data and program. The data describes a brain in arbitrarily fine detail; the program describes the steps that should be taken to change the data over time in such a way as to maintain a consistent mapping to a real brain that is also evolving over time.

A physical computer that implements a simulation basically sets up a system of physical events that when chained together, map the input data (brain at time slice t1) to a set of output data (brain at time slice t2). The "computation" is just a mapping process, or an arbitrarily long sequence of mapping processes.

Consider the boolean logic gates that make up a digital computer. A NAND gate, for example. Any physical system that takes two inputs that can be interpreted as "0" or "1" and maps those inputs to some sort of output that also can be interpreted as "0" or "1", and does so in such a way that two "1" inputs will produce a "0" output and all other combinations of inputs will produce a "1" output, must be said to implement the computation defined by the boolean NAND operation.

In a digital computer, this might be done by combining two NMOS transistors and two PMOS transistors in such a way that the direction of current flow at the output line is interpreted as "0" or "1". BUT, you could also implement this same operation using dominos, with the state of the "output domino" (fallen or standing) indicating "0" or "1". Or you could do it with water, pipes, and valves, with the direction of water flow indicating "0" or "1" at the output pipe.
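As a concrete sketch, here's the NAND mapping written out in Python. The substrate doesn't matter (transistors, dominos, water valves); anything realizing this truth table counts as a NAND gate, and NAND alone is functionally complete, so the other gates fall out as compositions:

```python
def nand(a: int, b: int) -> int:
    """Any physical process with this input->output mapping implements NAND."""
    return 0 if (a == 1 and b == 1) else 1

# NAND is functionally complete: NOT, AND, and OR are just compositions of it
def not_(a):    return nand(a, a)
def and_(a, b): return nand(nand(a, b), nand(a, b))
def or_(a, b):  return nand(nand(a, a), nand(b, b))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", nand(a, b))   # only (1, 1) yields 0
```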

Note that there doesn't need to be just two discrete values for input and output, "0" and "1". The range of values for input and output just have to be mappable to "0" and "1".

Also note, that we only need the mapping of input to output to hold for the times that we rely on it to produce correct values. We don't care if a year later the NAND gate implementation has broken. We don't care if a day later it no longer works. We don't care if 1 second later the mapping of inputs to outputs by the physical system no longer holds. All we care about is that at the time we needed it to do our NAND calculation, the mapping held and the NAND implementation produced the correct results (regardless of why it produced the correct results).

Okay, so we have a lot of data that describes a brain, and we have a program which describes in abstract terms the sequence of steps necessary to transform the brain data over time in such a way as to maintain a consistent mapping to an actual brain. And we want to run our program on a computer.

There are many, many types of computers, with a large range of architectures, that would be capable of running our simulation. And depending on which we choose, we will end up with a wide variety of physical representations for the data, and also a wide variety of execution methods for the program.

We could run the simulation on a powerful digital computer, with the data stored as bits in RAM, and the program executed on one processor sequentially or on many processors in parallel. Or we could run the simulation on a huge scaled up version of a Babbage Analytical Engine with millions of punch cards. Or we could print out the source code and the data and execute the program by hand, using a pencil and paper to store and update various values in memory (similar to Searle's Chinese Room scenario). OR we could even construct something like a mechanical brain whose structure mimics the structure of an actual human brain, with mechanical neurons paralleling the operation of actual neurons, and also with analogues for neurotransmitters and glial cells and all the rest.

In all of these cases, the causal structure of the executing simulation would be vastly different from case to case. And yet, if there always existed a mapping from the simulation back to the original human brain, then I would assume that the simulation was accurate and was resulting in subjective experience for the simulated consciousness.

In fact, due to things like optimizing compilers, out-of-order-execution circuitry, and branch-prediction circuitry, not to mention automatic parallelization and various forms of hyperthreading, PLUS the added causal interruptions due to preemptive multitasking -- the actual causal structure of the executing program might bear relatively little resemblance to what you would expect from examining the source code of the simulation program.

Also note that we could do things to optimize the simulation execution like cache intermediate results in lookup tables to avoid recomputing frequently occurring values, OR even restructure the entire simulation program in a way that is mathematically equivalent to the original and produces equivalent output, but which in fact shares none of the original program's causal structure.
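A minimal sketch of the caching point, using Python's functools.lru_cache. The update function here is an arbitrary stand-in, not an actual neuron model; the point is that repeated calls to the memoized version take a different causal path (a table lookup rather than a computation), yet the input-to-output mapping is identical:

```python
from functools import lru_cache

def slow_update(x):
    # Stand-in for an expensive per-neuron state calculation
    return (x * x + 1) % 97

@lru_cache(maxsize=None)
def cached_update(x):
    # First call with a given x computes; repeats are read from a lookup table
    return slow_update(x)

inputs = [5, 12, 5, 12, 5]
assert [slow_update(i) for i in inputs] == [cached_update(i) for i in inputs]
```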

A final scenario:

Say that we are running our simulation on a digital computer. The simulation is doing the calculations necessary to transform the brain state data at t1 into the state at t2. At a crucial moment, a cosmic ray zings in from outer space and disrupts the flow of electrons in the CPU that is doing an important calculation, and so the calculation is not done. HOWEVER, by sheer coincidence, the correct output value that would have been produced is already on the output lines of the boolean logic gates that provide the data to be written to memory, and indeed this random, but in this case correct, value is written to memory, and the simulation goes on as if nothing improper had happened.

Now, in this case, the causal chain was broken, but due to an unlikely but not impossible stroke of good fortune, the integrity of the simulation was maintained, and presumably consciousness was still produced. Obviously the simulated consciousness wouldn't notice anything amiss, because noticing something amiss would require a change of data. And no data was changed.

So the bottom line for all of the above is that it is possible to think of many different scenarios where the causal structure differs, and even of examples where the causal chains that form the structure are broken, but as long as the correct outputs are produced in such a way that the mapping from the simulation to the original brain holds, then I think that consciousness would still result from the simulation.

From this I conclude that causality is an implementation detail of the system used to calculate the outputs, and that any system (even those that involve random breaks in the causal chain) that produces outputs that can be mapped to a human brain, will produce consciousness in the same way that the human brain does. Only the outputs matter. Which is to say that only the information matters.

Being able to follow the causal chain of the simulation is important for interpreting the outputs of the simulation, for having confidence that the simulation is actually running correctly, AND for knowing how to feed inputs into the simulation (assuming that the simulated consciousness isn't living in a simulated world which provides the inputs).

So causality is critical to us in viewing, interpreting, and interacting with the simulation.

HOWEVER, I don't see that causality is important in producing consciousness.

More on Dust Theory

"Dust Theory" is really no different than the idea that an accurate computer simulation of your brain would be conscious. The exact same reasoning holds. If you say that Dust Theory in this case is incorrect, then I think you're also saying that an accurate computer simulation of your brain would not be conscious either, despite the fact that it would respond to inputs in exactly the same way as your real brain would (a necessary condition of being an "accurate" simulation).


As I mentioned, the state of the dust cloud particles evolves in time according to the laws of physics. There is a causal connection between the state of the dust cloud at time t1 and any subsequent time (t2). Why? Because it's the same cloud of dust with particles drifting around affecting each other, in the same way that the particles of your brain drift around and affect each other.

But, taking the total state of the dust cloud at time t2, you should be able to work back to the state of the cloud at t1 (setting aside possible problems due to quantum indeterminacy). Starting at t2 you would follow the causal chain of each particle back and eventually find out where it was at t1. You would need a massive amount of processing power to do this, of course, due to the n-body problem. But this is a thought experiment, so assume that we have that processing power.

So, as is true of the dust cloud, a computer simulation of your brain is "accurate" if there exists a mapping of the state of the computer simulation to the state of your brain at a given time. A simulation run on a digital computer proceeds in arbitrarily small, discrete time slices, so at the end of each time slice you would compare the state of the computer to the state of your brain, and if there was a consistent mapping, then the computer is accurately simulating your consciousness.
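Here's a toy sketch of that idea (the update rule and the encoding are invented purely for illustration): a "brain" state advances in discrete slices, a simulation holds the same state in a different physical encoding, and a decode() mapping is checked at every slice boundary:

```python
# Toy sketch (invented update rule): a "brain" as a tuple of neuron
# activations advanced in discrete time slices, and a simulation that
# stores the same state in a different encoding (scaled integers).
def brain_step(state):
    return tuple((a + b) % 100 for a, b in zip(state, state[1:] + state[:1]))

def sim_step(encoded):
    # The same rule, applied to the scaled encoding
    return tuple((a + b) % 1000 for a, b in zip(encoded, encoded[1:] + encoded[:1]))

def encode(state):
    return tuple(10 * a for a in state)

def decode(encoded):
    return tuple(e // 10 for e in encoded)

brain = (3, 14, 15, 92)
sim = encode(brain)
for _ in range(50):
    brain = brain_step(brain)
    sim = sim_step(sim)
    # The simulation is "accurate" iff this mapping holds at each slice boundary
    assert decode(sim) == brain
```

The simulation's internal representation never equals the brain state bit-for-bit; what makes it accurate is that the mapping between the two is maintained at every slice.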

In this case, if you agree that the computer simulation is conscious, then where does consciousness exist in the simulation process? We do the calculations for each time slice one processor instruction at a time. There are probably billions of processor instructions executed per time slice. Which processor instructions cause "consciousness" to occur?

Keep in mind that the processor may be interrupted many times to run programs other than the brain simulation. So at any time in the process of calculating the next time slice, the processor might be interrupted and diverted to another program's calculations for an arbitrary length of time. When the computer returns to the simulation it will pick up where it left off, and the integrity and accuracy of the simulation will not be affected.

This is equivalent to the dust cloud *only sometimes* being in a state that maps to the state of your brain. So the dust cloud spends 10 minutes drifting in a configuration that maps to the state of your brain. Then it drifts out of synch and the dust cloud particles no longer map to the state of your brain. A billion years go by with the dust cloud particles out of synch. Finally the dust cloud again drifts into a configuration that maps to the state of your brain for another 10 minutes. And so on. There is a causal connection between the two 10 minute interludes in the same way that there is causal continuity with the computer simulation even when the computer is occasionally diverted to execute other programs due to the demands of preemptive multitasking.

Also note that the speed of the computer has no impact on how the simulated consciousness perceives the passage of time. If the computer takes a year to compute 1 minute of subjective time for the simulated consciousness, that will still only feel like 1 minute for the consciousness being simulated. Conversely, if the computer runs faster than the human brain and it only takes 1 minute to compute 1 year of subjective time for the simulated consciousness, that year will still feel like a year to the simulated consciousness, even though it actually only took 1 minute of "external" time.
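A minimal illustration of that speed-independence, with an arbitrary deterministic update standing in for the brain simulation: pacing the very same computation differently in wall-clock time leaves the computed state history, and hence anything "experienced" within it, untouched:

```python
import time

def simulate(initial, steps, delay):
    """Compute the same state sequence, pausing `delay` seconds per step."""
    history, state = [initial], initial
    for _ in range(steps):
        state = (state * 31 + 7) % 1000   # arbitrary deterministic update rule
        history.append(state)
        time.sleep(delay)
    return history

fast = simulate(42, 20, 0.0)    # runs nearly instantly
slow = simulate(42, 20, 0.01)   # takes ~0.2 seconds of "external" time
assert fast == slow             # the simulated history is identical
```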

So, I think this pretty much shows that the basic idea of Dust Theory is correct, IF you accept that an accurate computer simulation of a brain would also be conscious. If you don't accept that a computer simulation would be conscious, then I have a whole separate set of arguments I can make on that subject (ha!).

So, this is just step 1 of my multi-step response to Pete and Teed. I also want to address the non-causal-chain case of Dust Theory, and also Teed's Hypothesis of Extended Cognition. But I need to go eat supper.

Again, Hans Moravec covers a lot of this in:

http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html

Consciousness

A couple of interesting debates on consciousness, functionalism, platonism, computationalism, dust theory, etc. in the comments of these two threads:

Guide To Reality
AND:
The Splintered Mind

That Allen guy is a freaking genius!

Dust Theory

A great post describing Dust Theory:

I guess there are lots of things dust theory could be about: household dust and its ability to find hidey-holes resistant to all attempts at cleaning; galactic theories of stellar dust clouds and planetary accretion; the so-called ‘smart dust’ of military science-fiction, an extended, self-organising sensor-computing network. But Dust Theory is none of these things.
There are some concepts which seem strange beyond imagining, yet are difficult-to-impossible to refute.

The idea that the whole universe began one second ago, with everyone’s “memories” pre-built in.

The idea that time doesn’t flow at all, that all times simply pre-exist and that each of our conscious thoughts of “now” are simply cross-sections of that greater space-time bloc-universe.

The ontological argument, which proves that a God-like being must exist.

The Doomsday Argument, which uses statistical reasoning to show that the great age of human civilisation is drawing to an end quite soon (e.g. within 10,000 years).
The Dust Theory we are going to talk about is like one of those.

Computer simulation of brain functioning is coming along apace. IBM’s “Blue Brain” project (website here) is modelling brain neurons on its Blue Gene supercomputer. Let’s project this work only a few years into the future. IBM have programmed a complex computer object which represents Blue Brain’s chief programmer, called Brian, plus a mini-simulated world, a woodland glade, for simulated-Brian to walk around in.

When we run the simulation on IBM’s supercomputer, simulated-Brian is having just the same experiences as he ‘walks’ around the simulated woodland glade as real-Brian would have walking around a similar, but real, woodland glade. I think we can trust IBM’s programmers to get the simulated neurons working exactly the same as real ones.

Simulated-Brian doesn’t know he’s simulated - after all, his thinking exactly duplicates that of the real-Brian. Their two thought processes would only diverge if they had different experiences. For example, simulated-Brian might try to exit the glade and find there is no reality beyond it. That would surely give him much food for thought!

A computer works by loading a data structure, or pattern, into memory and then updating it in discrete steps. What seems like a continuous flow of thinking in simulated-Brian’s “brain” is, at the microsecond-scale, a series of distinct updates to individual simulated neurons. Of course, the same thing is true in our brains as well. Neurons either fire or they don’t and all thinking is the result of billions of these discrete neuron-events.

Let’s run the computer simulation of Brian in his woodland glade for a minute. Simulated-Brian wanders around, thinking how pretty it all is. He picks up a flower and smells it, appreciates the scent. Life is good, he feels. There. 60 seconds and we ... stop.

The simulation in the computer advances one step every 6 microseconds. This is easily fast enough to correctly simulate biological neurons, which operate much more slowly. As the simulation advances during that minute, we write out each discrete state onto a vast disk store. How many states did we save? How many 6 microsecond slices are there in a minute? The answer is ten million. Each simulation-slice is a complex sequence of binary ones and zeros, like all computer data. Each simulation-slice represents all the neurons in simulated-Brian’s brain plus the woodland glade objects + information about light and sound and so on. That’s just what a slice of a computer simulation actually is.
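As a quick check of the quoted post's arithmetic:

```python
seconds = 60
slice_us = 6                                  # one simulation step every 6 microseconds
slices = seconds * 1_000_000 // slice_us
print(slices)                                 # 10000000: ten million slices per minute
```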

Now that we have those 10 million slices, we don’t have to use the complex program which computed each slice from the previous one. Our 10 million slice database is like a reel of movie film. If we simply load each slice into the computer every 6 microseconds, the simulation runs as before - Brian wanders around the glade, thinks how pretty it is, picks up and smells a flower: life is good.

Still with us? Still happy with the argument? Informed opinion is that so far so good. But now it starts to get seriously weird.

By running the simulation in a computer, we have decoupled the 'reality' of simulated-Brian and the simulated woodland glade from the laws of physics. We can now do things we could never do in our own reality.

If we run the simulation faster or slower (one slice every second?) a little thought will show that it makes no difference to the experience of simulated Brian.

What about if we run the slices backwards, or out-of-order? Since each slice is a self-contained entity which is structurally independent of any other slice, then it will not matter in what order the slices are run: simulated-Brian has the same delightful walk in the wood regardless.

OK, now a big one. What ‘value’ do we add by running the slices at all? After all, they already exist on the computer disk - all of them. Simply pulling slices into a computer, one after another, may help us make sense of the simulation. It’s then brought into time-congruence with our own linear experience. But it can make no difference to the simulation itself. Just by having all the ten million slices on the disk, we have somehow smeared a minute of simulated-Brian’s time into purely space. It’s hard for us to imagine that on that disk, simulated-Brian is - ‘in parallel’ - having that one minute experience, but he must be.

Stay with it, it gets even weirder as we finally get to the promised Dust Theory.

What’s special about any particular simulation-slice on the disk? It’s just a pattern of magnetism on the disk surface. Although we didn’t labour the point, when a slice gets transferred to the computer its physical form changes several times: first into a sequence of electromagnetic pulses on the connecting cables, then into some physical structure in computer memory. Geographical position and physical encoding were wildly different, yet the pattern was the same. If we had run the simulation on a global cluster of computers, with one slice in England and the next loaded onto a computer in California, the simulation would have worked just the same.

So why do we need a computer at all? The universe is a big place with a lot of material in it, often structured in complex patterns. Suppose that all over the universe there were patterns of material which, just by chance, were precise encodings of the ten million slices of simulated-Brian in his simulated woodland glade. Then by their very existence, simulated-Brian would have his woodland glade experience. You and I would never know that - to us it all just looks like random piles of dust - but Brian would nevertheless be there, having that experience.

The universe is a truly big place, and complex. Probably every pattern of any complexity is out there in the dust somewhere. There are collections of patterns which exactly mirror the pattern of your neurons over all the lives you could ever lead. Even as you read this, there are many, many versions of you in the universe, encoded as simulations in the dust, and unaware that they are simulations. Perhaps you are one of those simulations - you could never disprove a sufficiently accurate one.

That’s Dust Theory.

Greg Egan used this as the basis of his book Permutation City and has written an essay on Dust Theory here.