Sunday, February 15, 2009

The irrelevance of causality

Preserving some older stuff here:

In my opinion, causality is a physical implementation detail whose specifics vary from system to system, and even from possible universe to possible universe, but which is ultimately not important to the experience of consciousness.

So in the previous post my goal was to show that mappings from a dust cloud to a brain are as valid as mappings from a computer simulation to a brain. And I'm making the assumption that an accurate computer simulation of a brain would produce consciousness just as an actual brain would.


It's difficult to say much about dust cloud dynamics, whereas it's relatively easy to talk about how computers work. So, assuming that there is an equivalence between computers and dust clouds, from here forward I'll mainly talk about computers.

So, returning to the previously mentioned computer simulation: the simulation consists of two parts, data and program. The data describes a brain in arbitrarily fine detail; the program describes the steps that should be taken to change the data over time in such a way as to maintain a consistent mapping to a real brain that is also evolving over time.

A physical computer that implements a simulation basically sets up a system of physical events that, when chained together, map the input data (brain at time slice t1) to a set of output data (brain at time slice t2). The "computation" is just a mapping process, or an arbitrarily long sequence of mapping processes.
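To make the idea of "computation as mapping" concrete, here's a minimal sketch in Python (the dictionary representation of a brain state and the decay rule inside the step function are purely illustrative assumptions, not a claim about how a real simulator would work):

    # A brain state is just data; here, a toy mapping of neuron
    # names to activation levels. Any representation works, as
    # long as it can be mapped back to the real brain's state.
    def step(state):
        # Map the brain state at time slice t1 to the state at t2.
        # The rule here is a placeholder: a real simulator would
        # apply whatever rules keep the mapping to an actual brain
        # consistent over time.
        return {neuron: activation * 0.9
                for neuron, activation in state.items()}

    state_t1 = {"n1": 0.5, "n2": 1.0}  # brain at time slice t1
    state_t2 = step(state_t1)          # brain at time slice t2

Chaining step calls gives the "arbitrarily long sequence of mapping processes".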

Consider the boolean logic gates that make up a digital computer. Take a NAND gate, for example. Any physical system that takes two inputs that can be interpreted as "0" or "1" and maps those inputs to some sort of output that can also be interpreted as "0" or "1", and does so in such a way that two "1" inputs produce a "0" output and all other combinations of inputs produce a "1" output, must be said to implement the computation defined by the boolean NAND operation.
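As a sanity check, that input-to-output table can be written out directly (a Python sketch; any physical system that reproduces this table, by whatever means, implements NAND):

    def nand(a, b):
        # Output is "0" only when both inputs are "1".
        return 0 if (a == 1 and b == 1) else 1

    # The full truth table:
    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", nand(a, b))
    # 0 0 -> 1
    # 0 1 -> 1
    # 1 0 -> 1
    # 1 1 -> 0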

In a digital computer, this might be done by combining two NMOS transistors and two PMOS transistors in such a way that the direction of current flow at the output line is interpreted as "0" or "1". BUT, you could also implement this same operation using dominos, with the state of the "output domino" (fallen or standing) indicating "0" or "1". Or you could do it with water, pipes, and valves, with the direction of water flow indicating "0" or "1" at the output pipe.

Note that there doesn't need to be just two discrete values for input and output, "0" and "1". The ranges of values for input and output just have to be mappable to "0" and "1".
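For instance (a hedged sketch, with an arbitrary 2.5-volt threshold standing in for whatever the real mapping would be), a continuous voltage counts as a valid NAND input once a mapping to "0" and "1" is applied:

    def to_bit(voltage, threshold=2.5):
        # Map a continuous physical value onto the abstract "0" or "1".
        return 1 if voltage >= threshold else 0

    def nand(a, b):
        return 0 if (a == 1 and b == 1) else 1

    # Analog voltages still implement NAND, because after the
    # mapping they produce the correct output bits.
    assert nand(to_bit(4.8), to_bit(0.3)) == 1  # maps to (1, 0) -> 1
    assert nand(to_bit(4.8), to_bit(4.9)) == 0  # maps to (1, 1) -> 0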

Also note that we only need the mapping of input to output to hold for the times that we rely on it to produce correct values. We don't care if a year later the NAND gate implementation has broken. We don't care if a day later it no longer works. We don't care if 1 second later the mapping of inputs to outputs by the physical system no longer holds. All we care about is that at the time we needed it to do our NAND calculation, the mapping held and the NAND implementation produced the correct results (regardless of why it produced the correct results).

Okay, so we have a lot of data that describes a brain, and we have a program which describes in abstract terms the sequence of steps necessary to transform the brain data over time in such a way as to maintain a consistent mapping to an actual brain. And we want to run our program on a computer.

There are many, many types of computers, with a large range of architectures, that would be capable of running our simulation. And depending on which we choose, we will end up with a wide variety of physical representations for the data, and also a wide variety of execution methods for the program.

We could run the simulation on a powerful digital computer, with the data stored as bits in RAM, and the program executed on one processor sequentially or on many processors in parallel. Or we could run the simulation on a huge scaled-up version of Babbage's Analytical Engine with millions of punch cards. Or we could print out the source code and the data and execute the program by hand, using a pencil and paper to store and update various values in memory (similar to Searle's Chinese Room scenario). OR we could even construct something like a mechanical brain whose structure mimics the structure of an actual human brain, with mechanical neurons paralleling the operation of actual neurons, and also with analogues for neurotransmitters and glial cells and all the rest.

In all of these cases, the causal structure of the executing simulation would be vastly different from case to case. And yet, if there always existed a mapping from the simulation back to the original human brain, then I would assume that the simulation was accurate and was resulting in subjective experience for the simulated consciousness.

In fact, due to things like optimizing compilers, out-of-order execution circuitry, and branch-prediction circuitry, not to mention automatic parallelization and various forms of hyperthreading, PLUS the added causal interruptions due to preemptive multitasking -- the actual causal structure of the executing program might bear relatively little resemblance to what you would expect from examining the source code of the simulation program.

Also note that we could do things to optimize the simulation's execution, like caching intermediate results in lookup tables to avoid recomputing frequently occurring values, OR even restructuring the entire simulation program in a way that is mathematically equivalent to the original and produces equivalent output, but which in fact shares none of the original program's causal structure.
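A caching optimization like that might look something like this (a sketch using Python's functools.lru_cache; the synapse_response function and its formula are made-up stand-ins for some expensive, frequently repeated calculation):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def synapse_response(weight, activation):
        # Frequently occurring input pairs are looked up from the
        # cache rather than recomputed. The outputs, and therefore
        # the mapping back to the brain, are identical; only the
        # causal path that produced them differs.
        return weight * activation  # stand-in for an expensive calculation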

A final scenario:

Say that we are running our simulation on a digital computer. The simulation is doing the calculations necessary to transform the brain state data at t1 into the state at t2. At a crucial moment, a cosmic ray zings in from outer space and disrupts the flow of electrons in the CPU that is doing an important calculation, and so the calculation is not done. HOWEVER, by sheer coincidence, the correct output value that would have been produced is already on the output lines of the boolean logic gates that provide the data to be written to memory, and indeed this random, but in this case correct, value is written to memory, and the simulation goes on as if nothing improper had happened.

Now, in this case, the causal chain was broken, but due to an unlikely but not impossible stroke of good fortune, the integrity of the simulation was maintained, and presumably consciousness was still produced. Obviously the simulated consciousness wouldn't notice anything amiss, because noticing something amiss would require a change of data. And no data was changed.
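The point can be restated in code (a toy sketch; the random "cosmic ray" branch and its probability are purely illustrative). Whether the value written to memory came from the calculation or from a lucky accident is invisible to the rest of the simulation, because the resulting data is identical either way:

    import random

    def compute_next_value(state):
        return state + 1  # stand-in for the "important calculation"

    def step_with_possible_cosmic_ray(state, output_register):
        if random.random() < 1e-6:
            # Cosmic ray: the calculation is skipped, and whatever
            # value happens to sit on the output lines gets written.
            return output_register
        return compute_next_value(state)

    # If the stale register coincidentally holds the correct value,
    # the written data is the same no matter which branch executed:
    # the simulation cannot tell the difference.
    assert step_with_possible_cosmic_ray(41, output_register=42) == 42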

So the bottom line for all of the above is that it is possible to think of many different scenarios where the causal structure differs, and even of examples where the causal chains that form the structure are broken, but as long as the correct outputs are produced in such a way that the mapping from the simulation to the original brain holds, then I think that consciousness would still result from the simulation.

From this I conclude that causality is an implementation detail of the system used to calculate the outputs, and that any system (even those that involve random breaks in the causal chain) that produces outputs that can be mapped to a human brain, will produce consciousness in the same way that the human brain does. Only the outputs matter. Which is to say that only the information matters.

Being able to follow the causal chain of the simulation is important for interpreting the outputs of the simulation, for having confidence that the simulation is actually running correctly, AND for knowing how to feed inputs into the simulation (assuming that the simulated consciousness isn't living in a simulated world which provides the inputs).

So causality is critical to us in viewing, interpreting, and interacting with the simulation.

HOWEVER, I don't see that causality is important in producing consciousness.
