Sunday, February 15, 2009

Platonic Reality and Simulation

Okay, let's say we take a "brain simulator" (my primary assumption in this post being that such a thing is possible) and load the digitized version of your brain into it and run it. We can ask the simulated "you" questions and it should respond as you would have. Seems theoretically feasible, right?
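To make the premise concrete, here's a toy sketch of what I mean by a "brain simulator": a bag of state plus a deterministic rule that advances it one time slice at a time. To be clear, this little `make_brain`/`step` network is entirely made up for illustration; a real brain simulator would be vastly more complicated, but nothing in the argument depends on those details.

    import math
    import random

    def make_brain(n_neurons, seed):
        """A toy 'digitized brain': neuron activations plus fixed connection weights."""
        rng = random.Random(seed)
        return {
            "activations": [rng.uniform(-1, 1) for _ in range(n_neurons)],
            "weights": [[rng.uniform(-0.5, 0.5) for _ in range(n_neurons)]
                        for _ in range(n_neurons)],
        }

    def step(brain):
        """Advance every neuron by one time slice, deterministically."""
        acts = brain["activations"]
        brain["activations"] = [
            math.tanh(sum(w * a for w, a in zip(row, acts)))
            for row in brain["weights"]
        ]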


Now that we've made sure the "brain simulator" works, let's go around and collect a bunch of data. Say, from the thermal vibrations of rocks. Or the water droplet distribution of clouds. Or the pattern of cosmic rays hitting a sensor. Once we've collected a few hundred terabytes of this data, let's feed it into our "brain simulator" and start it.
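Just to show that "feed arbitrary data in" is mechanically trivial, here's a sketch building on the toy simulator above. The decoding scheme is arbitrary and hypothetical; `os.urandom` stands in for our measured rock or cloud data.

    import os
    import struct

    def bytes_to_brain(data, n):
        """Decode arbitrary bytes into the toy simulator's state format."""
        count = n + n * n              # n activations plus an n-by-n weight matrix
        vals = struct.unpack(f"<{count}d", data[: count * 8])
        vals = [math.tanh(v) if math.isfinite(v) else 0.0 for v in vals]  # tame junk values
        return {
            "activations": vals[:n],
            "weights": [vals[n + i * n : n + (i + 1) * n] for i in range(n)],
        }

    # Stand-in for measured rock/cloud/cosmic-ray data.
    raw = os.urandom((50 + 50 * 50) * 8)
    brain = bytes_to_brain(raw, 50)
    for _ in range(100):
        step(brain)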

The vast majority of the time, we won't get anything meaningful out of the simulator...it'll just be garbage. BUT, if we collect enough data and try enough times, eventually we'll find a dataset that will produce a "person" when we run it in our brain simulator.

The person will have memories of a life that has nothing to do with the source of the data. He won't remember being in a cloud or wherever the data came from. What he remembers will be determined by what "brain" is described by the data. Maybe he'll remember being an accountant in Chicago, or maybe he'll have kind of garbled memories. Garbled memories are more likely, since we found him in random data, but if we keep trying out data sets, eventually we could find someone with a fully coherent set of memories...maybe strange memories, but coherent.

SO, these people we find...they're presumably as real to themselves as you are to yourself (if our starting assumption that a brain can be accurately simulated is correct). Their pasts are as real to them as your past is to you. And their "data" existed out in the world before you measured it and plugged it into your simulator. SO, did they exist before you found them? Were they able to pull themselves out of the "cloud water droplet patterns" in the same way you are able to pull yourself out of the "neuron firing patterns" or the "interactions of atoms and molecules" of your brain?

So when we pulled the data from the cloud or wherever, we just pulled one time slice and started our simulation from that. But in the same way that you can calculate different time slices of the brain simulation on different processors in a computer, or even on completely different computers, maybe the next time slice of the person we found was in the next cloud over?
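That "different processors, different computers" point is easy to make concrete with the toy simulator: the state between slices is just data, so any machine holding it can compute the next slice. A sketch (the json checkpoint here is just one arbitrary way to move state around):

    import json

    brain_a = make_brain(50, seed=2009)
    for _ in range(50):
        step(brain_a)                  # slices 1-50 on "computer A"

    checkpoint = json.dumps(brain_a)   # the state between slices is just data
    brain_b = json.loads(checkpoint)   # "computer B" picks it up
    for _ in range(50):
        step(brain_b)                  # slices 51-100 computed elsewhere

    # brain_b now holds exactly the state a single machine
    # running all 100 slices would have.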

How important is causality in moving between states? Does there have to be a causal link between one state and the next? I'm not sure why there would have to be. Seems like the relevant information is in the states. How you move from one informational state to another is irrelevant.

Continuing:

So if you run the brain simulator 10 times, using the same starting data and the same inputs (and assuming the simulation is deterministic), the person being simulated will experience EXACTLY the same thing, 10 times. Right?
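With the toy simulator this is easy to check directly: since the update rule is deterministic, the same starting state means bit-for-bit identical trajectories. A real brain simulator would need the same property for this claim to hold.

    def run(seed, slices):
        brain = make_brain(50, seed)
        for _ in range(slices):
            step(brain)
        return brain["activations"]

    runs = [run(2009, 100) for _ in range(10)]
    assert all(r == runs[0] for r in runs)   # ten bit-for-bit identical runs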

Now, we save the inputs and outputs of every neuron when we do the first run.

On the second run we take ONE neuron and we don't actually do the calculations for it on each time slice. Instead, for each set of inputs to that neuron, we just look up the outputs it produced on the first run and pass those along to the downstream neurons. But for all the other neurons we do the calculations as before. Does the person being simulated still experience the same thing?
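Here's a sketch of that substitution using the toy simulator, with one simplification: the hypothetical `step_with_lookup` indexes the recorded log by time slice rather than by each neuron's input set, which comes to the same thing on a deterministic replay. The later runs just grow the `lookup` set.

    def step_with_lookup(brain, t, log, lookup):
        """One time slice where neurons in `lookup` replay recorded outputs."""
        acts = brain["activations"]
        new_acts = []
        for i, row in enumerate(brain["weights"]):
            if i in lookup:
                new_acts.append(log[t][i])   # no arithmetic, just a lookup
            else:
                new_acts.append(math.tanh(sum(w * a for w, a in zip(row, acts))))
        brain["activations"] = new_acts

    # Run 1: compute every neuron and record its output at every slice.
    brain = make_brain(50, seed=2009)
    log = []
    for t in range(100):
        step(brain)
        log.append(list(brain["activations"]))
    reference = brain["activations"]

    # Run 2: neuron 0 is a lookup; runs 3-10 just grow the lookup set.
    brain = make_brain(50, seed=2009)
    for t in range(100):
        step_with_lookup(brain, t, log, lookup={0})
    assert brain["activations"] == reference   # indistinguishable from run 1

The outputs are identical in every case, no matter how many neurons we switch over, which is exactly what makes the question bite.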

Now on the third run, we take 1000 neurons and just do lookups for their outputs on each time slice. All other neurons still do the same calculations as for the first run. Does the person being simulated still experience the same thing?

Now on the fourth run, we take ONE MILLION neurons and just do lookups. Does the person being simulated still experience the same thing?

Now on the NINTH run, we do lookups on all EXCEPT 1 neuron, which we actually calculate the outputs for. Does the person being simulated still experience the same thing?

Now on the TENTH run, for each time slice we don't do any "neural" calculations. All we do is lookups. Does the person being simulated still experience the same thing?

So it's easy to say that the first, second, and third runs are accurate simulations...but the ninth and tenth runs? What exactly are we doing here that simulates a person?

But if the 10th run doesn't simulate anybody, did the ninth? The eighth? At what point do we stop simulating the person? Or do we ever stop simulating the person?

Obviously, if after the 10th run we take its output, pick up where it stopped, and resume simulating so that we can ask the person being simulated how the 10th run felt, they're not going to report that anything was different between the 1st run and the 10th.

SO: This is what I'm talking about...by taking small, reasonable steps, we went from a relatively unridiculous starting point (that it's possible to simulate a brain on a computer) to a rather ridiculous conclusion. BUT each step in the chain of reasoning seems relatively straightforward, I think. Do you?

Assuming that it's possible to simulate a brain (and thus a mind) on a computer, the rest seems to follow, right? Or wrong?
