Friday, March 13, 2009

Mind Reading

Interpreting neural activity:
"fMRI scanners enable us to see the bigger picture of what is happening in people's brains," she says. " By looking at activity over tens of thousands of neurons, we can see that there must be a functional structure – a pattern – to how these memories are encoded. Otherwise, our experiment simply would not have been possible to do."

Disequilibrium

Klein on Summers:
That, at least, is how it should go. But at the moment, the normal market processes are broken. "It was a central insight of Keynes’ General Theory that two or three times each century, the self-equilibrating properties of markets break down as stabilizing mechanisms are overwhelmed by vicious cycles. And the right economic metaphor becomes an avalanche rather than a thermostat. That is what we are experiencing right now." Summers went on to tick off five of the mechanisms that seem to be feeding on themselves rather than being tamed by the market's Econ 101 tendency towards balance:


* Declining asset prices lead to margin calls and de-leveraging, which leads to further declines in prices.

* Lower asset prices mean banks hold less capital. Less capital means less lending. Less lending means lower asset prices.

* Falling home prices lead to foreclosures, which lead home prices to fall even further.

* A weakened financial system leads to less borrowing and spending which leads to a weakened economy, which leads to a weakened financial system.

* Lower incomes lead to less spending, which leads to less employment, which leads to lower incomes.

Financial intermediation

Very interesting:
An intermediary can "add value" by reducing investors' risk in comparison to disintermediated investment, by, for example, investing in a better-diversified portfolio than an investor would. An intermediary can very effectively reduce liquidity risks to investors, again since idiosyncratic liquidity demands are themselves diversifiable. But risk reduction via these techniques can never reduce risk to zero. In fact, investing via an intermediary can never alter the fact that 100% of invested capital is at risk — business performance is not uncorrelated and projects can fail completely. Also, liquidity demands that are usually idiosyncratic occasionally become highly correlated, due to bank runs or real need for cash. Statistical attempts to quantify these risks are misleading at best, as the distributions from which inferences are drawn are violently nonstationary — the world is always changing, the past is never a great guide to the future for very long. Fundamentally, the value intermediaries can add by diversifying over investments and liquidity requirements is very modest, and ought to be acknowledged as such.
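
(To put a rough number on that "very modest": for n investments with uncorrelated returns and a common variance σ², the textbook result is that an equal-weight portfolio has variance

    \[ \operatorname{Var}\!\left( \frac{1}{n}\sum_{i=1}^{n} r_{i} \right) = \frac{\sigma^{2}}{n} \]

so idiosyncratic risk only shrinks like 1/√n, and any correlated, systematic component isn't diversified away at all — which is the quoted point that risk never goes to zero. Standard portfolio math, not anything from the quoted piece.)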

Wednesday, March 11, 2009

Adam Smith

On trust and markets:
[Adam] Smith explained why this kind of trust does not always exist. Even though the champions of the baker-brewer-butcher reading of Smith enshrined in many economics books may be at a loss to understand the present crisis (people still have very good reason to seek more trade, only less opportunity), the far-reaching consequences of mistrust and lack of confidence in others, which have contributed to generating this crisis and are making a recovery so very difficult, would not have puzzled him.

There were, in fact, very good reasons for mistrust and the breakdown of assurance that contributed to the crisis today. The obligations and responsibilities associated with transactions have in recent years become much harder to trace thanks to the rapid development of secondary markets involving derivatives and other financial instruments. This occurred at a time when the plentiful availability of credit, partly driven by the huge trading surpluses of some economies, most prominently China, magnified the scale of brash operations. A subprime lender who misled a borrower into taking unwise risks could pass off the financial instruments to other parties remote from the original transaction. The need for supervision and regulation has become much stronger over recent years. And yet the supervisory role of the government in the US in particular has been, over the same period, sharply curtailed, fed by an increasing belief in the self-regulatory nature of the market economy. Precisely as the need for state surveillance has grown, the provision of the needed supervision has shrunk.

A nation of jailers

Glenn Loury:
Nor is it merely the scope of the mass imprisonment state that has expanded so impressively in the United States. The ideas underlying the doing of criminal justice — the superstructure of justifications and rationalizations — have also undergone a sea change. Rehabilitation is a dead letter; retribution is the thing. The function of imprisonment is not to reform or redirect offenders. Rather, it is to keep them away from us. “The prison,” writes sociologist David Garland, “is used today as a kind of reservation, a quarantine zone in which purportedly dangerous individuals are segregated in the name of public safety.” We have elaborated what are, in effect, a “string of work camps and prisons strung across a vast country housing millions of people drawn mainly from classes and racial groups that are seen as politically and economically problematic.” We have, in other words, marched quite a long way down the punitive road, in the name of securing public safety and meting out to criminals their just deserts.
...
My recitation of the brutal facts about punishment in today’s America may sound to some like a primal scream at this monstrous social machine that is grinding poor black communities to dust. And I confess that these facts do at times leave me inclined to cry out in despair. But my argument is intended to be moral, not existential, and its principal thesis is this: we law-abiding, middle-class Americans have made collective decisions on social and incarceration policy questions, and we benefit from those decisions. That is, we benefit from a system of suffering, rooted in state violence, meted out at our behest. Put differently, our society — the society we together have made — first tolerates crime-promoting conditions in our sprawling urban ghettos, and then goes on to act out rituals of punishment against them as some awful form of human sacrifice.

Tuesday, March 10, 2009

Quantum Swampman

The original Swampman thought experiment is here.

An alternate version of "Swampman": Let's get rid of the lightning bolt (too dramatic), and just go with quantum fluctuations. Virtual particles. That sort of thing.

In this version, Davidson is vaporized by the unlikely but not impossible simultaneous quantum tunneling of all of his constituent particles for random distances in many random directions. BUT, by another completely unrelated and hugely unlikely (but not impossible) quantum event, a whole new set of exactly identical particles materialize from the background vacuum energy of space and Swampman is born. However, the process happened so rapidly that Swampman is not even aware that he isn't Davidson, and no one watching would have noticed the change either.

Now, 10 minutes go by, as Swampman continues his oblivious enjoyment of the murky swamp, and then suddenly the same thing happens again. Swampman is replaced by Swampman-2. And again it happens so fast that neither Swampman-2 nor any observers notice it.

Now, 10 seconds go by and Swampman-2 is replaced by Swampman-3, via the same mechanism. And 1 second later, the same thing again. And so on until you get down to whatever timeslice you like.

So in this case, neither Davidson/Swampman nor anyone else who is observing him OR interacting with him will have any idea that anything is amiss. And the reason that no one notices anything wrong is that there is a continuity, and a relationship, between the information that is represented by the particles of the various versions of the Swampmen and Davidson, EVEN IF there is no causal connection between the particles themselves.

Sunday, March 8, 2009

Bankers can't be trusted

Obviously.
There is another argument, implicit or explicit, for the nationalisation of banks; we cannot trust bankers not to leave with the cash, let alone spend any of the assistance provided by the government in the public interest. Two recent studies that analyse the experience of recent years show that bankers will not hesitate to enrich themselves at the expense of the public if they have the opportunity.

The Watchmen

A good review by Roger Ebert.

Which contained this link to a good YouTube video on quantum mechanics and electrons.

Wednesday, March 4, 2009

A shadow universe

A bizarre universe may be lurking in the shadows
Such hidden worlds might sound strange, but they emerge naturally from complex theories such as string theory, which attempts to mesh together the very small and the very large. Hidden worlds may, literally, be all around us. They could, in theory, be populated by a rich menagerie of particles and have their own forces. Yet we would be unaware of their existence because the particles interact extremely weakly with the familiar matter of our universe. Of late, physicists have been taking seriously the idea that particles from such hidden sectors could be dark matter.

The Sun

Ten Things You Didn't Know About The Sun:
When you go outside at night and look at the stars, almost all the stars you see are within 100 light years from us, and only a handful of the extreme brightest ones can be seen from farther away. If you were to pluck the Sun from the solar system and plop it down in some random location in our Galaxy, there's a better than 99.99999% chance it would be invisible to the naked eye.

Sunday, March 1, 2009

Dennett on Zombies

The Zombic Hunch:
Must we talk about zombies? Apparently we must. There is a powerful and ubiquitous intuition that computational, mechanistic models of consciousness, of the sort we naturalists favor, must leave something out–something important. Just what must they leave out? The critics have found that it’s hard to say, exactly: qualia, feelings, emotions, the what-it’s-likeness (Nagel)[13] or the ontological subjectivity (Searle)[14] of consciousness. Each of these attempts to characterize the phantom residue has met with serious objections and been abandoned by many who nevertheless want to cling to the intuition, so there has been a gradual process of distillation, leaving just about all the reactionaries, for all their disagreements among themselves, united in the conviction that there is a real difference between a conscious person and a perfect zombie–let’s call that intuition the Zombic Hunch–leading them to the thesis of Zombism: that the fundamental flaw in any mechanistic theory of consciousness is that it cannot account for this important difference.[15] A hundred years from now, I expect this claim will be scarcely credible, but let the record show that in 1999, John Searle, David Chalmers, Colin McGinn, Joseph Levine and many other philosophers of mind don’t just feel the tug of the Zombic Hunch (I can feel the tug as well as anybody), they credit it. They are, however reluctantly, Zombists, who maintain that the zombie challenge is a serious criticism. It is not that they don’t recognize the awkwardness of their position. The threadbare stereotype of philosophers passionately arguing about how many angels can dance on the head of a pin is not much improved when the topic is updated to whether zombies–admitted by all to be imaginary beings–are (1) metaphysically impossible, (2) logically impossible, (3) physically impossible, or just (4) extremely unlikely to exist. The reactionaries have acknowledged that many who take zombies seriously have simply failed to imagine the prospect correctly. For instance, if you were surprised by my claim that the Steinberg cartoon would be an equally apt metaphorical depiction of the goings on in a zombie’s head, you had not heretofore understood what a zombie is (and isn’t). More pointedly, if you still think that Chalmers and I are just wrong about this, you are simply operating with a mistaken concept of zombies, one that is irrelevant to the philosophical discussion. (I mention this because I have found that many onlookers, scientists in particular, have a hard time believing that philosophers can be taking such a preposterous idea as zombies seriously, so they generously replace it with some idea that one can take seriously–but one that does not do the requisite philosophical work. Just remember, by definition, a zombie behaves indistinguishably from a conscious being–in all possible tests, including not only answers to questions [as in the Turing test] but psychophysical tests, neurophysiological tests–all tests that any “third-person” science can devise.)

We Earth Neurons

Daniel Dennett:
Some years ago a friend of mine in the Peace Corps told me about his efforts on behalf of a tribe of gentle Indians deep in the Brazilian forest. I asked him if he had been required to tell them about the conflict between the USA and the USSR. Not at all, he replied. There would be no point in it. They had not only never heard of either America or the Soviet Union, they had never even heard of Brazil! Who would have guessed that it is still possible to be a human being living in, and subject to the laws of, a nation without the slightest knowledge of that fact? If we find this astonishing, it is because we human beings, unlike all other species on the planet, are knowers. We are the ones–the only ones–who have figured out what we are, and where we are, in this great universe. And we are even beginning to figure out how we got here.

These quite recent discoveries are unnerving, to say the least. What you are–what each of us is–is an assemblage of roughly a trillion cells, of thousands of different sorts. Most of these cells are “daughters” of the egg and sperm cell whose union started you (there are also millions of hitchhikers from thousands of different lineages stowed away in your body), but each cell is a mindless mechanism, a largely autonomous micro-robot, no more conscious than a bacterium, and not a single one of the cells that compose you knows who you are, or cares.

Saturday, February 21, 2009

Death is not an option

Glah. I don't like the sound of this (via Overcoming Bias)

Hypothetical Apostasy

An interesting proposal.

Quantum Computing and the Church-Turing Thesis

Another interesting article.
The Turing machine model seems to capture the entire concept of computability, according to the following thesis[62]:

Church-Turing Thesis: A Turing machine can compute any function computable by a reasonable physical device.

What does “reasonable physical device” mean? This thesis is a physical statement, and as such it cannot be proven. But one knows a physically unreasonable device when one sees it. Up till now there are no candidates for counterexamples to this thesis (but see Ref. [103]). All physical systems (including quantum systems) seem to have a simulation by a Turing Machine.
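
For anyone who hasn't seen one up close, here is a minimal sketch of the machine the thesis is about — a toy transition table of my own (not from the article) that flips every bit on the tape and halts at the first blank:

    def run_turing_machine(tape, transitions, state="start", blank="_"):
        # Sparse tape: position -> symbol; unwritten cells read as the blank symbol.
        cells = dict(enumerate(tape))
        head = 0
        while state != "halt":
            symbol = cells.get(head, blank)
            new_symbol, move, state = transitions[(state, symbol)]
            cells[head] = new_symbol
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Hypothetical example machine: flip every bit, halt at the first blank.
    flip_bits = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_turing_machine("1011", flip_bits))   # prints 0100_

The point of the thesis is that, as far as anyone can tell, nothing buildable in physics computes more functions than this little state-table-plus-tape model.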

Quantum Computing: Introduction

More on the relationship between quantum computing and classical computing. An interesting read.
Another thing which can be expressed in many different ways is information. For example, the two statements "the quantum computer is very interesting" and "l'ordinateur quantique est très intéressant" have something in common, although they share no words. The thing they have in common is their information content. Essentially the same information could be expressed in many other ways, for example by substituting numbers for letters in a scheme such as a -> 97, b -> 98, c -> 99 and so on, in which case the English version of the above statement becomes 116 104 101 32 113 117 97 110 116 117 109... . It is very significant that information can be expressed in different ways without losing its essential nature, since this leads to the possibility of the automatic manipulation of information: a machine need only be able to manipulate quite simple things like integers in order to do surprisingly powerful information processing, from document preparation to differential calculus, even to translating between human languages. We are familiar with this now, because of the ubiquitous computer, but even fifty years ago such a widespread significance of automated information processing was not foreseen.
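
The letter-to-number scheme in the quote (a -> 97, b -> 98, ...) is just ASCII, and the digits given really do spell out the start of the English sentence. A quick sanity check (my own toy snippet, not from the article):

    text = "the quantum computer is very interesting"
    codes = [ord(c) for c in text]              # letters -> numbers (ASCII)
    print(codes[:11])                           # [116, 104, 101, 32, 113, 117, 97, 110, 116, 117, 109]
    print("".join(chr(n) for n in codes))       # numbers -> letters recovers the sentence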

Quantum Computing vs. Turing Machines

SEP on Quantum Computing:
Although the original Church-Turing thesis involved the abstract mathematical notion of computability, physicists as well as computer scientists often interpret it as saying something about the scope and limitations of physical computing machines. For example, Wolfram (1985) claims that any physical system can be simulated (to any degree of approximation) by a universal Turing machine, and that complexity bounds on Turing machine simulations have physical significance. For example, if the computation of the minimum energy of some system of n particles requires at least an exponentially increasing number of steps in n, then the actual relaxation of this system to its minimum energy state will also take an exponential time. Aharonov (1998) strengthens this thesis (in the context of showing its putative incompatibility with quantum mechanics) when she says that a probabilistic Turing machine can simulate any reasonable physical device at polynomial cost. Further examples for this thesis can be found in Copeland (1996).

AND:
Theoretical as it may seem, the question "what is quantum in quantum computing?" has an enormous practical consequence. One of the embarrassments of quantum computing is the fact that, so far, only one algorithm has been discovered, namely Shor's, for which a quantum computer is significantly faster than any known classical one. It is almost certain that one of the reasons for this scarcity of quantum algorithms is related to the lack of our understanding of what makes a quantum computer quantum (see also Preskill 1998 and Shor 2004). As an ultimate answer to this question one would like to have something similar to Bell's (1964) famous theorem, i.e., a succinct crispy statement of the fundamental difference between quantum and classical systems, encapsulated in the non-commutative character of observables. Quantum computers, unfortunately, do not seem to allow such simple characterization. Observables—in the quantum circuit model there are only two, the preparation of the initial state and the observation of the final state, in the same basis, and of the same variable, at the end of the computation—are not as important here as in Bell's case since any measurement commutes with itself. The non-commutativity in quantum computing lies much deeper, and it is still unclear how to cash it into useful currency. Quantum computing skeptics (Levin 2003) happily capitalize on this puzzle: If no one knows why quantum computers are superior to classical ones, how can we be sure that they are, indeed, superior?

The Illusion of Time

This is a very good article. Reality is not what you think it is.
That Rovelli's approach yields the correct probabilities in quantum mechanics seems to justify his intuition that the dynamics of the universe can be described as a network of correlations, rather than as an evolution in time. "Rovelli's work makes the timeless view more believable and more in line with standard physics," says Dean Rickles, a philosopher of physics at the University of Sydney in Australia.

With quantum mechanics rewritten in time-free form, combining it with general relativity seems less daunting, and a universe in which time is fundamental seems less likely. But if time doesn't exist, why do we experience it so relentlessly? Is it all an illusion?

Yes, says Rovelli, but there is a physical explanation for it. For more than a decade, he has been working with mathematician Alain Connes at the Collège de France in Paris to understand how a time-free reality could give rise to the appearance of time. Their idea, called the thermal time hypothesis, suggests that time emerges as a statistical effect, in the same way that temperature emerges from averaging the behaviour of large groups of molecules (Classical and Quantum Gravity, vol 11, p 2899).

Imagine gas in a box. In principle we could keep track of the position and momentum of each molecule at every instant and have total knowledge of the microscopic state of our surroundings. In this scenario, no such thing as temperature exists; instead we have an ever-changing arrangement of molecules. Keeping track of all that information is not feasible in practice, but we can average the microscopic behaviour to derive a macroscopic description. We condense all the information about the momenta of the molecules into a single measure, an average that we call temperature.

According to Connes and Rovelli, the same applies to the universe at large. There are many more constituents to keep track of: not only do we have particles of matter to deal with, we also have space itself and therefore gravity. When we average over this vast microscopic arrangement, the macroscopic feature that emerges is not temperature, but time. "It is not reality that has a time flow, it is our very approximate knowledge of reality that has a time flow," says Rovelli. "Time is the effect of our ignorance."
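
For reference, the gas-in-a-box analogy is the standard kinetic-theory result that temperature is an average over molecular motion — for a monatomic ideal gas,

    \[ \tfrac{3}{2}\,k_{B} T = \left\langle \tfrac{1}{2} m v^{2} \right\rangle \]

and the thermal time hypothesis is proposing that time plays the analogous role when you coarse-grain over everything else, gravity included.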

Quark Star

Quark star may hold secret to early universe.
When supernovae explode, they leave behind either a black hole or a dense remnant called a neutron star. However, recent calculations suggest a third possibility: a quark star, which forms when the pressure falls just short of creating a black hole.

Friday, February 20, 2009

Limits of knowledge

A mathematical theory places limits on how much a physical entity can know about the past, present or future
Deep in the deluge of knowledge that poured forth from science in the 20th century were found ironclad limits on what we can know. Werner Heisenberg discovered that improved precision regarding, say, an object’s position inevitably degraded the level of certainty of its momentum. Kurt Gödel showed that within any formal mathematical system advanced enough to be useful, it is impossible to use the system to prove every true statement that it contains. And Alan Turing demonstrated that one cannot, in general, determine if a computer algorithm is going to halt.
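
Turing's halting result is worth sketching, since the whole argument fits in a few lines. Suppose a perfect halts(program, input) oracle existed — the stub below is purely hypothetical, which is exactly the point:

    def halts(program, data):
        # Hypothetical oracle; Turing showed no correct implementation can exist.
        raise NotImplementedError("no such decision procedure exists")

    def paradox(program):
        # If the oracle says `program` halts when fed its own source, loop forever;
        # otherwise halt immediately.
        if halts(program, program):
            while True:
                pass
        return "halted"

    # Asking whether paradox halts on its own source is contradictory whichever way
    # the oracle could answer, so the assumed halts() cannot exist.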

Thursday, February 19, 2009

A Quantum Threat to Special Relativity

Einstein
This conclusion turns everything upside down. Einstein, Bohr and everyone else had always taken it for granted that any genuine incompatibility between quantum mechanics and the principle of locality would be bad news for quantum mechanics. But Bell had now shown that locality was incompatible not merely with the abstract theoretical apparatus of quantum mechanics but with certain of its empirical predictions as well. Experimenters—in particular work by Alain Aspect of the Institute of Optics in Palaiseau, France, and his co-workers in 1981 and later—have left no doubt that those predictions are indeed correct. The bad news, then, was not for quantum mechanics but for the principle of locality—and thus, presumably, for special relativity, because it at least appears to rely on a presumption of locality.
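
The empirical prediction Bell extracted, in its most commonly tested (CHSH) form: any local hidden-variable account of the correlations E between detector settings a, a', b, b' must satisfy

    \[ \lvert E(a,b) - E(a,b') + E(a',b) + E(a',b') \rvert \le 2 \]

while quantum mechanics allows values up to 2√2 for entangled pairs — and Aspect-style experiments find the quantum value.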

Tuesday, February 17, 2009

F-22 Secrets Revealed

Cooooool...

February 12, 2009: The U.S. Air Force has released some performance data on the F-22. The stealthiness factor of the F-22 has turned out to be better than predicted. For radar purposes, the F-22 is about the size of a steel marble. The F-35 comes out as a steel golf ball.

Monday, February 16, 2009

On fundamental concepts

On the subject of what is "fundamental", it seems to me that any theory of reality has to have something fundamental at the foundation that is taken as a given.

With materialism, the foundation is energy, or maybe spacetime, or quantum fields, or some combination of all three. But unless you just accept the existence of these things as fundamental brute facts, the next question is obviously "What is energy?", or "Where did spacetime come from?", or "why does it work that way?". Even if you introduce a more basic concept (e.g., strings, or spin networks, or whatever), then you can ask the exact same questions about that new fundamental concept.

With a religious view, you say that some supreme being or supernatural force is at the foundation of reality. But this introduces the question of "What is God?" or "Where did God come from?" or "What is God's motivation?" or "How did God do these things?"

In my view, the best candidate for the fundamental core of reality is: information. With the extra assumption (which is well grounded I think) that certain types of information have conscious first person subjective experience (something similar to Chalmers' "double aspect theory of information", http://consc.net/papers/facing.html).

The idea that information exists independently of any physical substrate, and without needing a source, (as in Modal or Platonic Realism) is I think not too big a stretch. And once you take this as your fundamental basis of reality, there really are no other questions. Everything else follows.

This does lead one to conclude that most conscious observers see chaotic and nonsensical realities, because most possible information patterns are random-ish and chaotic. BUT, so be it. We have examples of such conscious observers right here in everyday life. People with schizophrenia, dementia, hallucinations, etc. All of these conditions are caused by disruptions in the information represented by the brain. Which is why I think that even starting with the assumption of physicalism, you're still led back to idealism.

And of course, you have experience of nonsensical realities yourself, when you dream. I would say the worlds we encounter in our dreams are just as real (or unreal) as the world we see when we are awake, BUT we don't spend much time there, and when we wake our memories of the dream worlds fade and lose intensity. So we give them subordinate status.

So, to summarize, I would say that every possible conscious observer exists in a reality of their own perceptions. And every perceivable reality (both hellish and heavenly) IS perceived by every observer capable of perceiving it. And the reason for this is that the information for these perceptions exists in a platonic sense.

The universe

Martin Rees on mathematics:

String theory involves scales a billion billion times smaller than any we can directly probe. At the other extreme, our cosmological theories suggest that the universe is vastly more extensive than the patch we can observe with our telescopes. It may even be infinite. The domain that astronomers call "the universe" - the space, extending more than 10 billion light years around us and containing billions of galaxies, each with billions of stars, billions of planets (and maybe billions of biospheres) - could be an infinitesimal part of the totality.

There is a definite horizon to direct observations: a spherical shell around us, such that no light from beyond it has had time to reach us since the big bang. However, there is nothing physical about this horizon. If you were in the middle of an ocean, it is conceivable that the water ends just beyond your horizon - except that we know it doesn't. Likewise, there are reasons to suspect that our universe - the aftermath of our big bang - extends hugely further than we can see.

That is not all: our big bang may not be the only one. An idea called eternal inflation developed largely by Andrei Linde at Stanford University in Palo Alto, California, envisages big bangs popping off, endlessly, in an ever-expanding substratum. Or there could be other space-times alongside ours - all embedded in a higher-dimensional space. Ours could be but one universe in a multiverse.

Free will

St. Augustine:

Who can embrace wholeheartedly what gives him no delight? But who can determine for himself that what will delight him should come his way, and, when it comes, that it should, in fact, delight him?

Sunday, February 15, 2009

The irrelevance of causality

Preserving some older stuff here:

In my opinion, causality is a physical implementation detail whose specifics vary from system to system, and even from possible universe to possible universe, but which is ultimately not important to the experience of consciousness.

So in the previous post my goal was to show that mappings from a dust cloud to a brain are as valid as mappings from a computer simulation to a brain. And I'm making the assumption that an accurate computer simulation of a brain would produce consciousness just as an actual brain would.


It's difficult to say much about dust cloud dynamics, whereas it's relatively easy to talk about how computers work. So assuming that there is an equivalence between computers and dust clouds, from here forward I'll mainly talk about computers.

So, returning to the previously mentioned computer simulation, the simulation consists of two parts: data and program. The data describes a brain in arbitrarily fine detail, the program describes the steps that should be taken to change the data over time in such a way as to maintain a consistent mapping to a real brain that is also evolving over time.

A physical computer that implements a simulation basically sets up a system of physical events that when chained together, map the input data (brain at time slice t1) to a set of output data (brain at time slice t2). The "computation" is just a mapping process, or an arbitrarily long sequence of mapping processes.

Consider the boolean logic gates that make up a digital computer. A NAND gate for example. So any physical system that takes two inputs that can be interpreted as "0" or "1" and maps those inputs to some sort of output that also can be interpreted as "0" or "1", and does so in such a way that two "1" inputs will produce a "0" output and all other combinations of inputs will produce a "1" output, must be said to implement the computations defined by the boolean NAND operation.

In a digital computer, this might be done by combining two NMOS transistors and two PMOS transistors in such a way that the direction of current flow at the output line is interpreted as "0" or "1". BUT, you could also implement this same operation using dominos, with the state of the "output domino" as fallen or standing indicating "0" or "1". Or you could do it with water, pipes, and valves with the direction of water flow indicating "0" or "1" at the output pipe.
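
To make that concrete: any physical system realizing the little input/output table below counts as a NAND gate, whatever the substrate — transistors, dominoes, or water pipes. (Toy Python sketch, obviously not a claim about any particular hardware.)

    def nand(a, b):
        # The whole truth table: only (1, 1) maps to 0; every other input pair maps to 1.
        return 0 if (a, b) == (1, 1) else 1

    # NAND is functionally complete, so the other boolean operations follow from it:
    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", nand(a, b))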

Note that there doesn't need to be just two discrete values for input and output, "0" and "1". The range of values for input and output just have to be mappable to "0" and "1".

Also note, that we only need the mapping of input to output to hold for the times that we rely on it to produce correct values. We don't care if a year later the NAND gate implementation has broken. We don't care if a day later it no longer works. We don't care if 1 second later the mapping of inputs to outputs by the physical system no longer holds. All we care about is that at the time we needed it to do our NAND calculation, the mapping held and the NAND implementation produced the correct results (regardless of why it produced the correct results).

Okay, so we have a lot of data that describes a brain, and we have a program which describes in abstract terms the sequence of steps necessary to transform the brain data over time in such a way as to maintain a consistent mapping to an actual brain. And we want to run our program on a computer.

There are many, many types of computers, with a large range of architectures, that would be capable of running our simulation. And depending on which we choose, we will end up with a wide variety of physical representations for the data, and also a wide variety of execution methods for the program.

We could run the simulation on a powerful digital computer, with the data stored as bits in RAM, and the program executed on one processor sequentially or on many processors in parallel. Or we could run the simulation on a huge scaled up version of a Babbage Analytical Engine with millions of punch cards. Or we could print out the source code and the data and execute the program by hand, using a pencil and paper to store and update various values in memory (similar to Searle's Chinese Room scenario). OR we could even construct something like a mechanical brain whose structure mimics the structure of an actual human brain, with mechanical neurons paralleling the operation of actual neurons, and also with analogues for neurotransmitters and glial cells and all the rest.

In all of these cases, the causal structure of the executing simulation would be vastly different from case to case. And yet, if there always existed a mapping from the simulation back to the original human brain, then I would assume that the simulation was accurate and was resulting in subjective experience for the simulated consciousness.

In fact, due to things like optimizing compilers, out-of-order-execution circuitry, and branch-prediction circuitry, not to mention automatic parallelization and various forms of hyperthreading, PLUS the added causal interruptions due to preemptive multitasking -- the actual causal structure of the executing program might bear relatively little resemblance to what you would expect from examining the source code of the simulation program.

Also note that we could do things to optimize the simulation execution like cache intermediate results in lookup tables to avoid recomputing frequently occurring values, OR even restructure the entire simulation program in a way that is mathematically equivalent to the original and produces equivalent output, but which in fact shares none of the original program's causal structure.
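
A sketch of that caching idea — memoize a toy stand-in for an expensive update step, so later calls become pure table lookups with identical outputs and no recomputation (all names here are illustrative):

    from functools import lru_cache

    calls = 0

    @lru_cache(maxsize=None)
    def neuron_response(inputs):
        # Hypothetical stand-in for an expensive update step; only the output matters.
        global calls
        calls += 1
        return sum(inputs) % 7

    print(neuron_response((1, 2, 3)))   # computed the slow way
    print(neuron_response((1, 2, 3)))   # identical output, served from the lookup table
    print(calls)                        # 1 -- the second call did no recomputation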

A final scenario:

Say that we are running our simulation on a digital computer. The simulation is doing the calculations necessary to transform the brain state data at t1 into the state at t2. At a crucial moment, a cosmic ray zings in from outer space and disrupts the flow of electrons in the CPU that is doing an important calculation, and so the calculation is not done. HOWEVER, by sheer coincidence, the correct output value that would have been produced is already on the output lines of the boolean logic gates that provide the data to be written to memory, and indeed this random, but in this case correct, value is written to memory, and the simulation goes on as if nothing improper had happened.

Now, in this case, the causal chain was broken, but due to an unlikely but not impossible stroke of good fortune, the integrity of the simulation was maintained, and presumably consciousness was still produced. Obviously the simulated consciousness wouldn't notice anything amiss, because noticing something amiss would require a change of data. And no data was changed.

So the bottom line for all of the above is that it is possible to think of many different scenarios where the causal structure differs, and even of examples where the causal chains that form the structure are broken, but as long as the correct outputs are produced in such a way that the mapping from the simulation to the original brain holds, then I think that consciousness would still result from the simulation.

From this I conclude that causality is an implementation detail of the system used to calculate the outputs, and that any system (even those that involve random breaks in the causal chain) that produces outputs that can be mapped to a human brain, will produce consciousness in the same way that the human brain does. Only the outputs matter. Which is to say that only the information matters.

Being able to follow the causal chain of the simulation is important in being able to interpret the outputs of the simulation, and also is important to being able to have confidence that the simulation is actually running correctly, AND is also important in terms of knowing how to feed inputs into the simulation (assuming that the simulated consciousness isn't living in a simulated world which provides the inputs).

So causality is critical to us in viewing, interpreting, and interacting with the simulation.

HOWEVER, I don't see that causality is important in producing consciousness.

More on Dust Theory

"Dust Theory" is really no different than the idea that an accurate computer simulation of your brain would be conscious. The exact same reasoning holds. If you say that Dust Theory in this case is incorrect, then I think you're also saying that an accurate computer simulation of your brain would not be conscious either, despite the fact that it would respond to inputs in exactly the same way as your real brain would (a necessary condition of being an "accurate" simulation).


As I mentioned, the state of the dust cloud particles evolves in time according to the laws of physics. There is a causal connection between the state of the dust cloud at time t1 and any subsequent time (t2). Why? Because it's the same cloud of dust with particles drifting around affecting each other, in the same way that the particles of your brain drift around and affect each other.

But, taking the total state of the dust cloud at time t2, you should be able to work back to the state of the cloud at t1 (setting aside possible problems due to quantum indeterminacy). Starting at t2 you would follow the causal chain of each particle back and eventually find out where it was at t1. Though, you would need a massive amount of processing power to do this of course, due to the n-body problem. But this is a thought experiment, so assume that we have that processing power.

So, as is true of the dust cloud, a computer simulation of your brain is "accurate" if there exists a mapping of the state of the computer simulation to the state of your brain at a given time. A simulation run on a digital computer is done in arbitrarily small, discrete time slices, so at the end of each time slice you would compare the state of the computer to the state of your brain, and if there was a consistent mapping, then the computer is accurately simulating your consciousness.
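
Stated as a toy predicate (illustrative only, nothing like a real brain model): the simulation counts as accurate if some fixed decoding of its state matches the reference state at every slice.

    def is_accurate(sim_states, brain_states, decode):
        # Accurate iff the decoded simulation state matches the reference at every slice.
        return all(decode(s) == b for s, b in zip(sim_states, brain_states))

    # Toy example: the simulator happens to store each value doubled; decoding halves it.
    sim_states   = [2, 4, 6, 8]
    brain_states = [1, 2, 3, 4]
    print(is_accurate(sim_states, brain_states, lambda s: s // 2))   # True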

In this case, if you agree that the computer simulation is conscious, then where does consciousness exist in the simulation process? We do the calculations for each time slice one processor instruction at a time. There are probably billions of processor instructions executed per time slice. Which processor instructions cause "consciousness" to occur?

Keep in mind that the processor may be interrupted many times to run programs other than the brain simulation. So at any time in the process of calculating the next time slice, the processor might be interrupted and diverted to another program's calculations for an arbitrary length of time. When the computer returns to the simulation it will pick up where it left off, and the integrity and accuracy of the simulation will not be affected.

This is equivalent to the dust cloud *only sometimes* being in a state that maps to the state of your brain. So the dust cloud spends 10 minutes drifting in a configuration that maps to the state of your brain. Then it drifts out of synch and the dust cloud particles no longer map to the state of your brain. A billion years go by with the dust cloud particles out of synch. Finally the dust cloud again drifts into a configuration that maps to the state of your brain for another 10 minutes. And so on. There is a causal connection between the two 10 minute interludes in the same way that there is causal continuity with the computer simulation even when the computer is occasionally diverted to execute other programs due to the demands of preemptive multitasking.

Also note that the speed of the computer has no impact on how the simulated consciousness perceives the passage of time. If the computer takes a year to compute 1 minute of subjective time for the simulated consciousness, that will still only feel like 1 minute for the consciousness being simulated. Conversely, if the computer runs faster than the human brain and it only takes 1 minute to compute 1 year of subjective time for the simulated consciousness, that year will still feel like a year to the simulated consciousness, even though it actually only took 1 minute of "external" time.

So, I think this pretty much shows that the basic idea of Dust Theory is correct, IF you accept that an accurate computer simulation of a brain would also be conscious. If you don't accept that a computer simulation would be conscious, then I have a whole separate set of arguments I can make on that subject (ha!).

So, this is just step 1 of my multi-step response to Pete and Teed. I also want to address the non-causal-chain case of Dust Theory, and also Teed's Hypothesis of Extended Cognition. But I need to go eat supper.

Again, Hans Moravec covers a lot of this in:

http://www.frc.ri.cmu.edu/~hpm/project.archive/general.articles/1998/SimConEx.98.html

Platonic Reality and Simulation

Okay, let's say we take a "brain simulator" (my primary assumption in this post being that such a thing is possible) and load the digitized version of your brain into it and run it. We can ask the simulated "you" questions and it should respond as you would have. Seems theoretically feasible, right?


Now that we've made sure the "brain simulator" works, lets go around and collect a bunch of data. Say, from the thermal vibrations of rocks. Or the water droplet distribution of clouds. Or the pattern of cosmic rays hitting a sensor. Once we've collected a few hundred terabytes of this data, let's feed it into our "brain simulator" and start it.

The vast majority of the time, we won't get anything meaningful out of the simulator...it'll just be garbage. BUT, if we collect enough data and try enough times, eventually we'll find a dataset that will produce a "person" when we run it in our brain simulator.

The person will have memories of a life which will have nothing to do with the source of the data. He won't remember being in a cloud or wherever the data came from. What he remembers will be determined by what "brain" is described by the data. Maybe he'll remember being an accountant in Chicago, or maybe he'll have kind of garbled memories. Garbled memories are more likely, since we found him in random data, but if we keep trying out data sets, eventually we could find someone with a fully coherent set of memories...maybe strange memories, but coherent.

SO, these people we find...they're presumably as real to themselves as you are to yourself (if our starting assumption that a brain can be accurately simulated is correct). Their pasts are as real to them as your past is to you. And their "data" existed out in the world before you measured it and plugged it into your simulator. SO, did they exist before you found them? Were they able to pull themselves out of the "cloud water droplet patterns" in the same way you are able to pull yourself out of the "neuron firing patterns" or the "interactions of atoms and molecules" of your brain?

So when we pulled the data from the cloud or wherever, we just pulled one time slice and started our simulation from that. But in the same way that you can calculate different time slices from the brain simulator on different processors in a computer, or even on completely different computers, maybe the next time slice of the person we found was the next cloud over?

How important is causality in moving between states? Does there have to be a causal link between one state and the next? I'm not sure why there would have to be. Seems like the relevant information is in the states. How you move from one informational state to another is irrelevant.

Continuing:

So if you run the brain simulator 10 times, using the same starting data and the same inputs, the person being simulated will experience EXACTLY the same thing, 10 times. Right?

Now, we save the inputs and outputs of every neuron when we do the first run.

On the second run we take ONE neuron and we don't actually do calculations for it on each time slice. Instead, for each set of inputs to that neuron, we just do a lookup to find the previous outputs and pass those along to the next neuron. But for all the other neurons we still do the calculations as before. Does the person being simulated still experience the same thing?

Now on the third run, we take 1000 neurons and just do lookups for their outputs on each time slice. All other neurons still do the same calculations as for the first run. Does the person being simulated still experience the same thing?

Now on the fourth run, we take ONE MILLION neurons and just do lookups. Does the person being simulated still experience the same thing?

Now on the NINTH run, we do lookups on all EXCEPT 1 neuron, which we actually calculate the outputs for. Does the person being simulated still experience the same thing?

Now on the TENTH run, for each time slice we don't do any "neural" calculations. All we do is lookups. Does the person being simulated still experience the same thing?
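
Here's a sketch of the replacement scheme in runs 2 through 10: record each toy "neuron's" input/output pairs on the first run, then substitute lookups for any subset of them later. The outputs are identical either way (everything below is illustrative, not a real neural model):

    def neuron(inputs):
        # Toy stand-in for a real neuron update rule.
        return sum(inputs) % 2

    def run(all_inputs, table=None, use_lookup=False):
        table = {} if table is None else table
        outputs = []
        for inp in all_inputs:
            if use_lookup:
                out = table[inp]        # replay the recorded output, no computation
            else:
                out = neuron(inp)
                table[inp] = out        # record for later runs
            outputs.append(out)
        return outputs, table

    inputs = [(1, 0, 1), (0, 0, 1), (1, 1, 1)]
    computed, table = run(inputs)                              # run 1: everything computed
    replayed, _ = run(inputs, table=table, use_lookup=True)    # run 10: lookups only
    print(computed == replayed)                                # True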

So it's easy to say that the first, and second, and third runs are accurate simulations...but the ninth and tenth runs? What exactly are we doing here that simulates a person?

But if the 10th run doesn't simulate anybody, did the ninth? Eighth? At what point do we stop simulating the person? Or do we stop simulating the person?

Obviously if after the 10th run, we take its output, pick up where it stopped and resume simulating so that we can ask the person being simulated how it felt in the 10th run, they're not going to know that anything was different between the 1st and the 10th run.

SO: This is what I'm talking about...by taking small, reasonable steps we went from a reasonable, relatively unridiculous starting point: that it's possible to simulate a brain on a computer, to a rather ridiculous conclusion, BUT the chain of reasoning seems to be relatively straightforward, I think. Do you?

Assuming that it's possible to simulate a brain (and thus a mind) on a computer, the rest seems to follow, right? Or wrong?

Time and Thermodynamics

Hypothetical situation. All of the particles in the universe kick into reverse and start going backwards. For some reason every particle in the universe instantaneously reverses course. And also space begins contracting instead of expanding. Everything in the universe hits a rubber wall and bounces back 180 degrees.


So now instead of expanding, everything is on an exact "rewind" mode, and we're headed back to the "Big Bang".

The laws of physics work the same in both directions...if you solve them forward in time, you can take your answers, reverse the equations and get your starting values, right?
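
Right — for velocity-independent forces that's just the time-reversal symmetry of the dynamics. Newton's equation

    \[ m\,\frac{d^{2}x}{dt^{2}} = F(x) \]

is unchanged by the substitution t -> -t (the second derivative picks up two sign flips), so reversing every velocity at one instant produces a perfectly legal trajectory that exactly retraces the old one.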

This is what they always go on about with the "arrow of time". The laws of physics work the same forwards and backwards in time. It's not impossible for an egg to "unscramble", it's just very very very very very unlikely. But if it did so, no laws of physics would be broken. And, in fact, if you wait long enough, it will eventually happen.

Okay, so everything has reversed direction. The actual reversal process is, of course, impossible. But after everything reverses, everything just plays out by the normal laws of physics. Only that one instant of reversal breaks the laws of physics.

TIME is still moving forward in the same direction as before. We didn't reverse time. We just reversed the direction of every particle.

So, now photons and neutrinos no longer shoot away from the sun; instead they shoot towards it, and when the photons, neutrinos, and gamma rays hit helium atoms, the helium atoms split back into individual hydrogen atoms and absorb some energy in the process. Again, no physical laws are broken, and time is moving forward.

Now, back on earth, everything is playing out in reverse as well. You breathe in carbon dioxide and absorb heat from your surroundings and use the heat to break the carbon dioxide into carbon and oxygen. You exhale the oxygen, and you turn the carbon into sugars, which you eventually return to your digestive tract where it's reconstituted into food, which you regurgitate onto your fork and place back onto your plate.

Okay. So, still no physical laws broken. Entropy is decreasing, but that's not impossible, no laws of physics are being broken.

In this case, it must happen because we perfectly reversed the trajectory of every particle in the universe.

NOW. Your brain is also working backwards. But exactly backwards from before. Every thought that you had yesterday, you will have again tomorrow, in reverse. You will unthink it.

My question is, what would you experience in this case? What would it be like to live in this universe where time is still going forward, but where all particles are retracing their steps precisely?

The laws of physics are still working exactly as before, but because all particle trajectories were perfectly reversed, everything is rolling back towards the big bang.

In my opinion, we wouldn't notice any difference. We would NOT experience the universe moving in reverse, we would still experience it moving forward exactly as we do now...we would still see the universe as expanding even though it was contracting, we would still see the sun giving off light and energy even though it was absorbing both. In other words, we would still see a universe with increasing entropy even though we actually would live in a universe with decreasing entropy.

And why would that be the case? Because our mental states determine what is the past for us and what is the future. There is no "external arrow of time". The arrow of time is internal. The past is the past because we remember it and because the neurons of our brains tell us that it has already happened to us. The future is the future because it's unknown, and because the neurons of our brains tell us that it will happen to us soon.

If there is an external arrow of time, it is irrelevant, because as this thought experiment shows, it doesn't affect the way we perceive time. Our internal mental state at any given instant determines what is the future and what is the past for us.

In fact, you could run the universe forwards and backwards as many times as you wanted like this. We would never notice anything. We would always perceive increasing entropy. For us, time would always move forward, never backwards.

My point being, as always, that our experience of reality is always entirely dependent on our brain state. We can't know ANYTHING about the universe that is not represented in the information of our brain state at any given instant.

Forwards or backwards, it's all just particles moving around, assuming various configurations, some of which give rise to consciousness.

Theory of Nothing

From Russell Standish's book "Theory of Nothing":

Computationalism is a possible model of observerhood; not only that an appropriately programmed computer might be conscious, but that we all, as conscious observers, are equivalent to some computer program as yet unknown.

2500 years ago, Plato introduced a theory in which there is a Plenitude of ideal forms, of which objects in the real world are imperfect copies.

Jurgen Schmidhuber introduced a different idea of the Plenitude[111], by considering the output of a small computer program called a universal dovetailer, an idea first suggested by Bruno Marchal[93]. The dovetailer creates all possible programs (which are after all finite strings of computer instructions), and after creating each program runs a step of that program, and a step of each previously created program. In this way, the universal dovetailer executes all possible computer programs, which to a computationalist like Schmidhuber includes our universe amongst many other possible universes.
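
The dovetailer's scheduling trick is easy to sketch: after creating program n, run one step of every program created so far, so each program eventually gets arbitrarily many steps even though there are infinitely many programs. (The toy generators below just stand in for enumerated programs; this isn't Schmidhuber's or Marchal's actual construction.)

    import itertools

    def make_program(n):
        # Toy generator standing in for the n-th program in some enumeration.
        def program():
            step = 0
            while True:
                yield "program %d, step %d" % (n, step)
                step += 1
        return program()

    def dovetail(limit):
        live = []
        for n in itertools.count():
            live.append(make_program(n))    # create the next program...
            for p in live:                  # ...then run one step of every program so far
                print(next(p))
            if n == limit:
                break

    dovetail(2)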

Many Worlds

Ha!
Here is a sample conversation between two Everettistas, who have fallen from a plane and are hurtling towards the ground without parachutes:

Mike: What do you think our chances of survival are?

Ron: Don't worry, they're really good. In the vast majority of possible worlds, we didn't even take this plane trip.

Consciousness

A couple of interesting debates on consciousness, functionalism, platonism, computationalism, dust theory, etc. in the comments of these two threads:

Guide To Reality
AND:
The Splintered Mind

That Allen guy is a freaking genius!

Dust Theory

A great post describing Dust Theory:

I guess there are lots of things dust theory could be about: household dust and its ability to find hidey-holes resistant to all attempts at cleaning; galactic theories of stellar dust clouds and planetary accretion; the so-called ‘smart dust’ of military science-fiction, an extended, self-organising sensor-computing network. But Dust Theory is none of these things.
There are some concepts which seem strange beyond imagining, yet are difficult-to-impossible to refute.

The idea that the whole universe began one second ago, with everyone’s “memories” pre-built in.

The idea that time doesn’t flow at all, that all times simply pre-exist and that each of our conscious thoughts of “now” are simply cross-sections of that greater space-time bloc-universe.

The ontological argument, which proves that a God-like being must exist.

The Doomsday Argument, which uses statistical reasoning to show that the great age of human civilisation is drawing to an end quite soon (e.g. within 10,000 years).
The Dust Theory we are going to talk about is like one of those.

Computer simulation of brain functioning is coming along apace. IBM’s “Blue Brain” project (website here) is modelling brain neurons on its Blue Gene supercomputer. Let’s project this work only a few years into the future. IBM have programmed a complex computer object which represents Blue Brain’s chief programmer, called Brian, plus a mini-simulated world, a woodland glade, for simulated-Brian to walk around in.

When we run the simulation on IBM’s supercomputer, simulated-Brian is having just the same experiences as he ‘walks’ around the simulated woodland glade as real-Brian would have walking around a similar, but real, woodland glade. I think we can trust IBM’s programmers to get the simulated neurons working exactly the same as real ones.

Simulated-Brian doesn’t know he’s simulated - after all, his thinking exactly duplicates that of the real-Brian. Their two thought processes would only diverge if they had different experiences. For example, simulated-Brian might try to exit the glade and find there is no reality beyond it. That would surely give him much food for thought!

A computer works by loading a data structure, or pattern, into memory and then updating it in discrete steps. What seems like a continuous flow of thinking in simulated-Brian’s “brain” is, at the microsecond-scale, a series of distinct updates to individual simulated neurons. Of course, the same thing is true in our brains as well. Neurons either fire or they don’t and all thinking is the result of billions of these discrete neuron-events.

Let’s run the computer simulation of Brian in his woodland glade for a minute. Simulated-Brian wanders around, thinking how pretty it all is. He picks up a flower and smells it, appreciates the scent. Life is good, he feels. There. 60 seconds and we ... stop.

The simulation in the computer advances one step every 6 microseconds. This is easily fast enough to correctly simulate biological neurons, which operate much more slowly. As the simulation advances during that minute, we write out each discrete state onto a vast disk store. How many states did we save? How many 6 microsecond slices are there in a minute? The answer is ten million. Each simulation-slice is a complex sequence of binary ones and zeros, like all computer data. Each simulation-slice represents all the neurons in simulated-Brian’s brain plus the woodland glade objects + information about light and sound and so on. That’s just what a slice of a computer simulation actually is.

Now that we have those 10 million slices, we don’t have to use the complex program which computed each slice from the previous one. Our 10 million slice database is like a reel of movie film. If we simply load each slice into the computer every 6 microseconds, the simulation runs as before - Brian wanders around the glade, thinks how pretty it is, picks up and smells a flower: life is good.

Still with us? Still happy with the argument? Informed opinion is that so far so good. But now it starts to get seriously weird.

By running the simulation in a computer, we have decoupled the 'reality' of simulated-Brian and the simulated woodland glade from the laws of physics. We can now do things we could never do in our own reality.

If we run the simulation faster or slower (one slice every second?) a little thought will show that it makes no difference to the experience of simulated Brian.

What about if we run the slices backwards, or out-of-order? Since each slice is a self-contained entity which is structurally independent of any other slice, then it will not matter in what order the slices are run: simulated-Brian has the same delightful walk in the wood regardless.

OK, now a big one. What ‘value do we add’ by running the slices at all? After all, they already exist on the computer disk - all of them. Simply pulling slices into a computer, one after another, may help us make sense of the simulation. It’s then brought into time-congruence with our own linear experience. But it can make no difference to the simulation itself. Just by having all the ten million slices on the disk, we have somehow smeared a minute of simulated-Brian’s time into purely space. It’s hard for us to imagine that on that disk, simulated-Brian is - ‘in parallel’ - having that one minute experience, but he must be.

Stay with it, it gets even weirder as we finally get to the promised Dust Theory.

What’s special about any particular simulation-slice on the disk? It’s just a pattern of magnetism on the disk surface. Although we didn’t labour the point, when a slice gets transferred to the computer its physical form changes several times: first into a sequence of electromagnetic pulses on the connecting cables, then into some physical structure in computer memory. Geographical position and physical encoding were wildly different, yet the pattern was the same. If we had run the simulation on a global cluster of computers, with one slice in England and the next loaded onto a computer in California, the simulation would have worked just the same.

So why do we need a computer at all? The universe is a big place with a lot of material in it, often structured in complex patterns. Suppose that all over the universe there were patterns of material which, just by chance, were precise encodings of the ten million slices of simulated-Brian in his simulated woodland glade. Then by their very existence, simulated-Brian would have his woodland glade experience. You and I would never know that - to us it all just looks like random piles of dust - but Brian would nevertheless be there, having that experience.

The universe is a truly big place, and complex. Probably every pattern of any complexity is out there in the dust somewhere. There are collections of patterns which exactly mirror the pattern of your neurons over all the lives you could ever lead. Even as you read this, there are many, many versions of you in the universe, encoded as simulations in the dust, and unaware that they are simulations. Perhaps you are one of those simulations - you could never disprove a sufficiently accurate one.

That’s Dust Theory.

Greg Egan used this as the basis of his book Permutation City and has written an essay on Dust Theory here.

Karl Marx

Okay, I'm going to start posting random stuff now.

First, Karl Marx on the Taiping Rebellion.