Wednesday, July 21, 2010

Time Without Becoming

Quentin Meillassoux

"I call 'facticity' the absence of reason for any reality; in other words, the impossibility of providing an ultimate ground for the existence of any being. We can only attain conditional necessity, never absolute necessity. If definite causes and physical laws are posited, then we can claim that a determined effect must follow. But we shall never find a ground for these laws and causes, except eventually other ungrounded causes and laws: there is no ultimate cause, nor ultimate law, that is to say, a cause or a law including the ground of its own existence. But this facticity is also proper to thought. The Cartesian Cogito clearly shows this point: what is necessary, in the Cogito, is a conditional necessity: if I think, then I must be. But it is not an absolute necessity: it is not necessary that I should think. From the inside of the subjective correlation, I accede to my own facticity, and so to the facticity of the world correlated with my subjective access to it. I do it by attaining the lack of an ultimate reason, of a causa sui, able to ground my existence.


That’s why I don’t believe in metaphysics in general: because a metaphysics always believes, in one way or another, in the principle of reason: a metaphysician is a philosopher who believes it is possible to explain why things must be what they are, or why they must necessarily change and perish as they do. I believe on the contrary that reason has to explain why things, and why becoming itself, can always become what they are not - and why there is no ultimate reason for this game. In this way, “factial speculation” is still a rationalism, but a paradoxical one: it is a rationalism which explains why things must be without reason, and how precisely they can be without reason. Figures are such necessary modalities of facticity - and non-contradiction is the first figure I deduce from the principle of factiality. This demonstrates that one can reason about the absence of reason - if the very idea of reason is subjected to a profound transformation, if it becomes a reason liberated from the principle of reason - or, more exactly: if it is a reason which liberates us from the principle of reason.

Now, my project consists of a problem which I don’t resolve in After Finitude, but which I hope to resolve in the future: it is a very difficult problem, one that I can’t rigorously set out here, but that I can sum up in this simple question: Would it be possible to derive, to draw from the principle of factiality, the ability of the natural sciences to know, by way of mathematical discourse, reality in itself, by which I mean our world, the factual world as it is actually produced by Hyperchaos, and which exists independently of our subjectivity? To answer this very difficult problem is a condition of a real resolution of the problem of ancestrality, and this constitutes the theoretical finality of my present work."

Monday, July 19, 2010

Re: The Irrationality of Physicalism

A response:

"And in either case the counter argument is the same, c.f. "The Evolution of Reason" by William S. Cooper."

AND, my response to the response:

Maybe. But it’s not a very good counterargument.

This is a long-ish response, but several quotes from the book account for much of its length.

So logic reduces to biology. Fine. And biology reduces to...what? Initial conditions and causal laws, that’s what.

So, from the "The Evolution of Reason":

“Evolution is not the law enforcer but the law giver - not so much a police force as a legislature. The laws of logic are not independent of biology but implicit in the very evolutionary processes that enforce them. The processes determine the laws.

If the latter understanding is correct, logical rules have no separate status of their own but are theoretical constructs of evolutionary biology. Logical theory ought then in some sense to be deducible entirely from biological considerations. The concept of scientific reduction is helpful in expressing that thought. In the received methodological terminology the idea of interest can be articulated as the following hypothesis.

REDUCIBILITY THESIS: Logic is reducible to evolutionary theory.”

So obviously evolution is not a law enforcer or a law giver. It isn’t a causal law, but rather a consequence of causal laws.

Cooper claims that logic reduces to evolutionary theory. And what does evolutionary theory reduce to? Initial conditions and fundamental causal laws acting on fundamental entities.

Assuming physicalism, the causal laws of our universe applied to a suitable set of initial conditions will, in time, exhibit features that we categorize as “evolutionary”. Some of these evolutionary processes may give rise to entities that have conscious experiences, and some of those conscious experiences will be of holding this, that, or the other beliefs about logic. But those beliefs are a result of fundamental laws acting on fundamental entities, and not associated with any sort of independently existing platonic standard of “logical reasoning”.

This is the gist of my post, and seems to be the main gist of his book. We do part company eventually though. I’ll save that part for last.


“‘How do humans manage to reason?’ Since the form of this question is the same as that of the first, it would be natural to attack it in a similar two-pronged fashion. [...] Somewhere in the latter part there would be talk of selective forces acting on genetic variation, of fitness, of population models, etc. [...] The laws of Reason should not be addressed independently of evolutionary theory, according to the thesis. Reasoning is different from all other adaptations in that the laws of logic are aspects of the laws of adaptation themselves. Nothing extra is needed to account for logic - only a drawing out of the consequences of known principles of natural selection.”

Selective forces? What would have caused those selective forces? What do these selective forces reduce to? Why these selective forces instead of some others?

Natural selection? Well, there are causally neutral “filters” (metaphorically speaking), but these metaphorical filters are as much a consequence of the universe’s initial conditions and causal laws as the organisms that are (metaphorically) selected.

Evolution is a consequence of causal laws, not a causal law itself. In this it is like the first law of thermodynamics - which is a consequence of the time invariance of the causal laws, not a causal law itself. Evolution and the first law of thermodynamics are descriptions of how things are, not explanations.

So as I said, if physicalism is true then the arguments that we present and believe are those entailed by the physics that underlies our experiences, and by nothing else.

In this view, evolution is also just a manifestation of those same underlying physical forces. And logic is merely an aspect of the experiences generated by the more fundamental activities of quarks and electrons.

In this vein, he says:

“If evolutionary considerations control the relevant aspects of decision behavior, and these determine in turn the rest of the machinery of logic, one can begin to discern the implicative chain that makes Reducibility Theory thinkable.


If the evolutionary control over the logic is indeed so total as to constrain it entirely, there is no need to perpetuate the fiction that logic has a life of its own. It is tributary to the larger evolutionary mechanism.”

All we have to do is add that the universe’s initial conditions and causal laws control the evolutionary considerations, and my point is practically made.

The main point of contention between my argument and Cooper’s is:

“In this way the general evolutionary tendency to optimize fitness turns out to imply, in and of itself, a tendency for organisms to be rational. Once this is shown there is no need to look for the source of logical principles elsewhere, for the logical behavior is shaped directly by the evolutionary forces acting on their own behalf. Because the biological processes expressed in the population models wholly entail the logical rules, and are sufficient to predict and explain rational behavior, no separate account of logic is needed.”

Optimize fitness? Again, evolution isn’t something imposed from outside the system, and it’s not a causal law. If the fitness of some group is optimized over time, that’s just a consequence of the system’s initial conditions and causal laws.

In a deterministic system, the rise of that group was destined to happen. In an indeterministic system, the rise of that group was a result of the interplay between the initial conditions, the deterministic part of the causal framework, and the outcome of the random coin flips.

So, he seems to imply that initial conditions and causal laws must give rise to rational actors. But as he says, there is no independent standard of rationality. Rationality is relative to the rules of the particular physical system. So the behaviors that a system most commonly gives rise to are, by definition, “rational”.

So rational is a meaningless label. In his formulation above it just means “whatever ends up being the most commonly manifested behaviors.”

But it’s not commonly manifested because it’s rational. Rather, it’s labeled rational because it’s commonly manifested.

Saturday, July 17, 2010

A Summer of Madness


“On July 5, 1996,” Michael Greenberg starts, “my daughter was struck mad.” No time is wasted on preliminaries, and Hurry Down Sunshine moves swiftly, almost torrentially, from this opening sentence, in tandem with the events that it tells of. The onset of mania is sudden and explosive: Sally, the fifteen-year-old daughter, has been in a heightened state for some weeks, listening to Glenn Gould’s Goldberg Variations on her Walkman, poring over a volume of Shakespeare’s sonnets till the early hours. Greenberg writes:

Flipping open the book at random I find a blinding crisscross of arrows, definitions, circled words. Sonnet 13 looks like a page from the Talmud, the margins crowded with so much commentary the original text is little more than a speck at the center.

Sally has also been writing singular, Sylvia Plath–like poems. Her father surreptitiously glances at these, finds them strange, but it does not occur to him that her mood or activity is in any way pathological. She has had learning difficulties from an early age, but she is now triumphing over these, finding her intellectual powers for the first time. Such exaltation is normal in a highly gifted fifteen-year-old. Or so it seems.

But, on that hot July day, she breaks—haranguing strangers in the street, demanding their attention, shaking them, and then suddenly running full tilt into a stream of traffic, convinced she can bring it to a halt by sheer willpower (with quick reflexes, a friend yanks her out of the way just in time).

Ultimate Explanations of the Universe

An excellent book by Michael Heller.


"The tendency to pursue 'ultimate explanations' is inherent in the mathematical and experimental method in yet another way (and another sense). Whenever the scientist faces a challenging problem, the scientific method requires him to never give up, never seek an explanation outside the method. If we agree - at least on a working basis - to designate as the universe everything that is accessible to the mathematical and experimental method, then this methodological principle assumes the form of a postulate which in fact requires that the universe be explained by the universe itself. In this sense scientific explanations are 'ultimate,' since they do not admit of any other explanations except ones which are within the confines of the method.

However, we must emphasise that this postulate and the sense of 'ultimacy' it implies have a purely methodological meaning, in other words they oblige the scientist to adopt an approach in his research as if other explanations were neither existent nor needed." - Michael Heller, The Totalitarianism of the Method.


"The longing to attain the ultimate explanation lingers in the implications of every scientific theory, even in a fragmentary theory of one part or aspect of the world. For why should only that part, that aspect of the world be comprehensible? It is only a part or an aspect of an entirety, after all, and if that entirety should be unexplainable, then why should only a tiny fragment thereof lend itself to explanation? But consider the reverse: if a tiny part were to elude explanation, it would leave a gap, rip a chasm, in the understanding of the entirety."


"Peter van Inwagen proposed a rather peculiar answer to the question why there exists anything at all. His reasoning is as follows: there may exist an infinite number of worlds full of diverse beings, but only one empty world. Therefore the probability of the empty world is zero, while the probability of a (non-empty) world is one.

This apparently simple reasoning is based on very strong and essentially arbitrary assumptions. First of all, that there may exist an infinite number of worlds (that they have at least a potential existence); secondly, that probability theory as we know it may be applied to them (in other words that probability theory is in a sense aprioristic with respect to these worlds); and thirdly, that they come into being on the principle of 'greater probability.' The following question may be put with respect to this mental construct: 'Why does it exist, rather than nothing?'"

Friday, July 16, 2010

The Irrationality of Physicalism

If Physicalism is true, then the belief in Physicalism can’t be rationally justified.

If physicalism is true, then our beliefs and experiences are a result of the universe’s initial conditions and causal laws (which may have a probabilistic aspect).

Therefore, assuming physicalism, we don’t present or believe arguments for reasons of logic or rationality. Instead, the arguments that we present and believe are those entailed by the physics that underlies our experiences.

It is *possible* that we live in a universe whose initial conditions and causal laws are such that our arguments *are* logical. But in a physicalist framework that’s not why we present or believe those arguments. The fact that the arguments may be logical is superfluous to why we make or believe them.

Obviously there’s nothing that says that our physically generated experiences and beliefs have to be true or logical. In fact, we have dreams, hallucinations, delusions, and madness as proof that there is no such requirement.

So arguing for physicalism is making an argument that states that no one presents or believes arguments for reasons of logic.

Note that the exact same argument can be applied to mathematical realism, or any other position that posits that consciousness is caused by or results from some underlying process.

Thursday, July 15, 2010

Putnam Mapping

"The mapping account says, roughly, that a computing system is a concrete system such that there is a computational description that maps onto a physical description of the system. If any mapping is acceptable, it can be shown that almost every physical system implements every computation (Putnam 1988, Searle 1992). This trivialization result can be avoided by putting appropriate restrictions on acceptable mappings; for instance, legitimate mappings must respect causal relations between physical states (Chrisley 1995, Chalmers 1996, Copeland 1996, Scheutz 2001).

Still, there remain mappings between (many) computational descriptions and any physical system. Under the mapping account, everything performs at least some computations. This still strikes some as a trivialization of computationalism. Furthermore, it doesn't do justice to computer science, where only relatively few systems count as performing computations. Those who want to restrict the notion of computation further have to look beyond the mapping account of computation."

"Putnam's proposal, and its historical importance, was analyzed in detail in Piccinini forthcoming b. According to Putnam (1960, 1967, 1988), a system is a computing mechanism if and only if there is a mapping between a computational description and a physical description of the system. By computational description, Putnam means a formal description of the kind used in computability theory, such as a Turing Machine or a finite state automaton. Putnam puts no constraints on how to find the mapping between the computational and the physical description, allowing any computationally identified state to map onto any physically identified state. It is well known that Putnam's account entails that most physical systems implement most computations. This consequence of Putnam's proposal has been explicitly derived by Putnam (1988, pp. 95-96, 121-125) and Searle (1992, chap. 9)."
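The trivialization can be made concrete with a small sketch. This is my own toy construction, not Putnam's formalism: the names (`fsa_run`, `trivial_mapping`, the "rock" state sequence) are all invented for illustration. The point is just that if nothing but temporal order constrains the mapping, any system passing through enough distinct physical states "implements" any finite computation.

```python
# Toy illustration of Putnam-style trivialization: with unconstrained
# mappings, an arbitrary physical state sequence implements any automaton.

def fsa_run(transitions, start, steps):
    """Run a finite state automaton and return its state sequence."""
    state, run = start, [start]
    for _ in range(steps):
        state = transitions[state]
        run.append(state)
    return run

def trivial_mapping(physical_states, computational_run):
    """Map the i-th physical state to the i-th computational state.
    Nothing but temporal order constrains this mapping."""
    return dict(zip(physical_states, computational_run))

# Any arbitrary "physical" state sequence (here, rock temperatures)...
rock = [17.1, 17.3, 16.9, 17.0]

# ...can be mapped onto the run of an automaton that alternates A/B.
run = fsa_run({"A": "B", "B": "A"}, "A", steps=3)   # ['A', 'B', 'A', 'B']
mapping = trivial_mapping(rock, run)

# Under this mapping, the rock "implements" the automaton:
assert [mapping[s] for s in rock] == run
```

This is exactly why the authors quoted above say legitimate mappings must respect causal relations between physical states: the mapping here is purely retrospective bookkeeping, with no causal constraint at all.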

Tuesday, July 13, 2010

The Granite Universe

The world that I perceive seems pretty orderly. When I drive to work, it's always where I expect it to be. The people are always the same. I pick up where I left off on the previous day, and life generally proceeds in an orderly and predictable way. Even when something unexpected happens, I can generally trace back along a chain of cause and effect and determine why it happened, and understand both why I didn't expect it and why I probably could have.

In my experience thus far, there have been no "Alice in Wonderland" style white rabbits that suddenly appear in a totally inexplicable way, make a few cryptic remarks while checking their pocket watch, and then scurry off.

Why do I never see such white rabbits?

Well, at first glance, something like physicalism seems like the obvious choice to explain my reality's perceived order - to explain both what I experience AND what I *don't* experience. The world is reducible to fundamental particles (waves, strings, whatever) which have certain properties (mass, velocity, spin, charge, etc) that determine how they interact, and it all adds up to what I see.

In this view, what I see is ultimately determined by the starting conditions of the universe, plus the physical laws that govern the interaction of the fundamental elements of the universe, applied over how-many-ever billions of years. While no explanation is given for the initial conditions, or why the fundamental laws of physics are what they are, if you get past that then from a cause-and-effect standpoint physicalism offers a pretty solid explanation for why my world is orderly and predictable, and why I don't see white rabbits.

And in the form of functionalism/computationalism + evolution it even offers a pretty good foundation for explaining the existence and mechanism of human behavior and ability.

But physicalism has a major drawback: It doesn't obviously explain the experience of consciousness that goes with human behavior and ability. Particles, waves, mass, spin - no matter how you add them up, there doesn't seem to be any way to get conscious experience.

Which is a problem, since consciousness is the portal through which we access everything else. My conscious experience is what I know. I "know" of other things only when they enter into my conscious awareness.

So, physicalism does explain why we see, what we see, and why we don't see white rabbits. But it doesn't seem to explain the conscious experience OF seeing what we see.

Further, by positing an independently existing and well ordered external universe to explain our orderly perceptions, we have just pushed the question back one level. The new questions are: why does this external universe exist, and why is it so orderly? Positing it initially seems justified by the fact that physicalism explains how it is possible for us to make correct predictions.

BUT, actually it explains nothing.

Nothing has been explained because we are PART of the system that we are trying to explain by appealing to physicalism. If the order and predictability of our experiences are due to the initial conditions of the universe and the laws of physics, then we inhabit a universe whose entire future, including our existence and all of our activities and experiences, is fixed. Frozen in place by unbreakable causal chains.

Effectively (and maybe actually), the entire future of the universe can be seen as existing simultaneously with its beginning. We could just as well say that the entire past, present, and future came into being at one instant, and we are just experiencing our portion of it in slices.

But there is no "explanation" here. This "block universe" just IS. It just exists. It came into being for no reason, for no purpose, with no meaning. It exists in the form that it does, and there is no answer to the question "why?". We are part of that universe, existing entirely within it and contained by it. Therefore we also just exist. For no reason, for no purpose, with no meaning, our future history also frozen in place by causal chains. What is true for the universe as a whole is true for its contents.

To try and make what I'm saying clearer, let's imagine a real block. Say, a block of speckled granite. Now let's consider two adjacent specks of white and gray. Why are they adjacent? What caused them to be adjacent? Well, if we consider this block of granite within the context of our universe, then we can say that there is a reason in that context as to why they are adjacent. There is an explanation, which has to do with the laws of physics and the contingent details of the geologic history of the area where this block of granite was formed (which is in turn derived from the contingent details of the initial state of our entire universe).

But if we take this block of granite to be something that just exists, uncaused and unique, like our universe, then there can be no explanation. The two specks are just adjacent. That's it. No further explanation is possible. The block of granite just exists as it is and that's the way it is. We *can* say something like, "there's a vein of white and a vein of gray in this block, and those two specks exist at the boundary of the veins and so they are adjacent", but while this sounds like an explanation, it really is just a statement of observed fact. It doesn't "explain" anything. And even this observation is made from "outside" the block, an option not available with our universe.

If some sort of conscious intelligence exists within the speck patterns of the 2-D slices of the granite block (2-D because we've lost a dimension in our example...the third spatial dimension of the block will be time for these speck-beings), then who knows whether they will even be conscious of being made from specks of granite and of existing within this granite block with its grey and white veins. Maybe the speck patterns that they are formed from will be such that their experience is of living in a 3+1 dimensional world such as ours. But regardless, there can be no explanation as to why their experiences are what they are. Their experiences will be as uncaused as the existence of the block whose speckled nature gives rise to those experiences.

So physicalism in fact offers no advantage over just asserting that our conscious experience just exists. Why are my perceptions orderly and why are my predictions about what will happen next usually correct? Because that's just the way it is...and this is true whether you posit an external universe or just conclude that conscious experience exists uncaused.

Monday, July 12, 2010

Quentin Meillassoux on Sufficient Reason and Non-Contradiction

In his book “After Finitude”, he explains that the principle of facticity (which he also refers to as “the principle of unreason”) stands in contrast to Leibniz’s “Principle of Sufficient Reason”, which states that anything that happens does so for a definite reason.

From pg. 33 of After Finitude:

“But we also begin to understand how this proof [the ontological proof of God] is intrinsically tied to the culmination of a principle first formulated by Leibniz, although already at work in Descartes, viz., the principle of sufficient reason, according to which for every thing, every fact, and every occurrence, there must be a reason why it is thus and so rather than otherwise.

For not only does such a principle require that there be a possible explanation for every worldly fact; it also requires that thought account for the unconditioned totality of beings, as well as for their being thus and so. Consequently, although thought may well be able to account for the facts of the world by invoking this or that global law - nevertheless, it must also, according to the principle of reason, account for why these laws are thus and not otherwise, and therefore account for why the world is thus and not otherwise. And even were such a ‘reason for the world’ to be furnished, it would yet be necessary to account for this reason, and so on ad infinitum.

If thought is to avoid an infinite regress while submitting to the principle of reason, it is incumbent upon it to uncover a reason that would prove capable of accounting for everything, including itself - a reason not conditioned by any other reason, and which only the ontological argument is capable of uncovering, since the latter secures the existence of an X through the determination of this X alone, rather than through the determination of some entity other than X - X must be because it is perfect, and hence causa sui, or sole cause of itself.

If every variant of dogmatic metaphysics is characterized by the thesis that *at least one entity* is absolutely necessary (the thesis of real necessity) it becomes clear how metaphysics culminates in the thesis according to which *every* entity is absolutely necessary (the principle of sufficient reason). Conversely, to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the principle of sufficient reason, as well as the ontological argument, which is the keystone that allows the system of real necessity to close in upon itself. Such a refusal enjoins us to maintain that there is no legitimate demonstration that a determinate entity should exist unconditionally.”

As to the principle of non-contradiction:

Pg. 60:

“We are no longer upholding a variant of the principle of sufficient reason, according to which there is a necessary reason why everything is the way it is rather than otherwise, but rather the absolute truth of a *principle of unreason*. There is no reason for anything to be or to remain the way it is; everything must, without reason, be able not to be and/or be other than it is.

What we have here is a principle, and even, we could say, an anhypothetical principle; not in the sense in which Plato used this term to describe the Idea of the Good, but rather in the Aristotelian sense. By ‘anhypothetical principle’, Aristotle meant a fundamental proposition that could not be deduced from any other, but which could be proved by argument. This proof, which could be called ‘indirect’ or ‘refutational’, proceeds not by deducing the principle from some other proposition - in which case it would no longer count as a principle - but by pointing out the inevitable inconsistency into which anyone contesting the truth of the principle is bound to fall. One establishes the principle without deducing it, by demonstrating that anyone who contests it can do so only by presupposing it to be true, thereby refuting him or herself. Aristotle sees in non-contradiction precisely such a principle, one that is established ‘refutationally’ rather than deductively, because any coherent challenge to it already presupposes its acceptance. Yet there is an essential difference between the principle of unreason and the principle of non-contradiction; viz. what Aristotle demonstrates ‘refutationally’ is that no one can *think* a contradiction, but he has not thereby demonstrated that contradiction is absolutely impossible. Thus the strong correlationist could contrast the facticity of this principle to its absolutization - she would acknowledge that she cannot think contradiction, but she would refuse to acknowledge that this proves its absolute impossibility. For she will insist that nothing proves that what is possible in-itself might not differ toto caelo from what is thinkable for us. Consequently the principle of non-contradiction is anhypothetical with regard to what is thinkable, but not with regard to what is possible.”

Continuing on pg. 77:

“It could be objected that we have conflated contradiction and inconsistency. In formal logic, an ‘inconsistent system’ is a formal system all of whose well-formed statements are true. If this formal system comprises the operator of negation, we say that an axiomatic is inconsistent if *every* contradiction which can be formulated within it is true. By way of contrast, a formal system is said to be non-contradictory when (being equipped with the operator of negation) it does not allow *any* contradiction to be true. Accordingly, it is perfectly possible for a logical system to *be* contradictory without thereby being inconsistent - all that is required is that it give rise to *some* contradictory statements which are true, without permitting *every* contradiction to be true. This is the case with ‘paraconsistent’ logics, in which some but not all contradictions are true. Clearly then, for contemporary logicians, it is not non-contradiction that provides the criterion for what is thinkable, but rather inconsistency. What every logic - as well as every logos more generally - wants to avoid is a discourse so trivial that it renders every well-formulated statement, as well as its negation, equally valid. But contradiction is logically thinkable so long as it remains ‘confined’ within limits such that it does not entail the truth of every contradiction.”

Sunday, July 11, 2010

Determinism vs. Indeterminism

Ultimately I think the difference between deterministic and indeterministic laws is not significant.

If a physical law is deterministic then under its influence Event A will "cause" Result X 100% of the time.

Why does Event A always lead to Result X? Because that's the law. There is no deeper reason.

If a physical law is indeterministic, then under its influence Event B will "cause" Result Q, R, or S according to some probability distribution.

Let's say that the probability distribution is 1/3 for each outcome.

If Event B leads to Result R, why does it do so? Because that's the law. There is no deeper reason.

Event A causes Result X 100% of the time.

Event B causes Result R 33.3333% of the time.

Why? There is no reason. That's just the way it is.

Determinism could be seen as merely a special case of indeterminism...the case where all probabilities are set to either 0% or 100%.
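This "special case" claim can be sketched in a few lines of code. The model here (a "law" as a table mapping events to probability distributions over results, with names like `apply_law` invented for the purpose) is my own toy construction, not drawn from any text quoted in these posts:

```python
import random

# A "law" maps each event to a probability distribution over results.
# Determinism is the special case where one result carries probability 1.

def apply_law(law, event, rng=random):
    """Sample a result for `event` from the law's distribution."""
    results, weights = zip(*law[event].items())
    return rng.choices(results, weights=weights)[0]

deterministic_law   = {"A": {"X": 1.0}}                       # A -> X, always
indeterministic_law = {"B": {"Q": 1/3, "R": 1/3, "S": 1/3}}   # B -> Q, R, or S

assert apply_law(deterministic_law, "A") == "X"   # 100% of the time
print(apply_law(indeterministic_law, "B"))        # Q, R, or S, with no deeper reason
```

Both laws are applied by exactly the same sampling machinery; the deterministic one just has a degenerate distribution. In neither case does the law itself contain a reason why it assigns the weights it does.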

So even if we are in a universe with indeterministic laws, this doesn’t have any major impact on what metaphysical conclusions we arrive at. Even assuming indeterministic physicalism, there are still initial conditions and there are still laws - the laws just have an intrinsically probabilistic aspect.

These probabilistic laws are like the rules of a card game that includes a certain amount of randomness...for instance, requiring occasional random shuffling of the deck. But the number of cards, the suits, the ranks, and the rules of the game themselves are not random...those aspects are determined.

Similarly, in quantum mechanics using the Schrödinger equation, the evolution of the wavefunction describing the physical system is taken to be deterministic, with only the "collapse" process introducing an indeterministic aspect.
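For reference, the deterministic evolution in question is that generated by the time-dependent Schrödinger equation, in which the wavefunction Ψ of the system evolves under the Hamiltonian operator Ĥ:

```latex
i\hbar\,\frac{\partial}{\partial t}\Psi(x,t) = \hat{H}\,\Psi(x,t)
```

Given Ψ at one time, this equation fixes Ψ at all later times; probabilities enter only when a measurement outcome is read off from Ψ via the Born rule.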

But as with the card example, the impact of this random aspect is limited in scope. No matter how random it gets, it doesn't change the rules of the game. No matter how randomly the deck is shuffled, you still only ever have 52 cards, 4 suits, and 13 ranks. The randomness is constrained by the deterministic aspects of the game.

Another example of constrained indeterminism is a computer program that uses randomness, for instance randomized quicksort. No matter which pivots you randomly select, the algorithm is still going to correctly sort your list. At worst, it will take longer than usual, because the randomness of the pivot selection is constrained by the context provided by the deterministic aspects of the program.
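For concreteness, here is a minimal sketch of randomized quicksort in Python (a standard textbook formulation, not optimized):

```python
import random

def randomized_quicksort(xs):
    """Randomized quicksort: the pivot choice is random, but the
    comparison rules guarantee a correctly sorted result every time."""
    if len(xs) <= 1:
        return list(xs)
    pivot = random.choice(xs)  # the only random step in the algorithm
    less = [x for x in xs if x < pivot]
    equal = [x for x in xs if x == pivot]
    greater = [x for x in xs if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)
```

However the pivots fall, the deterministic partitioning rules force a sorted output; only the running time varies.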

The same goes for our universe in the indeterministic case. The randomness of indeterminism only increases the probability of the existence of conscious life that discovers something true about the underlying nature of the universe *IF* the initial conditions and the non-random aspect of the causal laws allow for this to be the case.

Is it possible that our causal laws are such that any given starting conditions (with respect to the distribution of energy and/or matter) eventually lead to conscious life that knows true things about the universe?

Here we return to our analogy of the quicksort algorithm, which can start with any randomly arranged list and always produce a sorted list from it.

Note, though, that the quicksort algorithm is a very, very special algorithm. If you just randomly generate programs and try to run them, the probability of getting one that will correctly sort any unordered list is very low compared to the probability of getting a program that won't do anything useful at all, or sorts the list incorrectly, or will only correctly sort lists with special starting orders, or sorts the list but does so very inefficiently.
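One way to make this rarity concrete is to generate tiny random "programs" and count how many sort correctly. The sketch below is my own stand-in for "randomly generated programs": random sequences of compare-and-swap operations (i.e., random candidate sorting networks) on four items:

```python
import itertools
import random

def sorts_everything(gates, n):
    """True if applying the compare-and-swap gates in order sorts
    every possible permutation of n distinct items."""
    for perm in itertools.permutations(range(n)):
        xs = list(perm)
        for i, j in gates:
            if xs[i] > xs[j]:
                xs[i], xs[j] = xs[j], xs[i]
        if xs != sorted(xs):
            return False
    return True

def fraction_that_sort(n=4, num_gates=5, trials=2000, seed=0):
    """Fraction of randomly generated gate sequences that sort correctly."""
    rng = random.Random(seed)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    hits = sum(
        sorts_everything([rng.choice(pairs) for _ in range(num_gates)], n)
        for _ in range(trials)
    )
    return hits / trials
```

Even for four items, only a small minority of random five-gate sequences sort every input; the rest fail on at least some starting orders.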

Equivalently, if you just randomly chose a set of causal laws from the space of all possible combinations, the probability of selecting a set of laws that can start from almost any random arrangement of matter and from that always produce conscious life that perceives true things about the laws that gave rise to it must also be very low.

The "no miracles" argument against scientific realism

By "no miracles" I'm referring to Hilary Putnam's observation:

“The positive argument for realism is that it is the only philosophy that doesn't make the success of science a miracle”

Let's assume that our best scientific theories tell us something true about the way the world *really* is, in an ontological sense. And further, for simplicity, let's assume a deterministic interpretation of those theories.

In this view, the universe as we know it began ~13.7 billion years ago. We'll set aside any questions about what, if anything, preceded the first instant and just draw a line there and call that our "initial state".

Given the specifics of that initial state, plus the particular causal laws of physics that we have, the universe can only evolve along one path. The state of the universe at this moment is entirely determined by two, and only two, things: its initial state and its causal laws.

But this means that the development of our scientific theories *about* the universe was also entirely determined by the initial state of the universe and its causal laws. Our discovery of the true nature of the universe has to have been "baked into" the structure of the universe in its first instant.

By comparison, how many sets of possible initial states plus causal laws are there that would give rise to conscious entities who develop *false* scientific theories about their universe? It seems to me that this set of "deceptive" universes is likely much larger than the set of "honest" universes.

What would make universes with honest initial conditions + causal laws more probable than deceptive ones? For every honest universe it would seem possible to have an infinite number of deceptive universes that are the equivalent of "The Matrix" - they give rise to conscious entities who have convincing but incorrect beliefs about how their universe really is. These entities' beliefs are based on perceptions that are only illusions, or simulations (naturally occurring or intelligently designed), or hallucinations, or dreams.

It seems to me that it would be a bit of a miracle if it turned out that we lived in a universe whose initial state and causal laws were such that they gave rise to conscious entities whose beliefs about their universe were true beliefs.

A similar argument can also be made if we choose an indeterministic interpretation of our best scientific theories (e.g., quantum mechanics), though it involves a few extra steps.

Infinity and Probability

How about this:

Let's assume we have an infinitely long array of squares. And a fair six-sided die.

We roll the die an infinite number of times and write each roll's number into a square.

When we finish, how many squares have a "1" written in them? An infinite number, right?

How many squares have an even number written in them? Also an infinite number.

How many squares have a number OTHER than "1" written in them? Again, an infinite number.

Therefore, the squares with "1" can be put into a one-to-one correspondence with the "not-1" squares...correct?

Now, while we have this one-to-one correspondence between "1" and "not-1" squares set up, let's put a sticker with an "A" on it in the "1" squares. And a sticker with a "B" on it in the "not-1" squares. We'll need the same number of "A" and "B" stickers, obviously. Aleph-null.

So, if we throw a dart at a random location on the array of squares, what is the probability of hitting a square with a "1" in it?

What is the probability of hitting a square with an "A" sticker?

The two questions don't have compatible answers, right? So, in this scenario, probability is useless. It just doesn't apply. You should have no expectations about either outcome.

BUT. NOW. Let's erase the numbers and remove the stickers and start over.

This time, let's just fill in the squares with a repeating sequence of 1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,...

And then, let's do our same trick about putting the "1" squares into a one-to-one mapping with the "not-1" squares, and putting an "A" sticker on the "1" squares, and a "B" sticker on the "not-1" squares.

Now, let's throw a dart at a random location on the array of squares. What is the probability of hitting a square with a "1" in it?

What is the probability of hitting a square with an "A" sticker on it?

THIS time we have some extra information! There is a repeating pattern to the numbers and the stickers. No matter where the dart hits, we know the layout of the area. This is our "measure" that allows us to ignore the infinite aspect of the problem and apply probability.

For any area the dart hits, there will always be an equal probability of hitting a 1, 2, 3, 4, 5, *or* 6. As you'd expect. So the probability of hitting a square with a "1" in it is ~16.67%.

Any area where the dart hits will have a repeating pattern of one "A" sticker followed by five "B" stickers. So the probability of hitting an "A" sticker is ~16.67%.

The answers are now compatible, thanks to the extra "structural" information that gave us a way to ignore the infinity.
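A quick simulation (my own sketch, not from the argument above) shows how the structure does the work: only the dart's position modulo the pattern length matters, so a huge finite array stands in for the infinite one.

```python
import random

PATTERN = [1, 2, 3, 4, 5, 6]  # the repeating fill; "A" stickers sit on the 1s

def estimate_hit_probability(trials=100_000, seed=42):
    """Estimate the chance a random dart lands on a square containing 1.
    The position modulo the pattern length is exactly the 'structural'
    information that makes the probability well defined."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        position = rng.randrange(6 * 10**9)  # a huge stand-in for the array
        if PATTERN[position % len(PATTERN)] == 1:
            hits += 1
    return hits / trials
```

The estimate comes out near 1/6, matching both the "1" count and the "A" sticker count, because the sticker layout inherits the same period-six structure.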

In other words, you can't apply probability to infinite sets, but you can apply it to the *structure* of an infinite set.

If the infinite set has no structure, then you're out of luck. At best you can talk about the method used to generate the infinite set...but if this method involves randomness, it's not quite the same thing.

Entropy and Memory

It’s overwhelmingly probable that all of your memories are false.


Entropy is a measure of the disorder of a system. The higher the entropy, the higher the disorder.

If a deck of cards is ordered by suit and then within each suit by ascending rank, then that’s a low entropy state. This is because out of the roughly 8.07 × 10^67 (i.e., 52!) possible unique arrangements of the cards in a standard 52 card deck, there are only 24 (one for each of the 4! orderings of the suits) that fit that particular description.

A “random looking” arrangement of the deck is a high entropy state, because there are trillions of unique arrangements of a standard 52 card deck that will fit the description of looking “randomly shuffled”.
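The card-deck counting can be checked directly. Here is a small sketch of my own, where "entropy" is just the logarithm of the number of arrangements fitting each description:

```python
import math

total_arrangements = math.factorial(52)   # 52! ~ 8.07e67 possible deck orders
ordered_arrangements = math.factorial(4)  # 24: one per ordering of the suits

# Boltzmann-style entropy: S = log(number of arrangements in the macrostate).
# Nearly every arrangement "looks shuffled", so that macrostate's count is
# essentially the whole 52!.
S_ordered = math.log(ordered_arrangements)
S_shuffled = math.log(total_arrangements)
```

The gap between the two logarithms (about 3.2 versus about 156) is the sense in which the sorted deck is a low entropy state and the shuffled-looking deck a high entropy one.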

The same goes for an egg. There are (relatively) few ways to arrange the molecules of an egg that will result in it looking unbroken, compared to the huge number of ways that will result in it looking broken. SO, unbroken egg…low entropy. Broken egg…high entropy.

AND the same with the universe…there are (again, relatively) few ways to arrange the atoms of the universe in a way that makes it resemble what we see with people and trees and planets and stars and galaxies, compared with the gargantuan number of ways to arrange things so that it resembles a generic looking cloud of dust.

OKAY. Now.

Of the relatively few ways that the elementary particles of the universe can be arranged so as to resemble what we see around us today, only a tiny fraction of those particle arrangements will have values for momentum and position that are consistent with them having arrived at that state 13.7 billion years after something like the Big Bang.

The vast majority of the particle arrangements that macroscopically resemble the world around us will *instead* have particles in states (e.g., with positions and velocities) that are consistent with the particles having previously been in something more like a giant dust cloud.

By which I mean: If we take their current positions and velocities, and work backwards to see where they came from, and go back far enough in time, eventually we will not arrive at the Big Bang. Instead we will arrive at a state resembling a giant dust cloud (probably a very thin, spread-out dust cloud).

SO, bottom line:

Out of all the possible configurations that the universe could be in, ones that have people, and planets, and stars, and galaxies are extremely rare.

Further, even if we then only consider those extremely rare possible configurations that have people, and planets, and stars, and galaxies – the ones with particles in states (e.g., with positions and velocities) that are consistent with having arrived at this configuration 13.7 billion years after something like the Big Bang are STILL rare.

We don’t know the exact state of our universe’s particles, but in statistical mechanics the Principle of Indifference requires us to treat all possible microscopic states that are consistent with our current macroscopic state as equally likely.

So given all of the above, and our current knowledge of the laws of physics, the most likely explanation is that all of your current memories are false and that yesterday the universe was in a HIGHER state of entropy, not a lower state (as would be required by any variation of the Big Bang theory).

Physical systems with low states of entropy are very rare, by definition. So it’s very improbable (but not impossible) that the unlikely low entropy state of the universe of today is the result of having evolved from an EVEN MORE UNLIKELY lower entropy universe that existed yesterday.

Instead, statistically it’s overwhelmingly more probable that the unlikely low entropy state of the universe today is the result of a random fluctuation from a HIGHER entropy universe that existed yesterday.
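This conditional claim can be checked in a toy model. The sketch below is my own construction (not from Carroll's book), using the classic Ehrenfest urn model: N balls split between two urns, with one randomly chosen ball moved per step. Balanced splits play the role of high entropy; conditioned on being in a rare unbalanced (low entropy) state now, we ask which side the previous step most likely came from.

```python
import random

def fraction_from_higher_entropy(N=20, target=16, steps=500_000, seed=1):
    """Run the Ehrenfest chain; on each visit to `target` balls in urn A,
    record whether the previous state was closer to balance (N/2),
    i.e., whether we arrived from a higher entropy state."""
    rng = random.Random(seed)
    k = N // 2                      # start balanced (the typical state)
    from_higher = visits = 0
    for _ in range(steps):
        prev = k
        if rng.random() < k / N:    # a ball moves out of urn A
            k -= 1
        else:                       # a ball moves into urn A
            k += 1
        if k == target:
            visits += 1
            if prev == target - 1:  # 15 is closer to balance than 17
                from_higher += 1
    return from_higher / visits
```

With these parameters the fraction comes out around 0.8: most visits to the unbalanced state are fluctuations up from more balanced (higher entropy) states, mirroring the argument about yesterday's universe.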

And thus your memories of a lower entropy yesterday are most likely due to this random fluctuation, not due to yesterday actually having had a lower entropy than today.

[Based on my reading of Sean Carroll's book "From Eternity to Here"]