Friday, November 5, 2010

A Thing of This World

Review Article: Chronicling the Post-Kantian Erosion of Noumena -
In the conclusion, Braver returns to Kant, presenting once again the guiding hypothesis of the book: Kant as the common ground between the analytic and continental traditions. His most interesting speculative claim is that the two traditions emerge from an internal dichotomy within Kant’s system:

My claim is that continental thought follows the spirit of his epistemology, while analytic thought follows the practical (which is rather ironic, given analytic philosophy’s emphasis on epistemology and continental’s insistence on the ubiquity of the ethical). Continental thought embodies the spirit of Kant’s theoretical work: we are essentially finite beings conditioned by forces beyond our control, and the job of philosophy is to help us understand these, not overcome them; there is nothing beyond them. Analytic philosophy takes up the ethical ethos: although we may be conditioned by accidental features, philosophy uses reason to pierce these conditions so that we can find truth which escapes their influence. (501-502)

Ultimately, Braver presents continental philosophy as a constant struggle with human finitude and the way contingent factors therefore influence subjectivity and the practice of philosophy itself. On the other hand, the analytic tradition was begotten by the ambition of pure rational thought to escape existential finitude and grasp truth, and things, ‘in themselves’.

Tuesday, November 2, 2010

Probability, Necessity, and Infinity

Quentin Meillassoux, "Potentiality and Virtuality":

"We have at our disposal the means to reformulate Hume's problem without abandoning the ontological perspective in favour of the epistemic perspective largely dominant today. Beginning to resolve the problem of induction comes down to delegitimating the probabilistic reasoning at the origin of the refusal of the contingency of laws. More precisely, it is a matter of showing what is fallacious in the inference from the contingency of laws to the frequency (and thus the observability) of their changing. This amounts to refusing the application of probability to the contingency of laws, thereby producing a valuable conceptual distinction between contingency understood in this radical sense and the usual concept of contingency conceived as chance subject to the laws of probability. Given such a distinction, it is no longer legitimate to maintain that the phenomenal stability of laws compels us to suppose their necessity."

Tuesday, October 5, 2010

The Contingency of Nature’s Laws

Jeremy Dunham on Humean Lawlessness:
For Meillassoux, time has the ability to bring forth events which have absolutely no connection to the preceding situation. Freed from the principle of sufficient reason, we can be sure that metaphysical questions such as 'why these laws?' and 'where did we come from?' can be answered: 'From nothing. For nothing'. By denying causal power in nature, Meillassoux denies that the future need have any relation to the past and in doing so privileges logic above nature. However, Meillassoux’s explanation of our laws becomes rather like recourse to a Deus ex Machina, albeit a godless one. This becomes clearer in his argument concerning the emergence of conscious perception. One of the most common vitalist arguments against the Humean idea that the universe is nothing more than a contingent multiplicity of unconnected events is that life could not possibly come from not-life: how could consciousness come from purely lifeless matter? Meillassoux agrees that one cannot 'short of sheer fantasy' find the seeds of the birth of consciousness in matter. Conscious perception, like the laws of nature, must have come ex nihilo—from nothing.

Wednesday, September 29, 2010

Nomologicalism vs. Accidentalism

Reality is either governed by rules, or it isn’t.

If all events transpire according to some rule or law, this is nomologicalism.

If there is no reason behind why events unfold as they do, this is accidentalism.

There are two variants of nomologicalism: deterministic and probabilistic. The laws that govern the unfolding of events are either deterministic or probabilistic in nature.

Note, however, that deterministic nomologicalism could be seen as just a special case of probabilistic nomologicalism - the case where all conceivable outcomes of any particular event are assigned a probability of either 0% or 100%. This is analogous to a Turing Machine just being a special kind of Probabilistic Automaton, one with transition probabilities of 0% or 100%.
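To make the analogy concrete, here is a minimal Python sketch (the states, symbols, and probabilities are hypothetical, chosen only for illustration) of how a deterministic machine falls out of a probabilistic one once every transition probability is set to 0% or 100%:

```python
import random

# A toy probabilistic automaton: each (state, symbol) pair maps to a
# probability distribution over successor states.
probabilistic_rules = {
    ("s0", "a"): {"s0": 0.5, "s1": 0.5},  # a genuinely chancy transition
    ("s1", "a"): {"s0": 0.2, "s1": 0.8},
}

# The deterministic special case: the same machinery, but every
# distribution puts probability 1 on a single successor (and,
# implicitly, probability 0 on all the others).
deterministic_rules = {
    ("s0", "a"): {"s1": 1.0},
    ("s1", "a"): {"s0": 1.0},
}

def step(rules, state, symbol):
    """Sample the next state from the transition distribution."""
    dist = rules[(state, symbol)]
    return random.choices(list(dist.keys()), weights=list(dist.values()))[0]

# Under deterministic_rules the "sampling" always returns the same
# successor: determinism as the degenerate 0%/100% case.
state = "s0"
for _ in range(4):
    state = step(deterministic_rules, state, "a")
print(state)  # always "s0" after an even number of steps
```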

But if nomologicalism is true then the question is: why is it true? Why do these governing laws exist and how are they enforced?

If there is no reason that we have the laws that we do, or there is no reason that they continue to hold, then this itself amounts to a kind of accidentalism. If there is no reason that the laws are as they are, then they could have been and may yet be otherwise. And if there is no reason that they continue to hold, then they may very well cease to hold at any instant.

In this case, the current state of things is accidental...there’s no reason it couldn’t have been otherwise, there’s no reason it won’t become otherwise.

But, if there is a reason that nomologicalism is true, and thus a reason for why our particular governing laws exist and a reason for why they are consistently enforced, then what is the reason for that reason?

If there is no reason for the reason, then this again amounts to a kind of accidentalism.

The only way to avoid accidentalism is to posit an infinite hierarchy of reasons for reasons for reasons for reasons...and so on. An infinity of reasons.

Saturday, September 25, 2010

More on Intelligence

A definition of intelligence from the Merriam-Webster dictionary:

"The ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria (as tests)."

But what is an ability in a deterministic universe?

For any given input, a deterministic system can only react in one way.

If you expose a deterministic system to a set of inputs that represent a particular environment, the system will react in the one and only way it can to that set of inputs.

Knowledge is just the internal state of the deterministic system.

This is true of a human. This is true of a bacterium. This is true of a Roomba vacuum cleaner. This is true of a hurricane. This is true of a rock.

And, as I pointed out in the latter half of my earlier post, probabilistic systems are no better.

Intelligence is an arbitrary criterion, based only on how things "seem" to you, with no other basis in how things are.

So, that is what I mean by:

"The word 'intelligence' doesn't refer to anything except the experiential requirements that the universe places on you as a consequence of its causal structure."

Tuesday, September 21, 2010

Intelligence and Nomologicalism

What is the significance of intelligence in a universe with deterministic laws?

Your performance on any IQ test is not due to your possessing some property called "intelligence", but rather is an inevitable outcome of the universe's initial conditions and governing causal laws.

The questions you are asked, the answers you give, the problems you are presented with, the solutions you develop - these were all implicit in the universe's first instant.

You, and the rest of the universe, are essentially "on rails". The unfolding of events and your experience of them is dictated by the deterministic causal laws.

Even if time flows (e.g. presentism), the causal structure of the universe is static...events can only transpire one way.

So, what can be said of intelligence in such a universe? Well...only what the deterministic laws require you to say about it. What can be believed about intelligence in such a universe? Obviously only what the deterministic laws require you to believe.

Solving a problem correctly is no more impressive or significant than rain falling "correctly". You answer the question in the only way the deterministic laws allow. The rain falls in the only way that the deterministic laws allow.

The word "intelligence" doesn't refer to anything except the experiential requirements that the universe places on you as a consequence of its causal structure.

=*=

What about the significance of intelligence in a universe with probabilistic laws?

The only change from the deterministic case is that the course of events isn't precisely predictable, even in principle.

However, the flow of events is still governed by the probabilistic causal laws. Which just means that to the extent that the flow of events isn't determined, it's random.

Again, the analogy with poker comes to mind: the rules of poker are stable and unchanging, while the randomness of the shuffle adds an element of unpredictability as to which cards you are actually dealt. So, to the extent that poker isn't determined, it's random.
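The analogy is easy to make concrete. Here is a minimal sketch in Python (the card encoding and the "high card wins" rule are simplified stand-ins for real poker, not its actual rules):

```python
import random

RANKS = list(range(2, 15))  # 2..10, J=11, Q=12, K=13, A=14
SUITS = ["clubs", "diamonds", "hearts", "spades"]

def deal():
    deck = [(rank, suit) for rank in RANKS for suit in SUITS]
    random.shuffle(deck)  # the only random step in the whole game
    return deck[0], deck[1]

def winner(card_a, card_b):
    # Deterministic rule: the same two cards always produce the same result.
    if card_a[0] == card_b[0]:
        return "tie"
    return "A" if card_a[0] > card_b[0] else "B"

# Each run deals different cards (the random part), but the deck always
# has 52 cards and the comparison rule never varies (the determined part).
card_a, card_b = deal()
print(card_a, card_b, winner(card_a, card_b))
```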

The questions you're going to be asked and the problems you're going to be presented with in a probabilistic universe aren't predictable...but neither are your answers or your solutions, which result from the exact same underlying rule set. Again, to the extent that any of these things aren't determined, they're random.

Adding a random component to an otherwise deterministic framework does increase the number of possible states that are reachable from a given initial condition, but it doesn't add anything qualitatively new to the content of those states or to the process as a whole. Nothing new is added to the deterministic case that would give the word "intelligence" anything extra to refer to.

Monday, September 20, 2010

The Necessity of Contingency

"The truth about the world, he said, is that anything is possible. Had you not seen it all from birth and thereby bled it of its strangeness it would appear to you for what it is, a hat trick in a medicine show, a fevered dream, a trance bepopulate with chimeras having neither analogue nor precedent, an itinerant carnival, a migratory tentshow whose ultimate destination after many a pitch in many a mudded field is unspeakable and calamitous beyond reckoning.

The universe is no narrow thing and the order within it is not constrained by any latitude in its conception to repeat what exists in one part in any other part. Even in this world more things exist without our knowledge than with it and the order in creation which you see is that which you have put there, like a string in a maze, so that you shall not lose your way. For existence has its own order and that no man's mind can compass, that mind itself being but a fact among others."

-- Blood Meridian, Cormac McCarthy

Wednesday, September 15, 2010

Time and Possibility

Let's divide your entire life, from your first conscious experience to your last, into 1 hour slices.

And let's instantiate each slice as its own mini-universe. Each mini-universe complete with its own initial conditions and causal laws - but containing only what is necessary to generate a given slice of your experience.

These mini-universes are made of the same stuff (whatever it actually is) as our universe, and each mini-universe exists as an independent, isolated entity within the timeless Meillassouxian space of possibilities.

So if (as an example) quarks and electrons cause consciousness, this means that a mini-universe would spring into existence for each 1 hour slice, with each mini-universe containing only the minimum complement of quarks and electrons with the necessary initial states required to cause one particular hour of your experience. And, after one hour, the mini-universe ends.

This is conceivable, right?

So now we have these 525,600 mini-universes (assuming you have ~60 years of conscious experience over the course of your life: 60 years × 365 days × 24 hours = 525,600 hours), each holding 1 hour's worth of reality, each causally disconnected from all of the other slices and everything else. And each existing eternally in the space of possibilities.

Would this kind of existence be worse than your current existence? Would it "feel" different?

What test could you perform that would assure you that the above scenario isn't actually your present situation?

Okay, now let's say that instead of 525,600 slices that are each 1 hour long, we have 1,892,160,000 slices that are each 1 second long. How would your total experience differ?

Now let's say we go to .001 second long slices. And then .00000000001 second slices. And so on. At some point does your conscious experience become noticeably distorted, or disappear? If so at what point, and why?

Saturday, September 11, 2010

After Finitude

An interesting review:
The notion of 'absolute time' that accompanies Meillassoux's acausal ontology is a time that seems endowed with only one dimension – the instant. It may well be that 'only the time that harbours the capacity to destroy every determinate reality, while obeying no determinate law – the time capable of destroying, without reason or law, both worlds and things – can be thought as an absolute.' The sense in which such an absolute can be thought as distinctively temporal is less obvious. Rather than any sort of articulation of past, present and future, Meillassoux's time is a matter of spontaneous and immediate irruption ex nihilo. Time is reduced, here, to a succession of 'gratuitous sequences'.

What does it mean to say that something did exist, but no longer does? This concept seems to require the further existence of an actual dimension of "Time".

But applying Meillassoux's principle of facticity: why should there be an actual time dimension? Why should things necessarily exist "in time"?

It seems more likely to me that time is just an aspect of subjective experience. In reality, all events exist simultaneously, eternally, and without order. Time only *seems* to flow in an ordered succession of moments, from past to future.

The actual ordering of events is irrelevant, because our experience imposes its own ordering "from within".

Sunday, September 5, 2010

Hyperchaos, Time, and Dreams

So we have facticity...the absence of reason for any reality.

And we have something that exists for no reason. Which means that it could suddenly cease to exist.

But even if it ceases to exist in the present for me, there's still the fact that it *did* exist in the past. Nothing can erase that fact, can it?

Can we change the past? If we do, there's still the fact that the past *was* different before we changed it. So we have two pasts: the original one and the altered one. But then why not go back and change the past again? We could have 1000 pasts...P1, P2, P3, P4, etc. We're now starting to build up another time dimension that runs perpendicular to our "changeable" past. What we originally thought of as the past becomes more like a spatial dimension (its contents can change) and our new dimension takes on the properties we originally attributed to "normal" time.

Okay, that's a bit of a digression. Back to the original point:

The question then is what is the difference between the present and the past?

Even if Hyperchaos time isn't the same as ordinary time, it still serves the same purpose...to provide a way of separating or differentiating things. According to Quentin Meillassoux something can be red and not-red, but not at the same "time".

But if something is red, and then it's not-red, how do we really know it's the same thing? Maybe the red-thing was zapped out of existence and instantly replaced by a new thing identical to it in every way *except* that it's not-red?

However, note that we have another undefined term floating around: what is a "something"? What are "things"?

Here we hit the problem I have with physicalism. I can only talk about how things seem to me. Not how they really are. I *don't know* what things are. I only know how they seem.

Redness isn't an aspect of apples...it's an aspect of my experience of apples. Even the apples that appear in my dreams. But for a color blind person, redness would *not* even be an aspect of their experience of apples.

It is possible that there are things that have some existence independent of the way they seem to me, but I can't say anything about that existence.

Alternatively, it seems equally possible that all that exists are experiences that aren't of "any thing"...like my experience of apples in my dreams. These dream-apples only exist within my experience, and aren't backed by any real "thing".

This actually solves the problem of non-contradiction. If there are no things, there can be no contradictory things.

But can there be contradictory experiences? Can I experience a red and not-red apple? Maybe, maybe not. But who cares? It's just an experience.

Can I simultaneously experience and not-experience an apple? Sure. "Not-experiencing" something just means that I didn't have that experience. To simultaneously experience it and not-experience it would just be to experience it.

Sunday, August 29, 2010

Ordinatio Ex Machina

More thoughts on Idealist Accidentalism.

By "idealist" I'm referring to metaphysical idealism...that what fundamentally exists is mental, not physical. And by mental I mean either consciousness or existing only as an aspect of consciousness. For example, there is my conscious experience of a dream, and then there are the things that appear in my dreams that I am conscious of...houses and chairs and trees and people. Both categories of things are mental. The trees that appear in my dreams only exist as an aspect of the dream.

And by "accidentalism" I mean the theory that nothing that exists or occurs is caused. There is nothing that connects or controls the flow of events. The only rule is that there are no rules to appeal to.

So "idealist accidentalism"...the view that what exists is mental, and that there is no underlying process that explains or governs this existence.

Explaining the order of our experience by positing the existence of orderly underlying processes (as with reductive physicalism, for example) is just begging the question...because then what explains the order of those underlying processes?

The total amount of mystery was conserved. We just transferred the mystery to a new location - from our conscious experience to a hypothetical underlying process. We are unwilling to accept that our experiences "just are" orderly, so instead we appeal to an underlying process which "just is" orderly. "Ordinatio Ex Machina".

Not only that, but this reductionist approach raises the question of why we would be so lucky as to have our conscious experiences generated by underlying processes that "cause" us to have correct knowledge of those very processes.

We can only know what the underlying process causes us to know. Thus, the tendency to believe true things can't be a special feature of humans. Rather, it would be a special feature of the process that underlies human experience.

Note that this is a problem with any rule-based explanation of reality, not just with reductive physicalism and the like.

But the only alternative to a rule-based explanation of reality is accidentalism, isn't it?

Friday, August 27, 2010

Idealistic Accidentalism

Zero hits on Google. I claim it as mine.

Sunday, August 8, 2010

Science and Happiness

If evolutionary theory is correct, it seems to me that if the overall environment remained relatively stable for an extended period of time, then regardless of how it ended up, humans would be at about the same level of happiness.

A paradise or a hell, the species should evolve towards the same overall happiness level.

We can only be "excessively" happy, or excessively unhappy, in a world that we aren't well adapted to.

My reasoning is that happiness serves a purpose...it motivates us to do things that enhance our reproductive success.

Unhappiness also serves a purpose...it motivates us to avoid things that decrease our reproductive success.

Happiness is useless as a motivational tool if it's too hard *or* too easy to achieve.

Unhappiness is useless as a motivational tool if it's too hard *or* too easy to avoid.

There has to be some optimum "motivational" mix of happiness and unhappiness...and I'd think it's always approximately the same mix.

Even in a hellish world, humans would be about as happy as they would be in a paradise...once they (as a species) had adapted.

Which brings me to my next point. IF evolutionary theory is true, then scientific advancements only increase human happiness to the extent that they put us into situations that we're not well adapted to.

AND, given enough time, we *will* adapt to all scientific advancements...and a key part of this adaptation will be to reduce the amount of happiness that they generate.

We can only be "happier" than cavemen when we are in a situation that we are not well adapted to.

For instance, food. Most people really like sweets and salty greasy foods. Much more than they like bland vegetables and whatnot.

The acquisition of junk food makes us happy BECAUSE those things were hard to acquire a few hundred years ago...and if you're living in resource-poor circumstances, then calories and salt are just what the doctor ordered.

BUT...we're now out of equilibrium. Junk food is at least as easy to get as vegetables, if not easier. So our evolved preferences push us to consume more than is good for us.

Given time, and if we allowed heart disease and diabetes to do their work, the human race would eventually lose its taste for such unhealthy fare, as those with genetic tendencies in that direction died off. Anticipating a greasy meal of pizza and consuming it would no longer make us as happy. Because that happiness is too easily satisfied to provide the optimal level of motivation.

In the future, I would think that our taste for junk food will decrease while our taste for vegetables and fruit will increase.

Further, this "adjustment process" isn't just true of food. It should be true of everything.

Even something that IS good for us will cause less happiness if it's easily available, because there's no real harm in not being highly motivated to get it - since you'll get it even if you're relatively indifferent to it. Also, even good things can become detrimental if over-indulged in.

Wednesday, July 21, 2010

Time Without Becoming

Quentin Meillassoux

Time Without Becoming

"I call 'facticity' the absence of reason for any reality; in other words, the impossibility of providing an ultimate ground for the existence of any being. We can only attain conditional necessity, never absolute necessity. If definite causes and physical laws are posited, then we can claim that a determined effect must follow. But we shall never find a ground for these laws and causes, except eventually other ungrounded causes and laws: there is no ultimate cause, nor ultimate law, that is to say, a cause or a law including the ground of its own existence. But this facticity is also proper to thought. The Cartesian Cogito clearly shows this point: what is necessary, in the Cogito, is a conditional necessity: if I think, then I must be. But it is not an absolute necessity: it is not necessary that I should think. From the inside of the subjective correlation, I accede to my own facticity, and so to the facticity of the world correlated with my subjective access to it. I do it by attaining the lack of an ultimate reason, of a causa sui, able to ground my existence.

[...]

That’s why I don’t believe in metaphysics in general: because a metaphysics always believes, in one way or the other, in the principle of reason: a metaphysician is a philosopher who believes it is possible to explain why things must be what they are, or why things must necessarily change, and perish- why things must be what they are, or why things must change as they do change. I believe on the contrary that reason has to explain why things and why becoming itself can always become what they are not- and why there is no ultimate reason for this game. In this way, “factial speculation” is still a rationalism, but a paradoxical one: it is a rationalism which explain why things must be without reason, and how precisely they can be without reason. Figures are such necessary modalities of facticity - and non-contradiction is the first figure I deduce from the principle of factiality. This demonstrates that one can reason about the absence of reason- if the very idea of reason is subjected to a profound transformation, if it becomes a reason liberated from the principle of reason- or, more exactly: if it is a reason which liberates us from principle of reason.

Now, my project consists of a problem which I don’t resolve in After Finitude, but which I hope to resolve in the future: it is a very difficult problem, one that I can’t rigorously set out here, but that I can sum up in this simple question: Would it be possible to derive, to draw from the principle of factiality, the ability of the natural sciences to know, by way of mathematical discourse, reality in itself, by which I mean our world, the factual world as it is actually produced by Hyperchaos, and which exists independently of our subjectivity? To answer this very difficult problem is a condition of a real resolution of the problem of ancestrality, and this constitutes the theoretical finality of my present work."

Monday, July 19, 2010

Re: The Irrationality of Physicalism

A response:

"And in either case the counter argument is the same, c.f. "The Evolution of Reason" by William S. Cooper."

AND, my response to the response:


Maybe. But it’s not a very good counter argument.

A long-ish response, but there are several quotes from the book that add up in length.

So logic reduces to biology. Fine. And biology reduces to...what? Initial conditions and causal laws, that’s what.

So, from the "The Evolution of Reason":

“Evolution is not the law enforcer but the law giver - not so much a police force as a legislature. The laws of logic are not independent of biology but implicit in the very evolutionary processes that enforce them. The processes determine the laws.

If the latter understanding is correct, logical rules have no separate status of their own but are theoretical constructs of evolutionary biology. Logical theory ought then in some sense to be deducible entirely from biological considerations. The concept of scientific reduction is helpful in expressing that thought. In the received methodological terminology the idea of interest can be articulated as the following hypothesis.

REDUCIBILITY THESIS: Logic is reducible to evolutionary theory.”


So obviously evolution is not a law enforcer or a law giver. It isn’t a causal law, but rather a consequence of causal laws.

Cooper claims that logic reduces to evolutionary theory. And what does evolutionary theory reduce to? Initial conditions and fundamental causal laws acting on fundamental entities.

Assuming physicalism, the causal laws of our universe, applied to a suitable set of initial conditions, will in time give rise to processes that we categorize as “evolutionary”. Some of these evolutionary processes may give rise to entities that have conscious experiences, and some of those conscious experiences will be of holding this, that, or the other beliefs about logic. But those beliefs are a result of fundamental laws acting on fundamental entities, and not associated with any sort of independently existing platonic standard of “logical reasoning”.

This is the gist of my post, and seems to be the main gist of his book. We do part company eventually though. I’ll save that part for last.

Continuing:

“‘How do humans manage to reason?’ Since the form of this question is the same as that of the first, it would be natural to attack it in a similar two-pronged fashion. [...] Somewhere in the latter part there would be talk of selective forces acting on genetic variation, of fitness, of population models, etc. [...] The laws of Reason should not be addressed independently of evolutionary theory, according to the thesis. Reasoning is different from all other adaptations in that the laws of logic are aspects of the laws of adaptation themselves. Nothing extra is needed to account for logic - only a drawing out of the consequences of known principles of natural selection.”

Selective forces? What would have caused those selective forces? What do these selective forces reduce to? Why these selective forces instead of some others?

Natural selection? Well, there are causally neutral “filters” (metaphorically speaking), but these metaphorical filters are as much a consequence of the universe’s initial conditions and causal laws as the organisms that are (metaphorically) selected.

Evolution is a consequence of causal laws, not a causal law itself. In this it is like the first law of thermodynamics - which is a consequence of the time invariance of the causal laws, not a causal law itself. Evolution and the first law of thermodynamics are descriptions of how things are, not explanations.

So as I said, if physicalism is true then the arguments that we present and believe are those entailed by the physics that underlies our experiences, and by nothing else.

In this view, evolution is also just a manifestation of those same underlying physical forces. And logic is merely an aspect of the experiences generated by the more fundamental activities of quarks and electrons.

In this vein, he says:

“If evolutionary considerations control the relevant aspects of decision behavior, and these determine in turn the rest of the machinery of logic, one can begin to discern the implicative chain that makes Reducibility Theory thinkable.

[...]

If the evolutionary control over the logic is indeed so total as to constrain it entirely, there is no need to perpetuate the fiction that logic has a life of its own. It is tributary to the larger evolutionary mechanism.”


All we have to do is add that the universe’s initial conditions and causal laws control the evolutionary considerations, and my point is practically made.

The main point of contention between my argument and Cooper’s is:

“In this way the general evolutionary tendency to optimize fitness turns out to imply, in and of itself, a tendency for organisms to be rational. Once this is shown there is no need to look for the source of logical principles elsewhere, for the logical behavior is shaped directly by the evolutionary forces acting on their own behalf. Because the biological processes expressed in the population models wholly entail the logical rules, and are sufficient to predict and explain rational behavior, no separate account of logic is needed.”

Optimize fitness? Again, evolution isn’t something imposed from outside the system, and it’s not a causal law. If the fitness of some group is optimized over time, that’s just a consequence of the system’s initial conditions and causal laws.

In a deterministic system, the rise of that group was destined to happen. In an indeterministic system, the rise of that group was a result of the interplay between the initial conditions, the deterministic part of the causal framework, and the outcome of the random coin flips.

So, he seems to imply that initial conditions and causal laws must give rise to rational actors. But as he says, there is no independent standard of rationality. Rationality is relative to the rules of the particular physical system. So the behaviors that a system most commonly gives rise to are, by definition, “rational”.

So rational is a meaningless label. In his formulation above it just means “whatever ends up being the most commonly manifested behaviors.”

But it’s not commonly manifested because it’s rational. Rather, it’s labeled rational because it’s commonly manifested.

Saturday, July 17, 2010

A Summer of Madness

Manic-Depression:

“On July 5, 1996,” Michael Greenberg starts, “my daughter was struck mad.” No time is wasted on preliminaries, and Hurry Down Sunshine moves swiftly, almost torrentially, from this opening sentence, in tandem with the events that it tells of. The onset of mania is sudden and explosive: Sally, the fifteen-year-old daughter, has been in a heightened state for some weeks, listening to Glenn Gould’s Goldberg Variations on her Walkman, poring over a volume of Shakespeare’s sonnets till the early hours. Greenberg writes:

Flipping open the book at random I find a blinding crisscross of arrows, definitions, circled words. Sonnet 13 looks like a page from the Talmud, the margins crowded with so much commentary the original text is little more than a speck at the center.

Sally has also been writing singular, Sylvia Plath–like poems. Her father surreptitiously glances at these, finds them strange, but it does not occur to him that her mood or activity is in any way pathological. She has had learning difficulties from an early age, but she is now triumphing over these, finding her intellectual powers for the first time. Such exaltation is normal in a highly gifted fifteen-year-old. Or so it seems.

But, on that hot July day, she breaks—haranguing strangers in the street, demanding their attention, shaking them, and then suddenly running full tilt into a stream of traffic, convinced she can bring it to a halt by sheer willpower (with quick reflexes, a friend yanks her out of the way just in time).

Ultimate Explanations of the Universe

An excellent book by Michael Heller.

Quotes:

"The tendency to pursue 'ultimate explanations' is inherent in the mathematical and experimental method in yet another way (and another sense). Whenever the scientist faces a challenging problem, the scientific method requires him to never give up, never seek an explanation outside the method. If we agree - at least on a working basis - to designate as the universe everything that is accessible to the mathematical and experimental method, then this methodological principle assumes the form of a postulate which in fact requires that the universe be explained by the universe itself. In this sense scientific explanations are 'ultimate,' since they do not admit of any other explanations except ones which are within the confines of the method.

However, we must emphasise that this postulate and the sense of 'ultimacy' it implies have a purely methodological meaning, in other words they oblige the scientist to adopt an approach in his research as if other explanations were neither existent nor needed." - Michael Heller, The Totalitarianism of the Method.


====


"The longing to attain the ultimate explanation lingers in the implications of every scientific theory, even in a fragmentary theory of one part or aspect of the world. For why should only that part, that aspect of the world be comprehensible? It is only a part or an aspect of an entirety, after all, and if that entirety should be unexplainable, then why should only a tiny fragment thereof lend itself to explanation? But consider the reverse: if a tiny part were to elude explanation, it would leave a gap, rip a chasm, in the understanding of the entirety."


====


"Peter van Inwagen proposed a rather peculiar answer to the question why there exists anything at all. His reasoning is as follows. there may exist an infinite number of worlds full of diverse beings, but only one empty world. Therefore the probability of the empty world is zero, while the probability of a (non-empty) is one.

This apparently simple reasoning is based on very strong an essentially arbitrary assumptions. First of all, that there may exist an infinite number of worlds (that they have at least a potential existence); secondly, that probability theory as we know it may be applied to them (in other words that probability theory is in a sense aprioristic with respect to these worlds); and thirdly, that they come into being on the principle of 'greater probability.' The following question may be put with respect to this mental construct: 'Why does it exist, rather than nothing?'"

Friday, July 16, 2010

The Irrationality of Physicalism

If Physicalism is true, then the belief in Physicalism can’t be rationally justified.

If physicalism is true, then our beliefs and experiences are a result of the universe’s initial conditions and causal laws (which may have a probabilistic aspect).

Therefore, assuming physicalism, we don’t present or believe arguments for reasons of logic or rationality. Instead, the arguments that we present and believe are those entailed by the physics that underlies our experiences.

It is *possible* that we live in a universe whose initial conditions and causal laws are such that our arguments *are* logical. But in a physicalist framework that’s not why we present or believe those arguments. The fact that the arguments may be logical is superfluous to why we make or believe them.

Obviously there’s nothing that says that our physically generated experiences and beliefs have to be true or logical. In fact, we have dreams, hallucinations, delusions, and madness as proof that there is no such requirement.

So arguing for physicalism is making an argument that states that no one presents or believes arguments for reasons of logic.

Note that the exact same argument can be applied to mathematical realism, or any other position that posits that consciousness is caused by or results from some underlying process.

Thursday, July 15, 2010

Putnam Mapping

Piccinini:
"The mapping account says, roughly, that a computing system is a concrete system such that there is a computational description that maps onto a physical description of the system. If any mapping is acceptable, it can be shown that almost every physical system implements every computation (Putnam 1988, Searle 1992). This trivialization result can be avoided by putting appropriate restrictions on acceptable mappings; for instance, legitimate mappings must respect causal relations between physical states (Chrisley 1995, Chalmers 1996, Copeland 1996, Scheutz 2001).

Still, there remain mappings between (many) computational descriptions and any physical system. Under the mapping account, everything performs at least some computations. This still strikes some as a trivialization of computationalism. Furthermore, it doesn't do justice to computer science, where only relatively few systems count as performing computations. Those who want to restrict the notion of computation further have to look beyond the mapping account of computation."

"Putnam's proposal, and its historical importance, was analyzed in detail in Piccinini forthcoming b. According to Putnam (1960, 1967,1988), a system is a computing mechanism if and only if there is a mapping between a computational description and a physical description of the system. By computational description, Putnam means a formal description of the kind used in computability theory, such as a Turing Machine or a finite state automaton. Putnam puts no constraints on how to find the mapping between the computational and the physical description, allowing any computationally identified state to map onto any physically identified state. It is well known that Putnam's account entails that most physical systems implement most computations. This consequence of Putnam's proposal has been explicitly derived by Putnam (1988, pp. 95-96, 121-125) and Searle (1992, chap. 9)."

Tuesday, July 13, 2010

The Granite Universe

The world that I perceive seems pretty orderly. When I drive to work, it's always where I expect it to be. The people are always the same. I pick up where I left off on the previous day, and life generally proceeds in an orderly and predictable way. Even when something unexpected happens, I can generally trace back along a chain of cause and effect and determine why it happened, and understand both why I didn't expect it and why I probably could have.

In my experience thus far, there have been no "Alice in Wonderland" style white rabbits that suddenly appear in a totally inexplicable way, make a few cryptic remarks while checking their pocket watch, and then scurry off.

Why do I never see such white rabbits?

Well, at first glance, something like physicalism seems like the obvious choice to explain my reality's perceived order - to explain both what I experience AND what I *don't* experience. The world is reducible to fundamental particles (waves, strings, whatever) which have certain properties (mass, velocity, spin, charge, etc.) that determine how they interact, and it all adds up to what I see.

In this view, what I see is ultimately determined by the starting conditions of the universe, plus the physical laws that govern the interaction of the fundamental elements of the universe, applied over however many billions of years. While no explanation is given for the initial conditions, or why the fundamental laws of physics are what they are, if you get past that then from a cause-and-effect standpoint physicalism offers a pretty solid explanation for why my world is orderly and predictable, and why I don't see white rabbits.

And in the form of functionalism/computationalism + evolution it even offers a pretty good foundation for explaining the existence and mechanism of human behavior and ability.

But physicalism has a major drawback: It doesn't obviously explain the experience of consciousness that goes with human behavior and ability. Particles, waves, mass, spin, velocity...no matter how you add them up, there doesn't seem to be any way to get conscious experience.

Which is a problem, since consciousness is the portal through which we access everything else. My conscious experience is what I know. I "know" of other things only when they enter into my conscious awareness.

So, physicalism does explain why we see, what we see, and why we don't see white rabbits. But it doesn't seem to explain the conscious experience OF seeing what we see.

Further, by positing an independently existing and well ordered external universe to explain our orderly perceptions, we have just pushed the question back one level. The new questions are, why does this external universe exist and why is it so orderly? And this initially seems justified by the fact that physicalism explains how it is possible for us to make correct predictions.

BUT, actually it explains nothing.

Nothing has been explained because we are PART of the system that we are trying to explain by appealing to physicalism. If the order and predictability of our experiences are due to the initial conditions of the universe and the laws of physics, then we inhabit a universe whose entire future, including our existence and all of our activities and experiences, is fixed. Frozen in place by unbreakable causal chains.

Effectively (and maybe actually), the entire future of the universe can be seen as existing simultaneously with its beginning. We could just as well say that the entire past, present, and future came into being at one instant, and we are just experiencing our portion of it in slices.

But there is no "explanation" here. This "block universe" just IS. It just exists. It came into being for no reason, for no purpose, with no meaning. It exists in the form that it does, and there is no answer to the question "why?". We are part of that universe, existing entirely within it and contained by it. Therefore we also just exist. For no reason, for no purpose, with no meaning, our future history also frozen in place by causal chains. What is true for the universe as a whole is true for its contents.

To try and make what I'm saying clearer: let's imagine a real block. Say, a block of speckled granite. Now let's consider two adjacent specks of white and gray. Why are they adjacent? What caused them to be adjacent? Well, if we consider this block of granite within the context of our universe, then we can say that there is a reason in that context as to why they are adjacent. There is an explanation, which has to do with the laws of physics and the contingent details of the geologic history of the area where this block of granite was formed (which is in turn derived from the contingent details of the initial state of our entire universe).

But if we take this block of granite to be something that just exists, uncaused and unique, like our universe, then there can be no explanation. The two specks are just adjacent. That's it. No further explanation is possible. The block of granite just exists as it is and that's the way it is. We *can* say something like, "there's a vein of white and a vein of gray in this block, and those two specks exist at the boundary of the veins and so they are adjacent", but while this sounds like an explanation, it really is just a statement of observed fact. It doesn't "explain" anything. And even this observation is made from "outside" the block, an option not available with our universe.

If some sort of conscious intelligence exists within the speck patterns of the 2-D slices of the granite block (2-D because we've lost a dimension in our example...the third spatial dimension of the block will be time for these speck-beings), then who knows whether they will even be conscious of being made from specks of granite and of existing within this granite block with its grey and white veins. Maybe the speck-patterns that they are formed from will be such that their experience is of living in a 3+1 dimensional world such as ours. But regardless, there can be no explanation as to why their experiences are what they are. Their experiences will be as uncaused as the existence of the block whose speckled nature gives rise to those experiences.

So physicalism in fact offers no advantage over just asserting that our conscious experience just exists. Why are my perceptions orderly and why are my predictions about what will happen next usually correct? Because that's just the way it is...and this is true whether you posit an external universe or just conclude that conscious experience exists uncaused.

Monday, July 12, 2010

Quentin Meillassoux on Sufficient Reason and Non-Contradiction

In his book “After Finitude”, he explains that the principle of facticity (which he also refers to as “the principle of unreason”) stands in contrast to Leibniz’s “Principle of Sufficient Reason”, which states that anything that happens does so for a definite reason.

From pg. 33 of After Finitude:

“But we also begin to understand how this proof [the ontological proof of God] is intrinsically tied to the culmination of a principle first formulated by Leibniz, although already at work in Descartes, viz., the principle of sufficient reason, according to which for every thing, every fact, and every occurrence, there must be a reason why it is thus and so rather than otherwise.

For not only does such a principle require that there be a possible explanation for every worldly fact; it also requires that thought account for the unconditioned totality of beings, as well as for their being thus and so. Consequently, although thought may well be able to account for the facts of the world by invoking this or that global law - nevertheless, it must also, according to the principle of reason, account for why these laws are thus and not otherwise, and therefore account for why the world is thus and not otherwise. And even were such a ‘reason for the world’ to be furnished, it would yet be necessary to account for this reason, and so on ad infinitum.

If thought is to avoid an infinite regress while submitting to the principle of reason, it is incumbent upon it to uncover a reason that would prove capable of accounting for everything, including itself - a reason not conditioned by any other reason, and which only the ontological argument is capable of uncovering, since the latter secures the existence of an X through the determination of this X alone, rather than through the determination of some entity other than X - X must be because it is perfect, and hence causa sui, or sole cause of itself.

If every variant of dogmatic metaphysics is characterized by the thesis that *at least one entity* is absolutely necessary (the thesis of real necessity) it becomes clear how metaphysics culminates in the thesis according to which *every* entity is absolutely necessary (the principle of sufficient reason). Conversely, to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the principle of sufficient reason, as well as the ontological argument, which is the keystone that allows the system of real necessity to close in upon itself. Such a refusal enjoins us to maintain that there is no legitimate demonstration that a determinate entity should exist unconditionally.”



As to the principle of non-contradiction:

Pg. 60:

“We are no longer upholding a variant of the principle of sufficient reason, according to which there is a necessary reason why everything is the way it is rather than otherwise, but rather the absolute truth of a *principle of unreason*. There is no reason for anything to be or to remain the way it is; everything must, without reason, be able not to be and/or be other than it is.

What we have here is a principle, and even, we could say, an anhypothetical principle; not in the sense in which Plato used this term to describe the Idea of the Good, but rather in the Aristotelian sense. By ‘anhypothetical principle’, Aristotle meant a fundamental proposition that could not be deduced from any other, but which could be proved by argument. This proof, which could be called ‘indirect’ or ‘refutational’, proceeds not by deducing the principle from some other proposition - in which case it would no longer count as a principle - but by pointing out the inevitable inconsistency into which anyone contesting the truth of the principle is bound to fall. One establishes the principle without deducing it, by demonstrating that anyone who contests it can do so only by presupposing it to be true, thereby refuting him or herself. Aristotle sees in non-contradiction precisely such a principle, one that is established ‘refutationally’ rather than deductively, because any coherent challenge to it already presupposes its acceptance. Yet there is an essential difference between the principle of unreason and the principle of non-contradiction; viz. what Aristotle demonstrates ‘refutationally’ is that no one can *think* a contradiction, but he has not thereby demonstrated that contradiction is absolutely impossible. Thus the strong correlationist could contrast the facticity of this principle to its absolutization - she would acknowledge that she cannot think contradiction, but she would refuse to acknowledge that this proves its absolute impossibility. For she will insist that nothing proves that what is possible in-itself might not differ toto caelo from what is thinkable for us. Consequently the principle of non-contradiction is anhypothetical with regard to what is thinkable, but not with regard to what is possible.”


Continuing on pg. 77:

“It could be objected that we have conflated contradiction and inconsistency. In formal logic, an ‘inconsistent system’ is a formal system all of whose well-formed statements are true. If this formal system comprises the operator of negation, we say that an axiomatic is inconsistent if *every* contradiction which can be formulated within it is true. By way of contrast, a formal system is said to be non-contradictory when (being equipped with the operator of negation) it does not allow *any* contradiction to be true. Accordingly, it is perfectly possible for a logical system to *be* contradictory without thereby being inconsistent - all that is required is that it give rise to *some* contradictory statements which are true, without permitting *every* contradiction to be true. This is the case with ‘paraconsistent’ logics, in which some but not all contradictions are true. Clearly then, for contemporary logicians, it is not non-contradiction that provides the criterion for what is thinkable, but rather inconsistency. What every logic - as well as every logos more generally - wants to avoid is a discourse so trivial that it renders every well-formulated statement, as well as its negation, equally valid. But contradiction is logically thinkable so long as it remains ‘confined’ within limits such that it does not entail the truth of every contradiction.”

Sunday, July 11, 2010

Determinism vs. Indeterminism

Ultimately I think the difference between deterministic and indeterministic laws is not significant.

If a physical law is deterministic then under its influence Event A will "cause" Result X 100% of the time.

Why does Event A always lead to Result X? Because that's the law. There is no deeper reason.

If a physical law is indeterministic, then under its influence Event B will "cause" Result Q, R, or S according to some probability distribution.

Let's say that the probability distribution is 1/3 for each outcome.

If Event B leads to Result R, why does it do so? Because that's the law. There is no deeper reason.

Event A causes Result X 100% of the time.

Event B causes Result R 33.3333% of the time.

Why? There is no reason. That's just the way it is.

Determinism could be seen as merely a special case of indeterminism...the case where all probabilities are set to either 0% or 100%.

So even if we are in a universe with indeterministic laws, this doesn’t have any major impact on what metaphysical conclusions we arrive at. Even assuming indeterministic physicalism, there are still initial conditions and there are still laws - the laws just have an intrinsically probabilistic aspect.

These probabilistic laws are like the rules of a card game that includes a certain amount of randomness...for instance, requiring occasional random shuffling of the deck. But the number of cards, the suits, the ranks, and the rules of the game themselves are not random...those aspects are determined.

Similarly, in quantum mechanics using the Schrödinger equation, the evolution of the wavefunction describing the physical system is taken to be deterministic, with only the "collapse" process introducing an indeterministic aspect.

But as with the card example, the impact of this random aspect is limited in scope. No matter how random it gets, it doesn't change the rules of the game. No matter how randomly the deck is shuffled, you still only ever have 52 cards, 4 suits, and 13 ranks. The randomness is constrained by the deterministic aspects of the game.

Another example of constrained indeterminism is a computer program that uses randomness, for instance Randomized Quicksort. No matter what pivots you randomly select, the algorithm is still going to correctly sort your list. At worst, it will take longer than usual. Because the randomness of the pivot selection is constrained by the context provided by the deterministic aspects of the program.
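A sketch of the algorithm in Python (standard textbook Randomized Quicksort, simplified to return a new list):

```python
import random

def randomized_quicksort(items):
    # The pivot choice is random, so each run takes a different path.
    # But the deterministic structure of the algorithm constrains the
    # outcome: every run ends with the same sorted list.
    if len(items) <= 1:
        return items
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 3, 8, 1, 9, 2]))  # [1, 2, 3, 5, 8, 9], every run
```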

The same goes for our universe in the indeterministic case. The randomness of indeterminism only increases the probability of the existence of conscious life that discovers something true about the underlying nature of the universe *IF* the initial conditions and the non-random aspect of the causal laws allow for this to be the case.

Is it possible that our causal laws are such that any given starting conditions (with respect to the distribution of energy and/or matter) eventually lead to conscious life that knows true things about the universe?

Here we return to our analogy of the quicksort algorithm, which can start with any randomly arranged list and always produce a sorted list from it.

Note, though, that the quicksort algorithm is a very, very special algorithm. If you just randomly generate programs and try to run them, the probability of getting one that will correctly sort any unordered list is very low compared to the probability of getting a program that won't do anything useful at all, or sorts the list incorrectly, or will only correctly sort lists with special starting orders, or sorts the list but does so very inefficiently.

Equivalently, if you just randomly choose sets of causal laws from a list of all possible combinations, the probability of selecting a set of laws that can start from almost any random arrangement of matter and from that always produce conscious life that perceives true things about the laws that gave rise to it must also be very low.

The "no miracles" argument against scientific realism

By "no miracles" I'm referring to Hilary Putnam's observation:

“The positive argument for realism is that it is the only philosophy that doesn't make the success of science a miracle”

Let's assume that our best scientific theories tell us something true about the way the world *really* is, in an ontological sense. And further, for simplicity, let's assume a deterministic interpretation of those theories.

In this view, the universe as we know it began ~13.7 billion years ago. We'll set aside any questions about what, if anything, preceded the first instant and just draw a line there and call that our "initial state".

Given the specifics of that initial state, plus the particular causal laws of physics that we have, the universe can only evolve along one path. The state of the universe at this moment is entirely determined by two, and only two, things: its initial state and its causal laws.

But this means that the development of our scientific theories *about* the universe was also entirely determined by the initial state of the universe and its causal laws. Our discovery of the true nature of the universe has to have been "baked into" the structure of the universe in its first instant.

By comparison, how many sets of possible initial states plus causal laws are there that would give rise to conscious entities who develop *false* scientific theories about their universe? It seems to me that this set of "deceptive" universes is likely much larger than the set of "honest" universes.

What would make universes with honest initial conditions + causal laws more probable than deceptive ones? For every honest universe it would seem possible to have an infinite number of deceptive universes that are the equivalent of "The Matrix" - they give rise to conscious entities who have convincing but incorrect beliefs about how their universe really is. These entities' beliefs are based on perceptions that are only illusions, or simulations (naturally occurring or intelligently designed), or hallucinations, or dreams.

It seems to me that it would be a bit of a miracle if it turned out that we lived in a universe whose initial state and causal laws were such that they gave rise to conscious entities whose beliefs about their universe were true beliefs.

A similar argument can also be made if we choose an indeterministic interpretation of our best scientific theories (e.g., quantum mechanics), though it involves a few extra steps.

Infinity and Probability

How about this:

Let's assume we have an infinitely long array of squares. And a fair six-sided die.

We roll the die an infinite number of times and write each roll's number into a square.

When we finish, how many squares have a "1" written in them? An infinite number, right?

How many squares have an even number written in them? Also an infinite number.

How many squares have a number OTHER than "1" written in them? Again, an infinite number.

Therefore, the squares with "1" can be put into a one-to-one correspondence with the "not-1" squares...correct?

Now, while we have this one-to-one correspondence between "1" and "not-1" squares set up, let's put a sticker with an "A" on it in the "1" squares. And a sticker with a "B" on it in the "not-1" squares. We'll need the same number of "A" and "B" stickers, obviously: aleph-null of each.

So, if we throw a dart at a random location on the array of squares, what is the probability of hitting a square with a "1" in it?

What is the probability of hitting a square with an "A" sticker?

The two questions don't have compatible answers, right? So, in this scenario, probability is useless. It just doesn't apply. You should have no expectations about either outcome.

BUT. NOW. Let's erase the numbers and remove the stickers and start over.

This time, let's just fill in the squares with a repeating sequence of 1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,...

And then, let's do our same trick about putting the "1" squares into a one-to-one mapping with the "not-1" squares, and putting an "A" sticker on the "1" squares, and a "B" sticker on the "not-1" squares.

Now, let's throw a dart at a random location on the array of squares. What is the probability of hitting a square with a "1" in it?

What is the probability of hitting a square with an "A" sticker on it?

THIS time we have some extra information! There is a repeating pattern to the numbers and the stickers. No matter where the dart hits, we know the layout of the area. This is our "measure" that allows us to ignore the infinite aspect of the problem and apply probability.

For any area the dart hits, there will always be an equal probability of hitting a 1, 2, 3, 4, 5, *or* 6. As you'd expect. So the probability of hitting a square with a "1" in it is 1/6, or ~16.67%.

Any area where the dart hits will have a repeating pattern of one "A" sticker followed by five "B" stickers. So the probability of hitting an "A" sticker is also 1/6, or ~16.67%.

The answers are now compatible, thanks to the extra "structural" information that gave us a way to ignore the infinity.
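
Here's a minimal sketch of that "structural" reasoning in Python, using the limiting frequency (natural density) over a long finite stretch of the array as a stand-in for the infinite case:

```python
from itertools import cycle, islice

def density(squares, target, n):
    """Fraction of the first n squares equal to target: a finite
    approximation of the pattern's natural density."""
    return sum(1 for x in islice(squares, n) if x == target) / n

numbers = cycle([1, 2, 3, 4, 5, 6])               # the repeating numbers
stickers = cycle(["A", "B", "B", "B", "B", "B"])  # "A" on each "1" square

print(density(numbers, 1, 600_000))     # 0.1666... ~ 1/6
print(density(stickers, "A", 600_000))  # 0.1666... ~ 1/6: compatible answers
```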

In other words, you can't apply probability directly to an infinite set, but you can apply it to the *structure* of an infinite set.

If the infinite set has no structure, then you're out of luck. At best you can talk about the method used to generate the infinite set...but if this method involves randomness, it's not quite the same thing.

Entropy and Memory

It’s overwhelmingly probable that all of your memories are false.

Consider:

Entropy is a measure of the disorder of a system. The higher the entropy, the higher the disorder.
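
Boltzmann’s standard formula makes “disorder” precise by counting: the entropy S of a macroscopic state is proportional to the logarithm of the number of microscopic arrangements Ω consistent with it,

$$S = k_B \ln \Omega$$

where k_B is Boltzmann’s constant. The more ways there are to realize a state, the higher its entropy.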

If a deck of cards is ordered by suit, and within each suit by ascending rank, then that’s a low entropy state. This is because out of the 8.06 × 10^67 (i.e., 52!) possible unique arrangements of the cards in a standard 52-card deck, there are only 24 that fit that particular description (one for each of the 4! possible orderings of the suits).

A “random looking” arrangement of the deck is a high entropy state, because the overwhelming majority of those 52! unique arrangements fit the description of looking “randomly shuffled”.
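
To put numbers on the card example, here’s a quick sketch in Python, measuring the entropy gap in units where k_B = 1:

```python
import math

total = math.factorial(52)        # all orderings of a 52-card deck: ~8.066e+67
sorted_decks = math.factorial(4)  # ordered by suit, then ascending rank:
                                  # one arrangement per ordering of the 4 suits = 24

print(f"total arrangements:  {total:.3e}")
print(f"sorted arrangements: {sorted_decks}")
# Boltzmann-style entropy difference, taking k_B = 1:
print(f"entropy gap: ln(total / sorted) = {math.log(total / sorted_decks):.1f}")
```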

Same with the egg. There are (relatively) few ways to arrange the molecules of an egg that will result in it looking unbroken, compared to the huge number of ways that will result in it looking broken. SO, unbroken egg…low entropy. Broken egg…high entropy.

AND the same with the universe…there are (again, relatively) few ways to arrange the atoms of the universe in a way that makes it resemble what we see with people and trees and planets and stars and galaxies, compared with the gargantuan number of ways to arrange things so that it resembles a generic looking cloud of dust.

OKAY. Now.

Of the relatively few ways that the elementary particles of the universe can be arranged so as to resemble what we see around us today, only a tiny fraction of those particle arrangements will have values for momentum and position that are consistent with them having arrived at that state 13.7 billion years after something like the Big Bang.

The vast majority of the particle arrangements that macroscopically resemble the world around us will *instead* have particles in states (e.g., with positions and velocities) that are consistent with the particles having previously been in something more like a giant dust cloud.

By which I mean: If we take their current positions and velocities, and work backwards to see where they came from, and go back far enough in time, eventually we will not arrive at the Big Bang. Instead we will arrive at a state resembling a giant dust cloud (probably a very thin, spread-out dust cloud).

SO, bottom line:

Out of all the possible configurations that the universe could be in, ones that have people, and planets, and stars, and galaxies are extremely rare.

Further, even if we then only consider those extremely rare possible configurations that have people, and planets, and stars, and galaxies – the ones with particles in states (e.g., with positions and velocities) that are consistent with having arrived at this configuration 13.7 billion years after something like the Big Bang are STILL rare.

We don’t know the exact state of our universe’s particles, but in statistical mechanics the Principle of Indifference requires us to treat all possible microscopic states that are consistent with our current macroscopic state as equally likely.

So given all of the above, and our current knowledge of the laws of physics, the most likely explanation is that all of your current memories are false and that yesterday the universe was in a HIGHER state of entropy, not a lower state (as would be required by any variation of the Big Bang theory).

Physical systems in low entropy states are very rare, by definition. So it’s very improbable (but not impossible) that the unlikely low entropy state of today’s universe is the result of having evolved from an EVEN MORE UNLIKELY lower entropy universe that existed yesterday.

Instead, statistically it’s overwhelmingly more probable that the unlikely low entropy state of the universe today is the result of a random fluctuation from a HIGHER entropy universe that existed yesterday.
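
A rough way to put numbers on “overwhelmingly more probable”, assuming (per the Principle of Indifference above) that a macrostate’s probability is proportional to its count of compatible microstates, Ω = e^(S/k_B):

$$\frac{P(\text{lower-entropy yesterday})}{P(\text{higher-entropy yesterday})} \sim e^{(S_{low} - S_{high})/k_B} \ll 1$$

For anything macroscopic, entropy differences are astronomically large in units of k_B, so this ratio is fantastically small. That is the sense in which the fluctuation story wins.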

And thus your memories of a lower entropy yesterday are most likely due to this random fluctuation, not due to yesterday actually having had a lower entropy than today.

[Based on my reading of Sean Carroll's book "From Eternity to Here"]