Showing posts with label Science. Show all posts

Saturday, January 24, 2015

Evolution, Consciousness, and Existence

It is self-evident that there are conscious experiences.  However, what consciousness *is* - its ultimate nature - is not self-evident.  Further, what any particular conscious experience “means” is also not self-evident.


For example:  The experience of color is directly known and incontrovertible.  But what the experience of color *means* is not directly known - any proposed explanation is inferential and controvertible.


We do not have direct access to meaning.


We only have direct access to bare uninterpreted conscious experience.


So - any attempted explanation of consciousness from the outside (i.e., objectively) must be constructed from inside consciousness, by conscious processes, on a foundation of conscious experience.


Not a promising situation - because any explanation must be based entirely on conscious experiences which have no intrinsic meaning, and arrived at via conscious processes which are equally lacking in intrinsic meaning.


It “seems” like we could just stop here and accept that things are what they are.  And what else do we have other than the way things “seem”?  I experience what I experience - nothing further can be known.


HOWEVER - while we could just stop there - most of us don’t.  


For most of us, it seems that non-accepting, questioning, doubting, believing, disbelieving, desiring, grasping, wanting, unsatisfied conscious experiences just keep piling up.


Why is this?


Well - it seems like there is either an explanation for this - or it is just a brute fact that has no explanation.


If there is no explanation, then we should just accept our non-acceptance, our non-stoppingness, and let it go.  Or not.  Doesn’t matter.


Alternatively, if there is an explanation - then there are two options:


  1. The explanation is not accessible to us because our conscious experiences do not “point” towards the truth of the way things are.
  2. The explanation is accessible to us, because our conscious experiences *do* point towards the truth of the way things are.


Again, if we believe that option 1 is correct, we can just stop.  Or not.  It doesn’t matter.


So - let’s *provisionally* assume that option 2 is correct.


I say “provisionally” instead of “axiomatically” because we will revisit this assumption.  Once we’ve gone as far as we can in working out the implications of it being true - we will return to this assumption and see if it still makes sense in light of where we ended up.


At this point I am willing to grant that modern science provides the best methodology for translating (extrapolating?) from our truth-pointing conscious experiences to models that represent the accessible parts of how things “really” are.  


To the extent that anything can be said about how things really are “outside of” conscious experience - science says it.


But we never have direct access to the truth - all we have are our models of the truth, which (hopefully) improve over time as we distill out the valid parts of our truth-pointing conscious experiences.


Okay - now, having said all of that - what models has modern science developed?  Apparently there are two fundamental theories:  General Relativity and Quantum Field Theory.


From Wikipedia:


GR is a theoretical framework that only focuses on the force of gravity for understanding the universe in regions of both large-scale and high-mass: stars, galaxies, clusters of galaxies, etc. On the other hand, QFT is a theoretical framework that only focuses on three non-gravitational forces for understanding the universe in regions of both small scale and low mass: sub-atomic particles, atoms, molecules, etc. QFT successfully implemented the Standard Model and unified the interactions between the three non-gravitational forces: weak, strong, and electromagnetic force.


Through years of research, physicists have experimentally confirmed with tremendous accuracy virtually every prediction made by these two theories when in their appropriate domains of applicability. In accordance with their findings, scientists also learned that GR and QFT, as they are currently formulated, are mutually incompatible - they cannot both be right. Since the usual domains of applicability of GR and QFT are so different, most situations require that only one of the two theories be used.  As it turns out, this incompatibility between GR and QFT is only an apparent issue in regions of extremely small-scale and high-mass, such as those that exist within a black hole or during the beginning stages of the universe (i.e., the moment immediately following the Big Bang).


Now - in addition to those two fundamental theories, we have other higher level theories, which are in principle reducible to GR+QFT.  Chief among these is the Theory of Evolution.  Wikipedia again:


Evolution – change in heritable traits of biological organisms over generations due to natural selection, mutation, gene flow, and genetic drift. Also known as descent with modification.


So - ultimately evolution reduces to GR+QFT as applied to some set of initial conditions (IC) that existed approximately 14 billion years ago.


I introduce evolution here because it explains how relatively complex “entities” such as human beings can “arise” from relatively simple initial conditions.  All that is required is for GR+QFT to support the existence of patterns in matter such that:


(1) The patterns vary in structure, in function, or in behaviour.


(2) The likelihood of continuance (i.e. survival of the original or the production of copies) of a pattern depends upon the variations in (1).


(3) A pattern’s characteristics are transmitted during reproduction so that there is some correlation between the nature of original patterns and their copies.


Given that GR+QFT satisfy these requirements, it is possible to picture how the right set of initial conditions (IC) can lead to simple replicators gradually evolving into more complex replicators like humans.
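The three requirements above are enough for selection to operate - something a toy simulation can illustrate (a hypothetical sketch, not anything from physics; the “environment” value, mutation size, and population size are arbitrary illustrative choices):

```python
import random

random.seed(0)

TARGET = 0.8  # arbitrary "environment": patterns closer to this value survive better

def fitness(x):
    # (2) likelihood of continuance depends on the pattern's variation
    return 1.0 - abs(x - TARGET)

def reproduce(x, mutation=0.05):
    # (3) copies correlate with the original; (1) small variations arise
    return min(1.0, max(0.0, x + random.uniform(-mutation, mutation)))

# start from a "simple" initial condition: identical patterns
population = [0.2] * 50

for generation in range(200):
    # survival is probabilistic, in proportion to fitness
    survivors = [x for x in population if random.random() < fitness(x)]
    if not survivors:
        break
    # survivors reproduce (with variation) back up to the population size
    population = [reproduce(random.choice(survivors)) for _ in range(50)]

avg = sum(population) / len(population)
print(round(avg, 2))  # drifts from 0.2 toward TARGET over the generations
```

Nothing here “does work” beyond the update rules themselves - yet the population reliably ends up matched to its environment, which is the conceptual bridge that the word “evolution” provides.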


In this picture, human ability and behavior doesn’t arise suddenly out of a vacuum - rather it gradually develops from simpler behaviors.  


So there is a continuum from the simple to the complex.  From prions, viruses, and bacteria to Tetrabaena socialis and Caenorhabditis elegans to insects, fish, reptiles, mammals, apes, chimpanzees, and (most complex of all) humans.


Note that “evolution” doesn’t do any real work here.  GR+QFT+IC do all of the work.  Every aspect of evolution “reduces” to some aspect of GR+QFT+IC.


Any state of matter or change in the state of matter, including “living” matter, is explicable in terms of GR+QFT+IC.


Evolution just provides a conceptual bridge between the fundamental laws and entities of physics and the abstract higher level “patterns” that we more immediately perceive in our conscious experience - like plants, animals, etc.


Further note that computers are also complex patterns of matter - and their behaviors and abilities are reducible to and based in GR+QFT+IC, just like everything else.  It is only the patterns that are different, not the underlying principles.  Computers are a moderately complex by-product of human evolution and human selection - and not directly acted on by evolution and natural selection.  But their patterns may yet become complex enough to survive and evolve without further human involvement.


Now - given all that:  why do humans have the behaviors and abilities that they have?   Why are we “this way” instead of “some other way”?


Evolution says that we behave the way we do and have the abilities that we have because those behaviors and abilities are part of the patterns that have most successfully survived and reproduced inside the system described by GR+QFT+IC.


We have our behaviors and abilities because they “work” (or at least have worked in the past) to enable survival and reproduction.  However - they do no actual work because any change in any state of matter is ultimately due to GR+QFT+IC - which do all of the real work.  Talk of “behaviors” and “abilities” is another type of bridge between what exists - GR+QFT+IC - and what we perceive - behavior.


Why do we engage in philosophy, mathematics, and science?  Why do we concern ourselves with ethics and political theory?  These activities are all just aspects of the set of evolved patterns that constitute the human species.  We do these things because they are the inevitable manifestations of the survival and replication of patterns of matter whose state changes are governed by GR+QFT+IC.


Note that the question of free will is ultimately about the causes of behavior.  GR+QFT+IC+Evo fully address the question of why we behave as we do, without the need for anything like free will.


So - why punish or reward people if they are not “free” of GR+QFT+IC+Evo?  


Because if you “want” to change their behavior, this is what works.  Most animals, including humans, will change their behavior in response to circumstances that either threaten or improve their ability to survive and reproduce.  


Why?  Because the evolution of the patterns that these animals consist of has resulted in flexible and adaptable (though still reductionistically mechanistic) behaviors under a wide variety of circumstances.


And that’s all there is to it.  It is useless to punish or reward animals whose patterns are not sufficiently flexible to change behaviors in response.  The punishment or reward should be selected to match the animal’s inventory of adaptive responses.  


The point is not the reward or the punishment.  These are just means to an end.  The point is the desired change in behavior (in either the animal being administered to, or other animals who may be encouraged or deterred by what they observe).


Further note that why you “want” to change another animal’s behavior is also explicable within the framework of GR+QFT+IC+Evo.


Next we will consider how conscious experience fits into GR+QFT+IC.


It is certainly true that my experience of consciousness and my conception of GR+QFT+IC do not overlap.  For example - my experience of seeing the color yellow does not overlap with my mental conception of the photons, quarks, electrons, retinas, neurons, and visual cortices that are described by the GR+QFT+IC framework.


However - GR+QFT+IC *does* seem to provide a satisfying explanation of the *mechanics* of how I detect, process, and represent color, and evolution explains why I have the “ability” to see color.


Even so - there is still an unsatisfying “conceptual gap” between my experience of color and my understanding of the physics of color.


How can we explain this gap?


One possibility is to claim that “future science” will close the gap for us.  However - I doubt that this is true because GR+QFT is already so successful in explaining all observed behaviors of matter.  There is no promising theoretical gap in our understanding of the behavior of matter that matches up with the conceptual gap we feel exists between consciousness and matter.


So - I think a more promising approach is to show that the conceptual gap is more apparent than real.  The gap isn’t because we are missing the existence of some force or particle.  Rather the gap is due to us not looking at the existing facts in the right way.


In the GR+QFT+IC framework, our abilities and behaviors (including beliefs) have evolved because they “work” - not because they are necessarily truth-pointing.  


So our belief in an explanatory gap between our conscious experience and our conceptual model of reality *is* necessarily a result of our evolution.


We have evolved to cognitively conceptualize reality in one way (GR+QFT+IC) and we have evolved to represent our direct *experience* of reality in another way (colors, feelings, sensations) - and because there has been no evolutionary pressure to synchronize these two views, we haven’t - and so the perceived mismatch is a kind of cognitive illusion.


Perhaps, as it turns out, that conscious experience just *does* accompany certain kinds of patterns in matter and that’s all that there is to it.  The fact that this seems odd to us is just a quirk of our cognitive evolution.  Maybe it would seem otherwise with minor changes to our evolved matter patterns - but there is no evolutionary pressure pushing in this direction, so we have not gone in that direction.


In this view - conscious experience is an aspect of patterns of matter - and thus just an aspect of matter - and our intuition that it is something *other* than matter is just an accident of evolutionary history.


  1. Belief is a state of mind.
  2. States of mind are just brain states.
  3. Brain states are just patterns of matter.
  4. Patterns of matter are just matter.
  5. Matter is just GR+QFT+IC.
  6. The fact that there *seems* to be an unsatisfying epistemic gap in step #2 is just an accident of history stemming from GR+QFT+IC.  In fact, the step in #2 is no less valid than the steps in #3 or #4, both of which seem pretty unobjectionable.


When I wear my physicalist hat, this is basically the position that I take.  


SO - we have come full circle.  


  1. We started with the assumption that our conscious experience was “truth-pointing”.  
  2. We granted that modern science is the best way to distill out the truthful aspect of conscious experience.  
  3. We summarized how modern science explains human behavior and ability.
  4. We discussed how that explanation of human behavior and ability could result in an apparent conceptual gap between GR+QFT+IC and our conscious experience.
  5. We proposed a solution to this conceptual gap.


Now - given all of this - given where we ended up - let’s revisit our assumption in #1.  


Does the model of the world that modern science has constructed give us more or less confidence that our conscious experience is, in fact, “truth-pointing”?


And the answer is:  less.  In this framework, consciousness is a product of evolution - and evolution only concerns itself with what promotes survivability and reproductive success - not with what is true.  So GR+QFT+IC+Evo supports the belief that our conscious experience is *useful* in that sense - but not that it is truth-pointing.


However - if we change our starting assumption from:


  1. Conscious experience is truth-pointing


to


  1. Conscious experience is survival/reproduction-enabling.


Then we are on more consistent ground.  Then we can assert that modern science is the best way to distill out the survival-enabling aspects of our conscious experience, and that the most useful model of reality for enabling survival is GR+QFT+IC+Evo.


Which actually makes some sense...


I initially claimed that conscious experience had no directly accessible intrinsic meaning.  A conscious experience just is what it is.  Only by fitting it into a larger narrative framework does any particular conscious experience acquire meaning.


However - the narrative framework of GR+QFT+IC also lacks any ultimate meaning.


My experience of seeing yellow “means” that there are particular patterns of photons, quarks, and electrons - but what do these patterns mean?  Nothing!  They don’t mean anything beyond themselves - they just are what they are.  


So - assuming that there is something beyond conscious experience which we can know “through” conscious experience still leaves us with an ultimately meaningless reality.


Reversing the order of our earlier list:


  1. There is no larger meaning or purpose behind GR+QFT+IC+Evo.
  2. Matter is just GR+QFT+IC.
  3. Patterns of matter are just matter.
  4. Brain states are just patterns of matter.
  5. States of mind are just brain states.
  6. Consciousness is just states of mind.
  7. There is no larger meaning or purpose behind Consciousness.


IN SUMMARY:


  1. Consciousness is the fundamental fact.
  2. The fact of consciousness is directly known.
  3. The fact of consciousness is the only directly known fact.
  4. The contents of consciousness are experienced but are without intrinsic meaning.
  5. It is reasonable to stop here.
  6. Most of us do not stop there.
  7. Either there is a reason that we do not stop there, or there is not.
  8. If we believe there is not, we can stop here.
  9. If we believe that there is a reason, this reason is either accessible or it is not.
  10. For it to be accessible, conscious experience must be “truth-pointing”.
  11. If conscious experience is not “truth-pointing” then we might as well stop here.
  12. If we assume that it is truth-pointing, modern science provides the best way to distill out the truthful aspects of experience.
  13. Science ultimately leads us to GR+QFT+IC+Evo.
  14. GR+QFT+IC+Evo does not concern itself with truth - only with survival and reproduction.
  15. Our assumption that consciousness is truth-pointing must be weakened to “consciousness is survival-enabling”.
  16. GR+QFT+IC+Evo is ultimately as without intrinsic meaning as bare conscious experience.
  17. Therefore, it doesn’t really matter whether we stop at #5, #8, #11, or #16.

Monday, July 19, 2010

Re: The Irrationality of Physicalism

A response:

"And in either case the counter argument is the same, c.f. "The Evolution of Reason" by William S. Cooper."

AND, my response to the response:


Maybe. But it’s not a very good counter argument.

A long-ish response, but there are several quotes from the book that add up in length.

So logic reduces to biology. Fine. And biology reduces to...what? Initial conditions and causal laws, that’s what.

So, from the "The Evolution of Reason":

“Evolution is not the law enforcer but the law giver - not so much a police force as a legislature. The laws of logic are not independent of biology but implicit in the very evolutionary processes that enforce them. The processes determine the laws.

If the latter understanding is correct, logical rules have no separate status of their own but are theoretical constructs of evolutionary biology. Logical theory ought then in some sense to be deducible entirely from biological considerations. The concept of scientific reduction is helpful in expressing that thought. In the received methodological terminology the idea of interest can be articulated as the following hypothesis.

REDUCIBILITY THESIS: Logic is reducible to evolutionary theory.”


So obviously evolution is not a law enforcer or a law giver. It isn’t a causal law, but rather a consequence of causal laws.

Cooper claims that logic reduces to evolutionary theory. And what does evolutionary theory reduce to? Initial conditions and fundamental causal laws acting on fundamental entities.

Assuming physicalism, the causal laws of our universe applied to a suitable set of initial conditions will, in time, exhibit features that we categorize as “evolutionary”. Some of these evolutionary processes may give rise to entities that have conscious experiences, and some of those conscious experiences will be of holding this, that, or the other beliefs about logic. But those beliefs are a result of fundamental laws acting on fundamental entities, and not associated with any sort of independently existing platonic standard of “logical reasoning”.

This is the gist of my post, and seems to be the main gist of his book. We do part company eventually though. I’ll save that part for last.

Continuing:

“‘How do humans manage to reason?’ Since the form of this question is the same as that of the first, it would be natural to attack it in a similar two-pronged fashion. [...] Somewhere in the latter part there would be talk of selective forces acting on genetic variation, of fitness, of population models, etc. [...] The laws of Reason should not be addressed independently of evolutionary theory, according to the thesis. Reasoning is different from all other adaptations in that the laws of logic are aspects of the laws of adaptation themselves. Nothing extra is needed to account for logic - only a drawing out of the consequences of known principles of natural selection.”

Selective forces? What would have caused those selective forces? What do these selective forces reduce to? Why these selective forces instead of some others?

Natural selection? Well, there are causally neutral “filters” (metaphorically speaking), but these metaphorical filters are as much a consequence of the universe’s initial conditions and causal laws as the organisms that are (metaphorically) selected.

Evolution is a consequence of causal laws, not a causal law itself. In this it is like the first law of thermodynamics - which is a consequence of the time invariance of the causal laws, not a causal law itself. Evolution and the first law of thermodynamics are descriptions of how things are, not explanations.

So as I said, if physicalism is true then the arguments that we present and believe are those entailed by the physics that underlies our experiences, and by nothing else.

In this view, evolution is also just a manifestation of those same underlying physical forces. And logic is merely an aspect of the experiences generated by the more fundamental activities of quarks and electrons.

In this vein, he says:

“If evolutionary considerations control the relevant aspects of decision behavior, and these determine in turn the rest of the machinery of logic, one can begin to discern the implicative chain that makes Reducibility Theory thinkable.

[...]

If the evolutionary control over the logic is indeed so total as to constrain it entirely, there is no need to perpetuate the fiction that logic has a life of its own. It is tributary to the larger evolutionary mechanism.”


All we have to do is add that the universe’s initial conditions and causal laws control the evolutionary considerations, and my point is practically made.

The main point of contention between my argument and Cooper’s is:

“In this way the general evolutionary tendency to optimize fitness turns out to imply, in and of itself, a tendency for organisms to be rational. Once this is shown there is no need to look for the source of logical principles elsewhere, for the logical behavior is shaped directly by the evolutionary forces acting on their own behalf. Because the biological processes expressed in the population models wholly entail the logical rules, and are sufficient to predict and explain rational behavior, no separate account of logic is needed.”

Optimize fitness? Again, evolution isn’t something imposed from outside the system, and it’s not a causal law. If the fitness of some group is optimized over time, that’s just a consequence of the system’s initial conditions and causal laws.

In a deterministic system, the rise of that group was destined to happen. In an indeterministic system, the rise of that group was a result of the interplay between the initial conditions, the deterministic part of causal framework, and the outcome of the random coin flips.

So, he seems to imply that initial conditions and causal laws must give rise to rational actors. But as he says, there is no independent standard of rationality. Rationality is relative to the rules of the particular physical system. So the behaviors that a system most commonly gives rise to are, by definition, “rational”.

So “rational” is a meaningless label. In his formulation above it just means “whatever ends up being the most commonly manifested behaviors.”

But it’s not commonly manifested because it’s rational. Rather, it’s labeled rational because it’s commonly manifested.

Saturday, July 17, 2010

Ultimate Explanations of the Universe

An excellent book by Michael Heller.

Quotes:

"The tendency to pursue 'ultimate explanations' is inherent in the mathematical and experimental method in yet another way (and another sense). Whenever the scientist faces a challenging problem, the scientific method requires him to never give up, never seek an explanation outside the method. If we agree - at least on a working basis - to designate as the universe everything that is accessible to the mathematical and experimental method, then this methodological principle assumes the form of a postulate which in fact requires that the universe be explained by the universe itself. In this sense scientific explanations are 'ultimate,' since they do not admit of any other explanations except ones which are within the confines of the method.

However, we must emphasise that this postulate and the sense of 'ultimacy' it implies have a purely methodological meaning, in other words they oblige the scientist to adopt an approach in his research as if other explanations were neither existent nor needed." - Michael Heller, The Totalitarianism of the Method.


====


"The longing to attain the ultimate explanation lingers in the implications of every scientific theory, even in a fragmentary theory of one part or aspect of the world. For why should only that part, that aspect of the world be comprehensible? It is only a part or an aspect of an entirety, after all, and if that entirety should be unexplainable, then why should only a tiny fragment thereof lend itself to explanation? But consider the reverse: if a tiny part were to elude explanation, it would leave a gap, rip a chasm, in the understanding of the entirety."


====


"Peter van Inwagen proposed a rather peculiar answer to the question why there exists anything at all. His reasoning is as follows. There may exist an infinite number of worlds full of diverse beings, but only one empty world. Therefore the probability of the empty world is zero, while the probability of a (non-empty) world is one.

This apparently simple reasoning is based on very strong and essentially arbitrary assumptions. First of all, that there may exist an infinite number of worlds (that they have at least a potential existence); secondly, that probability theory as we know it may be applied to them (in other words that probability theory is in a sense aprioristic with respect to these worlds); and thirdly, that they come into being on the principle of 'greater probability.' The following question may be put with respect to this mental construct: 'Why does it exist, rather than nothing?'"

Friday, July 16, 2010

The Irrationality of Physicalism

If Physicalism is true, then the belief in Physicalism can’t be rationally justified.

If physicalism is true, then our beliefs and experiences are a result of the universe’s initial conditions and causal laws (which may have a probabilistic aspect).

Therefore, assuming physicalism, we don’t present or believe arguments for reasons of logic or rationality. Instead, the arguments that we present and believe are those entailed by the physics that underlies our experiences.

It is *possible* that we live in a universe whose initial conditions and causal laws are such that our arguments *are* logical. But in a physicalist framework that’s not why we present or believe those arguments. The fact that the arguments may be logical is superfluous to why we make or believe them.

Obviously there’s nothing that says that our physically generated experiences and beliefs have to be true or logical. In fact, we have dreams, hallucinations, delusions, schizophrenics, and madmen as proof that there is no such requirement.

So arguing for physicalism is making an argument that states that no one presents or believes arguments for reasons of logic.

Note that the exact same argument can be applied to mathematical realism, or any other position that posits that consciousness is caused by or results from some underlying process.

Monday, July 12, 2010

Quentin Meillassoux on Sufficient Reason and Non-Contradiction

In his book “After Finitude”, he explains that the principle of facticity (which he also refers to as “the principle of unreason”) stands in contrast to Leibniz’s “Principle of Sufficient Reason”, which states that anything that happens does so for a definite reason.

From pg. 33 of After Finitude:

“But we also begin to understand how this proof [the ontological proof of God] is intrinsically tied to the culmination of a principle first formulated by Leibniz, although already at work in Descartes, viz., the principle of sufficient reason, according to which for every thing, every fact, and every occurrence, there must be a reason why it is thus and so rather than otherwise.

For not only does such a principle require that there be a possible explanation for every worldly fact; it also requires that thought account for the unconditioned totality of beings, as well as for their being thus and so. Consequently, although thought may well be able to account for the facts of the world by invoking this or that global law - nevertheless, it must also, according to the principle of reason, account for why these laws are thus and not otherwise, and therefore account for why the world is thus and not otherwise. And even were such a ‘reason for the world’ to be furnished, it would yet be necessary to account for this reason, and so on ad infinitum.

If thought is to avoid an infinite regress while submitting to the principle of reason, it is incumbent upon it to uncover a reason that would prove capable of accounting for everything, including itself - a reason not conditioned by any other reason, and which only the ontological argument is capable of uncovering, since the latter secures the existence of an X through the determination of this X alone, rather than through the determination of some entity other than X - X must be because it is perfect, and hence causa sui, or sole cause of itself.

If every variant of dogmatic metaphysics is characterized by the thesis that *at least one entity* is absolutely necessary (the thesis of real necessity) it becomes clear how metaphysics culminates in the thesis according to which *every* entity is absolutely necessary (the principle of sufficient reason). Conversely, to reject dogmatic metaphysics means to reject all real necessity, and a fortiori to reject the principle of sufficient reason, as well as the ontological argument, which is the keystone that allows the system of real necessity to close in upon itself. Such a refusal enjoins us to maintain that there is no legitimate demonstration that a determinate entity should exist unconditionally.”



As to the principle of non-contradiction:

Pg. 60:

“We are no longer upholding a variant of the principle of sufficient reason, according to which there is a necessary reason why everything is the way it is rather than otherwise, but rather the absolute truth of a *principle of unreason*. There is no reason for anything to be or to remain the way it is; everything must, without reason, be able not to be and/or be other than it is.

What we have here is a principle, and even, we could say, an anhypothetical principle; not in the sense in which Plato used this term to describe the Idea of the Good, but rather in the Aristotelian sense. By ‘anhypothetical principle’, Aristotle meant a fundamental proposition that could not be deduced from any other, but which could be proved by argument. This proof, which could be called ‘indirect’ or ‘refutational’, proceeds not by deducing the principle from some other proposition - in which case it would no longer count as a principle - but by pointing out the inevitable inconsistency into which anyone contesting the truth of the principle is bound to fall. One establishes the principle without deducing it, by demonstrating that anyone who contests it can do so only by presupposing it to be true, thereby refuting him or herself. Aristotle sees in non-contradiction precisely such a principle, one that is established ‘refutationally’ rather than deductively, because any coherent challenge to it already presupposes its acceptance. Yet there is an essential difference between the principle of unreason and the principle of non-contradiction; viz. what Aristotle demonstrates ‘refutationally’ is that no one can *think* a contradiction, but he has not thereby demonstrated that contradiction is absolutely impossible. Thus the strong correlationist could contrast the facticity of this principle to its absolutization - she would acknowledge that she cannot think contradiction, but she would refuse to acknowledge that this proves its absolute impossibility. For she will insist that nothing proves that what is possible in-itself might not differ toto caelo from what is thinkable for us. Consequently the principle of non-contradiction is anhypothetical with regard to what is thinkable, but not with regard to what is possible.”


Continuing on pg. 77:

“It could be objected that we have conflated contradiction and inconsistency. In formal logic, an ‘inconsistent system’ is a formal system all of whose well-formed statements are true. If this formal system comprises the operator of negation, we say that an axiomatic is inconsistent if *every* contradiction which can be formulated within it is true. By way of contrast, a formal system is said to be non-contradictory when (being equipped with the operator of negation) it does not allow *any* contradiction to be true. Accordingly, it is perfectly possible for a logical system to *be* contradictory without thereby being inconsistent - all that is required is that it give rise to *some* contradictory statements which are true, without permitting *every* contradiction to be true. This is the case with ‘paraconsistent’ logics, in which some but not all contradictions are true. Clearly then, for contemporary logicians, it is not non-contradiction that provides the criterion for what is thinkable, but rather inconsistency. What every logic - as well as every logos more generally - wants to avoid is a discourse so trivial that it renders every well-formulated statement, as well as its negation, equally valid. But contradiction is logically thinkable so long as it remains ‘confined’ within limits such that it does not entail the truth of every contradiction.”

Sunday, July 11, 2010

Determinism vs. Indeterminism

Ultimately I think the difference between deterministic and indeterministic laws is not significant.

If a physical law is deterministic then under its influence Event A will "cause" Result X 100% of the time.

Why does Event A always lead to Result X? Because that's the law. There is no deeper reason.

If a physical law is indeterministic, then under its influence Event B will "cause" Result Q, R, or S according to some probability distribution.

Let's say that the probability distribution is 1/3 for each outcome.

If Event B leads to Result R, why does it do so? Because that's the law. There is no deeper reason.

Event A causes Result X 100% of the time.

Event B causes Result R 33.3333% of the time.

Why? There is no reason. That's just the way it is.

Determinism could be seen as merely a special case of indeterminism...the case where all probabilities are set to either 0% or 100%.
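This special-case view can be sketched in a few lines of Python. (The `apply_law` helper and the two example distributions are purely illustrative names of mine, not anything from physics.)

```python
import random

def apply_law(dist, rng=random):
    """Sample an outcome from a 'law' given as {outcome: probability}."""
    r = rng.random()
    acc = 0.0
    for outcome, p in dist.items():
        acc += p
        if r < acc:
            return outcome
    return outcome  # guard against floating-point rounding

# A deterministic law is just the degenerate distribution:
deterministic_law = {"X": 1.0}                         # Event A -> X, always
indeterministic_law = {"Q": 1/3, "R": 1/3, "S": 1/3}   # Event B -> Q, R, or S

print(apply_law(deterministic_law))    # always "X"
print(apply_law(indeterministic_law))  # "Q", "R", or "S"
```

The same sampling machinery handles both cases; setting every probability to 0% or 100% is all it takes to recover determinism.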

So even if we are in a universe with indeterministic laws, this doesn’t have any major impact on the metaphysical conclusions we arrive at. Even assuming indeterministic physicalism, there are still initial conditions and there are still laws - the laws just have an intrinsically probabilistic aspect.

These probabilistic laws are like the rules of a card game that includes a certain amount of randomness...for instance, requiring occasional random shuffling of the deck. But the number of cards, the suits, the ranks, and the rules of the game themselves are not random...those aspects are determined.

Similarly, in quantum mechanics the evolution of the wavefunction under the Schrödinger equation is taken to be deterministic, with only the "collapse" process introducing an indeterministic aspect.

But as with the card example, the impact of this random aspect is limited in scope. No matter how random it gets, it doesn't change the rules of the game. No matter how randomly the deck is shuffled, you still only ever have 52 cards, 4 suits, and 13 ranks. The randomness is constrained by the deterministic aspects of the game.

Another example of constrained indeterminism is a computer program that uses randomness, for instance randomized quicksort. No matter which pivots you randomly select, the algorithm is still going to correctly sort your list. At worst, it will take longer than usual. The randomness of the pivot selection is constrained by the context provided by the deterministic aspects of the program.
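Here is a minimal randomized quicksort in Python to make the point concrete: the pivot choice is random, but the deterministic partition-and-recur logic constrains what that randomness can do, so the output is always correctly sorted.

```python
import random

def randomized_quicksort(items):
    """Quicksort with a randomly chosen pivot.

    The random pivot affects only the running time, never the result:
    the deterministic partitioning below guarantees a sorted output."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([5, 3, 8, 1, 3]))  # [1, 3, 3, 5, 8]
```

Run it as many times as you like: the pivots differ on every run, but the sorted list never does.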

The same goes for our universe in the indeterministic case. The randomness of indeterminism only increases the probability of the existence of conscious life that discovers something true about the underlying nature of the universe *IF* the initial conditions and the non-random aspect of the causal laws allow for this to be the case.

Is it possible that our causal laws are such that any given starting conditions (with respect to the distribution of energy and/or matter) eventually lead to conscious life that knows true things about the universe?

Here we return to our analogy of the quicksort algorithm, which can start with any randomly arranged list and always produce a sorted list from it.

Note, though, that the quicksort algorithm is a very, very special algorithm. If you just randomly generate programs and try to run them, the probability of getting one that will correctly sort any unordered list is very low compared to the probability of getting a program that won't do anything useful at all, or sorts the list incorrectly, or will only correctly sort lists with special starting orders, or sorts the list but does so very inefficiently.

Equivalently, if you just randomly choose sets of causal laws from a list of all possible combinations, the probability of selecting a set of laws that can start from almost any random arrangement of matter and from that always produce conscious life that perceives true things about the laws that gave rise to it must also be very low.

Infinity and Probability

How about this:

Let's assume we have an infinitely long array of squares, and a fair six-sided die.

We roll the die an infinite number of times and write each roll's number into a square.

When we finish, how many squares have a "1" written in them? An infinite number, right?

How many squares have an even number written in them? Also an infinite number.

How many squares have a number OTHER than "1" written in them? Again, an infinite number.

Therefore, the squares with "1" can be put into a one-to-one correspondence with the "not-1" squares...correct?

Now, while we have this one-to-one correspondence between "1" and "not-1" squares set up, let's put a sticker with an "A" on it in the "1" squares. And a sticker with a "B" on it in the "not-1" squares. We'll need the same number of "A" and "B" stickers, obviously. Aleph-null.

So, if we throw a dart at a random location on the array of squares, what is the probability of hitting a square with a "1" in it?

What is the probability of hitting a square with an "A" sticker?

The two questions don't have compatible answers, right? So, in this scenario, probability is useless. It just doesn't apply. You should have no expectations about either outcome.

BUT. NOW. Let's erase the numbers and remove the stickers and start over.

This time, let's just fill in the squares with a repeating sequence of 1,2,3,4,5,6,1,2,3,4,5,6,1,2,3,...

And then, let's do our same trick about putting the "1" squares into a one-to-one mapping with the "not-1" squares, and putting an "A" sticker on the "1" squares, and a "B" sticker on the "not-1" squares.

Now, let's throw a dart at a random location on the array of squares. What is the probability of hitting a square with a "1" in it?

What is the probability of hitting a square with an "A" sticker on it?

THIS time we have some extra information! There is a repeating pattern to the numbers and the stickers. No matter where the dart hits, we know the layout of the area. This is our "measure" that allows us to ignore the infinite aspect of the problem and apply probability.

For any area the dart hits, there will always be an equal probability of hitting a 1, 2, 3, 4, 5, *or* 6. As you'd expect. So the probability of hitting a square with a "1" in it is ~16.67%.

Any area where the dart hits will have a repeating pattern of one "A" sticker followed by five "B" stickers. So the probability of hitting an "A" sticker is ~16.67%.

The answers are now compatible, thanks to the extra "structural" information that gave us a way to ignore the infinity.
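A quick Monte Carlo sketch of the structured case (the function name and the square-numbering convention are mine): square i holds the number (i % 6) + 1, so the "1" squares, and hence the "A" stickers, sit exactly at the positions divisible by 6.

```python
import random

def dart_hits_one(trials, rng=random):
    """Throw `trials` darts at random squares of the repeating
    1,2,3,4,5,6,... array. Square i holds (i % 6) + 1, and the "A"
    stickers sit exactly on the squares holding a 1, so the same
    count answers both questions."""
    width = 6 * 10**9  # any multiple of 6 keeps the dart uniform mod 6
    hits = sum(1 for _ in range(trials) if rng.randrange(width) % 6 == 0)
    return hits / trials

random.seed(42)
print(dart_hits_one(60_000))  # ≈ 0.1667 for both the "1" and the "A" question
```

The simulation only works because of the repeating structure; for the genuinely random filling of the first scenario there is no analogous `width` to sample over, which is exactly the point.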

In other words, you can't apply probability to infinite sets, but you can apply it to the *structure* of an infinite set.

If the infinite set has no structure, then you're out of luck. At best you can talk about the method used to generate the infinite set...but if this method involves randomness, it's not quite the same thing.

Entropy and Memory

It’s overwhelmingly probable that all of your memories are false.

Consider:

Entropy is a measure of the disorder of a system. The higher the entropy, the higher the disorder.

If a deck of cards is ordered by suit and then within each suit by ascending rank, then that’s a low entropy state. This is because out of the 8.07 × 10^67 (i.e., 52!) possible unique arrangements of the cards in a standard 52 card deck, there are only 24 that fit that particular description (one for each of the 4! orderings of the four suits).

A “random looking” arrangement of the deck is a high entropy state, because there are trillions of unique arrangements of a standard 52 card deck that will fit the description of looking “randomly shuffled”.
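Python's arbitrary-precision integers make these counts easy to check. A quick sketch (the variable names are mine):

```python
import math

# Total orderings of a 52-card deck:
total = math.factorial(52)
print(f"{total:.2e}")  # ≈ 8.07e+67

# Decks "ordered by suit, then ascending rank within each suit":
# the rank order inside each suit is fixed, so only the four suits
# can be permuted among themselves.
sorted_decks = math.factorial(4)
print(sorted_decks)  # 24

# The fraction of all arrangements that count as "sorted" -
# a vanishingly small number, which is what "low entropy" means here.
print(sorted_decks / total)
```

The asymmetry in these two counts, not anything about the cards themselves, is what makes the sorted deck "special."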

The same goes for the classic example of an egg. There are (relatively) few ways to arrange the molecules of an egg that will result in it looking unbroken, compared to the huge number of ways that will result in it looking broken. SO, unbroken egg…low entropy. Broken egg…high entropy.

AND the same with the universe…there are (again, relatively) few ways to arrange the atoms of the universe in a way that makes it resemble what we see with people and trees and planets and stars and galaxies, compared with the gargantuan number of ways to arrange things so that it resembles a generic looking cloud of dust.

OKAY. Now.

Of the relatively few ways that the elementary particles of the universe can be arranged so as to resemble what we see around us today, only a tiny fraction of those particle arrangements will have values for momentum and position that are consistent with them having arrived at that state 13.7 billion years after something like the Big Bang.

The vast majority of the particle arrangements that macroscopically resemble the world around us will *instead* have particles in states (e.g., with positions and velocities) that are consistent with the particles having previously been in something more like a giant dust cloud.

By which I mean: If we take their current positions and velocities, and work backwards to see where they came from, and go back far enough in time, eventually we will not arrive at the Big Bang. Instead we will arrive at a state resembling a giant dust cloud (probably a very thin, spread-out dust cloud).

SO, bottom line:

Out of all the possible configurations that the universe could be in, ones that have people, and planets, and stars, and galaxies are extremely rare.

Further, even if we then only consider those extremely rare possible configurations that have people, and planets, and stars, and galaxies – the ones with particles in states (e.g., with positions and velocities) that are consistent with having arrived at this configuration 13.7 billion years after something like the Big Bang are STILL rare.

We don’t know the exact state of our universe’s particles, but in statistical mechanics the Principle of Indifference requires us to consider all possible microscopic states that are consistent with our current macroscopic state as equally likely.

So given all of the above, and our current knowledge of the laws of physics, the most likely explanation is that all of your current memories are false and that yesterday the universe was in a HIGHER state of entropy, not a lower state (as would be required by any variation of the Big Bang theory).

Physical systems with low states of entropy are very rare, by definition. So it’s very improbable (but not impossible) that the unlikely low entropy state of the universe of today is the result of having evolved from an EVEN MORE UNLIKELY lower entropy universe that existed yesterday.

Instead, statistically it’s overwhelmingly more probable that the unlikely low entropy state of the universe today is the result of a random fluctuation from a HIGHER entropy universe that existed yesterday.

And thus your memories of a lower entropy yesterday are most likely due to this random fluctuation, not due to yesterday actually having had a lower entropy than today.

[Based on my reading of Sean Carroll's book "From Eternity to Here"]

Friday, March 13, 2009

Mind Reading

Interpreting neural activity:
"fMRI scanners enable us to see the bigger picture of what is happening in people's brains," she says. " By looking at activity over tens of thousands of neurons, we can see that there must be a functional structure – a pattern – to how these memories are encoded. Otherwise, our experiment simply would not have been possible to do."

Sunday, March 8, 2009

The Watchmen

A good review by Roger Ebert.

Which contained a link to a good YouTube video on quantum mechanics and electrons.

Wednesday, March 4, 2009

A shadow universe

A bizarre universe may be lurking in the shadows
Such hidden worlds might sound strange, but they emerge naturally from complex theories such as string theory, which attempts to mesh together the very small and the very large. Hidden worlds may, literally, be all around us. They could, in theory, be populated by a rich menagerie of particles and have their own forces. Yet we would be unaware of their existence because the particles interact extremely weakly with the familiar matter of our universe. Of late, physicists have been taking seriously the idea that particles from such hidden sectors could be dark matter.

The Sun

Ten Things You Didn't Know About The Sun:
When you go outside at night and look at the stars, almost all the stars you see are within 100 light years from us, and only a handful of the extreme brightest ones can be seen from farther away. If you were to pluck the Sun from the solar system and plop it down in some random location in our Galaxy, there's a better than 99.99999% chance it would be invisible to the naked eye.

Saturday, February 21, 2009

Quantum Computing and the Church-Turing Thesis

Another interesting article.
The Turing machine model seems to capture the entire concept of computability, according to the following thesis[62]:

Church Turing Thesis: A Turing machine can compute any function computable by a reasonable physical device.

What does “reasonable physical device” mean? This thesis is a physical statement, and as such it cannot be proven. But one knows a physically unreasonable device when one sees it. Up till now there are no candidates for counterexamples to this thesis (but see Ref. [103]). All physical systems, (including quantum systems), seem to have a simulation by a Turing Machine.

Quantum Computing: Introduction

More on the relationship between quantum computing and classical computing. An interesting read.
Another thing which can be expressed in many different ways is information. For example, the two statements ``the quantum computer is very interesting'' and ``l'ordinateur quantique est tres interessant'' have something in common, although they share no words. The thing they have in common is their information content. Essentially the same information could be expressed in many other ways, for example by substituting numbers for letters in a scheme such as a -> 97, b -> 98, c -> 99 and so on, in which case the english version of the above statement becomes 116 104 101 32 113 117 97 110 116 117 109... . It is very significant that information can be expressed in different ways without losing its essential nature, since this leads to the possibility of the automatic manipulation of information: a machine need only be able to manipulate quite simple things like integers in order to do surprisingly powerful information processing, from document preparation to differential calculus, even to translating between human languages. We are familiar with this now, because of the ubiquitous computer, but even fifty years ago such a widespread significance of automated information processing was not forseen.

Quantum Computing vs. Turing Machines

SEP on Quantum Computing:
Although the original Church-Turing thesis involved the abstract mathematical notion of computability, physicists as well as computer scientists often interpret it as saying something about the scope and limitations of physical computing machines. For example, Wolfram (1985) claims that any physical system can be simulated (to any degree of approximation) by a universal Turing machine, and that complexity bounds on Turing machine simulations have physical significance. For example, if the computation of the minimum energy of some system of n particles requires at least an exponentially increasing number of steps in n, then the actual relaxation of this system to its minimum energy state will also take an exponential time. Aharonov (1998) strengthens this thesis (in the context of showing its putative incompatibility with quantum mechanics) when she says that a probabilistic Turing machine can simulate any reasonable physical device at polynomial cost. Further examples for this thesis can be found in Copeland (1996).

AND:
Theoretical as it may seem, the question "what is quantum in quantum computing?" has an enormous practical consequence. One of the embarrassments of quantum computing is the fact that, so far, only one algorithm has been discovered, namely Shor's, for which a quantum computer is significantly faster than any known classical one. It is almost certain that one of the reasons for this scarcity of quantum algorithms is related to the lack of our understanding of what makes a quantum computer quantum (see also Preskill 1998 and Shor 2004). As an ultimate answer to this question one would like to have something similar to Bell's (1964) famous theorem, i.e., a succinct crispy statement of the fundamental difference between quantum and classical systems, encapsulated in the non-commutative character of observables. Quantum computers, unfortunately, do not seem to allow such simple characterization. Observables—in the quantum circuit model there are only two, the preparation of the initial state and the observation of the final state, in the same basis, and of the same variable, at the end of the computation—are not as important here as in Bell's case since any measurement commutes with itself. The non-commutativity in quantum computing lies much deeper, and it is still unclear how to cash it into useful currency. Quantum computing skeptics (Levin 2003) happily capitalize on this puzzle: If no one knows why quantum computers are superior to classical ones, how can we be sure that they are, indeed, superior?