DNA Is Not Information

The cell is an amazing thing.  It makes copies of itself.  It reads the DNA within itself and builds proteins.

Trillions of cells within an organism function in harmony to contribute to the overall system.  It’s amazing, really.  It was actually a deeper understanding of the cell that led me to become an atheist.

However, with this amazing complexity comes trouble.  The cell is too complex.  The orchestra of DNA, RNA, enzymes, and hormones is hard for a species one chromosome fusion away from chimpanzees to understand.  So, we make do with what we have, and our compensation mechanism relies on language.

But there’s a problem with how we use language.

We can say “DNA contains information…”.  It certainly is helpful to frame it like this, because it makes it easier for us to understand.  Humans have a tough time with complicated things that aren’t happening at our macroscopic scale.  We just aren’t “wired” to think that way very well.  So, we use metaphors and analogies.  These are helpful tools for gaining a better understanding of a complex universe.

It’s easy to get carried away, though.

What we’re prone to do is to follow metaphors and analogies past their point of usefulness.

For instance, information, as we understand it, is a byproduct of human thought, either directly or indirectly.

Therefore, since DNA is information, it must be a byproduct of some other intelligent being’s thought, right?

And boom, we’ve got the fallacy of equivocation.

Worse than that, though, we’ve begged the question, because we’ve assumed that DNA could not have come from natural mechanisms.  Of course, the only evidence we have is that it did come through natural mechanisms.

Where does it leave us when we torture our metaphors like this?  It essentially allows us to live in an abstract fantasy land where we can invent any logical deduction that suits us.  But here’s the thing:  when evidence doesn’t line up with our deductions, what does that say about our deductions?  To me, it says our deductions are wrong.

But the really insidious thing about this abstract fantasy land is that it allows us to deduce things that are unfalsifiable.  There’s simply no way to test them.  It’s the height of intellectual laziness and dishonesty.

Absurd Claims And Why We Should Ignore Them

“… in science there is no ‘knowledge’, in the sense in which Plato and Aristotle understood the word, in the sense which implies finality; in science, we never have sufficient reason for the belief that we have attained the truth. … This view means, furthermore, that we have no proofs in science (excepting, of course, pure mathematics and logic). In the empirical sciences, which alone can furnish us with information about the world we live in, proofs do not occur, if we mean by ‘proof’ an argument which establishes once and for ever the truth of a theory.”

-Sir Karl Popper, The Problem of Induction, 1953

There are plenty of things that cannot be proved wrong.  For instance, did you know I have an invisible pet dragon who occasionally visits Santa Claus at the North Pole, and was hatched from an egg that was laid inside a teapot that orbits the sun between Earth and Mars?

Don’t believe me?  Prove my dragon doesn’t exist.

It’s perfectly intuitive, to most people, why my claim is nonsense, but what tools in the background do we use to discern this?  How are we so sure?

After all, isn’t appeal to incredulity a logical fallacy?  Just because something is hard to believe doesn’t make it false.

It’s an interesting exercise to evaluate absurd claims, such as the idea that the solar system is geocentric, or that biblical creationism explains the earth’s diversity of life.

The nauseating and terrifying problem in real life is that most of the claims we evaluate on a daily basis are not as ridiculous as these examples, even though they may be just as incorrect.

Most reasonable people take well-explained science at face value, but when we’re tasked with explaining how we really know the earth orbits the sun (and not the other way around, as Aristotle guessed), we often struggle; in fact, I’d guess most people are not qualified to give a decent explanation, aside from “science says so”.

Though the “science says so” argument isn’t a bad threshold for how one accepts facts, it’s unsatisfying to people who are curious about how the world works.  Acquiring such deeper knowledge means throwing out bad ideas and focusing attention on better ones; that winnowing is the best optimization humans have for improving their knowledge base, and it is precisely what science is good at.

Let’s walk through this thought experiment.  What are the characteristics of my dragon claim that make it absurd?

If you’re of the opinion that dragons do not exist, then the first red flag is that I’ve posited the existence of a creature that doesn’t exist; in other words, I’ve violated a natural rule.  But dragons could exist…maybe the problem is that science just hasn’t seen them yet.  Maybe there are millions of them.  There’s strike one in our attempt to attach the absurd label to the dragon claim.

As for the celestial teapot in which the dragon was hatched:  have you been to outer space?  Have you physically observed the teapot isn’t there?  Maybe if you went to outer space and looked in the teapot, you’d see dragon shell pieces, which would prove that a dragon was hatched in the teapot.  Again, we’ve failed to disprove the claim.

What about my dragon’s lack of visibility?  Granted, we don’t see many creatures that have the ability to be invisible (aside from bacteria and other micro-organisms), but that doesn’t mean they don’t exist.  Wouldn’t it stand to reason that a creature with the capacity to change its own visibility might not have yet been observed by scientists?  In fact, that would be an obvious implication.

All the objections we raise to assert the claim’s absurdity can be easily countered, and if we’re being honest, these counter-arguments are reasonable responses.  Yet that doesn’t make the claim any less absurd.  It’s quite the dilemma:  if we can’t even disprove an invisible pet dragon who sometimes visits Santa Claus, what chance do we have of proving or disproving anything?

The short answer is:  we can’t disprove things like this.  Unfalsifiable claims, by definition, prevent disproof.  In this regard, they are the worst sort of trickery; what makes it worse is when people couple such claims with the exploitation of natural and existential fear, such as how religion exploits the fear of death; it really is the worst sort of fear-mongering.  It’s also the oldest and most common trick charlatans have used to accumulate wealth and power.

Yet, even though we have a real blind spot in our capacity to disprove obviously false claims, humans now have more collective knowledge than ever, and our collective body of knowledge is growing faster than it ever has in the history of humanity.

Is this a contradiction?  A paradox?  Some violation of logic?

Of course not.  Even though we can sit around worrying about all the things that we don’t know, or couldn’t possibly know, and how that must mean we have no capacity to really understand anything, as the Hovind clan would have us believe, we’ve still managed to make more progress than most of humanity would ever have imagined.

The facilitator of this progress is simply the recognition that knowledge *should* be tentative, that we’re likely to collect evidence that will alter our current understanding, and that a claim must meet some burden of proof before we take it seriously.  In other words, there should be a good reason to believe something before we even consider its truth value.  This bias we have towards rock-solid truth, and the inclination to frame things in such rigid terms, is, in fact, our main impediment as a species.

The fundamental problem with my dragon claim is that it is unfalsifiable.  There is no path to disprove it.  And even if the claim were disproved in its current form, the millions of iterations it could undergo to further exempt it from falsification show that it’s not a useful claim; the only value it offers is as a case study in how to disregard claims.

Anyone can make up anything, and if our standard for acceptance was simply the inability to disprove it, we’d believe a lot of nonsensical, completely contradictory things.

For instance:  many years ago, a population of invisible gnomes who live in my bedroom killed every single living invisible dragon, and now invisible dragons are extinct.

Wait a minute.  What about my pet invisible dragon that was hatched from an egg inside Russell’s teapot?  The invisible gnome story contradicts my invisible dragon claim, yet neither claim can be disproved.

What should we believe then?  What merits belief?  How do we construct our realities such that we maximize our chances for believing things that are correct, while minimizing the chance we’ll believe something that is incorrect?

If the progress we’ve made over the past 400 years is any indication, then the best strategy we have for maximizing our correctness is putting the burden of proof on a claim, with the expectation that there exists some conceivable strategy to disprove it.  If we concoct a reasonable test to disprove the claim, then conduct the test and the results fail to disprove it, we’ve added weight to the claim.

A simple example:  if we found a modern mammal skeleton in Cambrian strata, that would disprove many underlying tenets of evolution.  Since no such skeleton has been found, weight gets added to evolutionary theory.  Does the absence of mammal skeletons prove evolutionary theory?  No.  But it adds weight, because finding one is a conceivable outcome.

Humans are not naturally inclined to rigidly test ideas.  There were nearly 2000 years between the birth of formalized logic and the advent of the scientific method.  It’s not particularly intuitive to approach the world this way, which is why humans have come up with, and still come up with, so many bad ideas.  The scientific method is probably the closest thing we have to a silver bullet, and its underlying beauty is in its recognition that it could never be perfect or provide a single, ultimate answer to everything.

Scientific Theories Are Better Than Crackpot Ideas

What are we certain we know, and what are we fairly confident is so?

For religious people, science can be hard.  Where religion is conservative and unchanging, science is tentative and progressive.  Where religion claims absolute answers, science gives the best answer we have for now.

Some people take this scientific framework to mean that science isn’t good (“oh, it’s nothing but a theory”), but the truth of the matter is that science is the best tool humans have come up with precisely because of its tentative nature, and because it’s so good at eliminating ideas that are clearly incorrect.

The trouble with humans is that we’re pattern seekers, we’re hard-wired to “solve” problems quickly, and we’re prone to equating correlation with causation.  This psychological tendency constantly gets humans into trouble.  When our Heidelbergensis ancestors heard leaves rustling, that might very well have meant that a lion was stalking them; however, what’s the cost for our caveman ancestor who assumes they’re being hunted when, in reality, they are not?  The cost is very low.  However, if our caveman ancestor ignored the rustling, and it turned out there was a lion stalking them, then he’s pretty much dead meat, along with Mrs. Caveman and baby caveman.

We’re programmed to think correlations are important, even when they’re not, and even when there is a lot of error in the relationship between two events, because historically, the cost of ignoring relationships was very high.  I suspect this is why theological constructs, such as Pascal’s Wager, don’t set off BS alarms for a lot of people.

Because we live in a secular age that emphasizes pragmatism and improvement, we, as a species, have developed tools to overcome our faulty biological programming, and one of those tools is called science.

Most of the things we know fall into the category of “we’re fairly confident”.  There are problems with asserting that we know very much at all, especially if you subscribe to the notion that science is better at telling us what isn’t so, rather than what is.  For skeptics, it’s important not to fall into the trap of claiming that we know too much, because it causes our brains to take shortcuts, and ignore important facts.

In science, we test hypotheses, and we look for ways to reject them.  The beauty of this framework is that it allows us to overcome our biological dispositions and to throw away ideas that are wrong, so we can build better working theories.  The more our assumptions align with reality, the more confident we become in our working theory.

Most people who are skeptical of science underestimate how well some scientific theories have been tested, which I presume is what makes them so confident in their position.

But the major problem with these people’s alternative explanation is that it lacks the most important feature of what makes science so good:  there is no way to test it, and therefore it has no precise explanatory power.

For instance, suppose I ask a question like:  why is it that only the liver can metabolize fructose in the human body?

If your answer is “God made it so,” then it raises the question: how do you know that, how do you test that, and what does that allow us to predict?

As it goes in science, one of the competing hypotheses for why only the liver can metabolize fructose is that earlier organisms could only metabolize glucose (and maybe protein and fatty acids).  So when some organism developed the ability to metabolize fructose, a symbiosis arose between plants and that organism:  the plants most effective at coupling fructose with seeds would have been the most successful at reproducing; likewise, the organisms most effective at metabolizing fructose would have had easier access to food.


[Figure omitted; source: thepaleodiet.com]

In this theory, there is quite a lot of explanatory power, and it would be possible to prove it wrong by finding fossil evidence, or even evidence in living organisms, that casts doubt on this idea.

The potential vulnerability the scientific paradigm exposes is that it allows us to be imprecise.  For a long time, I saw this as a problem, but when you work through it, it becomes clear why it’s not a problem at all.

The example I like to use to demonstrate this is the Newton vs. Einstein story.  Newton’s classical theories worked really well for hundreds of years.  It was not until our tools improved that we realized there was a slight error in Newton’s model with regard to the motion of Mercury, and science was tasked with developing a better theory.  Einstein and his contemporaries contributed the appropriate improvement.

Another example of how science can be temporarily incorrect is demonstrated in the history of pi.  The Babylonians were able to get as close as 3.125 around 3500 years ago.  Some 1500 years later, Archimedes pinned pi down near 3.1428.  The Chinese got as close as 3.1547 by the 3rd century, and within a few hundred years, got it almost right.  The point is that the value of the constant pi was imperfectly measured for thousands of years.  It was not until our tools improved that we could improve precision on its true value.

To the naked eye, the implications of pi = 3.1428 are imperceptible.  It takes a serious and precise investigation into the matter to determine that the ratio of a circle’s circumference to its diameter is not 3.1428.
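
To make the “better tools, better precision” point concrete, here is a minimal sketch, in Python, of the polygon method Archimedes used:  the half-perimeters of inscribed and circumscribed regular polygons squeeze pi from both sides, and each doubling of the side count tightens the bounds.  The starting hexagon and the four doublings are just illustrative choices.

```python
import math

# Archimedes' polygon method: bound pi between the half-perimeters
# (per unit radius) of inscribed and circumscribed regular polygons,
# doubling the number of sides at each step.
outer = 2 * math.sqrt(3)  # circumscribed hexagon: half-perimeter / radius
inner = 3.0               # inscribed hexagon: half-perimeter / radius
sides = 6

for _ in range(4):  # four doublings: 6 -> 96 sides, as Archimedes did
    outer = 2 * outer * inner / (outer + inner)  # harmonic mean
    inner = math.sqrt(outer * inner)             # geometric mean
    sides *= 2
    print(f"{sides:3d} sides: {inner:.6f} < pi < {outer:.6f}")
```

At 96 sides, the bounds land near 3.1410 and 3.1427, the same neighborhood as Archimedes’ famous 22/7 upper bound; tighter bounds required more sides and better arithmetic, which is to say, better tools.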

Here we arrive at a conundrum:  the problem isn’t merely spotting what might be wrong, but finding something more precise to substitute in its place.

Herein lies the reason why there’s no problem when we appeal to scientific consensus to formulate our opinions, and why there is a problem when we go looking for views espoused by tiny minorities of scientists, such as D. Russell Humphreys, who advocates that creationism can explain the origins of our world and universe.

If a large majority of scientists hold a view that is plainly incorrect, and the refutation of this incorrect belief is well established, that puts downward pressure on the belief, because it creates a niche for scientists to publish papers that demonstrate what’s wrong with the consensus and why the incorrect belief disagrees with experiment.  When the refutation is scrutinized under peer review, and then is eventually published, it propagates.

Most scientists would love the opportunity to debunk a commonly held belief, because doing so would increase their prestige in the scientific community.

If theories like evolution and the big bang were as plainly incorrect as creationists claim, there would indeed be a larger and growing number of scientists who dispute them, because our measurement tools are better than they’ve ever been, and the speed at which information propagates has never been faster.

Science is full of examples of superseded scientific theories.  For instance, it used to be held that bad air was the cause of disease; it was not until the germ theory of disease was developed that this belief was superseded.  Similarly, prior to Newton’s work, Aristotle’s physics was the primary framework physicists had for describing the world.  Chemistry replaced alchemy, astronomy replaced astrology, and plutonism replaced neptunism (with regard to the age of the earth).

Science is indeed the antithesis of religion for precisely the reason that religion exists:  humans want an unchanging solution.  That’s not what science does.  Science is self-correcting, and anything that self-corrects is, by definition, evolving.

Science Hates Religion?

Science is better at disproving things than it is at proving them.  The Popperian sentiment, as discussed by Richard Feynman in this video, is that science can never know if something is right…it can only know if it’s wrong.

When the logical or practical implications of an idea don’t match reality, that idea is wrong.  So, when one theory replaces another, the implication is that the newer theory describes reality, predicts outcomes, or is more consistently accurate than its predecessor.

For example, consider the following hypothesis: Radioactive decay used to go a lot faster than it does now. At some point in history, radioactive decay slowed down to its current rate.

Background:  radioactive decay, as simply as I can put it, is the process by which unstable atomic nuclei naturally emit particles and energy over time.  Scientific convention says that radioactive decay is measurable and consistent.  Each radioactive isotope has its own half-life, and these half-lives are well understood.  Half-life refers to the amount of time it takes for half the atoms in a radioactive sample to decay; more precisely, it is the time after which any given nucleus has a 50% probability of having decayed, and the decay law is exponential.  When a radioactive nucleus decays, it is converted into another element; for instance, certain isotopes of uranium eventually decay into distinct isotopes of lead.  There are many examples of various radioactive isotopes decaying into other isotopes.
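
Since the decay law is just an exponential, it is easy to sketch in a few lines of Python.  This is a toy illustration, not a model of any particular dating method; the only physical input is the commonly cited carbon-14 half-life of about 5,730 years.

```python
import math

def remaining_fraction(elapsed_years, half_life_years):
    """Fraction of a sample left after elapsed_years, from the
    exponential decay law N(t) = N0 * (1/2) ** (t / half_life)."""
    return 0.5 ** (elapsed_years / half_life_years)

def age_from_fraction(fraction, half_life_years):
    """Invert the decay law: estimate a sample's age from the
    fraction of the original isotope that remains."""
    return -half_life_years * math.log2(fraction)

C14_HALF_LIFE = 5730  # years; commonly cited value for carbon-14

print(remaining_fraction(11460, C14_HALF_LIFE))  # two half-lives -> 0.25
print(age_from_fraction(0.25, C14_HALF_LIFE))    # -> 11460.0 years
```

Radiometric dating rests on exactly this arithmetic, which is why a changing decay rate would be such a big deal if it were real.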

This radioactive-decay-rate-change hypothesis would be very appealing to young Earth Creationists (assuming that many of them know what radioactive decay is), because it would support their notion that the Earth is very young; however, in order for this idea to have much merit, it ought to be able to demonstrate some of the following things:

1. It ought to be logically sound
2. It ought to agree with experiment or, at the very least, observation
3. It ought to allow for accurate predictions
4. It ought to paint a picture that is difficult to refute
5. It ought to be more correct than any competing idea

Let’s walk through these bullet points.

Is this hypothesis logically sound?  What sort of evidence would support its soundness?  For me, the logical soundness of the idea that the radioactive decay rate has changed over time could be demonstrated with any of the following observations:

1. We see inconsistency in observed decay rates
2. We see geological evidence that would support it (melting in particular strata, or other signs of increased energy release)
3. We could stimulate an increased decay rate

But none of these things is actually observed.  So there is already a serious problem with the hypothesis that decay rates change in meaningful ways over time.

Does this hypothesis agree with experiment or observation?  Since we cannot manage to speed up the decay rate, despite enormous efforts to do so, we fail to corroborate it on that front.  A couple of years ago, physicists noticed that solar flares could have minor impacts on the decay of manganese-54 and chlorine-36.  However, as far as I can tell, we haven’t noticed this happening with other isotopes.  Moreover, the change in decay rate was relatively small and did not appear to affect overall half-life in general…certainly not to the extent where one could logically justify the notion that the Earth began in 4004 BC.

Does the changing-decay-rate hypothesis allow us to make accurate predictions about anything?  One thing it might imply is that, going forward, we will continue to observe changes in decay rate; yet we haven’t seen the decay rate change, even though we’ve been paying attention since the late 1800s.  This, I think, is another nail in the coffin for this hypothesis.

Does changing decay rate paint a picture of reality that makes it difficult to refute?  Consider, for a moment, this example:  if we suppose that water freezes at 0 degrees Celsius, then it should follow that anytime the temperature is at or below that level, water, when exposed for a prolonged amount of time, would freeze.  It would be rather difficult to refute this water-freezing hypothesis, unless of course you could demonstrate that water doesn’t freeze after prolonged exposure to temperatures at or below 0 degrees Celsius.  It turns out that water, with prolonged exposure, does freeze at this temperature.

The changing-decay-rate hypothesis should be similarly difficult to refute, and should paint a picture that makes sense given our observations.  If the decay rate used to be much faster but at some point decreased, there should be some sort of natural phenomenon or evidence that demonstrates it.  In fact, the evidence would be remarkably clear if that were the case.  But no.  We don’t see any evidence of any of that.

This leads to the final item:  is this hypothesis more correct than any competing hypothesis?  No.  So far, there hasn’t been anything compelling about this hypothesis.  It’s ridiculous.  Hypothesis fail.

The reason young Earth creationists are so frustrated with science is because science doesn’t agree with their ideas. Therefore, according to many young Earth creationists, all scientists are in a conspiracy with the government and the media, with the intent of disproving God. Makes sense, right?  If the science doesn’t agree, kill the science.

The real reason science disagrees with Young Earth Creationist dogma is that the ideas are shit, and their advocates are idiots.

Does science disprove God?  I don’t know.  I don’t think so.  I think nearly every single statement in the Christian bible could be proven false, and you would still fail to disprove God (of the Deism or Pantheism variety, not to mention the God of other monotheistic or polytheistic religions).  But like I mentioned at the beginning of this post, science is better at disproving than it is at proving.  I doubt that science (alone) could ever disprove God.  But the real question is:  why do we believe in a God?  Part of the reason is that the bible explained things that we once did not understand.  For instance, we now understand a rainbow is caused by reflection, refraction, and dispersion of light in water droplets.  Genesis explains that rainbows are God’s reminder to himself not to kill everyone with a flood, again.

It’s a little bit crazy that the people who are so adamantly rejecting science are not at all adamantly opposed to these silly fairy tales. Motivated reasoning much?

The Origins of The Universe And Life

I’d be hard-pressed to find more challenging and complicated questions than how the universe began and how inorganic molecules managed to come alive.  There is certainly no shortage of hypotheses for how to explain these phenomena, but clearly, humans don’t have all the answers yet (although most people have no idea how close we really are to understanding these phenomena).

In that sense, Christians and other theists have some justification for putting the burden of proof onto atheists, given that the atheist worldview excludes supernatural phenomena.

The thing that always struck me as odd, even when I was a Christian, was that those same people, who would gleefully ridicule atheists for rejecting supernatural intervention, would go on to explain that life’s origins included women being formed from a rib, sneaky snakes, forbidden fruit, and zoo arks crafted to survive global, year-long floods.

Humans have only had formalized logic for maybe 2500 years, and it took almost two thousand years more for us to figure out how to estimate the area under a curve in mathematics.  Couldn’t it be that, given humans’ fairly primitive current condition, we just haven’t figured it out yet?  Couldn’t it be that, although these problems are profound and complicated, they don’t need supernatural explanations?

The beginning of the universe and abiogenesis are not easy things to grapple with.  To attack them with any sort of authority, one needs to understand a huge collection of science that took the most talented and knowledgeable humans on Earth thousands of years to collect and deduce.  But if history tells us anything, it is that the best method for humans to unravel the complexities of the universe is science.  In other words, our best method to answer very complicated questions is to start with, and assume, natural causes and effects.

Materialistic Straw Men (The Noumenon/Phenomenon Barrier)

One of the claims theists hurl at atheists is that, from the materialist’s perspective (the perspective that the whole of the universe is simply material, and is not guided by transcendent, supernatural rules), morality, logic, thought, ego, and a host of other abstract and innate concepts could not exist.  Theists go on to state that atheists have no foundation for their moral claims; in particular, no foundation for the claim that there can be morality in the absence of religion and/or God.

This is an interesting perspective, and it takes a while to work through what’s wrong with this position.  There are a lot of metaphysical considerations a person could loop through to grapple with this topic, but I think that, at the crux of the matter, is the relationship between thought and language.  Wittgenstein said “The limits of my language mean the limits of my world.”  He also said “Philosophy is a battle against the bewitchment of our intelligence by means of language.”

My point in referencing Wittgenstein is that he, along with a host of other 20th-century philosophers, was keenly interested in the relationship between thought and language.  We use language to convey the thoughts we have, but is language a good enough medium to represent those thoughts accurately?  This is quite analogous to Plato’s ideal forms.  We can never draw a perfect triangle because of constraints in the natural world.  In fact, according to Plato, our interpretations of reality are wrong; we’re either purposely misled or our interpretation is obscured by the shackles of our reality.  In his Allegory of the Cave (http://en.wikipedia.org/wiki/Allegory_of_the_Cave), Plato used the image of human beings shackled, facing a wall, and seeing only the shadows of what actually exists, not the objects themselves.

One of Raphael’s most famous paintings is The School of Athens (http://en.wikipedia.org/wiki/The_School_of_Athens), and it depicts a number of important Greek philosophers.  At the center of the painting is Plato, pointing up to the sky, a reference to his concept of the ideal.

Standing next to Plato is his student, Aristotle, who, in contrast with Plato, is holding his hand palm-down towards the ground.  This depiction is a reference to Aristotle gently reminding his teacher that we should be considerate of the world in which we live, and not just the abstract.

My modern interpretation of this is that empiricism and induction are at least as good an anchor for our insights into reality as philosophical deduction.  We live in a natural world, and the best strategy we have for understanding it, and for separating fact from fantasy, is through induction and empiricism.

I tend to fall on Aristotle’s side of the discussion.  Dwelling on our intellectual ability (or inability) to understand the physical world, although an interesting exercise for undergraduates in philosophy survey courses, doesn’t do a lot of good or solve anything with a high degree of success.  Even if we never could articulate why logic exists or why we’re so compelled toward natural decency, our inability to do so wouldn’t mean God did it.  I wouldn’t go so far as to say that we can’t articulate these positions (I think we clearly can), but it certainly isn’t easy when you get down to the nitty-gritty metaphysical contrasts between an object and the language we use to describe it.

It’s worse to appeal to God in describing these seemingly transcendent characteristics of the world.  Clearly, humans are bound by their earthly experiences when formulating thoughts to describe their world.  And clearly there is a difference between noumena and phenomena (http://en.wikipedia.org/wiki/Noumenon).  To say that God is the barrier between noumena and phenomena is to put forward a claim that is neither substantiated nor honest.

Swinging back to moral behavior, there’s an old lesson that parents teach their kids about sharing.  If two kids are bickering over how to share some cake, the parent imposes the following compromise:  Child A cuts the cake AND Child B chooses who gets which piece.  This predictably results in decent outcomes, in terms of fairness, for both individuals.
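
Here is a toy sketch of that cut-and-choose protocol in Python, with made-up valuations:  the cutter has already cut so that the two pieces look equal by their own lights, and the chooser simply takes the piece they value more.  Both children end up with at least half the cake by their own valuation.

```python
def i_cut_you_choose(cutter_values, chooser_values):
    """Each dict maps a piece name to the value that child assigns it.
    The cutter cut so the pieces are equal by their own valuation;
    the chooser takes the piece they value more, the cutter gets the rest."""
    chooser_piece = max(chooser_values, key=chooser_values.get)
    cutter_piece = next(p for p in cutter_values if p != chooser_piece)
    return cutter_piece, chooser_piece

# Hypothetical valuations: the cutter is indifferent between the pieces;
# the chooser prefers the piece with more frosting.
cutter = {"left": 50, "right": 50}
chooser = {"left": 40, "right": 60}
print(i_cut_you_choose(cutter, chooser))  # -> ('left', 'right')
```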

The abstract concept to take from this lesson is the following:  design systems and institutions that produce maximum benefit and minimal harm, from the perspective of someone who does not know what role they will play in that system.  This is referred to as the Veil of Ignorance (http://en.wikipedia.org/wiki/Veil_of_ignorance).

Though this framework might very well be difficult to deductively justify (it takes more words to justify this position than it does to say “God did it”), we can appeal to historical empiricism to determine whether or not the action or institution results in better human outcomes (or, if you don’t like the term “better”, then outcomes where humans don’t die or endure unnecessary physical or emotional harm).

I don’t think a consistent universe is an explanation or proof for God.  When I was a deist, I might not have been so quick to make a statement like that, but these days, I think it’s a profound logical leap one must take to say “the universe is consistent in a logically discernible way, therefore God exists.”  Part of what led me down the path I’ve been on lately is the conviction that I have no right to make claims for which I have no proof.  In fact, I could put forward an explanation for why the universe is consistent without making assumptions for which I have no proof.  I think that’s the more honest solution to the problem of a complicated universe, and I suspect it would result in a more peaceful world than one where people can manage to use their invented deity to justify killing, raping, and enslaving people.  Learning how to honestly investigate things we don’t know should be a priority for a healthier and happier humanity.

If the universe, or even the world, were so readily deducible, it wouldn’t have taken humans 200,000 years to formalize rules about it (and thousands of years after domestication).  That we can represent the world with any predictive capacity in a language we invented speaks to the quality of our language and the honesty of our interpretation.  If the world or universe were different, we might have a different language to represent it or different rules to describe it.

Appealing to incredulity is intellectually dishonest, and it’s worse than saying “I don’t know.”  It also ignores some of the things we do know, such as the fact that species have been evolving for billions of years.  Couldn’t it be that the increased brain size and intellectual capacity that arose during evolution internalized some of the more obvious characteristics of our world, making it difficult for us to articulate its underlying rules?  Or couldn’t it be that the particular language we’ve developed doesn’t do as good a job as it could of describing the universe?

There are a lot of questions like these, and I think they all come down to God of the gaps.  Because the sun revolves around the Earth, that must mean God made it…oh wait, the Earth revolves around the sun, therefore God.  Oh wait, it’s more of an elliptical pattern…oh wait…

To a typical person who lived in the absence of formalized logic, outer space would have seemed as abstract as thought itself.  Those people lived their lives never understanding anything at all about the universe besides what they saw in the 20-mile radius in which they spent their entire lives.  We are heirs of that ignorant time, and that medieval Platonic cluster-fuck is in its death throes.

Who knows…maybe someday we’ll discover there is indeed a multiverse; in contrast, we may discover there is an edge of the universe that cannot be exceeded.  Regardless of what that answer is, the magnificence of it does not mean God did it.  It just means we don’t have an answer.  Appealing to Plato only serves to reinforce the notion that we could never possibly understand the universe in itself.  I think supporters of that position underestimate how much we can actually know.

Null Hypothesis, Statistics, Science, and The God Question

By definition, a null hypothesis can never be proven.  We describe our confidence in a null hypothesis by building mathematical models that tell us how probable our observations would be if the null hypothesis were true.

For instance, if we’re testing the effectiveness of a drug, the hypothesis might be laid out like this:

Hypothesis[null]:  Effect of drug = Effect of Placebo
Hypothesis[alt]:   Effect of drug != Effect of Placebo

We go to great lengths to build mathematical strategies for testing a hypothesis, and some hypothesis testing strategies are demonstrably better than others.

Our hypothesis test frames probability in terms of the null hypothesis.  For instance, a probability near 0% means that we reject the null hypothesis.  A probability of 95% (or really anything greater than our 5% or 10% threshold) means we *FAIL* to reject the null hypothesis.  In other words, we never say that we’ve *PROVED* the null hypothesis, but corroborated tests revealing consistent failure to reject a hypothesis might give us confidence that the null hypothesis is correct.  But the null hypothesis would also need to accurately predict outcomes or build a statistical model.  We also need to make sure we’re testing the hypothesis well.  If we can’t test the hypothesis, how could we ever assign a probability to it?
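
Here is a minimal sketch of the drug-vs-placebo test above, with simulated data.  The numbers are hypothetical, and both groups are deliberately drawn from the same distribution, so the “drug” truly does nothing and we should expect to fail to reject the null:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated trial: both groups come from the same distribution,
# so the null hypothesis (drug = placebo) is true by construction.
placebo = rng.normal(loc=10.0, scale=2.0, size=100)
drug = rng.normal(loc=10.0, scale=2.0, size=100)

t_stat, p_value = stats.ttest_ind(drug, placebo)

alpha = 0.05  # the conventional significance threshold
if p_value < alpha:
    print(f"p = {p_value:.3f}: reject the null (drug != placebo)")
else:
    print(f"p = {p_value:.3f}: FAIL to reject the null")
```

Note that failing to reject here does not prove the drug equals the placebo; it just means this particular test found no reason to think otherwise.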

Then there’s always the problem of sampling bias:  maybe the tests you’ve run are not representative of the natural distribution.  Good science means remaining skeptical, even when some preliminary data supports a position.

In statistical models, some variables are more important than others.  For instance, we could build a model that predicts a person’s religion (the person’s religion would be the target variable).  The input variables we could use to predict a person’s religion might include their parents’ religion, their geographic location, their peer group (and its majority religion), their freedom to practice a religion, whether they attended a religious school, and their favorite food.

It’s easy to see why a person’s parents’ religion is more important in the model than their favorite food.  Of course, if a religion bans pork and the person’s favorite food is pork chops, that may elevate the importance of the variable for that particular person.  But for aggregate data, a person’s parents’ religion is almost certainly the most important variable in this model, followed distantly by their geographic region and their freedom to practice that religion.

The prerequisite condition for building a model is that the model can be tested with some reliable method for assessing the probability that the input variables are meaningful.  Our confidence in the overall model depends on its ability to predict outcomes.
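
As a sketch of what that looks like in practice, here is a small Python example on synthetic, made-up data, constructed so that a person usually inherits their parents’ religion while favorite food is pure noise; a standard classifier then recovers those relative importances:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n = 5000

# Synthetic inputs (all categories are arbitrary integer codes).
parent = rng.integers(0, 4, n)   # parents' religion
region = rng.integers(0, 6, n)   # geographic region
food = rng.integers(0, 10, n)    # favorite food (noise by construction)

# 80% of the time a person keeps the parents' religion; otherwise random.
religion = np.where(rng.random(n) < 0.8, parent, rng.integers(0, 4, n))

X = np.column_stack([parent, region, food])
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, religion)

for name, score in zip(["parent", "region", "food"], model.feature_importances_):
    print(f"{name:>6}: {score:.2f}")
```

On data like this, essentially all of the importance lands on the parents’ religion, which is the point:  a testable model tells you which inputs actually carry predictive weight.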

Of course, science or physics might hypothesize that some unknown field or phenomenon affects some outcome, but there are a couple of reasons why this is different than assuming God:

1.  They build tests to gain more clarity around the unknown field or phenomenon
2.  The phenomenon usually has a consistent effect
3.  The ignorance about the unknown thing is usually temporary
4.  People work to create rigor around building a definition of the unknown thing
5.  The unknown thing helps to improve the predictability of a model

God’s existence is not testable, and therefore we can’t really assign it a probability.  I think we have a responsibility to assign something a probability before it’s worth talking about in terms like these, which is one of the reasons why I rejected religion.  A person can talk science all they want, but the plain and indisputable truth is that we take an inappropriate logical leap by assuming there is a God, and that assumption does not allow us to predict outcomes with any accuracy.