By definition, a null hypothesis can never be proven. Instead, we describe our confidence in it by computing probabilities from the data we observe.

For instance, if we’re testing the effectiveness of a drug, the hypothesis might be laid out like this:

Hypothesis[null]: Effect of drug = Effect of Placebo

Hypothesis[alt]: Effect of drug != Effect of Placebo

We go to great lengths to build mathematical strategies for testing a hypothesis, and some hypothesis testing strategies are demonstrably better than others.

Our hypothesis test frames probability in terms of the null hypothesis. A p-value near 0 means we reject the null hypothesis. A p-value of 0.95 (or really anything above the chosen significance level, often 0.05 or 0.10) means we *FAIL* to reject the null hypothesis. In other words, we never say that we’ve *PROVED* the null hypothesis, but corroborated tests revealing consistent failure to reject a hypothesis might give us confidence that the null hypothesis is correct. Even then, the null hypothesis would also need to accurately predict outcomes or support a statistical model. We also need to make sure we’re testing the hypothesis well. If we can’t test a hypothesis, how could we ever assign a probability to it?
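As a sketch of what such a test looks like in practice, here is a simple permutation test in Python (the outcome numbers are invented for illustration): it asks how often a random reshuffling of the pooled outcomes produces a group difference at least as large as the one we observed, which estimates a p-value under the null hypothesis that drug and placebo come from the same distribution.

```python
import random

def permutation_test(drug, placebo, n_perms=10_000, seed=0):
    """Estimate a p-value for the null hypothesis that the drug and
    placebo outcomes come from the same distribution."""
    rng = random.Random(seed)
    observed = abs(sum(drug) / len(drug) - sum(placebo) / len(placebo))
    pooled = drug + placebo
    n = len(drug)
    extreme = 0
    for _ in range(n_perms):
        rng.shuffle(pooled)
        # Split the shuffled pool into two fake "groups" and compare them
        diff = abs(sum(pooled[:n]) / n - sum(pooled[n:]) / len(placebo))
        if diff >= observed:
            extreme += 1
    return extreme / n_perms

# Hypothetical recovery scores (higher = better); purely illustrative
drug = [7.1, 6.8, 7.4, 7.9, 6.5, 7.2, 8.0, 7.6]
placebo = [6.2, 6.9, 5.8, 6.4, 6.6, 6.1, 6.7, 6.3]

p = permutation_test(drug, placebo)
# If p falls below our significance level (say 0.05), we reject the null;
# otherwise we merely fail to reject it -- we never "prove" it.
```

Note that even a tiny p-value here only quantifies surprise under the null; it does not prove the alternative, which is exactly the asymmetry described above.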

Then there’s always the problem of sampling bias: maybe the tests you’ve run are not representative of the natural distribution. Good science means remaining skeptical, even when some preliminary data supports a position.

In statistical models, some variables are more important than others. For instance, we could build a model that predicts a person’s religion (the person’s religion would be the target variable). The input variables we could use to predict it might include their parents’ religion, their geographic location, their peer group (and its majority religion), their freedom to practice a religion, whether they attended a religious school, and their favorite food.

It’s easy to see why a person’s parents’ religion is more important in the model than their favorite food is. Of course, if a religion bans pork and the person’s favorite food is pork chops, that may elevate the importance of the variable for that particular person. But for aggregate data, a person’s parents’ religion is almost certainly the most important variable in this model, followed distantly by their geographic region and their freedom to practice that religion.

The prerequisite for building a model is that it can be tested with some reliable method to assess the probability that the input variables are meaningful. Our confidence in the overall model depends on its ability to predict outcomes.
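The idea that one variable carries far more predictive weight than another can be sketched with hypothetical data (the variable names, the 90% inheritance rate, and the sample sizes below are all made up for illustration): learn a simple majority-vote rule from each input variable on training rows, then compare how well each rule predicts held-out rows.

```python
import random
from collections import Counter, defaultdict

def accuracy_of_feature(train, test, feature, target="religion"):
    """Learn a majority-vote rule (feature value -> most common target)
    on the training rows, then score it on held-out test rows."""
    by_value = defaultdict(Counter)
    overall = Counter()
    for r in train:
        by_value[r[feature]][r[target]] += 1
        overall[r[target]] += 1
    fallback = overall.most_common(1)[0][0]
    rule = {v: c.most_common(1)[0][0] for v, c in by_value.items()}
    hits = sum(1 for r in test if rule.get(r[feature], fallback) == r[target])
    return hits / len(test)

rng = random.Random(42)
religions = ["A", "B", "C"]
foods = ["pizza", "sushi", "tacos"]
rows = []
for _ in range(1000):
    parents = rng.choice(religions)
    # Assumption baked into the toy data: a person keeps their parents'
    # religion 90% of the time, while favorite food is independent of it.
    person = parents if rng.random() < 0.9 else rng.choice(religions)
    rows.append({"parents": parents, "food": rng.choice(foods),
                 "religion": person})

train, test = rows[:700], rows[700:]
acc_parents = accuracy_of_feature(train, test, "parents")
acc_food = accuracy_of_feature(train, test, "food")
# With 90% inheritance, the parents rule should score far above the
# food rule, which can do little better than chance.
```

The gap between the two accuracies on held-out data is exactly the kind of evidence that lets us call one input variable meaningful and another nearly useless.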

Of course, science or physics might hypothesize that some unknown field or phenomenon affects some outcome, but there are a couple of reasons why this is different from assuming God:

1. They build tests to gain more clarity around the unknown field or phenomenon

2. The phenomenon usually has a consistent effect

3. The ignorance about the unknown thing is usually temporary

4. People work to create rigor around building a definition of the unknown thing

5. The unknown thing helps to improve the predictability of a model

God’s existence is not testable, and therefore we can’t really assign it a probability. I think we have a responsibility to assign something a probability before it’s worth talking about in terms like these, which is one of the reasons why I rejected religion. A person can talk science all they want, but the plain and indisputable truth is that assuming there is a God is an inappropriate logical leap, and it does not allow us to predict outcomes with any accuracy.

You are right that we cannot assign a probability to the existence of God, in the sense of probability that you are talking about. However, if you view probability as a measure of belief, as in Bayesian statistics, then you can investigate the impact of evidence on one’s belief in God. This is most useful as a subjective, personal exercise, not as a method to prove or disprove anything.
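That Bayesian reading of probability as a measure of belief comes down to Bayes’ rule; here is the mechanics of a single update (the prior and likelihood numbers are arbitrary, chosen only for illustration):

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Return the posterior P(H | E) from a prior P(H) and the
    likelihoods of the evidence under H and under not-H (Bayes' rule)."""
    numerator = p_evidence_given_h * prior
    denominator = numerator + p_evidence_given_not_h * (1 - prior)
    return numerator / denominator

# Hypothetical numbers: a prior belief of 0.5, and a piece of evidence
# judged twice as likely if H is true as if it is false.
posterior = bayes_update(0.5, 0.6, 0.3)
# posterior = 0.6*0.5 / (0.6*0.5 + 0.3*0.5) = 0.3 / 0.45, about 0.667
```

The output is a revised degree of belief, not an objective frequency, which is why this works as a personal exercise rather than a proof.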
