The Lipid Hypothesis Is Wrong

Lipid Hypothesis:  measures used to lower the plasma lipids in patients with hyperlipidemia will lead to reductions in new events of coronary heart disease.

In other words, a link is posited such that consuming dietary fat leads to increased blood cholesterol levels; increased cholesterol levels lead to heart disease.

About 60 years ago, the government started telling us that saturated fat is bad for you.  In particular, saturated fat raises “bad cholesterol,” otherwise known as Low-Density Lipoprotein (LDL), which by the way is not technically cholesterol.  This bad cholesterol is sure, according to our dear government, to kill you in a most unpleasant way.

As Richard Feynman said, “if it doesn’t agree with experiment, it’s wrong.” If the hypothesis is that saturated fat raises your triglycerides, raises your cholesterol, and increases the likelihood that you’ll die early, then the minute saturated fat doesn’t do that, the hypothesis is in serious question. At the very least, when we see saturated fat not causing harm en masse, it should prompt us to ask: were we wrong?  Indeed, the data shows the hypothesis is shit.

Why is it that a person who cuts carbs and significantly increases their saturated fat intake gets healthier and loses fat faster than someone on the traditional reduced-calorie, high-grain diet?  Why is it that reduced carbs and increased saturated fat, on average, drastically reduce triglyceride levels? Why is it that SO MANY people are having a lot of success controlling blood sugar, weight, cholesterol, and triglycerides, even though they’re getting 50%+ of their calories from fat?

There is no mechanism I am aware of that could lead to increased LDL cholesterol from eating saturated fat; on the contrary, the only way to raise LDL, as far as I know, is by consuming excess carbohydrates.  The way it works is this: you overeat carbohydrates; the liver converts the excess to fat, packs that fat into a VLDL lipoprotein, and sends it out into the bloodstream.  The VLDL delivers the fat to fat cells, and voila, you’ve got LDL.

Saturated fat follows a similar-looking trajectory, except that instead of being packed onto a VLDL, it gets loaded onto chylomicrons.  It also skips the liver – it enters the blood through the thoracic duct.  The handoff between the lipoprotein and the fat cell looks much like the one for liver-exported fat; however, the remnant chylomicron left behind is far smaller than LDL. In other words, saturated fat doesn’t raise your LDL level.

Aside from technical aspects of this mechanism, there are a few things that are clear, based on the disastrous consequences of the last 60 years of public health policy:

1.  It is overwhelmingly harmful to rely on cohort surveys to drive health policy.  Cohort surveys have a place in science and medicine, but they should not be the only tool used to demonstrate something’s benefit or harm.  Well-designed controlled trials are the only reliable tool for demonstrating causation.
2.  Saturated fat and cholesterol are not the problems the government and “health experts” have been making them out to be for the past 60 years.  In fact, dietary (exogenous) cholesterol has a negligible effect on blood cholesterol.  Moreover, there is no known biological mechanism for saturated fat to affect cholesterol levels.
3.  The success of various diets (vegan, paleo, ketogenic, traditional reduced-calorie) makes it clear that sugar (sucrose), high-fructose corn syrup, and processed flour are the primary problems.  Vegetable oils high in omega-6 are probably a problem too.

There are plenty of people who articulate this better than I do, but I think the takeaway from what we actually observe is that the fixation on the relationship between saturated fat and blood cholesterol should be critically re-examined.  Better alternatives, based on best practices, should be favored over dogma and demonization.

Secular Morality

The presupposition for many Christians is that there can be no morality without God; like it or not, God is the moral compass by which we navigate our lives, and he created the framework within which we exist.

As I’ve mentioned in other posts, I can’t prove this God character *doesn’t* exist, but then again, there are a lot of things I can’t disprove, such as invisible garden gnomes or transparent fire-breathing dragons.  Luckily, I don’t believe it’s my burden to justify my disbelief in those fantasies.

I suppose the best we can do is to put forward the question:  can we define a moral code without invoking God, religion, or any of its byproducts?

There are a lot of people who are more articulate about this matter than I am (e.g., Matt Dillahunty, Daniel Fincke), but for me, a moral way to live is to try to identify what is best for everyone, in terms of their physical and emotional well-being, and to do our best to maximize everyone’s well-being while simultaneously minimizing harm.  How to maximize benefit and minimize harm can be worked out empirically: building models, comparing inputs and outputs, and identifying best practices.

Of course, this framework creates dilemmas from time to time, but last time I checked, we live in a complicated world full of social relationships and competing interests, and dilemmas are not unique to the faithless.  The advantage this framework gives is that it can identify a conflict of interest when one exists, and it does not create a hierarchy in which one party is always prone to benefit while others are always prone to harm, as is the case with various implementations of Christianity.

The other advantage of this agile implementation of morality is that it does not have a rigid ruleset.  It is self-correcting.  It does not presuppose that text in a particular document is an absolute standard for behavior, because behavior is relative to the time and situation.  It always has the capacity to improve or adjust in the event of moral or logical inconsistency.

The hazard of an implementation like this is that models are never perfect – their predictive capacity can diminish under certain circumstances.  For example, a society might conclude that what is most beneficial for the majority is that we should kill minority groups who some claim are detrimental to society.  If there are 95 in the majority, and 5 in the minority, it’s quite clear how to maximize benefit if the majority is convinced that the minority is harmful.  Indeed, various historical figures have successfully convinced the masses that some minority group (or even a majority group) is harmful to the overall well-being of society, and used that argument to justify mass killings.

I have a couple responses to this concern:

1.  It’s not like that claim is isolated to non-theistic societies.  Plenty of religious claims have been put forward over the past couple millennia to justify mass extermination.

2.  In an agile moral framework which forces people to consider well-being and harm for all competing interests, it would be quite difficult to gloss over the harm caused by mass extermination, especially when there is no rigid text that condones this sort of thing.  The Christian bible, on the other hand, has several examples that justify mass extermination.  Rigid textual frameworks are the antithesis of secularism’s agility.

People can be duped, and people can be convinced that “the other” causes them harm, even if they’ve never even met “the other”.  This is a human failing, and it’s a theme across cultures, religions, and time.  Humans have a difficult time getting along, and religion often exacerbates this, as does resource scarcity and desire for power.

Secular morality is a better solution, because it doesn’t create a framework that gives people license to claim themselves “chosen” or more innately good because of their birthplace, tribe, or family.

Claims, Hypotheses, and the Burden Of Proof

What evidence do you have that God DOES NOT exist?  If you’re a reasonable person, your response should be somewhere in the neighborhood of “absolutely none.”  Interestingly, this is roughly the same evidence you have that underwear-stealing gnomes don’t exist, or, for that matter, that leprechauns don’t exist.

It’s a very unscientific position to ask someone to prove that something does not exist.

In science and medicine, various statistical measures are used to test claims.  For instance, if a drug company is testing to see if the drug they’ve developed is any good, the core hypothesis they test against is whether there is any difference between their drug and placebo.  Framed as a hypothesis:

H0:  Drug A = Placebo [this is the null hypothesis]
HA:  Drug A ≠ Placebo [this is the alternative hypothesis]

If you ever read a scientific paper regarding claims like these, they’ll often make reference to P-values.  A P-value is the probability of observing results at least as extreme as the ones seen, assuming H0 is true; it is not, strictly speaking, the probability that H0 itself is true, though the two are often conflated.  Either way, the lower the P-value, the harder it is to chalk the results up to chance.  A P-value of 0.01 means that if Drug A really had the same outcomes as placebo, results this extreme would show up only about 1% of the time.  The practical upshot of a statement like that is that it’s very unlikely Drug A behaves just like placebo; in other words, Drug A is different from placebo.
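To make this concrete, here’s a minimal sketch of how a comparison like this can actually be tested, using a permutation test in plain Python.  The outcome numbers below are made up purely for illustration.  The logic: if H0 is true (drug = placebo), the group labels are interchangeable, so we can shuffle them thousands of times and ask how often a shuffled difference is at least as large as the real one.  That fraction is the P-value.

```python
import random

random.seed(42)

# Hypothetical outcome measurements for two groups of 8 (illustration only).
drug    = [5.1, 4.8, 6.2, 5.9, 5.4, 6.0, 5.7, 5.3]
placebo = [4.2, 4.6, 4.1, 4.9, 4.4, 4.0, 4.7, 4.3]

def mean(xs):
    return sum(xs) / len(xs)

observed = mean(drug) - mean(placebo)

# Permutation test: under H0 the labels "drug" and "placebo" carry no
# information, so re-deal the pooled values at random and count how often
# a re-dealt difference is at least as extreme as the observed one.
pooled = drug + placebo
n = len(drug)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[:n]) - mean(pooled[n:])
    if abs(diff) >= abs(observed):
        extreme += 1

p_value = extreme / trials
print(f"observed difference = {observed:.2f}, p-value ~ {p_value:.4f}")
```

With these (fabricated) numbers the two groups barely overlap, so almost no shuffled arrangement matches the observed gap and the P-value comes out tiny, i.e., we reject H0.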

Framing the God question in these terms, you’d arrive at a hypothesis like the following:

H0:  God = 0 (God does not exist)  [this is the null hypothesis]
HA:  God > 0 (God exists)  [this is the alternative hypothesis]

This is not a very testable hypothesis.  How could you demonstrate that God > 0?  If you answer something like:  look at how life exists…or the universe is proof that God exists, my response would be:  why does that require God?  Are you certain that it couldn’t have come from something else?  Or are you certain that the answer to these things isn’t something that we haven’t yet discovered? If so, how so?

This framework by which we approach hypothesis testing is quite reliable.  We construct tests designed to tell us how surprising our data would be if the null hypothesis were true.  Because of that framework, we hardly ever get to say we’ve proved something.  We only get to say we’ve rejected something, or failed to reject it.  In other words, we can never know if we’re right…all we can know is if we’re wrong.  But by arriving at the same conclusion over and over again (either by rejecting or failing to reject a claim), we get good insight into how likely a hypothesis is to be correct.
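One way to see why this framework is reliable is to simulate it.  The sketch below (plain Python, assuming normally distributed measurements with a known spread) runs many experiments in which H0 is true by construction: both groups are drawn from the same distribution.  A test that rejects whenever the standardized difference in means exceeds 1.96 should wrongly reject only about 5% of the time, which is exactly the error rate the machinery promises.

```python
import random

random.seed(0)

def experiment(n=50, sigma=1.0):
    # Both groups come from the SAME distribution, so H0 is true here.
    a = [random.gauss(0, sigma) for _ in range(n)]
    b = [random.gauss(0, sigma) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = sigma * (2 / n) ** 0.5      # standard error of the difference in means
    return abs(diff / se) > 1.96     # True = we (wrongly) rejected H0

trials = 2000
rejections = sum(experiment() for _ in range(trials))
rate = rejections / trials
print(f"false rejection rate ~ {rate:.3f}")  # should hover around 0.05
```

No single run proves anything, but across thousands of runs the rejection rate settles near the advertised 5%, which is what it means for the testing framework to be calibrated.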

This is a humble way to approach the natural world.  It puts the burden of proof where it belongs, and it’s reliable.  If we test a hypothesis many times in many different ways, and we consistently fail to reject it, that adds weight to the hypothesis, and with other corroboration, it might graduate to a theory, the highest status an explanation can reach in science (the general theory of relativity, the theory of evolution, the theory of gravity, etc.).