Causal Arguments and Reasoning

Introduction

Causal arguments are the heart and soul of scientific reasoning. Understanding both how to make and how to evaluate causal claims is essential to understanding the world. This entry will be somewhat lengthy since a lot of concepts need to be covered. You might want to make yourself a coffee… We'll begin by looking at Mill's methods for discovering a causal relation, then we'll look at how to evaluate causal claims, including the common errors people make at each stage.

Part 1: How to Identify Causal Variables Using Mill's Methods

John Stuart Mill developed four formal methods for determining causation: the method of agreement, the method of difference, the joint method of agreement and difference, and the method of concomitant variation. These methods take what most of us already do intuitively and formalize it. The advantage of formalizing a method is that we reduce the possibility of error.

Method of Agreement
Suppose you go out for dinner with a group of 10 friends. Some of you order fish, some order tacos, some share a pitcher of sangria, and some order the beef fajitas. A few hours later 6 of you are really sick. You suspect food poisoning. Assuming you're right, how would you go about figuring out what caused it?

Assuming that it is a case of food poisoning, most of us might start by listing what the sick people ate. From our list we'd try to identify the common variable and that would be the most likely cause of the food poisoning.

You have just applied the method of agreement: If two or more events share only one relevant characteristic/variable then that variable is probably the cause.

We can formalize the method this way:

Instance 1: Factors a, b, and c are followed by E.
Instance 2: Factors a, c, and d are followed by E.
Instance 3: Factors b and c are followed by E.
Instance 4: Factors c and d are followed by E.
Therefore, factor c is probably the cause of E.
(Lower-case letters = possible causal variables; E = event/causal outcome)

Let's apply the method to the above case to see how it works. To save ourselves a little time we can make a translation key for each variable:

f=fish, t=tacos, s=sangria, b=beef fajitas. Each instance represents a person who got sick. Suppose four of the people who got sick are John, Liz, Ami, and Evan. We'd formalize our reasoning this way:

John: Ate f, t, and s and got sick.
Evan: Ate f, s, and b and got sick.
Liz: Ate t and s and got sick.
Ami: Ate s and b and got sick.
Therefore, sangria is probably the cause of the food poisoning.

In the method of agreement we look at all the cases where the event occurred, list all the possible causal factors, and then see which one shows up in every case where there's an effect.
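If it helps to see the procedure spelled out, here is a minimal sketch of the method of agreement in Python. The function name and data are invented for illustration; the method itself is just set intersection over the cases where the effect occurred:

```python
# Method of agreement as set intersection: a factor common to every
# case where the effect occurred is the likely cause.
# Key: f=fish, t=tacos, s=sangria, b=beef fajitas.

def method_of_agreement(cases):
    """Return the factors present in every case where the effect occurred."""
    common = set(cases[0])
    for case in cases[1:]:
        common &= set(case)  # keep only factors shared with this case too
    return common

sick_diners = {
    "John": {"f", "t", "s"},
    "Evan": {"f", "s", "b"},
    "Liz":  {"t", "s"},
    "Ami":  {"s", "b"},
}

print(method_of_agreement(list(sick_diners.values())))  # {'s'}: the sangria
```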

The method of agreement is often employed by public health officials whenever a community has a higher than normal rate of some disease or condition. Officials try to determine what the common variable is. It's often something in the water supply or in the air. Once you've identified the source, the more difficult part can be to figure out the exact chemical in the water or air that's causing the effect. To do this, we often need to apply the method of difference…

The Method of Difference
Sometimes I have trouble sleeping. I noticed that when I eat, drink beer, and listen to podcasts before bed I can't fall asleep. How do I figure out which of the variables is causing my insomnia? (Ignore for the moment that it could be caused by combining all three.) I'll probably do something like this. One night I'll eat and drink beer before bed but won't listen to podcasts. If I don't have insomnia that night, then I can infer that the podcasts are causing my insomnia. If I still can't sleep, then on another night I'll drink beer and listen to podcasts before bed but I won't eat. If I don't have insomnia then I can infer that eating right before bed causes my insomnia.

In the example above we have applied the method of difference: If a relevant factor is present when a phenomenon occurs and absent when the phenomenon does not occur, it is probably the cause. To apply the method of difference, list all the relevant factors present prior to an event happening and compare it to a list of the factors present when the event doesn't happen. Whatever is on the first list but not the second is probably the cause.

We can formalize the method this way:

Instance 1: Factors a, b, and c are followed by E.
Instance 2: Factors a and b are not followed by E.
Therefore, factor c is probably the cause of E.
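Here's a minimal sketch of the method of difference in code, using the insomnia example (the labels are invented for illustration). The suspect factor is whatever is present when the effect occurs but absent in the otherwise-similar case where it doesn't:

```python
# Method of difference as set subtraction: compare a case where the
# effect occurred with an otherwise-similar case where it didn't.

def method_of_difference(effect_case, no_effect_case):
    """Return factors present with the effect but absent without it."""
    return set(effect_case) - set(no_effect_case)

insomnia_night = {"eating", "beer", "podcasts"}
restful_night  = {"beer", "podcasts"}  # skipped the late meal, slept fine

print(method_of_difference(insomnia_night, restful_night))  # {'eating'}
```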

Combining Methods to Discover Cause: Joint Method of Agreement and Difference
Combining the method of agreement with the method of difference is the backbone of the scientific method. Recall that with the method of agreement we identify the cause if a single factor is always present when the event occurs. Often there is more than one common variable associated with an effect, and so the method of agreement alone won't help us identify the cause. For example, suppose a community has a higher than average rate of a particular cancer. If most people share a similar diet, source of food, water supply, and air, how are we to determine the cause using the method of agreement alone?

The method of agreement tells us that one of these variables is the cause, but if everyone in the community shares them then we're not going to be able to distinguish causation from mere correlation (more on that in Part 2). We need to employ the method of difference in conjunction with the method of agreement.

For this example, this means we need to find other communities that share all the variables except one or two and see if the cancer is present in that population. Maybe we have two communities that have a similar diet, food source, and air quality but get their water from different rivers. If the second community (B) doesn't have the higher cancer rate we can conclude (using the method of difference) that the cause of the higher cancer rate is probably in the water supply of the first community (A).

Clinical trials rely on the joint method. In a simple trial we have a treatment group and a control group (placebo or current standard of care). To establish that the treatment is the cause of any observed effect it's given to each member of the treatment group (duh). This is using the method of agreement. The treatment group has a shared variable so it's probably the cause. However, there are many additional shared reasons why a treatment group could improve apart from the treatment.

Consider a trial for weight loss. The main problem for weight loss trials is that the people who enter them are often a self-selected group which, as you know from the lesson on polling, can lead to measurement errors. Who enters weight loss studies? People who are already motivated to lose weight. What this means is that in addition to the magic pill they are given in the trial, they might be exercising more than they did before, as well as cutting calories. This means that even if the magic pill has no effect, we'll probably see weight loss in the treatment group. But this weight loss will be caused by the lifestyle changes they made at the same time they entered the trial. If we base our conclusions only on a treatment group with no control group, we might mistakenly attribute the cause of the weight loss to the pill: Everyone took the pill and everyone lost weight, therefore the pill did it! Method of agreement worked! Not so fast…

This is why also applying the method of difference is so important. We need to control for unknown and additional causes like the lifestyle changes mentioned above. Let's assume we have randomly assigned each participant to either the treatment group or the control group. We should have approximately equal numbers of equally motivated participants in both groups. We have also effectively concealed from the participants (and observers) which group has the "real" pill and which has the sugar pill. So long as both groups engage in similar lifestyle changes, the weight loss caused by those changes will be approximately the same in both groups. Any difference will be attributable to the pill: if the pill actually works, we'll observe a greater effect in the treatment group, since the pill is the only difference between the groups. Through the method of difference we isolated the causal factor and controlled for confounding factors. Ta da! Now you can science!
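Here is a toy simulation of the weight-loss trial to make the logic concrete. All the numbers are invented: both groups get the same lifestyle-change effect, only the treatment group gets the pill's effect, and subtracting the control average from the treatment average recovers the pill's contribution:

```python
import random

random.seed(42)

LIFESTYLE_LOSS = 3.0  # kg lost from diet/exercise changes (both groups)
PILL_LOSS = 1.5       # kg lost from the pill itself (treatment group only)

def participant(got_pill):
    """Simulate one participant's weight loss in kg."""
    loss = LIFESTYLE_LOSS + random.gauss(0, 1)  # lifestyle effect + noise
    if got_pill:
        loss += PILL_LOSS
    return loss

treatment = [participant(True) for _ in range(100)]
control = [participant(False) for _ in range(100)]

def avg(xs):
    return sum(xs) / len(xs)

print(f"treatment: {avg(treatment):.2f} kg, control: {avg(control):.2f} kg")
print(f"estimated pill effect: {avg(treatment) - avg(control):.2f} kg")
# With no control group we'd credit the pill with ~4.5 kg of loss;
# the between-group difference isolates the true ~1.5 kg effect.
```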

We can formalize the joint method of agreement and difference this way:

Instance 1: Factors a, b, and c are followed by E.
Instance 2: Factors a, b, and d are followed by E.
Instance 3: Factors b and c are not followed by E. (Look for counter-examples)
Instance 4: Factors b and d are not followed by E. (Look for counter-examples)
Therefore, factor a is probably the cause of E.
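In code, the joint method is just the two set operations chained together. A minimal sketch using the instances above:

```python
# Joint method: intersect the factor sets of the cases where E occurred
# (agreement), then discard any factor that also appears in a case
# where E did not occur (difference).

def joint_method(effect_cases, no_effect_cases):
    candidates = set.intersection(*(set(c) for c in effect_cases))
    for case in no_effect_cases:
        candidates -= set(case)
    return candidates

effect_cases = [{"a", "b", "c"}, {"a", "b", "d"}]   # instances 1 and 2
no_effect_cases = [{"b", "c"}, {"b", "d"}]          # instances 3 and 4

print(joint_method(effect_cases, no_effect_cases))  # {'a'}
```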

Correlation and Causation: Method of Concomitant Variation
Suppose for a moment that you are an alien from outer space. After a hard day of making crop circles you're thirsty. You notice that the drinks you get in bars (while wearing your human suit) give you a headache in the morning. Using the method of agreement as in the image below, you falsely conclude that the soda water causes the headache:

[Image: causation vs. correlation (alcohol example)]

Sometimes when two variables consistently appear together followed by an effect, we use the method of concomitant variation to figure out which one is the causal factor and which is merely correlated. Basically, the effect should rise and fall in proportion to variation in the causal variable: as the causal variable goes up, the effect goes up; as the causal variable goes down, the effect goes down.

Instance 1: Factors a, b, and c are correlated with E.
Instance 2: Factors a, b and increased c are correlated with increased E.
Instance 3: Factors a, b, and decreased c are correlated with decreased E.
Therefore, factor c is causally connected with E.

So, in the above example, if we varied the amount of alcohol we'd figure out that it is the causal factor, not the soda water.
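With invented numbers, the alien's corrected reasoning might look like this in code: the soda water never varies, so only the alcohol can account for the variation in headaches:

```python
from statistics import correlation  # Python 3.10+

# Concomitant variation with invented data: hold the soda water fixed,
# vary the alcohol, and see which factor the headache tracks.

soda_ml  = [200, 200, 200, 200]  # never varies across nights
alcohol  = [0, 2, 4, 6]          # units drunk per night
headache = [0, 3, 6, 9]          # severity next morning, 0-10

print(correlation(alcohol, headache))  # 1.0: headache rises with alcohol
# soda_ml has zero variance, so it can't explain why headaches vary;
# concomitant variation points to the alcohol.
```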

Note: In medical research and toxicology, concomitant variation is referred to as the dose-response relationship. If you look at a lot of alternative medicine modalities you'll find that there is no dose-response relationship with the proposed treatment. This is an indication that the treatment has no causal efficacy and that any observed effect is the subjective short-term effect of placebo or a failure to adhere to double blinding. (More on subjective measures, self-reporting, blinding, and placebos in the unit on pseudoscience.)

Summary of Mill's Methods
Our intuitive methods of inferring causation can be formalized into four different methods. Formalizing the methods allows us to apply them systematically, which in turn makes our evaluations less prone to error. Mill's methods are the backbone of the scientific method and, as we will see, important for being able to carefully evaluate causal claims.

Part 2: Evaluating Causal Claims

In this section we'll begin by distinguishing kinds of causal claims. Depending on whether a cause is physical, psychological, sociological, or economic we will evaluate it a bit differently. Causal claims can also be either generalizations or about specific events. In the former case we'll need to consider the same criteria we use for any kind of generalization (sample size, representativeness, and selection bias). Once we distinguish the kinds of causal claims, we'll look at the basic method for evaluating all of them.

Kinds of Causal Claims

1. The milk spilled because I bumped it with my elbow.
2. Reading a bright screen just before bed can cause insomnia.
3. The baby is crying because he's grumpy.
4. Text messaging is popular because you can avoid asking people about their day and just get to the point.
5. John is unemployed right now because the economy is weak.
6. Young men act aggressively because they are trying to signal dominance to other males.

All of the above are causal claims. In terms of their scope, causal claims can be divided into two categories: particular causal claims and general causal claims. Particular causal claims are about particular events. Claims 1, 3, and 5 are particular causal claims. General causal claims are just like the generalizations we've studied earlier. They are inferences from samples to more general conclusions. Claims 2, 4, and 6 are general causal claims.

We can use the method we are about to learn to evaluate both kinds of causal claims. The only difference is that for general causal claims we also need to do what we do for any kind of generalization: evaluate sample size, representativeness, and look for selection bias. Once we've done that, the rest of the method is the same for both kinds of claims.

Before moving to the method I want to briefly point out one other distinction between kinds of causal claims: physical vs. non-physical. Non-physical causal claims include claims that are behavioral, psychological, sociological, political, or economic. Claims 1 and 2 are physical causal claims. The others are non-physical. The important (general) difference between the two is this: With physical causation the cause (usually) occurs before the effect. Think billiard balls. If one ball moves (the effect) it's because at a slightly earlier point in time another ball hit it (the cause). With non-physical causal claims (3, 4, 5, 6), causation doesn't have to have a temporal order. It can be simultaneous, because these types of explanations and claims usually have to do with background conditions that cause behavioral/psychological/social/economic effects. Background conditions needn't be physical (although they can be). Regardless of whether we're evaluating physical or non-physical causes, the same method will apply. As we will see, the main difference will be in the kinds of errors we can make when distinguishing causation from correlation.

The Basic Method for Evaluating Causal Claims

All causal claims imply the following structure:

(P1) X is correlated with Y.
(P2) The correlation between X and Y is not due to chance (i.e., it is not merely statistical or temporal).
(P3) The correlation between X and Y is not due to some mutual cause Z or some other cause.
(P4) Y is not the cause of X. (Direction of causation).
(C): X causes Y.

A causal argument or explanation is strong to the degree that we are willing to accept each of the four premises (five if it's a general causal claim). We can think of the evaluation method as a series of challenges a causal claim must pass. If it passes (P1) then it must pass (P2), and so on until it's passed each test. In evaluating each premise we need to justify why the premise should be accepted or rejected. Perhaps the best way to see how to apply the method is to work through an example:

Example Argument: The MMR vaccine (X) causes a decrease in measles incidence rates (Y).
(P1) Taking the MMR vaccine is correlated with a lower incidence rate of measles in a population: When vaccination rates go up in a population, incidence rates go down. When vaccination rates go down in a population, incidence rates go up.
(P2) The correlation between taking the vaccine and lower incidence rates is not due to chance. Proposed causal mechanism: Infectious diseases are spread via micro-organisms. Vaccines expose the immune system to antigens, causing it to produce antibodies that confer resistance to the associated micro-organism. Applying Mill's methods also suggests the MMR vaccine is the causal factor.
(P3) The correlation between vaccines and incidence rates is not due to some mutual cause. For example, improvements in sanitation, nutrition, and hygiene don't explain the changes in incidence rates pre- and post-vaccine, since different vaccines were introduced at different times while those other variables changed for everyone at the same time.
(P4) Lower incidence rates don't cause people to get vaccines in greater numbers.
(C) Therefore, the MMR vaccine causes lower incidence rates for measles.

Now that we have an example, let's look more carefully at how we should systematically evaluate each premise.

Evaluating Premise 1: X and Y are Correlated
Correlation means that two variables vary together: as one changes, the other reliably changes with it. Here we might appeal to the method of concomitant variation. However, recall that Mill's methods are inductive methods, meaning they don't (and can't) guarantee that a factor causes some effect. As we know by now, this isn't a weakness; it is merely a fact about inductive arguments. The first step in evaluating any causal claim is to establish correlation. If two variables aren't at least correlated then they certainly aren't causally related.

In the MMR vaccine example we see rates of infection fall as vaccination rates rise, and rise as vaccination rates fall. This shows correlation.

You will often hear people say "correlation doesn't imply causation", and they are correct; however, sometimes it does! This brings us to the method of evaluating Premise 2…

Evaluating Premise 2: The Correlation between X and Y Is not Due to Chance
Correlations are relatively common. The real work is to distinguish mere correlations from genuine causation. There are many examples of extremely closely correlated variables that clearly have nothing to do with each other. For example, the number of drownings in pools per year correlates almost perfectly with the number of Nicolas Cage movies per year. Does it follow that people drowning in pools causes Nicolas Cage to act in movies? Probably not.
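Chance correlations like this are easy to manufacture. Here is a small demonstration (an invented setup, not data from any real study): generate pairs of short time series completely independently, so by construction there is no causal link, and count how often they nonetheless correlate strongly:

```python
import random
from statistics import correlation  # Python 3.10+

def random_walk(n, rng):
    """A short time series with no connection to anything else."""
    level, walk = 0.0, []
    for _ in range(n):
        level += rng.gauss(0, 1)
        walk.append(level)
    return walk

rng = random.Random(0)
strong = 0
for _ in range(200):
    a = random_walk(10, rng)
    b = random_walk(10, rng)  # generated independently of a
    if abs(correlation(a, b)) > 0.7:
        strong += 1

print(f"{strong} of 200 unrelated pairs correlate with |r| > 0.7")
```

Trending variables (yearly totals, cumulative counts) are especially prone to this, which is part of why examples like the Nicolas Cage one are so easy to find.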

The interesting question is, how do we know those two variables aren't causally related but only statistically related? Answer: There is no plausible causal mechanism. A causal mechanism refers to the way in which a cause and an effect can plausibly be related. So, in order to accept Premise 2, we'll need to identify a plausible causal mechanism. If there isn't one, then we should reject the premise and therefore reject the conclusion. The correlation is probably just statistical rather than causal. We can also distinguish correlation from causation by appealing to the joint method of agreement and difference.

Our key objective in evaluating Premise 2 is to establish that the relationship between the supposed cause (X) and the effect (Y) isn't merely statistical. We do this by using the joint method of agreement and difference (we can appeal to a controlled experiment — natural or clinical) and by suggesting a plausible causal mechanism.

Common Errors in Evaluating Premise 2:
1. Confusing Correlation for Causation, or the cum hoc ergo propter hoc fallacy ("with this, therefore because of this"): This fallacy (usually just called "confusing correlation with causation") is committed anytime someone doesn't properly evaluate Premise 2. They attribute causation where there is probably only correlation. If we don't establish reasonable grounds for causation then we likely only have correlation. Until we reasonably demonstrate causation we cannot accept Premise 2.

Example:

[Image: organic food sales vs. autism diagnoses]

Just because the rise in organic food sales correlates strongly with rising autism rates, it doesn't follow that organic foods cause autism. We'd need to show a plausible causal mechanism and perhaps a controlled study. Maybe it's the untested organic pesticides?

2. Post hoc ergo propter hoc (usually called the "post hoc fallacy"): Post hoc ergo propter hoc ("after this, therefore because of this") is a subspecies of the correlation/causation fallacy, but has to do with temporal order. It usually applies to physical causes (but not always). Just because an event regularly occurs after another event doesn't mean that the first event causes the second. When I eat dinner I eat my salad first, then my protein; but eating my salad doesn't cause me to eat my protein.

Symptoms of autism become apparent about 6 months after the time a child gets their MMR vaccine. Because one event occurs after the other, many reason that the prior event is causing the later event. But as I've explained, just because an event occurs prior to another event doesn't automatically mean it causes it. We need a plausible mechanism and, better yet, a controlled experiment (i.e., the joint method).

And why pick out as the cause of autism one prior event out of the 6 months' worth of other prior events? Babies with autistic symptoms eat many foods. Why not hypothesize that one of the foods is the cause? And why ignore possible genetic and environmental causes? Or why not say "well, my son got new shoes 6 months ago (prior event), therefore new shoes cause autism"? Until you can tease out all the possible variables, hypothesize mechanisms connecting them to the effect, and employ some of Mill's methods, you can't attribute causation merely because of temporal order.

Summary of Evaluating Premise 2: Identify a causal mechanism and/or use the joint method of agreement and difference to justify accepting the premise. If neither of these criteria can be met, we should reject the premise. Common errors include confusing correlation with causation (cum hoc ergo propter hoc) and the post hoc ergo propter hoc fallacy.

Evaluating Premise 3: The Correlation between X and Y Is not Due to Some Mutual Cause Z or Some Other Cause.
Here's what appears to be a causal relation: In humans up to about 20 years of age, shoe size correlates strongly with mathematical ability. The greater the shoe size (X), the higher the math scores (Y). What's going on? Does having big feet (X) cause better mathematical reasoning (Y)? Probably not. Maybe something (Z) is causing both phenomena (X and Y). In fact, something is causing both phenomena! As children get older and grow, both their feet and brains develop. There is a third variable, aging and its associated development (Z), that explains why (X) and (Y) correlate closely.
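We can watch a mutual cause at work with a quick simulation (all numbers invented for illustration): age (Z) drives both shoe size (X) and math scores (Y), so X and Y correlate strongly overall, but the correlation disappears once we hold age fixed:

```python
import random
from statistics import correlation  # Python 3.10+

rng = random.Random(1)
ages = [rng.uniform(5, 20) for _ in range(2000)]      # Z
shoe = [0.8 * z + rng.gauss(0, 1) for z in ages]      # X depends on Z
math = [4.0 * z + rng.gauss(0, 5) for z in ages]      # Y depends on Z

print(f"overall r(shoe, math) = {correlation(shoe, math):.2f}")  # high

# Hold Z (roughly) fixed: look only at the ten-year-olds.
tens = [(x, y) for z, x, y in zip(ages, shoe, math) if 9.5 <= z <= 10.5]
xs, ys = zip(*tens)
print(f"within age 10, r(shoe, math) = {correlation(xs, ys):.2f}")  # ~0
```

Holding the third variable fixed ("controlling for" it) is exactly what researchers do when they suspect a mutual cause.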

Sometimes it can appear as though something is causing an effect when in fact there is a third factor that is causing both that variable (X) and the effect (Y)! In one of my favorite examples, there was a study that showed a strong relationship between testicle size (X) and being a good father (Y). Realizing that there isn't a direct plausible mechanism linking testicle size to parenting skills, researchers hypothesized that men with low testosterone levels (Z) are more likely to be better fathers (Y). The low testosterone levels (Z) plausibly explained both the smaller testes (X) and the fatherly behavior (Y). As we'll see in the section on evaluating Premise 4, the story is even more complicated, but this should suffice to illustrate the point.

Another example: Suppose someone thinks that "muscle soreness (X) causes muscle growth (Y)." This would be mistaken because it's actually exercising the muscle (Z) that causes both.

Sometimes a third variable (Z) doesn't explain the appearance of both (X) and (Y), but rather (Z) is a more general version of (X). For example, in social psychology there was an interesting reinterpretation of a study that demonstrates this third, more general cause principle. An earlier study showed a strong correlation between overall level of happiness (Y) and degree of participation in a religious institution (X). The conclusion was that participation in a religious institution (X) causes happiness (Y).

However, a subsequent study showed that there was a third element — sense of belonging to a close-knit community (Z) — that explained the apparent relationship between religion (X) and happiness (Y). Religious organizations are often close-knit communities so it only appeared as though it was the religious element (X) that caused a higher sense of happiness (Y). It turns out that there is a more general explanation of which participation in a religious organization is only an instance: Belonging to a close-knit community.

Summary of Evaluating Premise 3: Make sure that there isn't a third factor (Z) that underlies the correlation between the proposed cause (X) and the effect (Y), and that there isn't a more general version of (X) that explains the effect (Y).

Evaluating Premise 4: Y Is not the Cause of X.
When we evaluate Premise 4 we are trying to establish the direction of causation. Figuring out which way the arrow of causation points can sometimes be very tricky for a variety of reasons, and sometimes it points both ways. For instance, some people say that drug use (X) causes criminal behavior (Y). But in a recent discussion I had with a retired parole officer, he insisted that it's the other way around. He says that youths with a predisposition toward criminal behavior end up taking drugs only after they've entered a life of crime. That is, criminal behavior (Y) causes drug use (X). I think you could plausibly argue that the arrow can point in both directions depending on the person, or maybe even within the same person (i.e., a feedback loop). There's probably some legitimate research on this matter beyond my musings and the anecdotes of one officer, but this should suffice to illustrate the principle.

For one more example, let's return to the case of the good fathers with small testes. It was originally hypothesized that low testosterone levels have an effect on men such that they become better fathers. That is, low testosterone (X) causes good fatherly behavior (Y). It turns out that the arrow of causation points the other way. Men who are good fathers wake up at night to take care of the baby. As such, they get little sleep, causing their testosterone levels to fall. And so the actual direction of causation is that being a good father (Y) causes low testosterone levels (X). These sorts of cases are tricky to figure out and require a bit of background knowledge about the subject.

Especially when it comes to social and psychological explanations of behavior, it's not uncommon to find the arrow of causation pointing in both directions. For example, treating someone like a child can cause them to avoid taking responsibility for their life. But people not taking responsibility for their lives can cause other people to treat them like children. The causal arrow goes both ways and can easily end up creating a feedback loop.

For general causal claims feedback loops won't be too problematic. The problems arise when we are making a particular causal claim while knowing that there is a general feedback loop. Suppose it's true that there's a feedback loop between treating someone like a child and their not taking responsibility for their life. If a particular person A acts childishly, it will be difficult to determine why, in a particular instance, they are acting childishly or why they have the disposition to do so. You'll have to try to provide evidence (perhaps using the method of difference) that in this particular case only one direction has causal force.

Common errors: When we get the order of causation wrong we call this error "confusing cause for effect" or "mistaking the order of causation". Philosophers spent many years coming up with those creative names.

Summary of Evaluating Premise 4: Check whether the direction of causation is reversed. You'll want to provide evidence that this possibility is ruled out. If there's a feedback loop, then the causal claim isn't incorrect, it's just incomplete. It needs to include the fact that there's a feedback between the two variables.

Summary of the Basic Method for Evaluating Causal Claims
If, after evaluation, we find all four premises acceptable (five if we're generalizing from a sample) then we can reasonably conclude that X causes Y.

Premises 2, 3, and 4 are to a large degree about distinguishing correlation from causation and ruling out alternative explanations. As critical thinkers evaluating or producing a causal argument, we need to seriously consider the plausibility of these alternative explanations. Recall earlier in the semester we looked briefly at Popperian falsificationism. We can extend this idea to causation: i.e., we can never completely confirm a causal relationship, we can only eliminate competing explanations.

With that in mind, the implied premises in a causal claim provide us a systematic way to evaluate the claim in pieces so we don't overlook anything important. In other words, when you evaluate a causal claim, you should do so by laying out the implied structure of the argument for the claim and evaluating each premise in turn.

Summary of the Most Common Ways People Can Reason Incorrectly about Causation
(there are more, but these are the most common):

1. Post hoc ergo propter hoc ("after this, therefore because of this"), usually referred to just as the "post hoc fallacy". This is the error of confusing causation with temporal order. Just because Y happened after X, it doesn't follow that X caused Y. For example, every day I eat peanut butter and toast for breakfast and then go to work. It doesn't follow that eating peanut butter and toast causes me to go to work. This error applies to (P1), (P2), and (P3).

2. Misidentifying the Relevant Causal Factor(s): For any given general causal relationship there are often hundreds of factors common to each causal event. It does not follow that they are all relevant. This is why it's important to hypothesize a (possible) causal mechanism. For example, suppose you go out to dinner with 8 friends, 3 of whom get sick a few hours after eating. It turns out that the 3 friends are all male. If you were to conclude that they got sick because they are all male, this would be to misidentify the relevant causal factor. It seems unlikely that there is a causal relationship between their gender and their illness. This common variable is irrelevant. More likely their illness has to do with what they ate or drank in common. This reasoning error is usually a consequence of not having very deep knowledge of the topic at hand. A little Wikipedia research can usually at least get you started in the right direction. This error applies to (P1), (P2), and (P3).

3. Mishandling Multiple Factors: As with identifying relevant causal factors, for every general causal argument there will often be many antecedent variables involved. Identifying the one that has causal import can be tricky. Again, as above, you want to find ways to falsify competing alternatives. Also, people will often fail to consider alternative causal variables to the one(s) they identify. Again, a little wiki-research gets you started. This error applies to (P1), (P2), and (P3).

4. Confusing Correlation and Causation (cum hoc ergo propter hoc, "with this, therefore because of this"): Just because two events or variables are correlated or co-occur, it doesn't necessarily follow that there's a causal relationship. For example, just because there's a correlation between sales of organic foods and autism rates, it doesn't follow that there's a causal relationship between the two. Often a good way to avoid committing this error is to see if you can come up with a likely causal mechanism. If you can't, then it's likely simply correlation. However, you could be wrong, so do a little digging online just in case. This error applies to (P2) and (P3).

5. Confusing Cause and Effect (aka Direction of Causation). Often it is difficult to disentangle the direction of causation. For example, does participation in high school sports cause the development of a good work ethic and perseverance or do people with a good work ethic and perseverance have a greater tendency to do sports? This error applies to (P4).

6. No Control (see Method of Difference). Often misattributions of causation occur because there is no control group. If we don't know the natural prevalence rate of a disease or its average natural healing time we cannot reasonably attribute causal power to a purported remedy. The same goes for social policy interventions. Being able to compare an intervention group to a non-intervention group improves our ability to attribute (or dismiss) causation to the intervention. Applying a control helps eliminate errors in (P1), (P2), (P3), and (P4).

Bonus: Conspiracy Thinking and Causation

A Note on the Nature and (Mis)use of Causal Arguments in Science Denialism and Conspiracy Thinking

It's important to remember that causal arguments are inductive arguments. This means that by their very nature they are probabilistic and incapable of offering 100% certainty. But, as you should know by now, this doesn't mean that causal arguments are inherently bad arguments. It simply means we need to recognize their probabilistic nature which means evaluating their strength relative to competing hypotheses. This is the heart of the error that pseudoscientists, science denialists, and conspiracy theorists make. They fail to evaluate causal claims relative to competing claims in terms of their likelihood of being true.

The science denialist will argue that, for example, you can't prove 100% that greenhouse gases are causing global warming. And they are correct! But not in any meaningful way. They are merely noting a fact about all scientific arguments about causation: They are inductive and so none of them are capable of 100% proof. This type of reasoning commits (at least) two kinds of errors. The first we are already familiar with: misplaced burden of proof. Given that there is a consensus of experts (and overwhelming positive evidence) for the claim, the likelihood of the claim being true is quite high, and so the burden of proof correctly falls on anyone wishing to deny that greenhouse gases cause global warming. (You can substitute any issue where people argue against a consensus of experts: creationism, anti-vaxxers, people who claim GMOs are harmful to human health, flat earthers, 9-11 "Truthers", etc…)

Recall also that since causal arguments are probabilistic they should be evaluated relative to competing hypotheses that also purport to explain the phenomenon in question. For example, there are no competing hypotheses that, to a higher degree of probability, explain the phenomenon of global warming. To reasonably deny that greenhouse gases cause global warming requires offering a competing hypothesis that is more likely to be true than the consensus hypothesis. So, the second error in the denialist's reasoning is to believe the less probable hypothesis over the more probable one. It is more probable that greenhouse gas emissions are causing global warming than any other competing hypothesis, even if it can't be proven 100%. So, the reasonable thing is to believe the hypothesis that is most likely to be true (given current evidence).

Let's decontextualize the issue to show why this is true. If hypothesis A is 10% likely to be true and hypothesis B is 90% likely to be true, what is the reasonable thing to believe? Now, of course, it still remains true that B could turn out to be false (but only given new and better evidence to the contrary). However, pointing out this possibility doesn't make it any more reasonable to instead believe what is only 10% likely to be true, because hypothesis A is even more likely to turn out false. That is, the same reason offered to disbelieve hypothesis B is an even stronger reason to disbelieve a hypothesis that is only 10% likely to be true.

The growing popularity of conspiracy theories across all political views and demographics results from the combination of the nature of social media and our cognitive biases. Humans are hard-wired to find causal patterns in the world. Imagine what life would be like if you couldn't make causal inferences. Even something as simple as hypothesizing why your hand gets burnt every time you put it on a hot stove would be a mystery. You'd have to retest it each time you walked by the stove. Evolution doesn't favor the inability to recognize patterns. Millions of years of evolution have favored brains (in both humans and animals) that look for patterns. The problem is that our brains are imperfect. Sometimes we attribute causation where there is only correlation, or we see patterns where there is no pattern. At its heart, conspiracy thinking attributes causation where there is none, or at least where there is no positive evidence of causation.

The recipe for a conspiracy theory goes like this. Take any event (usually a bad event), figure out which group(s) benefit from that event (preferably a group with power and/or one that is disliked), and declare that since that group benefited they must have caused the event. Throw in some unexplained details (at least unexplainable to you), make circles and arrows on pictures, and voila! You have your conspiracy. Be sure to post on social media with the caption "WAKE UP SHEEPLE!!!!!" or "FOLLOW THE MONEY!!!!" so people realize how "woke" you are compared to them.

Let's build a conspiracy to show just how easy it is. Here's the fun part: There is not a single event in the world, no matter how tragic, that some person or group won't in some way benefit from. Think about an earthquake. The companies that rebuild and sell emergency supplies are going to benefit. The charities that help will get increased donations. Therefore, they caused it? See? Without positive evidence it's a terrible line of reasoning.

Recently, any time there has been a school shooting, some conspiracy theorists have claimed "false flag! false flag!" because they believe (probably correctly) that public support for gun regulation will increase if there are shootings in schools. But the fact that the government is more likely to follow the public will to limit access to firearms in the face of school shootings doesn't mean that the government caused the shootings. To show this, as we have learned, would require a lot of positive evidence, not just pointing to what at first glance appear to be anomalies. Causation requires positive evidence; otherwise you're merely pointing out what is true of any event: some people will benefit.

Notice also that we could select more than one group that benefits from gun rights advocates thinking the government will take away their guns. In fact, the group that most benefited from the conspiratorial hysteria were gun manufacturers and gun store owners. Yet gun rights advocates never suggested that those groups caused the school shootings. Hmmm…Coincidence? What are they hiding????1??1 FOLLOW THE MONEY!!!111!!!!1!!!

Formula for Building a Conspiracy
Step 1: Find any negative or tragic event.
Step 2: Figure out which person or groups might benefit in some way from that event.
Step 3: Of those groups/people, pick one that is either powerful or socially unpopular or both.
Step 4: Point out how your chosen group benefits from the event.
Step 5*: Suggest, either subtly through innuendo or directly, that since the group benefited they must somehow be behind the event.
Step 6: Using a paint editor, draw circles and arrows on pictures to indicate what appear to be anomalies given your limited background information and only indirect familiarity with the event.
Step 7: End with a catchy phrase to show everyone that you know the TRUTH. Popular examples include "Wake up sheeple" and "Follow the money".

At Step 5, using innuendo is a great preemptive tactic against people who eventually ask you to support your claim with positive evidence. When they ask, just say "hey, I'm not saying for sure that it happened, but it's possible. I'm just asking questions." This will allow you to back-pedal and seem like a reasonable person who is woke enough to question the "official" story. Of course, you have no positive evidence for your suspicions, but why should that stop you from insinuating a nefarious plot?

Caveat on conspiracy thinking: The important lesson here isn't that all conspiracy theories are false or that there aren't and never have been any genuine conspiracies. The lesson concerns inference to the best explanation, burden of proof, and the requirement to provide positive evidence to support a causal claim. There are and have been real conspiracies throughout history. However, these conspiracies were uncovered because, just like for any other causal claim, there was positive evidence for the conspiracy.

Also, conspiracies are generally quite difficult to keep covered up. Have you ever tried to get more than 4 people to keep a secret? As the number of people involved in a conspiracy grows, the likelihood of its existence decreases, since it would be so hard to keep it covered up. For a mathematical model of the relationship between the number of people hypothesized to be in a conspiracy and the likelihood of keeping that conspiracy secret, see the footnote.1
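The footnoted model is more sophisticated, but a toy version captures the intuition. Suppose (purely for illustration; this is not the footnote's model) that each conspirator independently leaks with some small probability each year. Then the chance the secret survives shrinks exponentially with the number of people involved:

```python
# Toy secrecy model (an illustration, not the footnoted model): each of
# N conspirators independently leaks with probability p in a given year,
# so P(secret survives t years) = (1 - p) ** (N * t).

def p_secret_kept(n_people, p_leak_per_year, years):
    return (1 - p_leak_per_year) ** (n_people * years)

for n in (5, 50, 500, 5000):
    print(f"N = {n:>4}: P(kept for 10 years) = {p_secret_kept(n, 0.001, 10):.4f}")
# Even with a tiny per-person leak rate, big conspiracies become
# vanishingly unlikely to stay secret for long.
```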
