Obstacles to Good Reasoning 3A: Confirmation Bias, Fallacy of Confirming Evidence, and Falsificationism

Overview

In the section on biases we learned that there are two basic categories of poor reasoning: (1) how we think and (2) what we think. This lesson continues to focus on the former. Our brains are hardwired in certain ways that cause us to reason poorly. These are called cognitive biases. The most common cognitive bias is confirmation bias. Confirmation bias causes us to focus only on confirming evidence for our hypothesis/worldview while ignoring or trivializing any disconfirming evidence. Confirmation bias can also lead us to evaluate evidence within a misleading context. When we engage in confirmation bias we commit a fallacy called the fallacy of confirming evidence. One important tool for avoiding confirmation bias is falsificationism: since there are an infinite number of ways to confirm a hypothesis, we should instead seek to disconfirm it. That is, we should try to disprove claims and hypotheses instead of trying to prove them.

Introduction

Before you begin reading, click on the link and do the test.
.
.
.
.
.
.
.
.
.
.
Hey! No cheating! I said click on the above link!
.
.
.
.
.
.
.
.
.
.
If you're like 99% of humans on the planet you fell prey to confirmation bias. Let's take a few steps back before explaining what that means…

Cognitive Biases and Confirmation Bias

We can fail to reason well because the hard-wiring in our brains affects the way information is processed and interpreted. A cognitive bias is an unconscious effect that our brain's hard-wiring has on our reasoning. It is a matter of current debate whether cognitive biases are, on the whole, beneficial or detrimental to our reasoning. However, we'll set this issue aside for this class and operate under the assumption that in many instances cognitive biases do negatively influence our capacity to reason well.

[Image: confirmation bias]

There are hundreds of cognitive biases, but the one that causes the most errors in reasoning is confirmation bias. Confirmation bias is when we only report the "hits" and ignore the "misses"; in other words, we only include information/evidence/reasons in our argument that support our position and we ignore information that disconfirms it. Stereotyping is often the result of confirmation bias. And like stereotyping, confirmation bias is often (but not always) unintentional and everyone does it to some degree (except me). Confirmation bias is a problem because in order to know if a conclusion is well-supported we need to evaluate it in light of all the evidence—positive and negative. Taking into account only positive evidence gives us a distorted picture of a claim's truth.

When an argument engages in confirmation bias it commits the fallacy of confirming evidence1.

Let's consider an example.

It's a fairly widespread belief that men are better drivers than women. But how do people arrive at this belief? If you hold this belief, here's what will typically happen. A driver cuts you off. You look and notice it's a woman. You think to yourself, "typical woman driver." Hypothesis confirmed! You get cut off again. You look and notice it's a guy. You shrug but don't think "typical male driver." You also don't think "oh, maybe men and women are both bad drivers." Nope. You totally forget about that incident and don't count it against your hypothesis.

This is how confirmation bias works. We have a belief or hypothesis and anytime we encounter confirming evidence we whisper softly to ourselves, "See! I knew it!". But anytime something doesn't fit our hypothesis we ignore it and fail to adjust our hypothesis. Our brain is a confirmation seeking machine, not a truth-seeking machine.


If our brain is wired for bad reasoning in a way that's invisible to us, how do we avoid doing it?

Falsificationism

It'd be helpful to bring in a little philosophy here. Please meet my good friend Karl Popper (no relation to the inventor of the popular snack food known as Jalapeño Poppers).

Popper made a very important philosophical observation regarding how we can test a hypothesis: he said we cannot test for a hypothesis' truth, only for its falsity. The method of testing a hypothesis by trying to disconfirm rather than confirm it is called falsificationism. There are infinitely many ways to confirm a hypothesis, but it takes only one counterexample2 to show that it is false. So we should focus on trying to falsify rather than confirm.

In fancy-talk we refer to an instance of a falsification as a counterexample. A counterexample is a case in which all the premises are true but the conclusion is false (more on this later). For example, "every swan I've seen is white" can be true while "all swans are white" is false; a single black swan is the counterexample.

Let's go back to our "women are bad drivers" hypothesis. How many ways can I confirm my hypothesis? Any time a woman cuts me off. But what does falsificationism tell me to do? I need to see if there are counterexamples, i.e., times when I get cut off but it's false that I was cut off by a woman. In other words, I also need to count how often I get cut off by men. To really determine the truth of the matter I need to keep track of how many women cut me off per period of time vs. how many men cut me off per period of time. Then I should compare the numbers.3 When I approach the issue with falsificationism in mind, I'm considering both confirming evidence and disconfirming evidence, and I'll have a much more accurate picture of relative driving ability.
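To make that bookkeeping concrete, here's a minimal sketch in Python. Every number in it is invented purely for illustration; the point is that you need the hits, the misses, and some sense of the base rates before the comparison means anything.

    # Hypothetical tallies over the same stretch of driving.
    # All counts are made up purely for illustration.
    cutoffs = {"women": 7, "men": 9}           # times I was cut off
    drivers_seen = {"women": 450, "men": 550}  # rough base rates

    for group in cutoffs:
        rate = cutoffs[group] / drivers_seen[group]
        print(f"{group}: {cutoffs[group]} cutoffs, rate = {rate:.2%}")

Counting only the numerator (the cutoffs you happened to notice) is exactly the fallacy of confirming evidence; the denominators are the disconfirming evidence your brain quietly throws away.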

To illustrate one more time how confirmation bias and falsificationism work, let's return to the number-pattern test. The same principles are in play. You were given a series of numbers and asked to identify the rule that describes the ordering pattern. Suppose (unbeknownst to you) the rule is: any 3 numbers in ascending order. How did you go about trying to discover it? You looked at the example that conformed to the rule (suppose it was 2, 4, 6) and, like most people, thought it was something to do with evenly spaced ascending even numbers. Then you looked at the sample pattern and tried to make more patterns that confirmed your hypothesis.

For instance, you might have thought, "Aha! The pattern is successive even numbers!" So you tested your hypothesis with 8, 10, 12. The "game" replied that the set of numbers matches the pattern. Now you have confirmation of your hypothesis that the pattern is successive even numbers. Success! Next, you want to further confirm your hypothesis, so you guess 20, 22, 24. Further confirmation again! Wow! You are definitely right! Now you type in your hypothesis, "successive even numbers," but it says you are wrong. What? But I just had 2 instances where my hypothesis was confirmed?!

Here's the dealy-yo. You can confirm your hypothesis until the cows come home. That is, there are infinitely many ways to confirm it: in the number game there are infinitely many sets of evenly spaced ascending even numbers. However, as Popper noted, rather than try to confirm your hypothesis, what you need to do is ask questions that will falsify it. So, instead of testing number patterns that confirm what you think the pattern is, you should test number sequences that would prove your hypothesis false. That is, instead of plugging in more instances of successive even numbers, you should see how the game responds to different types of sequences like 3, 4, 5 or 12, 4, 78. If the game accepts 3, 4, 5, then you know your initial hypothesis (that the pattern is evenly spaced ascending even numbers) is false.

If you test sequences by trying to find counterexamples you can eventually arrive at the correct ordering principle, but if you only test sequences that further confirm your existing hypothesis, you can never encounter the evidence you need to reject it. And if you never reject your incorrect hypothesis, you'll never get to the right one! Ah! It seems sooooooo simple when you have the answer!
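Here's a minimal sketch of the number game itself, assuming (as in the example above) that the hidden rule is "any three numbers in ascending order." Notice how the confirming guesses all pass and teach you nothing, while the falsifying guesses are what actually expose the hypothesis as wrong.

    # The hidden rule, assumed per the example: any three ascending numbers.
    def matches_rule(a, b, c):
        return a < b < c

    # Confirming strategy: every guess fits "successive even numbers",
    # so every answer comes back True and the hypothesis survives untested.
    for triple in [(2, 4, 6), (8, 10, 12), (20, 22, 24)]:
        print(triple, matches_rule(*triple))     # True, True, True

    # Falsifying strategy: probe sequences that SHOULD fail if
    # "successive even numbers" were the real rule.
    print((3, 4, 5), matches_rule(3, 4, 5))      # True  -> hypothesis refuted
    print((12, 4, 78), matches_rule(12, 4, 78))  # False -> order matters

One accepted odd-number sequence does more work than a thousand accepted even-number sequences.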

As Critical Thinkers, Why do we Care about Confirmation Bias and Falsificationism?

Most arguments are (hopefully) presented with evidence. However, usually the evidence presented is only confirming evidence. As we learned from the bad-driving example, we need not only positive evidence; we need to know whether disconfirming evidence has been considered. And as we learned from the number-pattern example, positive evidence can support any number of hypotheses. To identify the best hypothesis we need to try to disconfirm as many hypotheses as possible. In other words, we need to look for evidence that could make our hypothesis false. The hypothesis that stands up best to falsification attempts has the highest (provisional) likelihood of being true.

As critical thinkers, when we evaluate evidence we should look to see not only whether the arguer has made an effort to show why the evidence supports their hypothesis and not another, but also what attempt has been made to prove their own argument false. And we should watch for confirmation bias in our own arguments.

Falsificationism has a second important implication for both argument construction and evaluation. A good hypothesis or claim has to be falsifiable. That is, there has to be, in principle, some way that it could be shown to be false. If your claim or hypothesis is untestable, it's not worth much. You need to at least be able to identify what would make the claim false.

Here's an example:

There is an invisible man that follows me everywhere I go.

Ask yourself if there's any way to disprove this claim. I can give you lots of (not very good) confirming evidence: e.g., I can just feel his presence. He talks to me at night and lets me know he's there. But no possible observation could ever show the claim to be false. It's unfalsifiable, and an unfalsifiable claim isn't worth much.


Slanting by Omission: How to Detect and Create BS

Slanting by omission is a subspecies of confirmation bias that is often used intentionally to manipulate an audience. The general idea is to get the audience to focus on confirming evidence. Slanting by omission, as you might have guessed, is when important information is left out of an argument to create a bias favorable to one's position.

The anti-vaccine and anti-GMO movements, along with conspiracy theorists and political websites, love this trick. Let's look at an example:

Headline from an anti-GMO news release: 100% of Wines Test Positive for Glyphosate!!!1111!!!111!!!!!
Let's take a peek inside and see how they got these results.
The report begins:

On March 16th, 2016 Moms Across America received results from an anonymous supporter which commissioned Microbe Inotech Lab of St.Louis, Missouri that showed all ten of the wines tested positive for the chemical glyphosate, the declared “active" ingredient in Roundup weedkiller and 700 other glyphosate ­based herbicides.

Before even talking about confirmation bias we should revisit our last lesson on personal and group biases. What is the source of the data? "An anonymous supporter." Seems legit…

What was the sample size? Ten. Ten freakin' bottles. Now let's go back to the headline. If you had only read the headline and hadn't read the actual article (like 99% of the population), how many kinds and brands of wine would you have thought contained glyphosate? 100%!!!!1111!!!1 This is a textbook example of slanting by omission. They're omitting the fact that only 10 bottles got tested.

Next we return to confirmation bias. We're provided no methodology for the testing. How do we know that only 10 bottles were originally submitted for testing? Suppose 1,000 bottles were submitted and only 10 came back positive, translating to a rate of 1% containing (trace) amounts of glyphosate. Given no methodology and the fact that the source likely has at least a vested interest in a positive result, it's not unfair to suspect an instance of the fallacy of confirming evidence.

But wait! There's more! Look at the amounts of glyphosate contained in the wine. The highest concentration was 18 ppb. Ppb means parts per billion. In everyday language, this means "chemically insignificant trace amounts". For perspective, even 1 ppm (a thousand times more concentrated than 1 ppb) works out to about 1 second in 11 days, 1 inch in 16 miles, or 1 car in bumper-to-bumper traffic from Cleveland to San Francisco.4 This is a textbook example of slanting by omission. They haven't said anything false (here) but the information they omit presents a false image. What we really need to know is whether there are any health concerns from glyphosate at 18 ppb. If there aren't any, then we shouldn't care. It turns out that glyphosate is less toxic than baking soda and even salt. Think about how much salt or baking soda you consume in a day; it's orders of magnitude greater than even 100 ppb. Did you die?

The EPA has set the safe daily dose of glyphosate at 30,000 ppb. To reach that level you'd have to drink about 2,500 glasses of wine per day for 70 years. Incidentally, we might flip the Moms Across America argument on its head. If toxic chemicals in wine are the genuine concern, shouldn't we be concerned about the alcohol in the wine? Alcohol is easily one of the most toxic chemicals we consume5 and wine is about 14% alcohol!!!111!!!! Add to that that, *gasp*, 100% of wines contain alcohol!!111!!!!11!!
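If you want to sanity-check this kind of claim yourself, the arithmetic is simple. Here's a back-of-the-envelope sketch using the article's own figures; the assumptions that 1 ppb in a water-like liquid is roughly 1 microgram per liter and that a glass of wine is about 150 mL are mine, not the article's.

    # Rough dose math from the figures quoted above.
    # Assumptions (mine, not the article's): 1 ppb ~ 1 microgram per liter
    # for a water-like liquid; a glass of wine is about 150 mL.
    wine_ppb = 18            # highest concentration reported
    cited_safe_ppb = 30_000  # the EPA figure cited in the text
    glass_liters = 0.150

    ug_per_glass = wine_ppb * glass_liters  # about 2.7 micrograms
    ratio = cited_safe_ppb / wine_ppb       # about 1,667x
    print(f"glyphosate per glass: {ug_per_glass:.1f} ug")
    print(f"the wine is {ratio:,.0f}x below the cited safe level")

Whatever the exact glass size, the wine sits roughly three orders of magnitude below the level the article itself cites as safe.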

Common technique for slanting by omission: Neglecting to consider that dose makes the poison.
Aside from the beautiful example of slanting by omission, there's another important principle to grasp that will come up throughout the course: the dose makes the poison. This means that toxicity is a matter of dose. There is not a single chemical in the universe that isn't toxic at some dose. Both oxygen and water will kill you if the concentration or dose is high enough. When someone screams "but it has X in it!" the first questions that should come to mind are "At what dose? And what dose is toxic?" This is a favorite tactic of anti-science and pseudoscience movements. As you become aware of it, you'll see it everywhere…like you're Neo in the critical thinking Matrix, or like you're John Nash as portrayed by Russell Crowe in A Beautiful Mind.

[Image: the dose makes the poison]

Alternative medicine research is another common area where we (almost always) see slanting by omission. Alternative medicine proponents will often report that their treatment shows some effect. Confirmation! It works! What they neglect to say is that the effect is exactly the same size as the effect in the placebo group, that is, in the group that doesn't get the real treatment. An effect that is equal to the effect of having no real treatment is no effect at all. Throughout the course we'll take a closer look at particular cases like acupuncture, supplements, and chiropractic.

One final area where slanting by omission is guaranteed to be found is on political websites and media. Very often when reporting on the other team, important contextual information will be left out to create a distorted narrative. And when reporting on one's own team, websites and media will conveniently forget to mention negative facts.

Confirmation Bias and the Scientific Method
We'll discuss the scientific method in more detail later in the course but a couple of notes are relevant for now. The scientific method seeks to systematically prevent confirmation bias (although, just as in any human enterprise, it sometimes creeps in). There are specific procedures and protocols to minimize its effect. When evaluating arguments, you should keep them in mind. Here are a few:

  • Peer review: When a scientist publishes an article, it is made available to a community of peers for criticism. If the source of the information isn't peer reviewed, be skeptical.
  • Double-blinding (usually in medical research): To avoid bias, neither the participant nor the tester knows if the treatment is real medicine or just sugar pills. If a study isn't blinded, it's extremely easy for the results to be biased.
  • Control group: To measure if there's a genuine effect you have to know what the baseline rate/incidence level is. For example, if I'm testing a treatment and 25% get better, before I can say the medicine caused the improvement I need to know how many people got better without treatment. If 25% got better without treatment, then the treatment had no important effect (see the sketch below).
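To see why the baseline matters, here's a toy calculation; the percentages are invented to mirror the example in the last bullet above.

    # Toy numbers mirroring the control-group example above.
    treated_improved = 0.25  # 25% improved with the treatment
    control_improved = 0.25  # 25% improved with no treatment (baseline)

    effect = treated_improved - control_improved
    print(f"effect over baseline: {effect:.0%}")  # 0% -> no real effect

A raw "25% improved!" headline is only confirming evidence; subtracting the baseline is what tells you whether the treatment did anything at all.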

Summary

A common type of bias is confirmation bias, in which only confirming evidence and reasons are cited while falsifying evidence is ignored.
A good way to test a hypothesis or argument is to ask whether it's possible for all the premises to be true and the conclusion false; that is, whether there are counterexamples. Instead of emphasizing confirming evidence, a good argument also tries to show why counterexamples fail. In other words, it shows why, if all the premises are true, we must accept the particular conclusion rather than another one.
As critical thinkers assessing other arguments, we should try to come up with counterexamples.
Slanting by omission is when important information (relative to the conclusion) is left out of an argument.
Slanting by distortion is when opponents' arguments/evidence are unfairly trivialized.

Unless otherwise stated, the content of this page is licensed under Creative Commons Attribution-ShareAlike 3.0 License