When we want to investigate something, it is common, and sometimes comforting, for us to search for information that confirms what we already believe or what we want to believe. Even scientists are not immune to confirmation bias: favoring information that aligns with our beliefs and expectations while ignoring information that contradicts them.
Author Terry Pratchett described confirmation bias through his character Lord Vetinari in his novel “The Truth: A Novel of Discworld.”
“Be careful. People like to be told what they already know. Remember that. They get uncomfortable when you tell them new things. New things…well, new things aren’t what they expect. They like to know that, say, a dog will bite a man. That is what dogs do. They don’t want to know that man bites a dog, because the world is not supposed to happen like that. In short, what people think they want is news, but what they really crave is olds…Not news but olds, telling people that what they think they already know is true.”
Science, however, does not look for ways or evidence to prove itself right; it constantly tries to prove itself wrong, which reduces confirmation bias. Even so, scientific research can sometimes fall prey to confirmation bias when researchers chase a desired outcome or expectation. This is where scientists must tread carefully to avoid making such mistakes.
How Confounding Factors Can Mislead Scientists
For example, confounding factors can lead researchers to believe that a certain factor or phenomenon causes something to happen (e.g. X causes Y or vice versa). A recent Australian review published in the International Journal of Sports Physiology and Performance examined some of these overlooked factors that can skew data, results, and conclusions. Exercise physiologist Israel Halperin and his colleagues from the School of Exercise and Health Sciences at Australia’s Edith Cowan University reviewed several confounding factors that exercise scientists tend to overlook or ignore when they conduct experiments. These sources of “noise” include the pitch and tone of the coach’s or researcher’s voice, what the test subjects focus on when they move (e.g. muscle vs. movement), the number of men and women watching the subjects, mental fatigue, and the type of background music. All of these factors have been shown to affect performance outcomes in exercise science.
“The main reason is that unless they pay close attention to these confounding effects, they won’t be able to trust their data,” Halperin told Massage & Fitness Magazine. He offered an example that draws on all of the listed confounding factors.
“Let’s say we would like to examine the effectiveness of a new eight-week strength and conditioning program. After a long discussion, we decide to test for maximal strength, power, and cardiovascular endurance. Due to logistical constraints, the first testing day was scheduled after a mentally fatiguing three-hour lecture concerning the upcoming financial year of the team. The gym was relatively empty and the head coach could not make it because of an important meeting. And so, the assistant coach, who has a quiet voice, provided the instructions and encouragement instead.
“At the end of the eight-week program, the second testing day took place. This time around the gym was packed and the female volleyball squad performed their strength and conditioning session. Loud house music was playing in the background and the head coach provided the instructions and strong verbal encouragement.”
And voila! All of the tests showed significant improvement! But was the positive outcome caused by the program or by the inconsistent testing environment? “Was it simply the loud music, female observers, set of instructions, verbal encouragement, and lack of mental fatigue in the second session that led to the positive improvements?” Halperin asked. “The truth is we don’t know, and that’s why it’s so incredibly important to control for these variables, especially given that the magnitude of their effects can be quite large.”
Confirmation bias can be a product of the post hoc ergo propter hoc fallacy (or post hoc), which is Latin for “after this, therefore because of this.” Basically, this fallacy mistakes correlation for causation: because two events happened in succession, we assume the first event caused the second.
For example, the headache that has been bothering you for three days is gone after you receive a full-body massage. So did the massage actually relieve the headache, or did something else cause it to go away, like getting enough sleep or spending the day with your best friend before you got the massage? Maybe the headache would have gone away on its own, with or without a massage. The same reasoning applies to almost any profession, whether you are a physical therapist, a nurse, or a hairdresser. Maybe getting your hair highlighted made the headache go away.
Going back to Halperin’s review, athletes’ performance may sometimes be affected by overlooked factors. For example, exercise test subjects who know the exercise endpoint, in terms of distance, time, or number of repetitions, often outperform those who have no idea how much more they have to do.
In 2009, researchers from the University of Exeter ran an experiment comparing cyclists who knew the distance (control) with those who did not (experimental). The latter group was never given distance feedback during the exercise, and they were considerably slower than those who knew the distance they had covered.
If a similar study were conducted comparing performance between male and female cyclists, or between cyclists on a low-carb diet and those on a high-carb diet, then either all or none of the test subjects should be given the exercise endpoint so that this confounding factor does not affect the experiment’s outcome.
Likewise, the number of female observers can affect how well male test subjects perform. A 2012 British study found that recreational male runners tend to report lower ratings of exertion when female observers are present than when the observer group is all male. Another study, from the University of Virginia, found similar outcomes among male cyclists. When the roles were reversed, however, female test subjects performed almost the same regardless of who was watching.
Thus, the post hoc fallacy can make us believe that certain factors influence an outcome when the outcome is actually caused by something else that we did not notice. And when we assert a correlation between two or more events because it confirms our beliefs, confirmation bias filters our perception of what is really happening.