10 Types of Study Bias

By: Patrick J. Kiger
A patient fills in a questionnaire and sleep diary before undergoing polysomnography at a sleep center in Switzerland. What are some biases scientists need to be aware of when conducting studies? AMELIE-BENOIST /BSIP/Getty Images

Key Takeaways

  • Study biases like confirmation bias, sampling bias and selection bias can significantly affect the outcomes and interpretations of scientific and social science research.
  • Biases such as channeling bias, question-order bias and interviewer bias highlight the complexities and challenges in study design and data collection.
  • Addressing biases in research, such as publication bias and file drawer bias, is crucial for ensuring the integrity and transparency of scientific knowledge.

Arrhythmia, an irregular rhythm of the heart, is common during and soon after a heart attack and can lead to early death. That's why when anti-arrhythmia drugs became available in the early 1980s, they seemed like a major life-saving breakthrough [source: Freedman].

The problem, though, was that although small-scale trials showed that the drugs stopped arrhythmia, the drugs didn't actually save lives. Instead, as larger-scale studies showed, patients who received such treatments were one-third less likely to survive. Researchers had focused on stopping arrhythmia as a measure of effectiveness rather than on the problem that they were trying to solve, which was preventing deaths [sources: Freedman, Hampton].


Why did the researchers go wrong? As Discover magazine writer David H. Freedman explained in a 2010 article, the mistaken conclusions about anti-arrhythmia drugs are an example of something called the streetlight effect. The effect is named after the proverbial drunk who explains that he lost his wallet across the street, but he's looking under the streetlight for it because the light is better there. Similarly, in science, there's a tendency to look at and give more weight to phenomena that are easier to measure — which sometimes may result in a wrong conclusion.

But the streetlight effect is just one of numerous types of bias that can infect scientific studies and lead them astray. Scientists consider bias to be such a major problem that in recent years, it's become a subject of research itself, in which scholars use statistical analysis and other methods to figure out how often it occurs and why.

In this article, we'll look at 10 of the many types of bias that can influence the results of scientific and social science studies, starting with a well-known one.

10: Confirmation Bias

Confirmation bias occurs when a researcher takes the hypothesis that he or she starts out with ("marijuana is beneficial/detrimental") and shapes the study methodology or results to confirm that premise, whether or not it's actually justified. krisanapong detraphiphat/Getty Images

Back in 1903, a few years after the German physicist Wilhelm Röntgen discovered X-rays, a French scientist named René Blondlot announced that he'd discovered yet another previously unknown form of radiation — N-rays. He claimed they could be observed only with peripheral vision, appearing as a corona when electricity was discharged from crystals. Eventually, Blondlot's research was refuted by an American scientist, Robert Wood, who visited the Frenchman's lab and found that Blondlot still observed N-rays, even after Wood secretly removed the crystal during one of the experiments.

But after that, something strange happened. For years, other French scientists continued to publish papers describing their observations of N-rays, as if they actually existed. Perhaps out of nationalistic pride, French scientists wanted to see N-rays, and so they did [sources: Lee, Simon].


Those N-ray findings were an extreme example of one of the simplest and most widely recognized reasons that studies can go wrong — confirmation bias. That's when a researcher takes the hypothesis that he or she starts out with ("marijuana is beneficial/detrimental") and shapes the study methodology or the analysis of the data in a way that confirms the original premise, whether or not it's actually justified [source: Sarniak]. Laypeople fall prey to confirmation bias as well. If they support (or despise) a sitting president of the U.S., for instance, they tend to look for information that confirms their view and disregard anything that refutes it.

9: Sampling Bias

Thanks to a sampling bias, the Literary Digest incorrectly predicted that Alf Landon (right) would defeat Franklin D. Roosevelt (left) in the 1936 presidential election. Keystone View Company/FPG/Archive Photos/Getty Images

Researchers who've done meta-analyses of scientific research have found that early, small-scale studies — ones that end up being frequently cited in other work — often overstate their results [source: Fanelli, et al.].

That can happen because of sampling bias, in which researchers conducting small studies base their findings on a group that isn't necessarily representative of the larger population. Universities, for example, often recruit students for their studies, but findings from that group don't necessarily generalize to the wider population.


It's a problem that's seen in both medical studies and social science research. For example, if a political science researcher who's studying attitudes about gun control does surveys in an area where most people are Second Amendment supporters, that will skew the results in a way that doesn't necessarily reflect the views of the larger U.S. population.

But sampling bias can occur in bigger studies as well. One famous example of sampling bias occurred during the 1936 U.S. presidential campaign, when Literary Digest conducted a mail survey of 2.4 million people and predicted — incorrectly — that Republican Alf Landon would handily beat incumbent Democrat Franklin Roosevelt. The problem was that the magazine used phone directories, drivers' registrations and country club memberships to find people to poll — a method that tended to reach relatively affluent voters (cars and phones were luxury items back then), rather than the poorer ones among whom Roosevelt was popular. The erroneous results hastened the end of the publication [source: Oxford Math Center].
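To make the mechanism concrete, here is a minimal Python sketch of how polling only an affluent sampling frame can flip a predicted result. All the numbers are invented for illustration; they are not the Digest's actual data.

```python
import random

random.seed(42)

# Hypothetical electorate: about 60 percent support candidate A, but
# affluent voters (the kind reached through phone directories and club
# rosters) mostly back candidate B. All numbers are invented.
N = 100_000
population = []
for _ in range(N):
    affluent = random.random() < 0.30        # 30% of voters are affluent
    if affluent:
        vote_a = random.random() < 0.35      # affluent voters favor B
    else:
        vote_a = random.random() < 0.70      # everyone else favors A
    population.append((affluent, vote_a))

def support_for_a(sample):
    """Estimated percentage supporting candidate A in a sample."""
    return 100 * sum(vote_a for _, vote_a in sample) / len(sample)

# A representative random sample tracks the true figure...
true_pct = support_for_a(population)
random_pct = support_for_a(random.sample(population, 2_000))

# ...but a poll drawn only from the affluent sampling frame misses badly.
affluent_only = [p for p in population if p[0]]
biased_pct = support_for_a(random.sample(affluent_only, 2_000))

print(f"True support for A:     {true_pct:.1f}%")
print(f"Random sample estimate: {random_pct:.1f}%")
print(f"Affluent-only estimate: {biased_pct:.1f}%")  # wrongly predicts B
```

The random sample lands near the true figure, while the affluent-only sample confidently predicts the wrong winner, no matter how many people it reaches.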

8: Selection Bias

You can have selection bias if you don't control for all variables in your study. Morsa Images/Getty Images

How do scientists determine whether a new drug will cure or help a particular disease? Usually with a study involving two groups of people. For instance, if the scientists are studying the effectiveness of a new antihistamine on allergy sufferers, they would give the trial medication to one group of patients and a placebo (sugar pill) to the other group, called the control group. Neither group is supposed to know whether they have been given the medication, and the study participants are randomly assigned to each group.

This is referred to as a randomized, double-blind, placebo-controlled study and is considered the gold standard of clinical trials. "Double-blind" refers to the fact that neither the scientists nor the participants know which allergy patients are in which group until after the experiment is over.
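As a rough illustration, here is a toy Python sketch of how random assignment and blinding might be organized. The participant IDs, kit codes and group sizes are all invented for the example.

```python
import random

random.seed(7)

# Hypothetical participant IDs enrolled in the trial.
participants = [f"P{i:03d}" for i in range(1, 21)]

# Random assignment: shuffle, then split into two equal groups.
random.shuffle(participants)
half = len(participants) // 2
assignments = {pid: "drug" for pid in participants[:half]}
assignments.update({pid: "placebo" for pid in participants[half:]})

# Blinding: staff and subjects see only coded kit numbers. The key
# linking codes to "drug" or "placebo" would be held by a third party
# and unsealed only after all the data have been collected.
codes = random.sample(range(1000, 10000), len(participants))  # unique codes
blinded_kits = {pid: f"KIT-{code}" for pid, code in zip(participants, codes)}

for pid in sorted(participants)[:3]:
    print(pid, blinded_kits[pid])  # reveals nothing about group membership
```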


There are several reasons for doing this, but one is to avoid selection bias. Let's say you want to study whether people who work at night are more likely to develop headaches. So, you recruit a group of people who work at night, and another group who work during the day, and then compare them. Your results show that the people who work at night are more likely to have aching temples.

But that doesn't necessarily mean that night work is the cause, because it could be that people who work at night tend to be poorer, have more unhealthy diets or face more stress. Such factors might bias your results unless you can make sure that the two groups are similar in every other way except for their schedules [sources: Institute for Work and Health, CIRT].
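A small simulation makes the point. This is a hypothetical model in which stress, not the work schedule, is what actually causes headaches; every probability below is invented.

```python
import random

random.seed(0)

def simulate(match_stress):
    """Headache rates for night vs. day workers in a toy model where
    stress, not the schedule, is the real cause of headaches."""
    def has_headache(night_worker):
        # Confounder: stress is more common among night workers unless
        # the study deliberately matches the two groups on it.
        p_stress = 0.2 if (match_stress or not night_worker) else 0.5
        stressed = random.random() < p_stress
        # In this model, headaches depend only on stress.
        return random.random() < (0.6 if stressed else 0.2)

    n = 10_000
    night_rate = sum(has_headache(True) for _ in range(n)) / n
    day_rate = sum(has_headache(False) for _ in range(n)) / n
    return night_rate, day_rate

print("Unmatched groups:  night %.2f vs. day %.2f" % simulate(False))
print("Matched on stress: night %.2f vs. day %.2f" % simulate(True))
```

In the unmatched comparison, night workers look markedly more headache-prone; once the two groups are matched on stress, the apparent effect of the schedule vanishes.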

7: Channeling Bias

In a study, a hand surgeon could be more likely to pick the younger, healthier patients to get an operation and leave the older patients out of it, which could skew the results on whether the surgery is successful for all patients. This is called channeling bias. Cultura RM Exclusive/KaPe Schmidt/Getty Images

Channeling bias occurs when a patient's prognosis or degree of illness influences which group he or she is put into in a study. It's a particular problem in nonrandomized medical trials, ones in which doctors select which patients are going to receive the drug or surgical procedure that's going to be evaluated.

It's not hard to figure out why this happens: Physicians, after all, generally want to help the people they treat, and are trained to weigh the risks versus the rewards of a treatment.


Let's look at a hypothetical example of a study intended to evaluate the effectiveness of a certain surgical procedure on the hand. Surgeons might be more inclined to pick younger, healthier patients to get the operation, because they have lower risks of complications afterward, and more of a need to have full hand function.

In turn, they might be less likely to perform it on older patients who face higher post-operative risks and don't need to have the same degree of hand function because they're no longer working. If researchers aren't careful, the group that gets the surgery in the study will consist of younger patients, and the group that doesn't will be mostly older ones. That could produce a very different result than if the two groups were otherwise identical [source: Pannucci and Wilkins].

6: Question-Order Bias

A researcher asks two women for their views concerning the creation of a state health service in England in the 1940s. The order in which questions are asked can influence the answers received. Hulton-Deutsch Collection/CORBIS/Corbis via Getty Images

The order in which questions are asked in a survey or study can influence the answers that are given. That's because the human brain has a tendency to organize information into patterns. The earlier questions — in particular, the ones that come just before a particular query — may provide information that subjects use as context in formulating their subsequent answers, or affect their thoughts, feelings and attitudes. That effect is called priming [sources: Pew, Sarniak].

Pew Research gave this example from a December 2008 poll: "When people were asked 'All in all, are you satisfied or dissatisfied with the way things are going in this country today?' immediately after having been asked 'Do you approve or disapprove of the way George W. Bush is handling his job as president?' 88 percent said they were dissatisfied, compared with only 78 percent without the context of the prior question."


Another example of the question-order bias effect comes from the General Social Survey, a major long-term study of American attitudes. In 1984, GSS participants were asked to identify the three most important qualities for a child to have, and given a card with a list of qualities. When "honest" was high on the list, it was picked by 66 percent of respondents. But when it came near the end, only 48 percent of people picked it as one of their top three. A similar pattern was seen with other qualities [source: Henning].

5: Interviewer Bias

Interviewer bias could occur in medical studies when the interviewer knows the research subject’s health status before questioning her. GARO/Getty Images

Not only do researchers need to be careful about whom they pick to be in groups in studies, but they also have to worry about how they solicit, record and interpret the data that they get from these subjects. Interviewer bias, as this problem is called, is more of an issue in medical studies when the interviewer knows the research subject's health status before questioning him or her.

A 2010 medical journal article on how to identify and avoid bias cites the hypothetical example of a study that's attempting to identify the risk factors for Buerger's disease, a rare disorder in which arteries and veins in the arms and legs become swollen and inflamed. If the interviewer already knows that a research subject has the disease, he or she is likely to probe more intensely for known risk factors, like smoking. So, the interviewer may ask people in the risk group, "Are you sure you've never smoked? Never? Not even once?"— while not subjecting patients in the control group to these kinds of questions [source: Pannucci and Wilkins].


An interviewer also can cause errant results in a study by giving subjects non-verbal cues when asking questions, such as gestures, facial expressions or tone of voice [source: Delgado, et al.].

4: Recall Bias

A man helps a child with autism to paint in Abidjan, Ivory Coast. Parents of children with autism are likelier to recall that their child was immunized prior to showing signs of autism and to draw a connection, even if incorrect. This is an example of recall bias. SIA KAMBOU/AFP/Getty Images

In studies where people are questioned about something that occurred in the past, their recollections may be affected by current realities. Recall bias, as this phenomenon is known, can be a major problem when researchers are investigating what factors could have led to a health condition, and interviews are the prime source of information. For example, since there's a widespread — though unsubstantiated — belief that autism is somehow caused by the measles-mumps-rubella (MMR) vaccine, parents of children on the autism spectrum are more likely to recall that their child was immunized prior to showing signs of autism, and draw a connection between the two events [source: Pannucci and Wilkins].

Similarly, mothers of children with birth defects may be more likely to remember drugs that they took during pregnancy than mothers of fully abled children. One study also found that pilots who knew they had been exposed to the herbicide Agent Orange had a greater tendency to remember skin rashes that they experienced in the year after exposure [source: Boston College].


3: Acquiescence Bias

People want to be thought of as likeable, so if you are asking about a controversial subject, the questions need to be framed in a way that suggests that all answers are acceptable. asiseeit/Getty Images

This is another bias that can occur with social science surveys. People want to be agreeable, so they are more likely to answer in the affirmative to a "yes/no" or "agree/disagree" question, particularly if they are less educated or have less information. One way to get around this bias is to ask participants to choose between two statements (the forced-choice format) rather than have them agree or disagree with one statement. The two statements would give two different views of a subject.

And in addition to being agreeable, survey respondents also want to be seen as likeable. "Research has shown that respondents understate alcohol and drug use, tax evasion and racial bias; they also may overstate church attendance, charitable contributions and the likelihood that they will vote in an election," notes Pew Research. Therefore, the questions have to be framed in a way that gives participants an "out" for admitting to less-than-desirable behavior. So, a question on voting could be phrased as: "In the 2012 presidential election between Barack Obama and Mitt Romney, did things come up that kept you from voting, or did you happen to vote?"


2: Publication Bias

Journals have a preference for studies with positive outcomes, which can affect whether other kinds of studies get published. Epoxydude/Getty Images

One common type of bias stems from an uncomfortable reality in scientific culture. Researchers have a continual need to publish articles in journals in order to sustain their reputations and rise in academia. That publish-or-perish mentality might influence the outcomes that researchers report, because, as one critic notes, academia tends to be biased toward statistically significant, "positive" results [source: van Hilten].

Indeed, meta-analyses show that journals are much more likely to publish studies that report a statistically significant positive result than ones that don't. Publication bias is stronger in some fields than others; one 2010 study found that papers in the social sciences are 2.3 times more likely to show positive results than papers in the physical sciences [source: Fanelli].


As Ian Roberts, a professor of epidemiology and public health at the London School of Hygiene and Tropical Medicine, noted in a 2015 essay, clinical trials showing that a treatment works are much more likely to be published than those showing that it doesn't have any benefit or is even harmful [source: Roberts].

1: File Drawer Bias

On the flip side, scientists may relegate negative or neutral findings from clinical trials to a file drawer. blackred/Getty Images

In some ways, this is the flip side of publication bias. Negative results from a study get shoved in a metaphorical file drawer instead of being published. Critics see it as a particular problem when it comes to studies of new medications, which these days often are sponsored by the companies that developed them [source: Pannucci and Wilkins].

File-drawer bias can be significant. A study published in the New England Journal of Medicine in 2008 compared the results of published studies on antidepressants to data from a U.S. Food and Drug Administration registry of research that included unpublished information. It found that 94 percent of the published studies reported drugs having positive effects. But when the unpublished studies were included, the number with positive results dropped to 51 percent [source: Turner, et al.].
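The arithmetic behind that distortion is simple. Here is a sketch with invented counts, chosen only to echo the published-versus-actual pattern; these are not the Turner study's real numbers.

```python
# Invented counts: suppose half of all trials found the drug worked.
positive_trials = 37   # trials that found a benefit
negative_trials = 37   # trials that found no benefit

# Nearly every positive trial gets published, while most negative
# trials stay in the metaphorical file drawer.
published_positive = 36
published_negative = 3

published_rate = published_positive / (published_positive + published_negative)
actual_rate = positive_trials / (positive_trials + negative_trials)

print(f"Positive rate in the published literature: {published_rate:.0%}")  # 92%
print(f"Positive rate across all trials:           {actual_rate:.0%}")     # 50%
```

A reader of the journals alone would conclude the drug almost always works, even though the full body of evidence is split down the middle.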

In an effort to get more information into the public domain, Congress in 2007 passed a law requiring researchers to report results of many human studies of experimental treatments to ClinicalTrials.gov. In 2016, the U.S. Food and Drug Administration strengthened the rules, requiring more thorough reporting of clinical trials, including drugs and devices that were studied but never brought to market [source: Piller].

But some critics worry that the rules won't have much bite, since there has been no increase in enforcement staffing.

Lots More Information

Author's Note: 10 Types of Study Bias

This assignment was an interesting one for me, since over the years I've often had to write articles based upon scientific research. Journalists, I think, have to avoid the temptation to assume that the latest published study must be the definitive word on any subject.


More Great Links

  • Athanasiou, Thanos, et al. "Key Topics in Surgical Research and Methodology." Page 32. Springer, 2010. (Sept. 10, 2017) http://bit.ly/2vZ9rsn
  • Boston College. "Differential Misclassification of Exposure." Bu.edu. (Sept. 10, 2017) http://bit.ly/2vYFIQo
  • Burge, Sandra. "Bias in Research." Familymed.uthscsa.edu. (Sept. 9, 2017) http://bit.ly/2xXMRhl
  • Center for Innovation in Research and Teaching. "Sources of Error and Bias." Cirt.gcu.edu. (Sept. 8, 2017) http://bit.ly/2xXsLne
  • Cochrane Methods. "Assessing Risk of Bias in Included Studies." Cochrane.org. (Sept. 9, 2017) http://bit.ly/2xXyl8W
  • Delgado, M., et al. "Bias." Journal of Epidemiology and Community Health. August 2004. (Sept. 10, 2017) http://bit.ly/2vYAtQO
  • Dusheck, Jennie. "Studies of scientific bias targeting the right problems." Med.stanford.edu. March 20, 2017. (Sept. 9, 2017) http://stan.md/2xXcCyh
  • Dwan, Kerry, et al. "Systematic Review of the Empirical Evidence of Study Publication Bias and Outcome Reporting Bias — An Updated Review." PLOS ONE. July 5, 2013. (Sept. 9, 2017) http://bit.ly/2xX2a9J
  • Enserink, Martin. "Most animal research studies may not avoid key biases." Science. Oct. 13, 2015. (Sept. 9, 2017) http://bit.ly/2xWwhy6
  • Fanelli, Daniele. "Do Pressures to Publish Increase Scientists' Bias? An Empirical Support from US States Data." PLOS ONE. April 21, 2010. (Sept. 7, 2017) http://bit.ly/2xXgvD1
  • Fanelli, Daniele. "'Positive' Results Increase Down the Hierarchy of the Sciences." PLOS ONE. April 7, 2010. (Sept. 7, 2017) http://bit.ly/2xYmLKR
  • Fanelli, Daniele; Costas, Rodrigo; and Ioannidis, John P.A. "Meta-assessment of bias in science." PNAS. March 20, 2017. (Sept. 7, 2017) http://www.pnas.org/content/114/14/3714
  • Freedman, David H. "Why Scientific Studies Are So Often Wrong: The Streetlight Effect." Discover. Dec. 10, 2010. (Sept. 10, 2017) http://bit.ly/2xYJTJ4
  • Hampton, John. "Therapeutic fashion and publication bias: the case of anti-arrhythmic drugs in heart attack." JLL Bulletin. 2015. (Sept. 10, 2017) http://bit.ly/2xXUN1L
  • Henning, Jeffrey. "Order Bias Is a Larger Source of Error Than You Think." ResearchAccess.com. Aug. 1, 2014. (Sept. 10, 2017) http://bit.ly/2vZdWDb
  • Institute for Work & Health. "What researchers mean by...selection bias." Iwh.on.ca. (Sept. 10, 2017) http://bit.ly/2xYlxzk
  • Kicinski, Michal. "Publication Bias in Recent Meta-Analyses." PLOS ONE. Nov. 27, 2013. (Sept. 9, 2017) http://bit.ly/2xWKr29
  • Krishna, R.; Maithreyi, R.; Surapaneni, K.M. "Research Bias: A Review for Medical Students." Journal of Clinical and Diagnostic Research. April 5, 2010. (Sept. 9, 2017). http://bit.ly/2xWJiYp
  • Lee, Chris. "Confirmation bias in science: how to avoid it." ArsTechnica. July 13, 2010. (Sept. 9, 2017) http://bit.ly/2xYNmHO
  • McCook, Alison. "What leads to bias in the scientific literature? New study tries to answer." Retractionwatch.com. March 20, 2017. (Sept. 9, 2017) http://bit.ly/2xXBqGi
  • Mullane, Kevin and Williams, Michael. "Bias in research: the rule rather than the exception?" Elsevier.com. Sept. 17, 2013. (Sept. 9, 2017) http://bit.ly/2xXci2n
  • Oxford Math Center. "Famous Statistical Blunders in History." Oxfordmathcenter.edu. (Sept. 10, 2017) http://bit.ly/2xYi1VE
  • Pannucci, Christopher J., and Wilkins, Edwin G. "Identifying and Avoiding Bias in Research." Plastic Reconstructive Surgery. Aug. 2010. (Sept. 9, 2017) http://bit.ly/2xWIbbt
  • Pennwarden, Rick. "Don't Let Your Own Opinions Sneak Into Your Survey: 4 Ways to Avoid Researcher Bias." Surveymonkey.com. Jan. 1, 2015. (Sept. 9, 2017) http://bit.ly/2xWBTbP
  • Pew Research Center. "Questionnaire Design." Pewresearch.org. (Sept. 9, 2017) http://pewrsr.ch/2vYk0vD
  • Piller, Charles. "New federal rules target woeful public reporting of clinical trial results." Statnews.com. Sept. 16, 2016. (Sept. 9, 2017) http://bit.ly/2xYpCU5
  • Roberts, Ian. "Retraction of scientific papers for fraud or bias is just the tip of the iceberg." The Conversation. June 11, 2015. (Sept. 9, 2017) http://bit.ly/2xWTkZD
  • Sarniak, Rebecca. "9 types of research bias and how to avoid them." Quirks.com. August 2015. (Sept. 9, 2017) http://bit.ly/2vWV8EQ
  • Schupak, Amanda. "How Often Are Scientific Studies Retracted?" CBS News. May 26, 2015. (Sept. 9, 2017) http://cbsn.ws/2xXO8F9
  • Shuttleworth, Martyn. "Research Bias." Explorable.com. Feb. 5, 2009. (Sept. 9, 2017) http://bit.ly/2xXzDRk
  • Simon, Matt. "Fantastically Wrong: The Imaginary Radiation That Shocked Science and Ruined Its 'Discoverer.'" Wired. Sept. 3, 2014. (Sept. 10, 2017) http://bit.ly/2xYwHUS
  • Thase, Michael E. "Do antidepressants really work? A clinicians' guide to evaluating the evidence." Current Psychiatry Reports. December 2008. (Sept. 9, 2017) http://bit.ly/2xWWUD5
  • Turner, Eric H., et al. "Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy." New England Journal of Medicine. Jan. 17, 2008. (Sept. 10, 2017) http://bit.ly/2xYsGzx
  • Van Hilten, Lucy Goodchild. "Why it's time to publish research 'failures.'" Elsevier.com. May 5, 2015. (Sept. 10, 2017) http://bit.ly/2xYyLfr
  • Whoriskey, Peter. "As drug industry's influence over research grows, so does the potential for bias." Washington Post. Nov. 24, 2012. (Sept. 9, 2017)

