Arrhythmia, an irregular rhythm of the heart, is common during and soon after a heart attack and can lead to early death. That's why when anti-arrhythmia drugs became available in the early 1980s, they seemed like a major life-saving breakthrough [source: Freedman].
The problem was that although small-scale trials showed the drugs stopped arrhythmia, they didn't actually save lives. Instead, as larger-scale studies showed, patients who received such treatments were one-third less likely to survive. Researchers had focused on suppressing arrhythmia as a measure of effectiveness rather than on the problem they were actually trying to solve: preventing deaths [sources: Freedman, Hampton].
Why did the researchers go wrong? As Discover magazine writer David H. Freedman explained in a 2010 article, the mistaken conclusions about anti-arrhythmia drugs are an example of something called the streetlight effect. The effect is named after the proverbial drunk who explains that he lost his wallet across the street but is searching for it under the streetlight because the light is better there. Similarly, in science there's a tendency to look at, and give more weight to, phenomena that are easier to measure, which sometimes leads to wrong conclusions.
But the streetlight effect is just one of numerous types of bias that can infect scientific studies and lead them astray. Scientists consider bias such a serious problem that in recent years it has become a subject of research in its own right, with scholars using statistical analysis and other methods to figure out how often it occurs and why.
In this article, we'll look at 10 of the many types of bias that can influence the results of scientific and social science studies, starting with a well-known one.