10 Signs That Study Is Bogus

Caveat emptor. Don't be so sure you can trust every so-called scientific study that pops up in the news -- even if the writer wears glasses.

Eat chocolate, lose weight! In March 2015 the results of a new study by researchers in Germany made waves with the claim that chocolate could be part of a slimming regime. The study, conducted by one Johannes Bohannon, research director of the nonprofit Institute of Diet and Health, showed that adding chocolate to a low-carb diet actually increased weight loss.

The only trouble was that the study was completely bogus. Johannes Bohannon was actually a science journalist named John Bohannon. Bohannon concocted the study to demonstrate how little fact-checking the media engages in when it comes to reporting on science [source: Hiltzik].

While the "study" made a splash in the tabloid media, few if any reputable outlets covered it. Maybe that's because they recognized the telltale signs of a bogus study. Let's find out what those are.

10: It's Unrepeatable

Don't do it, baby. Don't drop that cup. Don't -- you're totally going to do it, aren't you?

Everybody conducts studies. When you see babies dropping their sippy cups from their high chairs over and over again, they're engaging in one of the most consequential experiments of their lives. The object of their investigation? Gravity.

No matter how many times we drop cups, they always fall to the ground. The fact that this experiment can be repeated endlessly, by anyone, with the same result illustrates one of the basic principles of scientific research: reproducibility. No matter how convincing the results of a study might be, if they can't be repeated by other researchers, they can't be validated.

To gauge the scale of this problem, the Center for Open Science coordinated 270 researchers in a massive project to reproduce the results of 100 published studies in the field of psychology. In 2015, after years of work, the Center reported that more than half of the studies couldn't be replicated: When the experiments were run again, the evidence turned out to be weaker than originally claimed.

One of the studies they tested, for instance, was designed to determine whether men have a harder time than women do distinguishing sexual cues from simple friendliness. Following the structure of the original study, the Center for Open Science showed test subjects a series of photos of women exhibiting different facial expressions.

While the original study found that, indeed, men were ninnies at nailing the cues, the follow-up study couldn't replicate those results. It isn't clear whether this was because of cultural differences between the two studies (one was conducted in the U.K., the other in the U.S.) or the time elapsed between them [source: Firger]. Either way, a finding that can't be reproduced isn't one for the ages.

9: It's Plausible, Not Provable

Jim took this whole observational-research thing a little too literally.

Very serious people in white lab coats bending pensively over test tubes — that's the image that often comes to mind when we think of a scientific study. But that's just one type of study — a laboratory experiment.

Another kind of study is called "observational." That's when researchers find a group of test subjects, ask them a lot of questions, record the answers and then "data mine" the results to see what they find.

Once upon a time, observational studies suggested that having a "Type A" personality put you at higher risk of a heart attack. But follow-up randomized clinical trials could not reproduce those results, and the original finding is now considered false. How did this happen?

Observational studies can, of course, be enormously useful and enlightening. But they carry a built-in potential for misleading results. Any single comparison can come up statistically significant purely by chance about 5 percent of the time (that's what the conventional p < 0.05 cutoff allows). So if you ask enough questions (and some of these studies include thousands), the data are almost guaranteed to appear to reveal something important. But on subsequent review, or in attempts to repeat the study, those results might not hold up [source: Miller and Young].
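
To see how easily chance manufactures "findings," here's a minimal simulation sketch in Python (all numbers are hypothetical, not drawn from any real study). It compares two groups sampled from the same population on 1,000 unrelated questions; with no true differences anywhere, about 5 percent of the questions still come back "significant":

```python
# Minimal sketch of the multiple-comparisons problem: two groups drawn
# from the SAME population are tested on 1,000 unrelated "questions".
# No real difference exists, yet roughly 5 percent look significant.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)

n_questions = 1_000  # questions asked in our imaginary survey
n_subjects = 100     # people per group

false_positives = 0
for _ in range(n_questions):
    group_a = rng.normal(size=n_subjects)  # identical populations, so any
    group_b = rng.normal(size=n_subjects)  # apparent gap is pure chance
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:  # the conventional significance cutoff
        false_positives += 1

print(f"{false_positives} of {n_questions} questions looked 'significant'")
# Expect a number near 50 -- about 5 percent -- from chance alone.
```

Data-mine a big enough questionnaire and a few headline-worthy "results" are practically guaranteed; the honest move is to treat them as hypotheses for a new study, not as conclusions.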

8: The Samples Are Unsound

Let's not tell them that virginity study was bogus just yet.

Breaking news: A new study reveals that the manner in which you lost your virginity will have an important impact on how you experience sex forever after.

Surprising results from odd research studies are a regular feature of the news cycle these days. But if you can get your hands on the actual research study, the design of the project might surprise you even more than the results.

The thing about that 2013 virginity study is that its subjects were extremely homogeneous. In fact, it turns out, they usually are when it comes to studies in psychology and social science.

Since many of these studies are conducted by academics, the typical sample population for such research is — surprise, surprise — college students. That makes them WEIRD — an acronym for Western, Educated and from Industrialized, Rich and Democratic countries. In other words, not exactly representative of global society as a whole.

To make matters worse, in the virginity study, researchers excluded people who had experienced violent first encounters as well as anybody experiencing anything other than heterosexual intercourse [source: Brookshire].

So next time you hear the breaking news of yet another astonishing finding about human behavior and experience, take it with a grain of salt as you wonder just who exactly was being studied.

7: Something's Missing

No, you'll be totally fine, mouse! Just take this medicine and have a little nap!

Say you're conducting a trial on a new drug to help prevent stroke. You've got 20 mice: 10 getting the drug and the remaining 10 in your control group. It's a small study group, really small, but then your budget is small, and, well, everybody's doing it this way these days.

Seven of your test mice are doing really well, but unfortunately three of them die of massive strokes. What do you do? Simple. Just leave them out of the results. Yes, that's right, when you do up your charts and graphs, just don't mention the deceased rodents. Everybody's doing it this way these days.

That really happened. Actually, it turns out that really happens all the time in animal research. Luckily, in this particular case, the scientist who was asked to review the study didn't let it pass. The three dead mice, he pointed out, were vitally important elements of the study. Indeed, they showed that the new drug might be harmful rather than helpful.

That scientist, Ulrich Dirnagl, and a colleague named Malcolm MacLeod have been raising the alarm about the lax standards found in many animal research studies [source: Couzin-Frankel]. Let's hope their message gets through.
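
Why do three dead mice matter so much? A back-of-the-envelope tally shows how excluding them can flip the verdict. (This is a hypothetical sketch: The control group's stroke count below is assumed purely for illustration, since the scenario above specifies only the treated group.)

```python
# Hypothetical tally for the mouse scenario above. The control group's
# stroke count (2 of 10) is an assumption made purely for illustration.
treated_strokes, treated_total = 3, 10  # three treated mice died of strokes
control_strokes, control_total = 2, 10  # ASSUMED control-group outcome

# Honest accounting: every animal that entered the study is counted.
honest_rate = treated_strokes / treated_total   # 0.30
control_rate = control_strokes / control_total  # 0.20

# "Creative" accounting: quietly drop the dead mice from the charts.
survivors = treated_total - treated_strokes
reported_rate = 0 / survivors                   # 0.00 -- miraculous!

print(f"All mice counted:  treated {honest_rate:.0%} vs. control {control_rate:.0%}")
print(f"Dead mice dropped: treated {reported_rate:.0%} vs. control {control_rate:.0%}")
# With full accounting the drug looks harmful (30% vs. 20% strokes);
# drop the deaths and it looks like a breakthrough (0% vs. 20%).
```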

6: Predatory Publishers

She just found out the International Journal of Overly Large Fingernails isn't actually legit -- and that they expect her to pay up for publishing.

There you are, a young doctor getting your career off the ground when you get an email from "The Journal of Clinical Case Reports" asking you to submit some articles. As it happens, you've got some interesting cases to report so you write them up and send them off. To your delight, they're accepted. A nice addition to your résumé.

Then comes the bill — the journal says you owe them $2,900! Shocked, you write back to say that you've never heard of being charged to publish and you have no intention of paying. They publish your articles anyway, offering to reduce your bill to $2,600. After a year of wrangling, the journal finally agrees to "forgive" your so-called debt.

Welcome to the brave new world of predatory publishing, an unexpected consequence of the open-access movement to make scientific findings more widely available. Jeffrey Beall, a research librarian, has been keeping a record of publications he considers predatory. He thinks there might be as many as 4,000 of them out there — that would be 25 percent of all open-access journals.

Suffice it to say that predatory publishers are more concerned with their profit margin than scientific rigor. If researchers can pay, they can get published, regardless of the quality of their work. As a result, the number of questionable studies published has multiplied. Unless you're an expert in a given field, it might be hard to tell which science is reliable and which is junk [source: Kolata].

Famously, to get ahead in academia, scholars must publish or perish. Small wonder that predatory publishing flourishes in such an atmosphere. Buyer (and reader) beware!

5: It Proves a Point (Follow the Money)

Beware the powerful bubblegum lobby. Who knows what sort of sway they have over gum researchers?

One day you decide to try a new brand of chewing gum, but a short time after popping it in your mouth you break out in hives. In the emergency room the doctor tells you that blood tests reveal an allergy to some ingredient in the new chewing gum. But let's say you're weirdly stubborn, and let's say you really liked that gum, too. Oh, and you decide to visit nine more doctors.

Eight of the nine agree with the first one. But one lone doctor says no, it's not the gum, it's just a coincidence. In fact, he thinks you might be allergic to chewing, or walking, or walking and chewing at the same time. You like his answer, and you like his fancy office, but you're starting to wonder how he paid for it.

When trying to figure out whether a study is bogus, some common sense is useful. If 99.9 percent of the experts in a given field say one thing and a handful of others disagree, have a look at where the skeptics' funding is coming from. In other words, follow the money.

Willie Soon is one of the handful of researchers who deny that human activity has anything to do with climate change. The fact that he works at the Harvard-Smithsonian Center for Astrophysics lends a degree of prestige to his opinions. However, the Center for Astrophysics has an arm's-length relationship with Harvard; researchers there, Soon included, are not on the university's payroll and receive no funds from it.

In fact, it turns out that the bulk of Soon's funding has been coming from sources such as Exxon Mobil and the American Petroleum Institute, among others in the energy sector. In other words, the people paying for Soon's research are the very people with the most invested in disproving human responsibility for climate change. While Soon insists the source of his funding has no bearing on his research, the optics aren't in his favor [source: Goldenberg].

4: It's Self-Reviewed

"I'll be reviewing YOU later myself."

Peer review: It's the bedrock of reputable scientific publishing. The idea is that if a study has been carefully examined and approved by another researcher in the same field, then it's valid enough to publish in a respected journal. But that idea only holds if the peer review system itself is trustworthy.

Hyung-In Moon, a medicinal-plant researcher at a university in South Korea, was having good luck with the reviews of the studies he was publishing in "The Journal of Enzyme Inhibition and Medicinal Chemistry." Aside from a few suggestions about how to improve his papers, they were quickly approved. Very quickly. In fact, the peer reviews were sometimes coming back to the editor of the journal within 24 hours of his having sent them out.

Growing suspicious, the editor asked Moon what was going on. The researcher fessed up — those fast, approving reviews were coming from none other than himself. Following common practice, the journal had asked Moon to suggest some potential reviewers. When he did, he supplied a combination of real and fictitious names with fake contact information, including email addresses that delivered straight to Moon's own inbox [source: Ferguson et al.].

It turns out that some of the systems set up for peer review have loopholes like this, and Moon's self-review is not an isolated anomaly. That leaves us with the possibility that some of the peer-reviewed studies we hear about might actually be peer-less.

3: The Batteries Are Low

"Whatever. That lung cancer study probably had low statistical power."

The field of neuroscience is in an exciting phase with powerful new technology capable of analyzing the brain with ever-increasing exactitude. Using scans, for instance, a number of different studies looked at the relationship between mental health and abnormal brain volume. Many, in fact most of them, found a correlation.

But the overwhelming rate at which these studies kept confirming one another piqued the curiosity of researcher John Ioannidis. After analyzing the data, he found that, taken together, these studies had an average statistical power of just 8 percent. That sounds low, and it is. But what does it mean?

The statistical power of a study is the probability that it will detect an effect that's really there, and it depends on both the size of the sample and the size of the effect. To massively oversimplify, if you study 10,000 smokers and 10,000 non-smokers and find that 50 percent of the smokers developed lung cancer while only 5 percent of the non-smokers did, then your study has very high power. You had a huge sample population, and the effect was huge as well.

But if you study 10 smokers and 10 non-smokers and find that two of the smokers developed lung cancer and one of the non-smokers did too, then you have an extremely underpowered study. The sample size is so tiny that the difference between the two groups is meaningless [source: Yong].
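
Power isn't mysterious; you can estimate it by brute force. Here's a simulation sketch (hypothetical numbers only) that reruns each of those two fictional studies many times and counts how often the smoker/non-smoker difference reaches p < 0.05. That detection rate is the study's power:

```python
# Simulation sketch of statistical power for the two fictional studies
# above: rerun each study many times and count how often the
# smoker/non-smoker difference reaches p < 0.05.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

def estimated_power(n_per_group, p_smokers, p_nonsmokers, runs=1_000):
    """Fraction of simulated studies that detect the difference."""
    detections = 0
    for _ in range(runs):
        cases_s = rng.binomial(n_per_group, p_smokers)     # cancers among smokers
        cases_n = rng.binomial(n_per_group, p_nonsmokers)  # cancers among non-smokers
        table = [[cases_s, n_per_group - cases_s],
                 [cases_n, n_per_group - cases_n]]
        _, p_value = stats.fisher_exact(table)
        if p_value < 0.05:
            detections += 1
    return detections / runs

# Huge study: 10,000 per group, 50% vs. 5% -> power is essentially 1.0.
print(estimated_power(10_000, 0.50, 0.05, runs=100))
# Tiny study: 10 per group, true rates that yield about 2 vs. 1 cases ->
# power well under 10 percent.
print(estimated_power(10, 0.20, 0.10))
```

An average power of 8 percent, like the one Ioannidis found, means a study of that strength would miss a genuine effect more than nine times out of ten.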

To be fair, most studies accused of having low power aren't as ridiculously underpowered as that fictional example. But in recent years, concerned researchers have been blowing the whistle on the prevalence of underpowered studies. Their message? It's time to power up!

2: It's Too New

Maybe wait a while before you rush out to try the latest supposed cancer-blocking cure touted by a brand new study.

Every other day we hear of a new study that's found vitamin X, Y or Z prevents cancer or Alzheimer's or autoimmune disorders, and we rush out to buy vast quantities of the stuff. Before mainlining yet another trendy supplement, it might be wise to sit back and wait a little while to see whether subsequent research bolsters or debunks its value.

The media thrives on the latest and the newest. But when it comes to science, novelty isn't necessarily a good thing. Often it just means that the exciting results of a surprising new study are too new to have been disproven yet. Check back in a couple of years and see how many of those headliner research results have stood the test of time.

In terms of research, much of what the media reports on are "initial findings." Initial findings are just that: initial. They need to be verified by further studies to see whether the results can be reproduced. Frequently they can't be, but media outlets rarely report negative research results [source: Crowe]. That's because negative results are never as popular as an exciting new discovery.

1: It's a Cool Story

It looks like Nature was right. Let's hope you have some flame-retardant outfits handy.

The April 2015 edition of "Nature," one of the most prestigious and reputable scientific journals in the world, startled its readership with the extraordinary results of a new study. Dragons, it said, are not mythical products of the pre-modern imagination but real creatures that have been subsisting in a dormant phase since the Middle Ages. One of the most worrisome findings was evidence that these ancient beasts are on the verge of waking up.

The publication date is the obvious tip-off that this article was an elaborate April Fool's Day prank. But despite the farcical nature of the piece, it points to an important feature of science journalism — people like good stories. Scientists know this. Even getting a paper published in a science journal can require the creation of a "beautiful story" to explain the findings. And the better the story, the more likely it is that mainstream media will cover it [source: Firger].

But scientific inquiry doesn't always result in good stories. In fact, more often than not, it doesn't. Much important research is highly inconclusive. At best, it might give us a provisional window on a possible truth. As often as not, it tells us little or nothing at all. So when it comes to science, beware of good stories, dragons or no.

Lots More Information

Author's Note: 10 Signs That Study Is Bogus

I'm as cynical as the next person, but the research required for this article shocked me more than once. Like most people, I associate science with rigor, so I was disheartened to learn about some of the shoddy, slapdash and compromised work being done. But I suppose I shouldn't be. Scientists aren't robots operating in hermetically sealed environments. They're as subject to venality and cultural trends as anybody else. And in the end I came away with maybe even greater respect for a mode of inquiry that polices itself to the degree that science does. There's a lot of bad science out there, but the people blowing the whistle are ... scientists!

More Great Links

  • Brookshire, Bethany. "Psychology Is WEIRD." Slate. May 8, 2013. (Sept. 15, 2015) http://www.slate.com/articles/health_and_science/science/2013/05/weird_psychology_social_science_researchers_rely_too_much_on_western_college.html
  • Button, Kate. "Unreliable Neuroscience? Why Power Matters." The Guardian. April 10, 2013. (Sept. 18, 2015) http://www.theguardian.com/science/sifting-the-evidence/2013/apr/10/unreliable-neuroscience-power-matters
  • Couzin-Frankel, Jennifer. "When Mice Mislead." Science Magazine. Vol. 342. Nov. 22, 2013. (Sept. 15, 2015) https://www.gwern.net/docs/dnb/2013-couzinfrankel.pdf
  • Crowe, Kelly. "It's News, But Is It True?" CBC News. Oct. 5, 2012. (Sept. 14, 2015) http://www.cbc.ca/news/health/it-s-news-but-is-it-true-1.1282472
  • The Economist. "How Science Goes Wrong." Oct. 19, 2013. (Sept. 12, 2015) http://www.economist.com/news/leaders/21588069-scientific-research-has-changed-world-now-it-needs-change-itself-how-science-goes-wrong
  • Ferguson, Cat et al. "Publishing: The Peer Review Scam." Nature. Vol. 515. Pages 480-482. Nov. 26, 2014. (Sept. 17, 2015) http://www.nature.com/news/publishing-the-peer-review-scam-1.16400
  • Firger, Jessica. "Science's Reproducibility Problem." Newsweek. Aug. 28, 2015. (Sept. 12, 2015) http://www.newsweek.com/reproducibility-science-psychology-studies-366744
  • Fischer, Douglas. "'Dark Money' Funds Climate Change Denial Effort." Scientific American. Dec. 23, 2013. (Sept. 17, 2015) http://www.scientificamerican.com/article/dark-money-funds-climate-change-denial-effort/
  • Goldenberg, Suzanne. "Work of Prominent Climate Change Denier Was Funded by Energy Industry." The Guardian. Feb. 21, 2015. (Sept. 17, 2015) http://www.theguardian.com/environment/2015/feb/21/climate-change-denier-willie-soon-funded-energy-industry
  • Hamilton, Andrew J. et al. "Here Be Dragons." Nature. Vol. 520. April 2, 2015. (Sept. 14, 2015) http://water-pire.uci.edu/wp-content/uploads/2015/04/Hamilton_Dragons.pdf
  • Hiltzik, Michael. "A Bogus Study of Chocolate and Diets – And the Media Swallowed It Whole." Los Angeles Times. May 29, 2015. (Sept. 14, 2015) http://www.latimes.com/business/hiltzik/la-fi-mh-a-bogus-study-of-chocolate-20150529-column.html
  • Horowitz, Evan. "Studies Show Many Studies are False." The Boston Globe. July 1, 2014. https://www.bostonglobe.com/lifestyle/2014/07/01/studies-show-many-studies-are-false/PP2NO6lKd7HMyTZa1iCHGP/story.html
  • Ioannidis, John P.A. "Why Most Published Research Findings Are False." PLoS Med. Vol. 2, No. 8. Aug. 30, 2005. (Sept. 12, 2015) http://journals.plos.org/plosmedicine/article?id=10.1371/journal.pmed.0020124
  • Kolata, Gina. "Scientific Articles Accepted (Personal Checks Too)." The New York Times. April 7, 2013. (Sept. 17, 2015) http://www.nytimes.com/2013/04/08/health/for-scientists-an-exploding-world-of-pseudo-academia.html
  • Marcus, Adam and Ivan Oransky. "Getting the Bogus Studies Out of Science." The Wall Street Journal. Aug. 19, 2015. (Sept. 14, 2015) http://www.wsj.com/articles/getting-the-bogus-studies-out-of-science-1440024409
  • Miller, Henry I. and S. Stanley Young. "The Trouble With 'Scientific' Research Today: A Lot That's Published Is Junk." Forbes. Jan. 8, 2014. (Sept. 12, 2015) http://www.forbes.com/sites/henrymiller/2014/01/08/the-trouble-with-scientific-research-today-a-lot-thats-published-is-junk/
  • Yong, Ed. "Neuroscience Cannae Do It Cap'n, It Doesn't Have the Power." National Geographic Phenomena. April 10, 2013. (Sept. 18, 2015) http://phenomena.nationalgeographic.com/2013/04/10/neuroscience-cannae-do-it-capn-it-doesnt-have-the-power/
