Consider a coin toss.
Say you flip a quarter five times, and four of those times it comes up heads. You might chalk it up to random chance.
Now say you flip that quarter 500 times, and it lands on heads 400 times. That’s the same ratio. But here you could confidently say something was fishy with that coin (and pity the poor sucker who kept betting on tails).
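That intuition can be made concrete with a one-sided binomial tail probability: the chance of seeing at least that many heads from a truly fair coin. This is a minimal sketch for illustration, not part of the original article:

```python
from math import comb

def tail_prob(heads, flips, p=0.5):
    """P(at least `heads` heads in `flips` tosses of a coin with heads-probability p)."""
    return sum(comb(flips, k) * p**k * (1 - p)**(flips - k)
               for k in range(heads, flips + 1))

small = tail_prob(4, 5)      # 4-of-5 heads: 6/32 = 0.1875, nearly 1 chance in 5
large = tail_prob(400, 500)  # 400-of-500 heads: vanishingly small for a fair coin
```

Same 80 percent ratio in both cases, but only the large sample lets you rule out luck.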
We’ve entered the vexing realm of sample size, or “n” in the jargon of statistics. Any published paper on health or medical research will state how large a population the authors studied. The n might count patients or tissue samples, but you can always find out what it equals.
Trouble is, sometimes journalists don’t.
We’ve all seen health-related headlines that go viral — and then turn out to have been misleading at best. Maybe a journalist writes about a study that “shows” how running does wonders for your knees, or that a cup of coffee a day sharpens your short-term memory. If those studies involved only a handful of individuals, the chance of those findings being true is reduced.
In the statistical world, size matters.
Still, small sample sizes aren’t inherently problematic, says Dr. Ted Gooley, a biostatistician at Fred Hutch. But they should give one pause — especially when interpreting results.
“Say I hear about a baseball player with a .385 batting average,” said Gooley, who falls into the not-small population of sports-loving statisticians. “If it’s over a three-game stretch, I’m not that impressed. But it becomes a different story if it’s over an entire season.”
Gooley has designed hundreds of clinical trials for patients with cancer and authored papers on them. Many are early-phase trials involving small numbers of patients. He points to a recent study he collaborated on with Fred Hutch’s Dr. Aude Chapuis, which found that engineered immune cells prevented relapse in 12 patients with high-risk acute myeloid leukemia.
That is clearly a small sample size. But in the context of high-risk AML — where most patients will relapse — the finding is striking, Gooley said.
“That is not what you would have expected given the types of patients they were treating,” he said. “A small study may not be able to offer definitive proof of an effect, but it can offer evidence of a potential signal that something’s going on, and that can be a reason to look at this further. And the ultimate goal of these smaller studies is to get to a larger, randomized study, which is the gold standard for showing definitive proof of a meaningful treatment effect.”
Anything less than that gold standard, and there’s room for doubt. So be vigilant when reading about studies with small sample sizes, he said. And always be skeptical when something surprising is reported.
But sometimes surprising things do happen, he added:
“Who knows? Maybe the Mariners will win the World Series someday. Rare events do happen.”
Jake Siegel is a former staff writer at Fred Hutchinson Cancer Research Center. Previously, he covered health topics at UW Medicine and technology at Microsoft. He has an M.A. from the Missouri School of Journalism.