I recently saw a press release about a study showing that only 19.5% of news release headlines are optimized for SEO. It brought to mind all kinds of issues about how best to report numbers in press releases. In particular it highlighted the important issue of whether specific numbers are meaningful and whether they communicate a misleading sense of precision.
For example, when a survey reports a margin of error to any decimal place, it suggests a level of precision that is misleading. Do a quick search, and you’ll find press releases reporting margins of sampling error such as +/- 4.8%, +/- 10.5%, or +/- 1.85%. These numbers are based on sample size formulas that assume perfect random sampling and one hundred percent response rates, which are almost never achieved.
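To see where those decimal-place figures come from, here is a minimal sketch of the textbook margin-of-sampling-error formula (z * sqrt(p(1-p)/n) at roughly 95% confidence, worst-case p = 0.5). The function name and defaults are my own illustration, not from any particular study; the point is that the formula's tidy output depends on idealized assumptions the real world rarely grants.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Textbook margin of sampling error at ~95% confidence.

    Assumes perfect simple random sampling and a 100% response
    rate -- conditions almost never met in practice, which is why
    the decimal places are misleadingly precise.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A sample of 400 yields the familiar "+/- 4.9%":
print(round(100 * margin_of_error(400), 1))
```

The formula happily returns as many decimal places as you ask for; nothing in the math flags that the underlying sampling assumptions were violated.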
Moreover, these overly precise numbers convey a misleading impression about how good a study’s findings are overall. How many readers understand the difference between the margin of sampling error and the “overall” margin of error, which is what most readers actually care about? Not many. As the American Association for Public Opinion Research (AAPOR) correctly notes:
Poll results are subject to lots of sources of errors ranging from how well the questions were designed and asked to how well the interview was conducted to how well the sample design was implemented. Good pollsters and researchers do everything in their power to minimize these other possible sources of errors, but they are non-measurable in any case, and one can never know the precise amount of error associated with any poll finding.
Indeed, there are so many conceptual and practical difficulties with reporting margins of error that we, along with other major polling firms, typically recommend not reporting them.
But what if we could accurately report numbers, margins of error, and findings down to the tenths or hundredths of a percent? (And maybe they can in the SEO and headlines study—hard to say without knowing more about their methods.) Is a tenth of a percent a meaningful level of precision worth reporting? Does it matter that 19.5% of headlines are optimized versus just 18.8%? Probably not. Better to just say 19%, which is likely to be a more credible statistic because it does not pretend to be more meaningful or precise than it really is.
–Joe Hopper, Ph.D.