Accurate statistics have become increasingly important as the world becomes more quantitative. They show up in fields such as medicine, education, politics, and, of course, market research. Consumers can find statistics everywhere, including on websites, in marketing copy, and in social media posts, to name a few places. Companies often use these numbers to hook consumers; it’s hard not to check out a claim such as 9 out of 10 teachers recommend THIS product! or #1 cancer care center. Likewise, statistics can make for a compelling hook in a blog post or a social media post.
Numbers sound impressive and authoritative. That’s why businesses love to use statistics. However, some consumers don’t know how to properly evaluate statistics for reliability. Even businesses that contract out their marketing research or consumer data research may not fully understand the statistics that the firms return to them. It’s easy for statistics to mislead consumers and businesses alike, so here’s a primer on seven signs of misleading or lying statistics.
1. Statistics Benefit the Group That Collected the Information
Suppose a toothpaste company releases the results of a study it conducted, with the data showing that consumers using the company’s toothpaste have fewer cavities. Just because a study was carried out by the company that benefits from it does not automatically mean the results are unreliable. But – and this is a huge but – consumers and businesses absolutely need to look carefully at factors such as sample size, audience selection, and the slant of the questions asked. It’s important to know what questions were asked, who interpreted the data, and why the research was conducted.
Companies should be straightforward and clear about how studies were carried out, though it’s possible for a company to mask that it’s essentially the entity that carried out the study. For example, a charity could found a for-profit subsidiary to conduct research, among other tasks, and typical readers would not know about the relationship unless they dug deep.
2. The Sample Size Is Small
Sample size (the number of people surveyed) matters; if a sample is too small, the results are inconclusive and cannot be generalized to the population the research is supposed to describe. Certainly, a small sample size can be attractive for a business, as it is often convenient and requires fewer resources. The trade-off isn’t worth it, though, and it’s important for researchers to disclose how they arrived at their sample sizes and how reliable those sizes are.
3. Error Margins Are Too Large
Error margins are one way to gauge sample size: the smaller the sample size, the larger the error margin should be. It’s also important to look at error margins for comparable research to see whether the error margins for the statistics in question are relatively small or large.
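To make the sample-size relationship concrete, here is a minimal sketch of the standard margin-of-error formula for a surveyed proportion. The function name and the specific sample sizes are illustrative, not from any particular study; p = 0.5 is the conservative worst case and z = 1.96 corresponds to a 95 percent confidence level.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a surveyed proportion.

    n: sample size
    p: observed proportion (0.5 is the conservative worst case)
    z: z-score for the confidence level (1.96 is roughly 95%)
    """
    return z * math.sqrt(p * (1 - p) / n)

# Smaller samples produce larger error margins:
print(round(margin_of_error(100) * 100, 1))   # about 9.8 percentage points
print(round(margin_of_error(1000) * 100, 1))  # about 3.1 percentage points
```

Notice that going from 100 to 1,000 respondents shrinks the margin from roughly ten points to about three; a survey reporting precise-sounding numbers from a tiny sample should raise a red flag.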
4. The Sample Representation Is Inaccurate or Biased
A poll on how much money businesses spend on marketing research would not be accurate if it surveyed only small businesses. Likewise, polling people on their ability to identify a quote from classic literature would lead to inaccurate results if the study was done only at a conference of literature professors. Cherry-picking respondents does not lead to fair results. The best samples are probability samples, in which every member of the group being studied has a known chance of being selected, so the sample reflects the characteristics of the whole group.
Sample representation can also be inaccurate or biased if, for example, the survey was carried out via telephone only or if no blinding occurred. The problem with telephone-only surveys is that they tend to rely on landlines, thus skipping people who have only cell phones and people who have no telephone at all. Such a project could be accurate, however, if it is interested in measuring only landline telephone users. In a double-blind study, neither the participants nor the researchers administering it know which group is which (for example, which cola is in which glass). Blinding is how studies avoid the risk of giving subtle cues to respondents or introducing biases, however unconscious, into the conclusions.
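As a rough illustration of what a probability sample looks like in practice, the sketch below draws a simple random sample, where every member of the population has an equal chance of selection. The population and sample sizes here are made up for the example.

```python
import random

# Hypothetical sampling frame: a list identifying everyone in the population
population = [f"person_{i}" for i in range(10_000)]

random.seed(42)  # fixed seed so the draw is reproducible

# Simple random sample without replacement: each person has an
# equal, known chance of being chosen, and no one is picked twice
sample = random.sample(population, k=500)
```

Real probability sampling is harder than this, since building a complete, unbiased sampling frame (the list itself) is usually the difficult part, but the principle of known, equal selection chances is the same.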
5. Incentives Are Inappropriate for the Sample
Incentives are commonplace, and often required, in research, but some incentive structures can lead to inaccurate results. One sign of biased statistics is that respondents had incentives to answer a certain way. For example, if a pollster says, “We will give you a free cellphone for answering this survey,” and the questions center on whether consumers prefer that cellphone brand to other brands, the results cannot be conclusive.
There are other ways to look at incentives, too. For example, a survey that offers $1 cash up front could be more likely to draw respondents from certain income brackets, thus rendering the sample representation inaccurate.
6. The Context Is Not Reported
It’s common to hear about conclusions that reporters or researchers have drawn from a statistic. For example, a journalist could go on and on about the fact that Americans want hybrid cars based on a statistic, but the journalist might never mention the context of the study behind it. That is a red flag that the statistic is misleading. Remember: just because something sounds authoritative does not mean it actually is authoritative.
When a statistic says that people are now twice as likely to die from something, that could be an example of context not being reported. What were the odds of dying from that cause in the first place? If they were something like 0.00003 percent, then being two times more likely to die from it could technically be true but is still very misleading, as death from this cause is rare.
7. The Statistic Flies in the Face of Precedent
What might someone think if a survey came out tomorrow saying that skin cancer actually is not all that common? (It is the most common of all cancers, according to the American Cancer Society and many other organizations.) Beware of statistics that go against the grain. They’re not necessarily wrong, but they are worth approaching with caution. Look at the groups sponsoring or carrying out the research.
The conclusions for businesses are threefold:
1. Many consumers are savvy. They know when something is not on the up and up, and it’s often best for businesses to be straightforward about how they conducted research and reached their conclusions.
2. It’s important for businesses to understand the statistics they quote or present to the world. In this age of social media, it’s too easy to share a cool statistic without doing due diligence.
3. Businesses need to be sure that the companies they work with for, say, tracking consumer data, are presenting information accurately.
By being aware of these pitfalls of misleading data and looking at signs such as sample size, methodology, and sample representation, a company can get a good idea of whether research is being performed accurately. Curious about what can happen when companies get the data wrong (or ignore the data entirely)? Check out our blog post: Top 5 Examples of How NOT To Do Market Research