Yo! Today, we’re talking statistics and this article is mainly inspired by the TED-Ed video linked to here. I saw it a while ago and I thought it might make for an interesting article. I’m going to give some more examples of some of the potential blindspots in statistics and give y’all a nice wrap up on the subject. If you’re like me and are thinking of taking AP Stat next year, this might also come in handy. If nothing else, it’s interesting to know so let’s jump in.
1. The Cloak of Authority
Stats look much more authoritative than generalisations like "most" or "some." That means people are more likely to believe statistics that are wrong or just plain made up. Be careful about where you get your stats. Be aware of who and what the survey sample is, and watch out for biases on the part of the surveyor. Also pay attention to the year the survey was conducted; don't use outdated data.
Depending on where the surveyors decide to ask their questions, the results can skew one way or the other. An obvious example would be asking which football team is the most popular in a certain city. Of course, the city's inhabitants are going to vote overwhelmingly for their city or state's team. If you just wanted to know the favourite team in that city, then there's no problem. The problem surfaces when you take that data and apply it to a bigger range of people, say, the state or the whole country. This example is rather obvious, but this sort of thing happens quite a lot.
There is also the fact that many surveys have options like "unsure," "don't know," or "decline to answer." When a chunk of the sampled people don't state a preference one way or the other, the gap between the two sides can look very different depending on whether you count those non-answers. But we're visual creatures who care more about the bottom line than about the shortcomings of stats, so we end up with things like this:
Or take 1912, when Theodore Roosevelt ran against Taft: both lost because they split the Republican vote, and Wilson won instead. So these third-option choosers do take away from the impact of the results and sometimes make it seem like one side is winning by more than it actually is. Here's a visual:
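To make that concrete, here's a tiny Python sketch with made-up poll numbers, showing how counting or excluding the "unsure" crowd changes how the race looks:

```python
# Hypothetical poll results (numbers made up for illustration)
poll = {"A": 44, "B": 41, "Unsure": 15}

# Headline margin, counting everyone surveyed: a 3-point race
margin_all = poll["A"] - poll["B"]

# Among people who actually picked a side, the picture shifts:
decided = poll["A"] + poll["B"]
a_share = round(100 * poll["A"] / decided, 1)  # 51.8% of decided voters
b_share = round(100 * poll["B"] / decided, 1)  # 48.2% of decided voters
```

Neither framing is "wrong"; they just answer slightly different questions, which is exactly how the same poll can produce two very different headlines.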
I’m getting off-topic. Let’s move on.
There are also ways to phrase questions to get the response you want. For example, if you ask people whether the government should help those who are in a bad financial situation and unable to find work, you'll find that "yes" is the more popular answer. But if you ask whether the government should pay into welfare programs for people who sit at home and don't work, the answer will be an overwhelming "no." So there are plenty of ways a poll can be steered toward one result or another while still, technically, recording people's opinions accurately: their opinions on the exact question as it was worded.
This is like in calculus, where finding the inverse of a trig function (say, sine) is impossible until you restrict its domain. The same thing happens in statistics, except with more sinister intentions: you select the data points that seem to further your cause and ignore the rest. For example, take this set of prices for a pound of apples over a period of six months:
If a person wanted to make a case that apple prices had gone up during this six-month period, they could isolate the data from February to April and cite that as evidence that prices had indeed risen. While that is technically true, it doesn't tell the whole story, because the price comes back down to a low in May and June. So watch out for data omissions when looking at stats.
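Here's a quick Python sketch of that trick. The prices are made up, since the original chart isn't reproduced here, but the idea carries:

```python
# A toy monthly price series for a pound of apples (made-up numbers)
prices = {
    "Jan": 1.20, "Feb": 1.10, "Mar": 1.30,
    "Apr": 1.50, "May": 1.00, "Jun": 0.90,
}

def change_over(start, end):
    """Percent change in price from `start` month to `end` month."""
    return round(100 * (prices[end] - prices[start]) / prices[start], 1)

# Cherry-picked window: prices look like they're soaring
print(change_over("Feb", "Apr"))  # 36.4 (percent increase)

# Full window: prices actually fell
print(change_over("Jan", "Jun"))  # -25.0 (percent decrease)
```

Same data, opposite headlines, all depending on which window you choose to show.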
5. Ambiguous Importance
Then there are stats that tell us something ranks first in one category and fifth in another. The problem is that we don't know the size of the gap between one rank and the next. If the US ranks 15th in reading proficiency, we don't know if we're lagging behind 14th place by 1% or by 20%. A real-life example is ranking countries by population. America ranks third. But you see the problem when you consider the roughly 900-million-person gap between the US and India, the second most populous country in the world. Here's a cartogram based on 2015 world population stats:
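A tiny Python sketch makes the point; the populations are rounded 2015-era figures in millions:

```python
# Rounded 2015-era populations, in millions (for illustration only)
populations = {"China": 1376, "India": 1311, "United States": 321}

# Rank the countries by population, biggest first
ranked = sorted(populations.items(), key=lambda kv: kv[1], reverse=True)

# The ranks are 1, 2, 3, but the gaps between them are wildly uneven:
gap_first_second = ranked[0][1] - ranked[1][1]   # 65 million
gap_second_third = ranked[1][1] - ranked[2][1]   # 990 million
```

"Second place" and "third place" look one step apart on a list, but one gap is 65 million people and the other is nearly a billion. The rank alone tells you none of that.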
Then there is the fact that since rankings don't use definite numbers, what is actually being measured can be unclear. For example, if you say a product is the best at alleviating dry, itchy skin, that leaves out which other products were considered, whether the brand is meant for everyone (or, say, only for people with a particular illness that causes dry, itchy skin), and its price range, availability, and side effects. So don't fall for these generalities. Check whether the claim actually holds up in the areas that matter most.
With qualifiers, anyone can make anything sound impressive. If your dog knows how to do ten tricks and its name is Trevor, then you can say that Trevor is the smartest dachshund within your neighborhood. The claim would be true and Trevor is number one in something but it doesn’t really mean that much.
A popular example of this type of misleading stat is the claim that the bear is the biggest land predator in the world. Biggest land animal? That's the elephant. Biggest predator? Something like the great white shark would take the title. Biggest animal? The blue whale takes the cake. Biggest land animal ever? Probably one of those titanosaurs. So the qualifiers "land" and "predator" really matter, especially when you see that even the biggest bears can't begin to compare to a sauropod.
7. Misleading Percentages
To the students out there, this one will hit close to home. As students, percentages make up most of what we are. But percentages can mislead too. If you take a five-question quiz and get one question wrong, your score drops to an 80%. Meanwhile, on a fifty-question test, you'd have to get ten questions wrong to land at an 80. It's all a matter of proportion, and this is where sample size becomes glaringly important.
Then, there is the fact that percentages don’t tell you the scope of the sample size in question. From the example I gave above, if you see two 80s on your report card, you don’t know that one is a quiz with five questions and the other is a test with fifty questions.
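The quiz-versus-test gap is easy to see in a few lines of Python (it's just the arithmetic from above):

```python
def percent_score(wrong, total):
    """Score as a whole-number percentage, given questions missed out of total."""
    return round(100 * (total - wrong) / total)

print(percent_score(1, 5))    # 80: one miss on a five-question quiz
print(percent_score(1, 50))   # 98: one miss on a fifty-question test
print(percent_score(10, 50))  # 80: it takes ten misses to match the quiz
```

Two identical 80s on a report card can represent very different amounts of work lost; the percentage erases the sample size.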
Say someone wanted to express hate crimes as a share of all the cases a police department handles; depending on what they're trying to say, they might or might not use percentages. If a small town had 100 crimes committed in the past year and twenty of them were hate crimes, saying that 20% of all cases were hate crimes sounds more impressive than saying there were twenty such cases. But in a country with fifty million crimes in a particular year, of which one hundred thousand were hate crimes, "100,000 hate crimes" sounds far more impressive than ".2% of all crimes." Be aware of this kind of framing lest things get blown out of proportion. Don't fall for it.
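Here's the same framing trick in Python, using the numbers from the example above:

```python
def framings(cases, total):
    """Return both ways of describing the same figure."""
    pct = 100 * cases / total
    return f"{cases:,} cases", f"{pct:.1f}% of all crimes"

# Small town: the percentage sounds scarier than the count
print(framings(20, 100))              # ('20 cases', '20.0% of all crimes')

# Whole country: the raw count sounds scarier than the percentage
print(framings(100_000, 50_000_000))  # ('100,000 cases', '0.2% of all crimes')
```

Both descriptions of each case are arithmetically identical; the only choice being made is which one lands harder on the reader.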
After this article, you might be inclined to distrust statistics. But don't give up on these nifty little numbers entirely. Just be aware of where you're getting them from and check for discrepancies between what's shown and what's actually true. Like with everything else, stay healthily critical of everything you see and steer clear of biased sources.
Stay aware and I’ll talk to you later.
Here is another one of my favorite videos on the impact and manipulation of statistics. It might be a bit old, but it still rings true.
A very good and thorough article.
I like your section on selective data. Given a time frame, people can selectively choose a subset of it to support their argument. That selected window is itself a bias. For example, a stock price may have gone up over a three-month window while the overall five-year trend has it going down. That three-month window is a small sample compared to five years.
When politicians talk about jobs, it can be ambiguous. Do they mean full-time jobs, part-time jobs, or both? People may not use the dictionary definition and may use their own instead.
I do agree that statistical measures can “lie” but I think that misleading data reporting is just as bad or even worse (as seen in certain media outlets).
I agree. Sometimes the data the media outlets report isn't even their own, opening them up to errors and biases from a third party, and it doesn't seem to get checked before they report the numbers as fact. What you said about jobs is true as well; I've noticed numerous instances where that kind of blindspot has been exploited to push someone's agenda. You could also add that sometimes minors and seniors get included in the count of the unemployed, making unemployment numbers look higher than they are. I'm glad you liked this article. I worked pretty hard on it. 🙂