Statistics 101

Okay, I can't hold it in any longer, controversy be darned. I need to address something that annoys me GREATLY. One of my biggest pet peeves with the public in general, but the mommy blogosphere in particular, is the inability to understand or interpret statistics, especially in an academic setting. You know what I'm talking about: you look up an issue from this expert or that, and you are inundated with citations of so-called "scientific studies". The fact that high schools, colleges, and graduate programs are failing to teach our students such a fundamental skill is deeply troubling. In some cases an author will cite a scientific study as evidence, which turns out to be nothing more than the results of a customer satisfaction survey. Even worse, numerous, NUMEROUS, books and articles I've read contain statements such as "recent scientific studies show", but never tell you anything about the studies or even give citations so you can find them (the book YOU: Having a Baby was quite guilty of this). Can we be sure then that these studies are credible? No, we certainly cannot. Certain issues are filled with such passion and conviction that a well-meaning author may be tempted to overstate or misrepresent the data in order to prove his or her point. They do not really believe they are misleading you; they are just Machiavellian in their efforts. In an attempt to combat this growing, maddening trend, I present to you my version of Stat 101. Please know this is not intended to be an in-depth discussion of p values or two-tailed Tukey tests, just a very basic primer on how to approach research.


1. The study should be published in a peer-reviewed journal. Now, just because it is published in such a journal does not mean it is automatically credible, but this is a basic requirement. If it is not peer-reviewed, do not pass go, do not collect $200. Many popular articles out there will cite personal experiences or anecdotal evidence and call them scientific studies. While these may be considered with some interest, they should not be central to your decision-making process.

2. Look for bias. What do those performing or funding the study have to gain from the results? If the National Association for Golden Retrievers publishes a study claiming Golden Retrievers are the best possible pet for your family, is there a way they could benefit from these findings? Also, as much as researchers may strive to be impartial, bias always exists to some degree. A good investigator will try to reduce it as much as possible, but being biased is part of being human, no matter how hard we may try. Many researchers will declare their conflicts of interest up front. This transparency is an important first step in maintaining good research standards.

3. A single study does not a case make. If a study is published that contradicts a long history of other findings, it is not going to carry as much weight as the previous studies. Research is as much an art as a science, and it takes multiple, well-executed studies to establish a phenomenon (the little sketch below shows why one study alone is weak evidence). This is not to discredit new findings completely, but it will take a while before other studies are conducted to test a new hypothesis. This same principle applies to case studies. Case studies are intended to share an observation with others in the same field, but they do not necessarily constitute new data on their own.
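If you like seeing numbers instead of taking my word for it, here is a small, purely illustrative Python simulation (every setting and number in it is invented) of why a single "significant" study means little: even when there is no real effect at all, roughly one study in twenty will cross the usual p < 0.05 line by dumb luck, while two independent studies both doing so by luck is far rarer.

```python
# Sketch only: why one study is weak evidence. With NO real effect,
# roughly 5% of studies will still cross the p < 0.05 line by chance,
# but the chance that two independent studies BOTH do is far smaller.
# The setup and numbers are invented for illustration.
import random
import statistics

random.seed(7)

def fake_study(n=30):
    """Two groups drawn from the SAME population (no true effect).
    Returns True if the study 'finds' a difference at roughly p < 0.05."""
    group_a = [random.gauss(0, 1) for _ in range(n)]
    group_b = [random.gauss(0, 1) for _ in range(n)]
    diff = statistics.mean(group_a) - statistics.mean(group_b)
    se = (statistics.variance(group_a) / n + statistics.variance(group_b) / n) ** 0.5
    return abs(diff / se) > 1.96   # ~5% false-positive rate

trials = 10_000
single = sum(fake_study() for _ in range(trials)) / trials
replicated = sum(fake_study() and fake_study() for _ in range(trials)) / trials
print(f"one study 'significant' by luck:   {single:.1%}")     # roughly 5%
print(f"two independent studies agree:     {replicated:.1%}")  # roughly 0.3%
```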

4. As a general rule, the media (news, blogs, magazines, radio, etc.) does not know how to interpret data. Period. You know how it goes: a new study is released and the media runs with the results. They have a talent for picking out findings that the study may (or may not) suggest in order to make a story. Oftentimes they even make interpretations from the data that the researchers did not intend. This normally begins with an innocent comment that "a study out of Switzerland may suggest that the color green makes children happier" and quickly turns into "Do you decorate with enough green? Swiss scientists are now reporting that lack of exposure to the color green will result in childhood depression". Be wary of any conclusive statements made as a result of a study.

5. CORRELATION DOES NOT EQUAL CAUSATION. Please, everyone, repeat after me.

Correlation does not equal causation.
Correlation does not equal causation.
Correlation does not equal causation.

This is the most important thing I will say in this post. Correlation does not equal causation. If the number of working mothers and the number of tornado occurrences both increased in 2011, it does not mean that the increase in working mothers caused more tornadoes. Yes, I know this seems ridiculous, but this is the most prevalent statistical sin that I see repeated relentlessly in scientific journals and popular media. In many studies there is a missing, unaccounted-for third factor that is the real cause of both. Look for confounds (see the little sketch below).
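For the more technically inclined, here is a tiny illustrative sketch in Python (every name and number in it is made up) showing how a hidden third factor can make two things that never touch each other march in lockstep:

```python
# Sketch only: two variables that never influence each other can still be
# strongly correlated if a hidden third factor drives both.
# All names and numbers here are invented for illustration.
import random
import statistics

random.seed(42)

hidden = [random.gauss(0, 1) for _ in range(200)]            # the confound
a = [10 + 3 * h + random.gauss(0, 0.5) for h in hidden]      # e.g. "working mothers"
b = [50 + 7 * h + random.gauss(0, 1.0) for h in hidden]      # e.g. "tornado count"

def pearson(x, y):
    """Plain Pearson correlation, no external libraries."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    sx = sum((xi - mx) ** 2 for xi in x) ** 0.5
    sy = sum((yi - my) ** 2 for yi in y) ** 0.5
    return cov / (sx * sy)

print(round(pearson(a, b), 2))  # roughly 0.97, yet a never causes b
```

The correlation is enormous, yet neither variable has anything to do with the other; they are both riding on the hidden factor. That is exactly the trap headlines fall into.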


There are many, many other issues I could address on this topic, but this is my attempt to combat the misinformation in the media, especially on the Internet. The citation of (faulty) data is one of the primary tools used in the Mommy Wars. Have I mentioned how much I hate the Mommy Wars? Hate them. Here's hoping that knowing how to responsibly interpret research will lead to a reduction in this shameful practice.
