{"id":8710,"date":"2013-07-06T06:24:06","date_gmt":"2013-07-06T06:24:06","guid":{"rendered":"https:\/\/acneeinstein.com\/?p=1305"},"modified":"2018-07-27T19:06:19","modified_gmt":"2018-07-27T19:06:19","slug":"how-to-read-and-understand-scientific-research","status":"publish","type":"post","link":"https:\/\/acneeinstein.com\/how-to-read-and-understand-scientific-research\/","title":{"rendered":"How to Read and Understand Scientific Research"},"content":{"rendered":"

Chris Kresser just published a nice article on how to read and understand scientific research. A basic understanding of scientific research and how to interpret it is vital for navigating the confusing jungle of health claims you encounter online every day. That's why I'll summarize the points Chris made here, and I also recommend you check out his article.

This post has nothing to do with acne. And if understanding scientific research doesn't interest you in the least, then this post is not for you. Move along, nothing to see here.

In today's world of conflicting interests, flawed science, and sensationalized media, it's important to question new claims and findings, especially when those findings could have serious implications for your health. One of the most important things you can do to make sure you're getting the real scoop is to read the scientific literature yourself.

– Chris Kresser

I don't mean that you'll become a master wizard at reading obscure scientific papers, but that's not really the point. Even a cursory understanding of how scientific research works helps you spot the biggest nonsense.

The article Chris wrote is a good start. I'd like to add the following to his points.

Understanding why epidemiological studies can't establish causation

One example that irks me is how many alternative doctors demonize artificially sweetened sodas based on epidemiological studies. They say that people who drink diet sodas are YY% more likely to be overweight than people who don't drink diet sodas.

The unstated implication is that diet soda made the people fat, but you can't draw that conclusion from those studies. A far more likely explanation is that people drink diet sodas because they are overweight and want to cut calories and lose weight.

You'll see Drs. Mark Hyman, Joe Mercola, and other alt-med celebrities making such misleading statements all the time, especially when it fits the spin they want to push.

To make a point, a skeptic at Reddit produced this graph.

\"Correlation<\/a><\/p>\n

Stop the press! Organic food causes autism! It's almost a perfect correlation!

You get the point. Though the graph looks very convincing, nobody seriously claims that organic food causes autism. That's why you also shouldn't take seriously the graphs showing a relationship between artificial sweeteners and neurological problems – or whatever substance they next want you to be afraid of.
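To see how easily this kind of "correlation" arises, here is a minimal Python sketch with made-up numbers (not real data): any two quantities that merely trend upward over the same years will show a near-perfect correlation coefficient, with no causal link whatsoever.

```python
# Toy illustration (made-up numbers): two unrelated quantities that both rise
# over the same years produce a near-perfect Pearson correlation.
import numpy as np

years = np.arange(2000, 2010)
organic_food_sales = np.array([5, 7, 9, 12, 15, 19, 23, 27, 31, 35])        # hypothetical, $ billions
autism_diagnoses = np.array([60, 70, 85, 100, 120, 140, 165, 190, 215, 240])  # hypothetical, per 100k

r = np.corrcoef(organic_food_sales, autism_diagnoses)[0, 1]
print(f"Pearson correlation: {r:.3f}")  # close to 1.0, yet neither causes the other
```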

Understanding study quality and bias

I can't speak for you, but before I understood scientific research, I had this impression that it was infallible. That once a study had been published, that was it. The matter was settled (because a study said so). Oh, how wrong I was.

There's a huge amount of bias and other nonspecific effects (sometimes referred to as the placebo effect, which has nothing to do with mind over matter or anything like that) that make it hard to interpret individual studies.

Humans are inherently biased, and we want our cherished notions to be true. Scientists are no different in this regard. They often fall in love with their own hypotheses and want to see them proven. Unfortunately, this can lead a scientist to, often subconsciously, devise a biased experiment that's more likely to confirm his or her hypothesis.

The entire scientific method was created to fight the biases we humans come hardwired with. That's why we need things like controls, blinding, and randomization to minimize the effect of these biases. Without such protections you can never know how much of the observed effect was due to the treatment and how much was due to nonspecific effects.
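For the curious, here is a minimal sketch (with hypothetical participant IDs) of what randomization with concealed group labels might look like in practice. Real trials use more rigorous allocation and blinding procedures, but the basic idea is the same: chance, not anyone's preference, decides who gets which cream, and the people involved only see coded labels.

```python
# Minimal sketch: randomly assign hypothetical participants to two groups
# identified only by coded labels, so neither participants nor investigators
# know who is getting the active treatment.
import random

random.seed(42)  # fixed seed only so the example output is reproducible

participants = [f"P{i:02d}" for i in range(1, 21)]  # hypothetical participant IDs
random.shuffle(participants)

half = len(participants) // 2
allocation = {pid: "A" for pid in participants[:half]}        # e.g. active cream
allocation.update({pid: "B" for pid in participants[half:]})  # e.g. vehicle-only cream

# The key is held by a third party; everyone else sees only "A" or "B".
blinding_key = {"A": "treatment", "B": "placebo"}

for pid in sorted(allocation):
    print(pid, allocation[pid])
```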

Due to bias in scientific studies, you can (almost) always find a study to support whatever you want; heck, there are dozens of studies showing magic water (i.e. homeopathy) 'works'. That's why you should never put too much faith in the findings of a single study. Before we can conclude anything, the initial positive studies need to be replicated by other, independent researchers.

Hierarchy of scientific evidence

The fact is that scientific studies can be expensive. Usually, the better and more conclusive a study is, the more it costs. For a study to be conclusive, it needs a lot of participants and many safeguards against bias, and all of those things cost money.

Because good studies are expensive, we can't run them all the time; we just don't have enough money. That's why new ideas are first tested in less expensive studies. The very first studies are usually done in vitro (outside a living organism, such as in a petri dish). These studies establish whether something is possible even in the best-case scenario.

Let's say you think that feruminin (a made-up word) could reduce sebum production. The first thing you would do is grow some sebocytes (sebum-producing cells) in a petri dish and expose them to feruminin to see if it affects how fast the cells grow and divide. If the results are positive, you've established that it's theoretically possible for a feruminin cream to reduce sebum production. You've also gathered information about the required dosage, so you have an idea of how much feruminin you need to get into the skin.

Next, you might produce a cream with a certain concentration of feruminin and test it in animals for safety and for the first in vivo (in a living organism) studies. If these go well, you'll move on to human studies.

You might produce two creams, one with feruminin and one vehicle-only (lacking feruminin but otherwise identical), and randomly assign them to study participants. If the results show that the group using the feruminin cream did better than the group using the placebo cream, then you start to be on to something. This is the kind of evidence we can start taking seriously.

Next, you might do a properly controlled study where your feruminin cream is compared against both an existing treatment and an inactive placebo, with more participants (at least 30 to 40 per group). If this study shows that the feruminin cream outperforms both the placebo and the existing treatment, then we really start to be on to something.
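Where does a figure like 30 to 40 per group come from? Roughly speaking, from power considerations: small groups often miss real differences purely by chance. The sketch below (with assumed effect sizes and made-up pimple counts, not data from any real trial) simulates how often a simple two-sample t-test would detect a true 5-pimple difference at different group sizes.

```python
# Simulation sketch (assumed numbers): estimate how often a two-sample t-test
# detects a true difference of 5 pimples (SD 8) at p < 0.05 for a given group
# size. A rough power estimate, not a formal sample-size calculation.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimated_power(n_per_group, true_diff=5.0, sd=8.0, alpha=0.05, n_sim=5000):
    hits = 0
    for _ in range(n_sim):
        treatment = rng.normal(20 - true_diff, sd, n_per_group)  # hypothetical pimple counts
        placebo = rng.normal(20, sd, n_per_group)
        _, p = stats.ttest_ind(treatment, placebo)
        hits += p < alpha
    return hits / n_sim

for n in (10, 20, 30, 40):
    print(f"n = {n:2d} per group -> estimated power ~ {estimated_power(n):.2f}")
```

The larger the groups, the more often the real difference shows up as statistically significant, which is one reason bigger studies carry more weight.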

The final step would be to have other, independent research groups replicate the study and see if they get the same results. After that, we can really say that yes, your feruminin cream works.

The purpose of those preliminary studies is not really to establish whether feruminin works, but to see whether it qualifies for the next level of studies. Unfortunately, it's common to see people hype these preliminary studies as proof without clearly describing what you can and cannot conclude from them. I'm not saying we should never talk about such studies, only that we should be careful with the conclusions we draw from them.

Statistical significance vs. clinical significance

Sometimes studies produce results that are statistically significant but clinically meaningless. Let's say you invent another acne treatment: comuvin. You test a comuvin cream against a placebo cream. At baseline (the beginning of the study) the participants had 25 pimples on average. After 12 weeks, the comuvin cream group had 19 pimples on average and the placebo group had 20. Let's also assume your study had enough participants to say such a small difference is statistically significant (i.e. not likely due to random chance).
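If you want to see the arithmetic, here is a sketch using made-up data that mirrors the comuvin example: with enough participants, even a one-pimple average difference comes out "statistically significant".

```python
# Sketch with made-up data mirroring the comuvin example: a ~1-pimple average
# difference (19 vs. 20) becomes statistically significant once the groups are
# large enough, even though the effect is clinically meaningless.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_per_group = 2000  # a very large trial

comuvin = rng.normal(19, 6, n_per_group)  # hypothetical pimple counts after 12 weeks
placebo = rng.normal(20, 6, n_per_group)

t, p = stats.ttest_ind(comuvin, placebo)
print(f"mean difference: {placebo.mean() - comuvin.mean():.2f} pimples, p = {p:.4f}")
# p is tiny, but a ~1-pimple difference is something no patient would notice
```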

You could say that your comuvin cream is significantly more effective than the placebo cream, with "significantly" referring to statistical significance rather than to the size of the effect. But such a small difference in effectiveness is meaningless in real life. As a patient you probably couldn't tell the difference.

You'll see these kinds of statistically significant but clinically meaningless effects often in alternative medicine studies, especially when 'real' acupuncture is compared to 'sham' acupuncture (such as toothpicks, needles that don't penetrate the skin, or needles placed on non-acupuncture points). Such a small difference in effectiveness is far more likely due to unblinding or other bias than proof that acupuncture works.

Conclusion

Science is not religion, so it's OK to question scientific findings, especially since many studies are biased and will later be shown to be wrong. The points Chris made in his article and the ones here should arm you with the basic information and skeptical attitude you need to navigate the confusing jungle of health claims.