
"Where do I start?": A Quick(ish) and Simple Guide to Assessing Research

Updated: Apr 4, 2020


By Ben Csiernik


“I’m evidence-based.” I’ve heard it, you’ve heard it, everyone’s heard it. It’s a phrase so overused and watered down that it holds about the same value as Boeing stock right now. The simple fact is we should ALL be evidence-based, so having to say it out loud defeats the purpose. The reality is that we aren’t all evidence-based, because it’s really hard to be at a time when eight new clinical trials get published every day and you only have 15 minutes twice a week to maybe look at one. So what I’m going to do here is quickly break down a five-step process for getting more out of the papers you’re interested in, in a lot less time.



Step 1: The Abstract


If you’re going to read it, read all of it. Don’t just skip to the conclusion like we both know we want to. That being said, you don’t have to read it at all if you don’t want to. We live in a time in healthcare and fitness where abstracts are loaded with “spin” (1), also known as “well, we think that this could maybe work, sort of, if we subgrouped further and did more research. But trust us, we got good results, and this totally might work, we promise.” I like to look for two things in the abstract: 1) what were the authors hoping to find, and 2) does something seem like it worked too well, or didn’t work at all? If I’m interested in either 1 or 2, I’ll dive into the paper, and I think you should too.


Step 2: The Introduction


If you already know the background of the topic the paper is covering, just skip this section. It may give you an unwanted bias that the authors are intentionally or unintentionally setting up. If you’re unfamiliar with the topic, look for a citation that gets repeated a bunch; that source may be worth more of your time as a way to learn about the topic.


Step 3: Methods


This, in my opinion, is the most important section, and it’s also where you should start when reading a paper. Instead of pretending that I have a magical system that helps me determine if a study has a legitimate design, let me point you to what smarter people say to look for in a study (2).


Let’s do a quick example. Let’s say we’re testing to see if duct tape therapy works on pain. If a paper says that duct tape therapy IS effective, but we see:


1) The intervention group came into the study with way less/more pain


2) The intervention group has spent way less/more time in pain


3) The intervention group got WAY MORE/WAY LESS treatment than the control group (if there was a real control group)


4) The intervention group has a significantly lower/higher disability score


5) The researchers knew who was getting the treatment


6) The participants knew the treatment they were getting, AND


7) There were only 8 people total in this study


Add this all together, and maybe we need to interpret the results from this study cautiously.


Step 4: The Results


In the world of research on pain, or the treatment of dysfunction, we often run into the issue of judging how effective a treatment is based on statistical significance alone. First of all, let’s make it clear that humans aren’t P values. Second of all, P values can be manipulated, and statistical significance does not imply clinical significance (1). If 30 people with an average pain score of 7/10 get duct tape therapy, and their pain drops to 6/10, it MAY be statistically significant, but I don’t know how many people are happy to be living their lives in 6/10 pain. Thirdly, humans aren’t P values.
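To make that concrete, here’s a minimal sketch with completely made-up numbers (using Python with numpy and scipy, my own choice for illustration, not anything from the cited papers) of how a roughly 1-point drop in 30 people can still clear the p < 0.05 bar:

```python
# Hypothetical illustration with made-up pain scores: a small average
# change can still come out "statistically significant" in a paired t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n = 30
before = np.clip(rng.normal(7.0, 1.0, n), 0, 10)          # ~7/10 pain at baseline
after = np.clip(before - rng.normal(1.0, 1.5, n), 0, 10)  # ~1-point average drop

t_stat, p_value = stats.ttest_rel(before, after)
print(f"mean change: {np.mean(before - after):.2f} points, p = {p_value:.4f}")
# Odds are this prints p < 0.05: "significant", yet everyone still hurts.
```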


The statistical significance problem cuts the other way too; sometimes good things are being done that don’t meet statistical significance. If 20 people’s pain doesn’t drop by a statistically significant amount, but 90% of them feel better about their lives now, that intervention was probably pretty effective in its own way.


Instead of worrying about P values, look to see whether a study reports a Minimal Clinically Important Difference (MCID), which asks whether an intervention produced a change big enough that it both holds value to the patient and matters enough that you should change your plan of management.
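Here’s a rough sketch of that check (the 2-point threshold is my own assumption for illustration; real MCIDs are specific to the outcome measure and population, so look up the published one for the scale in question):

```python
# Hypothetical MCID check on the duct tape numbers from above. The 2-point
# threshold is an assumption for illustration only, not a universal standard.
MCID = 2.0          # assumed MCID on a 0-10 pain scale
mean_change = 1.0   # the ~1-point drop from the earlier example
p_value = 0.004     # "statistically significant"

if p_value < 0.05 and mean_change < MCID:
    print("Statistically significant, but below the assumed MCID:")
    print("probably not a result that should change your management plan.")
```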


Finally, P values don’t matter (for studies focusing on people in pain).


Step 5: The Discussion/Conclusion


So you’ve critically assessed the methods, and you’ve looked at what’s clinically significant. If you feel like you’ve got the point of the paper, congratulations, you’re done! However, sometimes the results are confusing and we want to see how the authors interpreted their findings. The discussion can be a great place to see how the authors tied their work into previously conducted research, but we can also find ourselves in a bit of a “spin” zone once again. My one piece of advice: if a paper doesn’t demonstrate clinical significance, and instead presents only statistical significance, we might be in trouble.

At the end of the day, we are in pursuit of good clinical results that can help people. I’ll leave you with this thought from Dr. Andrew Moore (it’s also the title of the paper): expect analgesic failure; pursue analgesic success.

Final Thoughts:


1) If you can only read a part of the paper, stick to the methods and the results.

2) Humans aren’t walking P values; statistical significance can underestimate good studies and overly support bad ones.

3) Be critical of the process; if something seems too good to be true, it probably is.

4) Research is hard. Don’t be like me; be critical, yet kind to the researchers who are searching for answers to the questions we have.

5) If you want a much deeper and much more thorough look at these ideas, go check out Dr. Neil O’Connell’s work. He’s much smarter than I am, and I strongly suggest reading the paper (it’s not a Randomized Controlled Trial!) cited as number one (1) in the citations.

 

Citations


1) O'Connell NE, Moseley GL, McAuley JH, Wand BM, Herbert RD. Interpreting effectiveness evidence in pain: short tour of contemporary issues. Physical Therapy. 2015;95(8):1087-94.

2) Higgins JP, Wells GA. Cochrane Handbook for Systematic Reviews of Interventions.

 
