
How biases shape our clinical decisions

By Dave Emond


Here at ARME, we haven't shied away from discussing the importance of clinical decision making and the use of best practice guidelines. A 2019 systematic review by Zadro et al. showed that clinician guideline adherence in musculoskeletal health care is poor at best: the median percentage of physical therapists choosing recommended treatments was 54% (1), and data from surveys and audits indicated that 43% and 27% of patients, respectively, received non-recommended therapies (1). We must also consider the idea of medical overuse. Overuse includes using services that are unlikely to improve patient outcomes, may result in more harm than benefit, and would typically be unwanted by someone who is well informed about their condition and its prognosis (2).


So why do clinicians continue to offer treatments that are not recommended by our best available evidence? The full answer is murkier than we can cover today, but there is a line echoed in every profession: "Based on my clinical experience and observations…". We shouldn't ignore clinical experience, especially when working with patients with multiple comorbidities, atypical presentations, or serious and rare pathologies. However, for many common musculoskeletal complaints, such as low back pain and knee pain, we have a large pool of evidence showing what generally works best and what doesn't.


We have to remember that a variety of factors can influence our perceived experiences and observations. We are not working in a controlled environment, so plenty of confounding variables can contribute to clinical success, or, as we will see shortly, perceived clinical success. In the coming minutes, we will go over several biases and fallacies that can play a large role in how we make clinical decisions and choose the therapies we provide for our patients.


This isn’t going to be your typical literature review on a topic with countless references. Let’s use this opportunity as a thinking exercise and do some self-reflection.



Let's start with the base-rate fallacy, while the topic of guideline adherence is fresh in our minds. The base-rate fallacy occurs when we ignore general information in favour of a specific case. Consider this example: we have good practice guidelines for the management of patellofemoral pain (PFP) (3). These guidelines tell us not to prescribe knee braces, sleeves, or straps for patients with this condition. But at a recent seminar, an instructor described a case where they used a knee brace for a patient with PFP and saw good success. The fallacious move would be to let that single case override the overwhelming evidence against the intervention, and fit and sell a brace to the next patient with PFP.
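

To put rough numbers on why a single success story is weak evidence, here is a minimal sketch in Python. The figures are entirely hypothetical: suppose 50% of patients with PFP improve over a few weeks regardless of bracing, and the instructor braced five patients.

    # Hypothetical illustration: how likely is at least one "success story"
    # if the brace does nothing and patients often improve on their own?
    # The 0.5 improvement rate and 5 patients are assumptions, not data.
    p_improve = 0.5   # assumed chance a patient improves regardless of the brace
    n_patients = 5    # assumed number of patients the instructor braced

    p_at_least_one = 1 - (1 - p_improve) ** n_patients
    print(f"P(at least one apparent success): {p_at_least_one:.2f}")  # ~0.97

Under these assumptions, at least one glowing anecdote is close to inevitable even if the brace does nothing, which is exactly why a single case shouldn't outweigh controlled evidence.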


There are a few other biases and fallacious arguments that are closely related to this.


The hot hand fallacy:


The belief that someone who has had success with a random event has a greater chance of success in further events. We should be cognizant of those "fluke" cases in which an intervention inexplicably "worked", and recognize that the likelihood of a similar outcome in the future is no greater because of our previous luck. It was what it was: a random occurrence. This is why controlled trials are important: we expect supposedly efficacious therapies to perform better than chance.
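

A quick way to see this is to simulate it. Here is a minimal sketch in Python, assuming an arbitrary 50% "success" rate for a chance-level therapy: the success rate immediately after a streak of three successes is no higher than the base rate.

    import random

    random.seed(0)
    p_success = 0.5  # assumed base rate for a therapy no better than chance
    outcomes = [random.random() < p_success for _ in range(100_000)]

    # Success rate immediately following a streak of three successes
    after_streak = [outcomes[i] for i in range(3, len(outcomes))
                    if outcomes[i - 3] and outcomes[i - 2] and outcomes[i - 1]]
    print(f"P(success | three prior successes): {sum(after_streak) / len(after_streak):.3f}")
    print(f"Base rate:                          {sum(outcomes) / len(outcomes):.3f}")

Both numbers come out around 0.5: a past "hot" streak tells us nothing about the next independent outcome.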


Neglect of probability:


This is the tendency to completely disregard probability when making a decision under uncertainty. The classic "well, let's take a shot at this therapy" when a patient is not responding to a plan of management is probably not the best approach for common and often self-resolving conditions. Perhaps we should instead reflect on how we've managed the patient and ask whether we've missed important factors in the person's recovery.
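

One antidote is to put the probabilities, however rough, back into the decision. The sketch below is a crude expected-value comparison with invented numbers: a long-shot therapy with a small assumed chance of helping, weighed against its real cost in time and money.

    # All numbers are invented purely for illustration
    p_helps = 0.10          # assumed chance the long-shot therapy helps this patient
    benefit_if_helps = 100  # assumed value to the patient if it helps (arbitrary units)
    cost_per_visit = 60     # assumed time/money cost per visit (same units)
    n_visits = 6            # assumed length of the course of care

    expected_value = p_helps * benefit_if_helps - n_visits * cost_per_visit
    print(f"Expected value of 'taking a shot': {expected_value}")  # -350: a poor bet

Even this back-of-the-envelope arithmetic makes "let's just try it" look very different than it does on instinct alone.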


Let's move forward with something called the continued influence effect: continuing to believe misinformation after it has been corrected. In the past, it was suggested that pelvises could slip out of place and that we could palpate and "correct" this. We have known for quite a while that the sacroiliac joints move only minuscule amounts (palpation is unreliable, and likely futile) and that, outside of serious trauma or pathology, they do not "slip out of place" (4). We still see plenty of clinicians using this narrative and these assessments with their patients, and it is not helping society's understanding of back pain. This highlights the importance of staying well read throughout your career.


Why do these clinicians continue to use this narrative? Another bias may be playing a role here: confirmation bias, which can weigh heavily on our beliefs. This is where the cases that go well strengthen our preconceptions, while we discount the ones that don't. Here is an example: when I was in school, I learned that strengthening our "core" muscles improved back pain, and early in my clinical career I would hand out exercises purported to strengthen the core. Many patients got better; therefore, it must have worked. We now have evidence that positive outcomes in low back pain are not contingent on corresponding improvements in the aspects of performance the exercises target, such as strength (5). This doesn't mean the exercises didn't help, but the narrative and rationale behind choosing this intervention should be different. That leads to our next consideration, and a quick lesson in Latin.


Post hoc ergo propter hoc: after this, therefore because of this. In the example above, I attributed my patients' improvements in pain and disability to the core exercises. It is possible they played a role, big or small; however, we cannot confidently conclude that the exercises improved outcomes just because patients felt better afterwards. We must take into consideration all of the other confounding variables and contextual effects that come with a visit to a health professional. Improvements in low back pain tend to follow a similar trajectory regardless of the therapy used, which should make us question whether any specific therapy adds much on top of natural history (6). Plenty of contextual factors can contribute to the improvement of a patient's condition, so we cannot say with certainty that an intervention was the main driver of a patient's subsequent improvement (7).
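

A small simulation makes the problem concrete. In this sketch (Python, with made-up numbers), pain improves by natural history alone and the exercise program has zero true effect, yet an "exercised" group and an untreated group both improve:

    import random

    random.seed(1)

    def simulate_group(n=200, weeks=6):
        """Mean final pain (0-10 scale) when improvement is natural history alone."""
        finals = []
        for _ in range(n):
            pain = 7.0
            for _ in range(weeks):
                pain -= random.uniform(0.5, 1.0)  # assumed natural weekly improvement
            finals.append(max(pain, 0.0))
        return sum(finals) / len(finals)

    # The exercise program adds no true effect in this model
    print(f"Mean final pain, 'core exercise' group: {simulate_group():.1f}")
    print(f"Mean final pain, no-treatment group:    {simulate_group():.1f}")

Both groups land in roughly the same place, so "they got better after the exercises" is exactly what we would see even if the exercises did nothing specific.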


This next one is something I'm sure we have all either done or observed clinically: the observer-expectancy effect. This occurs when a clinician expects a given result and therefore unconsciously manipulates or misinterprets data in order to find it. Consider this example: a person wakes up with neck pain and would like some relief. The clinician expects instrument-assisted soft tissue therapy to get good results. At the end of the session, the patient feels no better and has, at most, negligible changes in range of motion. The clinician, having taken no objective measures, thinks they see improvements in range of motion and presents the treatment to the patient as a success. It is possible the range of motion only appeared slightly improved because the clinician expected the therapy to be effective; since the patient is not feeling much better, the clinician is bending the findings to match that expectation. If we aren't seeing objective changes, or changes that are meaningful to the patient, we should take a step back and reconsider whether the therapy was an appropriate choice.


Let's move on to what we call survivorship bias. This is where we concentrate on people or things that "survived" a process and inadvertently overlook those that didn't because they are less visible. It can lead to overly optimistic beliefs because the failures are forgotten. We can think of this clinically, but I would rather touch on its effect on the scientific literature. A study is far less likely to be published in an academic journal if it does not show positive results. So when we look at systematic reviews and meta-analyses on the effectiveness of an intervention, we have to remember that there may be unpublished data that would change the picture of how effective the intervention really is. When we decide to use an intervention that shows very little efficacy in the literature, its true effectiveness may be even smaller (8).
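

Here is a minimal sketch of how that distorts the published record, assuming numpy and scipy are available and using invented parameters: we simulate many small trials of a therapy with zero true effect, "publish" only the ones that happen to reach significance in the therapy's favour, and look at the average published effect.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)
    published_effects = []

    for _ in range(2000):                      # many small trials
        treated = rng.normal(0.0, 1.0, 20)     # true effect is zero
        control = rng.normal(0.0, 1.0, 20)
        t, p = ttest_ind(treated, control)
        if t > 0 and p < 0.05:                 # "published" only if significant in favour
            published_effects.append(treated.mean() - control.mean())

    print("True effect: 0.00")
    print(f"Mean published effect: {np.mean(published_effects):.2f}")  # well above zero

The surviving trials suggest a solidly effective therapy that, by construction, does nothing: the failures simply never made it into view.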


Let's finish this article with one more example of clinical bias. Motivated reasoning is when we use emotionally biased reasoning to produce justifications for a decision. You may have heard us discuss this next point in our "Evidence Based Beers" roundtable episode on our podcast. Most healthcare practitioners just want to help people get better. What we occasionally see are practitioners reaching for non-recommended interventions because nothing else has worked and they just want to help the person in front of them. They want to do something, so this becomes their "Hail Mary". The intentions are good, but they are probably not the best reason to do something that has not been shown to be effective for a condition. If the intervention does not work (the likely outcome, if it is not a recommended therapy), we have wasted the patient's time and money, which are types of harm we often forget about. We can find many reasons to choose an intervention, but we should keep our emotions in check and let empirical evidence guide how we address someone's musculoskeletal problem.


Final Thoughts


Hopefully these points have given us all a chance to take a step back and self-reflect on our clinical decision making. The biases and fallacies mentioned above are just the tip of the iceberg among the many factors that can alter judgement in a clinical setting. It is important that we continue to be critical of ourselves, and remember that well-designed scientific studies control for many of these factors, which is ultimately how we learn which interventions perform better than others. On a final note, don't feel bad if you had a small identity crisis while reading some of these. I've been through it before, and continue to go through it as I try to develop as a clinician. It's human nature to be biased, but let's keep pushing forward: thinking about our biases is how we improve healthcare and patient outcomes as a whole.


If you would like to read more on this topic, Dr. Michael Ray over at Barbell Medicine wrote a piece for their June 2020 monthly research review. I’ll leave the link right here.


 

References:


  1. Zadro J, O'Keeffe M, Maher C. Do physical therapists follow evidence-based guidelines when managing musculoskeletal conditions? Systematic review. BMJ Open. 2019 Oct 1;9(10):e032329.

  2. Zadro JR, Décary S, O'Keeffe M, Michaleff ZA, Traeger AC. Overcoming overuse: improving musculoskeletal health care. Journal of Orthopaedic & Sports Physical Therapy. 2020 Feb 22;50(3):113-115.

  3. Willy RW, Hoglund LT, Barton CJ, Bolgla LA, Scalzitti DA, Logerstedt DS, Lynch AD, Snyder-Mackler L, McDonough CM, Altman R, Beattie P. Patellofemoral pain: Clinical practice guidelines linked to the international classification of functioning, disability and health from the Academy of Orthopaedic Physical Therapy of the American Physical Therapy Association. Journal of Orthopaedic & Sports Physical Therapy. 2019 Sep;49(9):CPG1-95.

  4. Palsson TS, Gibson W, Darlow B, Bunzli S, Lehman G, Rabey M, Moloney N, Vaegter HB, Bagg MK, Travers M. Changing the narrative in diagnosis and management of pain in the sacroiliac joint area. Physical Therapy. 2019 Nov 25;99(11):1511-9.

  5. Steiger F, Wirth B, de Bruin ED, Mannion AF. Is a positive clinical outcome after exercise therapy for chronic non-specific low back pain contingent upon a corresponding improvement in the targeted aspect(s) of performance? A systematic review. European Spine Journal. 2012 Apr 1;21(4):575-98.

  6. Artus M, van der Windt DA, Jordan KP, Hay EM. Low back pain symptoms show a similar pattern of improvement following a wide range of primary care treatments: a systematic review of randomized clinical trials. Rheumatology. 2010 Dec 1;49(12):2346-56.

  7. Hartman SE. Why do ineffective treatments seem helpful? A brief review. Chiropractic & Osteopathy. 2009 Dec 1;17(1):10.

  8. Young NS, Ioannidis JP, Al-Ubaydli O. Why current publication practices may distort science. PLoS Med. 2008 Oct 7;5(10):e201.
