It’s become a major problem in our quest to obtain a perfect healthcare system.
It’s called science.
Now, don’t get me wrong. I have nothing against observation and experimentation. But, since blind pursuit of anything can lead to destruction, it’s time to pay closer attention.
To be fair, it’s not science, per se, we are talking about. It’s our interpretation of it—how we perceive its role to be within the framework of medicine.
I’ll begin with a simple illustration.
I will flip a coin 10 times. I will tabulate how many times the coin comes up heads and how many times it comes up tails. Then, I will repeat the experiment 10 more times.
In fact, here are the results:
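If you’d like to run this little experiment yourself, here is a minimal sketch in Python (just a fair-coin simulation using the standard random module; the counts it prints will be its own, not the ones above):

```python
import random

FLIPS_PER_RUN = 10   # flip the coin 10 times...
NUM_RUNS = 11        # ...then repeat the experiment 10 more times

for run in range(1, NUM_RUNS + 1):
    # Each flip is a fair 50/50 draw: heads stands in for treatment X, tails for treatment Y
    flips = [random.choice(["heads", "tails"]) for _ in range(FLIPS_PER_RUN)]
    print(f"Run {run:2d}: heads (X) = {flips.count('heads')}, tails (Y) = {flips.count('tails')}")
```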
I will now propose that two medical treatments exist for you to receive.
There is treatment X and there is treatment Y.
You, of course, want to be provided the best treatment for your ailment. The one that has been studied and proven to work better. The one that makes you live the longest or feel the best. I get that.
But, I’ll let you in on a little secret.
The two treatments I am studying here are actually identical. Treatment X is the same as treatment Y. It’s the same pill. One is just labeled “X” while the other is labeled “Y.”
Said another way, both treatment X and treatment Y have an equal chance of being good for you. Because they are the same thing.
You see, in my experiment above, you are the coin.
Whether treatment X (heads) or treatment Y (tails) is best for you is merely a coin flip. Both therapies possess equivalent odds of being beneficial.
We often presume this equivalency means that data collected on X and Y will always come out even-steven.
Fifty-fifty.
One study after another should obviously prove that X and Y are equal. Because we know that they are the same.
But, this is just not true.
Flip ten coins. These coins represent ten people like you.
In one of my isolated coin tosses above, tails (treatment Y) was found to be superior to heads (treatment X) by a margin of 8 to 2. Said another way, four times as many people benefited from receiving treatment Y during that experiment.
Yet, once again, X and Y are exactly the same thing.
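How surprising is an 8 to 2 split from a fair coin? Not very. A quick back-of-the-envelope check (plain binomial counting in Python, nothing tied to any particular trial) suggests a split at least that lopsided, in either direction, shows up in roughly one run out of every nine:

```python
from math import comb

N = 10  # ten flips, ten patients like you

# Probability of 8, 9, or 10 tails out of 10 fair flips
one_sided = sum(comb(N, k) for k in range(8, N + 1)) / 2**N

# The coin can be just as lopsided toward heads, so double it
either_direction = 2 * one_sided

print(f"Chance of a split at least as lopsided as 8 to 2: {either_direction:.3f}")  # ~0.109
```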
What we are learning about here is not esoteric. It’s called variance. And, variance is what makes clinical studies in medicine so challenging to interpret.
We often presume the magnitude of variance in any clinical trial is magically confined to the differences in the sides of a coin—the disparity between one therapy and another.
Yet, variance pays no attention to this assumption. Instead, variance is found everywhere—in fact, infinitely more places than within the treatments themselves.
I previously wrote about a clinical trial published last year. The study concluded that lowering patients’ blood pressure to targets below those currently recommended was better. This was achieved by using more medicines, spending more money, and so forth. The benefits to the population studied were small, yet the findings were hailed as “landmark” by many experts.
This week, at a major international medical meeting in Europe, doctors debated applying the results of this trial to real world practice.
Why?
Because there was some variance in how blood pressure was checked in this trial compared to other, similar trials.
The blood pressure in this study was checked by an automatic monitor without the healthcare professional in the room. The healthcare professionals were trained to leave the room before the measurement started. Evidently, this was a different strategy than what had been used before. Many experts felt it varied the results.
But, did it?
Well, I’m sure it affected something.
Whether it affected things more than the amount of traffic patients had to navigate on the way to their appointment, where they had to park relative to the office front door, precisely what they ate for breakfast, and so forth, is really anyone’s guess.
Variance abounds—even when the coin has an equal chance of giving you the same result.
* * *
I enjoy listening to the experts debate the research. A healthy skepticism is my own second language.
But, among the experts, there is too often a confidence that facts and rational thinking are found only on the side of “science” and this “evidence based” movement. Frequently, this is wrong.
Evidence from well-conducted studies. Experience from clinical practice. Reason. Tradition. They all have a place in medicine—each with their own tradeoff.
You can trace the birth of Evidence Based Medicine (EBM) back many decades. You can appropriately say it has greater influence now than at any time before in history. Yet, if you’ve tried to pay for your healthcare recently, you might question the value it’s created for you.
You see, we’ve become so hellbent on making this arbitrary construct—the “Population”—a healthier place, it’s become an unaffordable place.
Not because of Evidence Based Medicine. Because of the health policies we’ve made around it that try to mandate that care can’t be provided without it.
Evidence Based Medicine offers no perceived benefit to so many of my patients. Often, I’m left trying, through other means, to provide them value in more abstract ways—mostly aimed at creating “vitality” and “hope.”
And, if you ask me where the evidence is for doing that, you’ve just revealed you’re part of the problem.