I made a mistake.
The article was written by a person with well-known hostility toward private practice physicians. That’s fine. He was giving his interpretation of a study.
The study appeared almost one year ago in a medical journal published by the American Medical Association (AMA). The AMA is the same society that last month, in response to rising prescription medication costs, recommended that we ban all television commercials for pharmaceutical drugs.
Whatever your take is on this one, I don’t really care. I just wasn’t certain that opening up more advertising slots for beer commercials and Doritos would be the answer. Of note, the AMA failed to mention any need to ban the dollars it receives from pharmaceutical ads appearing in its own magazines. (Perhaps that will be a post for another day.)
Regardless, the New York Times article was about a study that evaluated outcomes of heart patients admitted to hospitals during two distinct periods of time: (1) during days of the year when a handful of cardiologists are away from work “learning” at national meetings, and (2) during similar days when they are not.
Interesting study, right? Are you curious what its angle is going to be?
To help you follow along, assume for a moment (hopefully, not too far-fetched) that I’m a good doctor. I work at a large academic hospital. I’m toward the end of my professional career, well-respected in my field. I see some patients, but I also do some research and teaching.
Twice a year, I attend educational meetings in my field of cardiology. This requires me to miss work for about four days at a time. Twice annually. Over 10,000 other doctors around the country, many of them with jobs similar to mine, also attend these meetings.
And, so, I’ll ask the question:
How do patients fare, at the hospital where I work, when I’m gone?
How do patients do when a “good doctor” is away?
Apparently, patients do better.
That’s right, better… if you believe the study’s conclusions or the New York Times article.
Like I told you, it was an interesting study.
Of course, there are a few things you should know. It was one of those statistical studies. You know, the kind that doesn’t really review any of the actual clinical details surrounding a patient’s hospital course.
What studies are even done like that, you ask?
A lot of them.
They get published all the time.
Studies like this one typically look at billing data submitted to the government on behalf of Medicare patients. Since this data is hopelessly flawed when it comes to figuring out what diseases patients may or may not really have, these studies usually just analyze the one outcome or variable we assume can’t get messed up:
Even faulty data should be able to determine whether a patient is still living, right? And, thus, a statistical study is born.
We start by attempting to match the severity of illness between two patient groups. In this particular study, for example, we need to make sure that patients are not “sicker” when doctors are away at meetings than when they are home. If they were, the comparison would be confounded from the start.
So, we statistically adjust for things.
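The simplest form of that adjustment is stratification: split patients into severity strata first, then compare groups only within each stratum. A minimal sketch, with hypothetical field names (the study’s actual adjustment was more elaborate than this):

```python
def stratified_rates(records):
    """Death rate per (risk_stratum, group) cell.

    Comparing groups only within patients of similar severity is the
    simplest way to 'adjust' for how sick each group was to begin with.
    Each record is (risk_stratum, group, died), with died as 0 or 1.
    """
    cells = {}
    for risk, group, died in records:
        deaths, total = cells.get((risk, group), (0, 0))
        cells[(risk, group)] = (deaths + died, total + 1)
    return {key: deaths / total for key, (deaths, total) in cells.items()}
```

Any difference between the “meeting” and “control” cells within the same risk stratum is then harder to blame on one group simply being sicker than the other.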
Sometimes, what we end up with doesn’t make much sense to anyone. That’s why the methods behind a statistical study can actually matter more than the data itself.
For example, this study assumed that analyzing the 21 days before and after national meetings adequately represented when doctors were not attending them. Whether results would have been different if only days following meetings were used for the control group is not known. Whether findings would have been different if the study included patients with commercial insurance (and not just Medicare) is also uncertain.
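To make that design concrete, here is a hypothetical sketch of the grouping (the meeting dates are invented, and this is not the study’s actual code): each admission is labeled “meeting” if it falls during a national meeting, “control” if it falls within 21 days on either side, and excluded otherwise, with crude 30-day mortality tallied per group.

```python
from datetime import date, timedelta

# Hypothetical meeting dates -- illustrative only, not the study's data.
MEETINGS = [(date(2012, 11, 3), date(2012, 11, 6))]

def classify(admit):
    """Label an admission date 'meeting', 'control' (21 days either side), or None."""
    for start, end in MEETINGS:
        if start <= admit <= end:
            return "meeting"
        if (start - timedelta(days=21) <= admit < start
                or end < admit <= end + timedelta(days=21)):
            return "control"
    return None  # outside both windows; excluded from the analysis

def mortality_30d(admissions):
    """Crude 30-day mortality per group from (admit_date, died_within_30d) records."""
    counts = {"meeting": [0, 0], "control": [0, 0]}  # [deaths, total]
    for admit, died in admissions:
        group = classify(admit)
        if group:
            counts[group][0] += died
            counts[group][1] += 1
    return {g: d / n if n else None for g, (d, n) in counts.items()}
```

Change the window definition (say, only the days after a meeting) and different admissions land in the control group, which is exactly why the choice of method can drive the result.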
Ultimately, we can only see what our methods allow for us to see.
In this study, hospitalized patients statistically appeared to be equally “sick” at all times, regardless of any ongoing cardiology meetings. That made sense, as the average patient probably doesn’t keep up with the timing of doctor meetings. You wouldn’t expect a change in patient behavior.
But, after that realization, everything else just falls apart when you try to make sense out of the data.
For example, statistically high-risk patients assigned heart attack billing codes had numerically more deaths at 30 days when admitted on cardiology meeting days. Conversely, high-risk patients assigned heart failure billing codes did not.
Once the severity of patient illness was statistically corrected for, low-risk patients with heart failure had similar 30-day death rates no matter when meetings were taking place. Yet high-risk patients with heart failure (admitted on non-meeting days) died more frequently over 30 days.
Are you confused yet?
(Just wait until we get to the authors’ conclusions.)
I was kind of interested to know how individual patients fared if they got assigned two billing diagnoses, say both heart attack and heart failure (a relatively common occurrence for me). The study didn’t tell me. It just appears these people were counted twice.
The more I read into things, the less certain I became. What effect would I even expect the timing of a first hospital day to have on 30-day mortality?
For example, if my partner admits you in stable condition on Day 1 (when I’m away at a meeting), and then, I return to take care of you on Days 2 through 10 (when you die), how does your sudden development of instability on Day 9 (and subsequent death within 24 hours) relate to my Day 1 meeting at all?
I just don’t know.
If you want to try and explain it, fine.
But, trust me, you will need more than a billing code.
* * *
In summary, this study was nothing more than a scatter plot of statistical mumbo jumbo associated with sweeping conclusions that may or may not be true. Whatever truths really exist seem to be anyone’s guess and almost certainly are not explainable from the information collected.
Not surprisingly, no author of this study (or the New York Times article) has ever practiced cardiology. Half of them aren’t even physicians. As for their conclusions about a field within which they have never worked? I’ll tell you:
They decided that heart stents are being overused.
Yes, those little “metal pipes” (the ones that even naysayers agree save lives when placed during major heart attacks, reduce future hospitalizations when placed during minor ones, and cure debilitating chest pain in multitudes of others) are clearly being overused.
As for their reasoning?
This is the best part.
It’s because when “good” cardiologists were away at meetings, a few fewer heart stents were placed in hospitalized Medicare patients. And, if you were assigned a heart attack billing code during this time, you survived another 30 days at a rate statistically similar to selected controls.
Basically, more stents placed (when “good” cardiologists were home) did not result in more patients living for 30 days.
Therefore, stents must be overused, they said. They aren’t helping.
As if the only outcome to justify a stent is an extra month of life.
As if the study’s non-cardiology authors are discerning experts on practice subtleties.
As if only “good” cardiologists attend national meetings.
As if these “attenders” really see the highest volume of patients when back home, avidly placing stents without a justifiable cause.
* * *
All of this just reminds me of a landmark trial published in 1988 (ISIS-2) that showed aspirin to be a successful therapy for the treatment of heart attack.
To illustrate the hazards of making unsubstantiated statistical conclusions, the authors of this paper mentioned how patients with Gemini or Libra astrological signs were statistically found not to benefit from aspirin therapy. In fact, numerically, this group of people actually appeared to experience harm.
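That aside was a warning about multiple comparisons: test enough subgroups and some will look “significant” by chance alone. A quick illustrative simulation, where each of twelve zodiac subgroups gets a test statistic drawn under the null (no real effect anywhere):

```python
import random

def chance_of_spurious_subgroup(n_subgroups=12, z_cutoff=1.96, n_sims=5000, seed=1):
    """How often does at least one of n_subgroups cross the two-sided 5%
    significance threshold when there is no real subgroup effect at all?
    Each subgroup's test statistic is modeled as a standard normal draw."""
    rng = random.Random(seed)
    hits = sum(
        any(abs(rng.gauss(0, 1)) > z_cutoff for _ in range(n_subgroups))
        for _ in range(n_sims)
    )
    return hits / n_sims
```

With twelve independent null subgroups, the chance of at least one false positive is 1 − 0.95^12, roughly 46%: close to a coin flip.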
Rest assured, had that trial been published today, the New York Times would have written something about it. And, of course, they would have concluded what they always do: that “Good Doctors Are Bad for Your Health.”
You are better off with an astrologist.