An independent news organization known as ProPublica made healthcare news this week by releasing the complication rates of nearly 17,000 surgeons nationwide.

ProPublica is another one of these non-profit groups that has been granted charity status by the Internal Revenue Service. Their self-stated mission is investigative journalism in the public interest.

You should know that ProPublica claims to be supportive of the “little guy.” I, too, like underdogs, so I enjoyed reading on their own website about their goal to shine “a light on the exploitation of the weak by the strong… [to expose] the failures of those with power to vindicate the trust placed in them.”

Giddy up, ProPublica.

I also favor transparency in medicine. In fact, I wrote a book, in part, to outline my support for it. But, ultimately, what I care about most is taking back medicine. Returning its practice to the things that are really meaningful to patient care. And, in our modern era of excessive third parties mooching off the healthcare system, my focus has settled on chiseling away at the less meaningful.

With that in mind, I’ll tell you about the “Surgeon Scorecard” that ProPublica has now made public.

THE SURGEON SCORECARD

ProPublica picked eight medical procedures (none of which I perform) to spotlight. These eight procedures were chosen, ProPublica said, because they are typically performed on relatively healthy patients at lower surgical risk. They included knee replacements, hip replacements, three types of spinal fusions (one in the neck and two in the lower back), gallbladder removals, prostate removals, and prostate resections.

ProPublica evaluated five years of data (2009 through 2013) on hospitalized Medicare patients. Then, they sought to link surgical complication rates to individual surgeons.

I was intrigued by their project up until I found out that ProPublica was going to use Medicare’s administrative billing data as the sole determinant of whether a complication occurred. There was going to be no on-site investigation involving the operating room or hospital. There was not going to be routine interviewing of witnesses, or time spent sorting through the myriad physician and nursing notes tied to each surgery, in order to figure out what had really happened, or how, or why.

No, they were just going to comb through Medicare’s administrative claims data and draw their conclusions. It’s kind of like framing a house with toothpicks. Yes, you are still using wood, but that’s about all you can say.
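
To make that concrete, the heart of such an analysis boils down to something like the sketch below. To be clear, this is my own hypothetical illustration, not ProPublica’s actual code; the field names and records are invented, and the complication definition (an in-hospital death or a related readmission within 30 days, which is roughly how I understand their approach) is a stand-in. Notice that nothing in it ever checks what actually happened in the operating room.

```python
# A hypothetical sketch of a claims-only complication analysis.
# Field names and records are invented for illustration; this is not
# ProPublica's actual code or data.
from collections import defaultdict

claims = [
    {"surgeon_id": "A", "procedure": "knee_replacement",
     "died_in_hospital": False, "related_readmission_30d": True},
    {"surgeon_id": "A", "procedure": "knee_replacement",
     "died_in_hospital": False, "related_readmission_30d": False},
    {"surgeon_id": "B", "procedure": "hip_replacement",
     "died_in_hospital": False, "related_readmission_30d": False},
]

cases = defaultdict(int)
complications = defaultdict(int)
for claim in claims:
    key = (claim["surgeon_id"], claim["procedure"])
    cases[key] += 1
    # A "complication" is whatever the billing codes imply. No chart
    # review, no interviews, no operating room.
    if claim["died_in_hospital"] or claim["related_readmission_30d"]:
        complications[key] += 1

for key, n in cases.items():
    print(key, f"{complications[key] / n:.0%} complication rate over {n} cases")
```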

Medicare’s administrative data is codified jargon supposedly representative of actual patient diagnoses. This data gets used over and over in public health analyses, and that is one of the reasons I never know what to make of any conclusions drawn from it.

I have no statistical proofs to share with you about this data and no correlation coefficients to lean upon. I only have my own experience. And, my experience tells me that the diagnosis codes submitted to central government bureaucrats for billing commonly fail to accurately depict a patient’s medical situation. In fact, using them for this purpose just yields a bunch of rubbish.

I’m not talking about fraudulent claims. I’m just saying that the patient is frequently labeled with a billing diagnosis code for a condition they may or may not even have. It might just be what a really smart doctor thinks the patient has, or wants to rule out. The real diagnosis might not even become clear until after the patient leaves the hospital.

Charles Babbage said that “errors using inadequate data are much less than those using no data at all.” If you believe that to be true, then you have essentially stumbled upon what I see as the only benefit that a study using Medicare’s administrative data can ever claim: It’s better to guess at something than to do nothing at all.

I just believe that rubbish in equals rubbish out.

PROPUBLICA’S PROMOTIONAL VIDEO

Despite all the limitations in ProPublica’s methodology for determining complications, it wasn’t until I watched their 48-second promotional video that I started to question their intent.

Watch the video. Draw your own conclusions. Don’t take mine. But, I personally put less stock in an organization that builds its case using public fear tactics that could just as easily be mistaken for a “witch hunt.”

Adverse events in medicine are serious issues. We all want to prevent them. But, their video is not just one-sided; it’s actually misleading.

It opens with a dramatic claim that 400,000 patients die every year from medical harm. This number is well known to me and comes from an article published two years ago that arrived at this figure by analyzing a total of 38 deaths. That’s right: 38 total deaths were extrapolated to 400,000 patients because these deaths were determined to have been “preventable.”

I’m not disputing that human errors occur in medicine. I’m not arguing about whether we need to find ways to better eliminate them. In fact, we have strived to do that at every healthcare facility where I have ever worked. But, when you guess that 400,000 deaths occur every year from medical harm because 38 were “preventable,” you make an inaccurate assumption. You assume that “all” adverse events in medicine are preventable. And, unfortunately, that’s just not true.
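
Here is the arithmetic behind that kind of extrapolation, so you can see just how much weight those 38 deaths are carrying. The sample size and admissions total below are hypothetical placeholders of my own, chosen only to show the mechanics:

```python
# The mechanics of extrapolating a handful of "preventable" deaths into a
# national figure. These inputs are hypothetical placeholders, not the
# actual study's numbers.
preventable_deaths_in_sample = 38       # the count cited above
records_reviewed = 4_000                # hypothetical chart-review sample
annual_admissions = 35_000_000          # hypothetical yearly hospitalizations

rate = preventable_deaths_in_sample / records_reviewed
national_estimate = rate * annual_admissions
print(f"implied rate: {rate:.2%} -> national estimate: {national_estimate:,.0f}")
# Every judgment call buried in those 38 determinations of "preventable"
# gets multiplied by 35 million right along with them.
```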

I could give you dozens of examples, but here’s one that ProPublica evaluated: blood clots developing in veins. Long ago, we learned that patients can get them while sitting in hospital beds. Sometimes that’s because they have to sit. They have a broken leg. And, we have rightfully tried to prevent these blood clots from occurring with all types of therapies.

But, just know that no therapy we’ve ever studied, when used 100% of the time, has prevented ALL blood clots. Some people are still going to get them in the hospital. Some people are still going to get them at home, regardless of the therapy provided. Not all adverse events are preventable, and whose fault is that?

ProPublica wants you to believe it’s just the surgeon. And, who knows, maybe it is. Their video says surgeons were responsible for “avoidable” infections, “avoidable” injuries, and even “avoidable” deaths. And, how does ProPublica know that all these things were avoidable? By using claims-based administrative data? I have my doubts.

Besides, who really thinks that the surgeon is the only one responsible for “avoidable” infections in the hospital? The surgeon performs a two-hour procedure. Then, literally, 50 other people might touch the patient during the hospital stay. What percentage of the responsibility is theirs? Show me ProPublica’s formula for that one.

At least ProPublica did attempt to create a method to account for the surgeons who take on the most complicated cases. Yes, it’s a bunch of incomplete statistical mumbo-jumbo related to a patient’s health, but I’ll congratulate them for trying. Here’s the funny thing, however, about the “Health Score” they calculated for each patient to presumably equalize differences in case mix:

ProPublica said in their final report that when patient age, individual hospital, and surgeon were factored in, the patient’s “Health Score” statistically didn’t matter anymore!

Say, what?

Seriously, read that again. The “Health Score” didn’t matter? Okay, whatever. Tell me anything. In fact, just let the statistician perform the surgery if you don’t think I know what I’m talking about. The health of a patient before surgery does matter, which further suggests that ProPublica’s calculated “Health Score” wasn’t the least bit successful at doing what it was supposed to do.
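
For what it’s worth, there is a mechanical way this can happen, and it doesn’t make the risk adjustment look any better. If a patient-level score varies mostly between surgeons’ caseloads and hardly at all within any one surgeon’s caseload, then adding surgeon and hospital factors to a model absorbs nearly all of the score’s signal, and even a score that genuinely predicts complications can come out “statistically insignificant.” Here’s a toy simulation of that effect; this is my own sketch, not ProPublica’s actual model:

```python
# A toy simulation: a health score that truly drives complications can look
# "statistically insignificant" once surgeon factors enter the model, if the
# score varies mostly BETWEEN surgeons' caseloads rather than within them.
# This is an illustrative sketch, not ProPublica's model or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_surgeons, cases_each = 40, 100
surgeon = np.repeat(np.arange(n_surgeons), cases_each)

# Each surgeon has a typical case mix; individual patients barely deviate.
case_mix = rng.normal(0.0, 0.6, n_surgeons)
health = case_mix[surgeon] + rng.normal(0.0, 0.05, surgeon.size)

# Complications truly depend on patient health (true coefficient of 1.0).
p = 1.0 / (1.0 + np.exp(-(-1.0 + 1.0 * health)))
y = rng.binomial(1, p)

# Model 1: the health score alone.
m1 = sm.Logit(y, sm.add_constant(health)).fit(disp=0)
print(f"score alone:        coef={m1.params[1]:.2f}  se={m1.bse[1]:.2f}")

# Model 2: add surgeon indicators. They soak up the between-surgeon
# variation that carried almost all of the health signal, so the same
# true effect is now estimated with a far larger standard error.
dummies = np.eye(n_surgeons)[surgeon][:, 1:]  # drop one to avoid collinearity
X2 = sm.add_constant(np.column_stack([health, dummies]))
m2 = sm.Logit(y, X2).fit(disp=0)
print(f"with surgeon terms: coef={m2.params[1]:.2f}  se={m2.bse[1]:.2f}  "
      f"p={m2.pvalues[1]:.2f}")
```

Either the Health Score failed to measure health in the first place, or the model quietly absorbed whatever it did measure. Neither possibility makes the case-mix adjustment trustworthy.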

In conclusion, I’m not against transparency. I just believe that the Surgeon Scorecard is going to be about as helpful to the general public as the release of physician payment data by CMS was in 2014. And, for the record, the latter has been relatively meaningless.

BUILDING ACCOUNTABILITY

Every day, on my way to work, I drive over two bridges. I trust they are strong and have been built appropriately. I’m not certain of these things, but, admittedly, I’ve seen others drive across them, and we have all survived together.

You could release the “building statistics” of all the workers who built those bridges. You could tell me how many similar projects they have been involved with in their careers. You could tell me whether any of those projects experienced adverse events. You could try to risk-stratify those events based upon how long, or how high, or how challenging each bridge was determined to be by some incomplete formula.

Would this make my drive to work safer?

Perhaps, but again, I have my doubts.

Maybe I would ultimately take the longer route to work and use a different bridge, possibly built by a crew with better safety statistics.

But, the truth is that I don’t know much about bridge building. And, no matter how much data you release, it won’t change the fact that building bridges is not what I do. At the end of the day, I really just want a brilliant engineer assuming the risk for all of us.

Friends, accountability is a must. Both for bridge builders and medical providers. Establishing it is paramount.

I’m just less certain that poorly keeping score is the answer.