Why Nobody Believes the Numbers: Distinguishing Fact from Fiction in Population Health Management
Ebook · 304 pages · 3 hours


About this ebook

Why Nobody Believes the Numbers introduces a unique viewpoint to population health outcomes measurement: Results/ROIs should be presented as they are, not as we wish they would be. This viewpoint contrasts sharply with vendor/promoter/consultant claims along two very important dimensions:

(1) Why Nobody Believes presents outcomes/ROIs achievable right here on this very planet…

(2) …calculated using actual data rather than controlled substances.

Indeed, nowhere in healthcare is it possible to find such sharply contrasting worldviews, methodologies, and grips on reality. 

Why Nobody Believes the Numbers includes 12 case studies of vendors, carriers, and consultants who were apparently playing hooky the day their teacher covered fifth-grade math, as told by an author whose argument style can be so persuasive that he was once able to convince a resort to sell him a timeshare. The book's lesson: no need to believe what your vendor tells you -- instead you can estimate your own savings using “ingredients you already have in your kitchen.” Don't be intimidated just because you lack a PhD in biostatistics, or even a Master's, Bachelor's, high-school equivalency diploma, or up-to-date inspection sticker.

Why Nobody Believes the Numbers explains how to determine if the ROIs are real...and why they usually aren't. You'll learn how to:

  • Figure out whether you are "moving the needle" or just crediting a program with changes that would have happened anyway
  • Judge whether the ROIs your vendors report are plausible or even arithmetically possible
  • Synthesize all these insights into RFPs and contracts that truly hold vendors accountable for results
Language: English
Publisher: Wiley
Release date: Jun 11, 2012
ISBN: 9781118332061
Author

Al Lewis


    Book preview

    Why Nobody Believes the Numbers - Al Lewis

    To my fifth-grade math teacher, for doing a better job than the other kids' fifth-grade math teachers.

    Introduction

    This book contains arithmetic. DON'T HIT ME. However, I promise that the arithmetic will be quite accessible, even to people who say they can't do math. Oh, and you think you can't do math? You'll see examples of vendors and consultants whose math skills couldn't land them a job as the before picture on Sesame Street.

    This promise is possible partly because Why Nobody Believes the Numbers largely avoids things that make other books about numbers real turn-offs, such as, for example, numbers. But mostly it's possible because I am a great writer. Don't take my word for it—just look at the evidence:

    I am blessed with such effective persuasive powers that I was once able, against all odds, to convince a resort to sell me a timeshare.

    My first book got eight five-star reviews on Amazon, including four from people I don't even recall having slept with.

    The bottom line: The math is presented clearly enough that readers who understand it can probably continue to live independently for at least a few more years.

    Now that we've settled that issue, let's review what you probably already know if you've bought this book, or borrowed it from a friend temporarily until your own copy(ies) arrive in the mail: Vendors[a] routinely show you outcomes reports for your Population Health Improvement[b] programs whose savings claims are much closer to fiction than fact.

    But you don't know why these numbers are fictional, do you? You leaf through these vendor outcomes reports, squint thoughtfully, and then ask your vendor's account manager: Did you control for regression to the mean? In response, the vendor will babble something about how their methodology is Extremely Scientific, so, of course, it adjusts for regression to the mean. Plus, their methodology is validated by [fill in the name of an actuarial firm that will happily put their name on anything if you pay them enough money], so obviously the resulting savings estimate is accurate because a real actuary says it is.

    You nod your head, and that's the end of the conversation. Vendor 1, Customer 0.

    It's like the time I took my car to the auto mechanic because it was making a funny noise. We raised the hood. As we listened to the engine idling, I rubbed my chin and nodded knowledgeably. Then, I diagnosed the problem right on the spot, just to make sure this shyster wouldn't think he could take advantage of me.

    I said, I think it's time to replace the distributor caps, only to be informed by this particular shyster that this particular problem was unlikely to be distributor caps, because this particular car didn't have distributor caps.

    Lacking any knowledge of what a distributor cap does, whether distributor caps are or ever were found in cars, or for that matter how any car not driven by Fred Flintstone actually gets around, I had no alternative but to take his word for that conclusion, as well as everything else he told me when he presented the bill. He could have told me that spiders had built webs in the canister valve and I would have believed him. Come to think of it, that's what he did tell me.[c]

    And, that, my friends, is roughly what happens in Population Health Improvement just about every time you read a proposal, negotiate a contract, or review an outcomes report. This assertion covers disease management, medical homes, value-based contracting, productivity/absenteeism management, on-site clinics, and especially wellness. (Wellness is five years behind disease management in measurement, which is like being five years behind Iraq in democracy.) Just like the mechanic with the arachnid-infested canister valve, vendors can tell you whatever they want because you don't know what you're doing and they know it.

    This isn't your fault. For starters, you have an actual day job. You've got tons of other things to worry about, whereas vendors have entire staffs whose job is to find new ways to rip you off. You think I'm kidding? We have a dozen vignettes and case studies of vendors performing math-defying feats, in one case making costs disappear even before the program starts.

    Second, you have benefits consultants (or, in the case of health plans, actuaries) whose job it is to analyze this stuff and make sense of it for you. Well, guess what: Your consultants are every bit as ignorant as you are. Once again, if you think I'm kidding, wait 'til you read about some of their misdeeds. Worse, they get paid much more than you do, to the tune of $500/hour, to understand outcomes math, and they still don't. Only two benefits consultants have ever achieved Critical Outcomes Report Analysis certification—and yet, here's what benefits consultants do for a living: critically analyze outcomes reports.

    Why Does the Author Hate Benefits Consultants?

    Some of the references to benefits consultants in this book are not flattering. You might ask: Why does the author hate benefits consultants? When he was a kid, was he bitten by one?

    Quite the contrary: (a) I don't hate benefits consultants, and (b) despite years of therapy, any memory of being bitten by one remains repressed. Paradoxically, I get a large number of referrals from benefits consultants. And I think, based on the side-by-side consulting that I've done with them, benefits consultants excel at the following:

    1. Benefits consulting

    However, I also think, based on the data I've seen, that they are quite bad at the following:

    2. Everything else*

    A health benefit is something that you automatically get as part of your health insurance, like covered drugs, a certain number of visits to the chiropractor, and so forth. When you use those services, you fill out a claim, usually pay a co-pay, and get reimbursed for the rest. Figuring out how to design a benefits structure to control spending while keeping employees happy, and picking a carrier to do that, is the role of the benefits consultant. Many do it very well. Some of the best at those activities, knowing that what they are being asked to measure or procure is not the same as what they are expert at, will call me or someone like me in to work side-by-side with them on…

    …Population Health Improvement (PHI), which this book is about. PHI does not fit in the statutory category of health benefits and therefore is not something that benefits consultants automatically know how to procure or measure. PHI consists of administrative programs designed to reduce the need for benefits by making people healthier on a large scale. There are no claims forms—and certainly no co-pays. The tools that these consultants learned in actuarial charm school to analyze benefits spending do not apply to PHI, though that doesn't stop them from using these tools, often with hilarious results, as case study after case study will show.

    They prefer to spend their (highly billable) time writing overwrought, uber-detailed requests for proposals and/or contracts dictating every conceivable contingency in the life of a disease manager—from what to do following a specified number of unsuccessful attempts to contact patients/employees by phone to what to do in the event of being sexually harassed by a presidential candidate.

    Instead, perhaps they should use their time adopting the specific tools to understand and measure PHI, which are right in this very book, and, as one might conclude from the case studies, apparently nowhere else. So if your consultant is reading this book and especially if he is sharing his copy with you, chances are you've found one willing to learn how to analyze PHI.

    It's also possible they've picked this stuff up through osmosis. If so, that would be reflected in their ability to negotiate a contract and measure an outcome. You can check this yourselves by reading the last chapter to see if your contract(s) for PHI are well-negotiated or your outcomes reports well-vetted.

    * And the reverse is true, too. No one would ever want me doing benefits consulting for him or her. I would have no clue, for instance, how to estimate the effect of a change in drug co-pay levels on total health spending. I'm not an actuary. I don't even play one on TV.

    Perhaps people don't think they have to critically analyze outcomes reports, because their trade association has done that for them. Vendors will often say: Our ROI methodology follows the Care Continuum Alliance (CCA) Outcomes Guidelines. Unfortunately, the industry pre-post methodology section of the CCA Outcomes Guidelines is built on a completely invalid premise. The premise is, as the first chapter will show, mathematically not just questionable, but also provably wrong—and you will also see that another methodology is provably right. To the CCA's credit, they don't even pretend that their guidelines are based on provable math. Instead, they call them consensus guidelines.[d]

    This is a critical hedge: Unlike sociology or philosophy or global warming, real math is not consensus-based. Maybe if you're Dunder Mifflin's Michael G. Scott it is. (Why does 2 + 2 = 4? Because everybody agrees that it should, that's why.) But for the rest of us, math is proof-based. Numbers either add up or they don't. There are not multiple ways of doing math leading to different answers. 2 + 2 = 4, period. Suppose you ask your banker how much money is in your bank account. Your banker doesn't say: Well, it depends on how you measure. There are different methodologies. You'll see for yourself in Chapter 1 (Actuaries Behaving Badly) that the vendor-endorsed industry consensus measurement of today will always overstate savings.
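    As a rough illustration of that last point (this sketch is not from the book, and every number in it is invented), the following Python snippet simulates a naive pre-post measurement: members are picked for a program because their baseline-year costs were high, and their costs then fall the following year with no program in sight, purely through regression to the mean.

```python
import random

# Illustrative only: simulate a baseline year and a following year of costs
# for a large insured population. The two years are drawn independently, so
# any cohort "identified" by high baseline costs will look like it saved
# money in year two, purely from regression to the mean. Dollar figures are
# invented placeholders.

random.seed(0)

def annual_cost():
    # Most member-years are cheap; about 1 in 10 includes a costly event.
    return 1_000 + (20_000 if random.random() < 0.10 else 0)

# (baseline-year cost, following-year cost) for each member
population = [(annual_cost(), annual_cost()) for _ in range(100_000)]

# "Enroll" the members who were expensive in the baseline year.
cohort = [(pre, post) for pre, post in population if pre > 5_000]

pre_avg = sum(pre for pre, _ in cohort) / len(cohort)
post_avg = sum(post for _, post in cohort) / len(cohort)

print(f"cohort size: {len(cohort)}")
print(f"baseline-year average cost:  ${pre_avg:,.0f}")
print(f"following-year average cost: ${post_avg:,.0f}")
print(f"apparent per-member 'savings' with no program at all: ${pre_avg - post_avg:,.0f}")
```

    Even though no program exists anywhere in this simulation, the cohort's second-year costs come in far below its baseline-year costs, and a naive pre-post report would book the entire difference as savings.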

    Then, because this isn't a mystery book, I won't leave you hanging until the end to learn that indeed you can measure outcomes validly using proof rather than consensus. While the actual, mathematically correct, way to measure outcomes is a bit cumbersome, the good news is that …

    … Chapter 2 (How to Measure Outcomes Using Ingredients You Already Have in Your Kitchen) presents an ersatz measurement proxy that can be estimated without math, using observational data to figure out whether you are moving the needle or not. Instead of math, there are a bunch of graphs. This means that, in the immortal words of the great philosopher Yogi Berra, you can observe a lot just by watching. And that's what Chapter 2 is about. Observational data is used to create plausibility tests. You look at actual event rates over time (heart attacks, asthma attacks, and so forth) and ask whether the return-on-investment (ROI) that the vendor is insisting you received is plausible given the changes in event rates over time in your population. Plausibility-checking turns out to be easy, fast, and inexpensive, as well as valid, which is probably why most vendors don't do it, consultants pretend they've never heard of it, and the CCA doesn't emphasize it.
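    As a rough sketch of what such a plausibility test looks like in practice (all figures below are invented placeholders, not data from the book), you can compare the savings a vendor claims against the most that the observed change in event rates could possibly explain:

```python
# Illustrative only: a back-of-the-envelope plausibility test. Compare the
# savings a vendor claims against the most that the observed change in event
# rates could possibly account for. Every figure below is an invented placeholder.

members = 50_000                    # members eligible for the program
baseline_events_per_1000 = 8.0      # e.g., heart attacks per 1,000 members, pre-program year
program_events_per_1000 = 7.4      # same rate during the program year
cost_per_event = 40_000             # average paid cost of one such event
claimed_savings = 6_000_000         # savings the vendor's ROI report claims

# Credit the program with the ENTIRE drop in events -- the most generous case.
avoided_events = (baseline_events_per_1000 - program_events_per_1000) * members / 1_000
savings_ceiling = avoided_events * cost_per_event

print(f"events avoided, at most:   {avoided_events:.0f}")
print(f"plausible savings ceiling: ${savings_ceiling:,.0f}")
print(f"vendor-claimed savings:    ${claimed_savings:,.0f}")
print("plausible" if claimed_savings <= savings_ceiling else "fails the plausibility test")
```

    If the claim exceeds that ceiling even when the program gets credit for every single avoided event, the arithmetic cannot work, no matter what the methodology is called.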

    You don't learn to drive just by reading a book on driving, and you don't learn to apply plausibility tests just by reading a chapter on them. Instead, your next step in learning to drive is to watch someone do it, badly. Badly because engineers say you learn more from one bridge that falls down than from 100 that stay up. In this industry, the ratio is almost the reverse, and that's what Chapters 3 and 4 are all about: Short vignettes and longer case studies--involving disease management, wellness, patient-centered medical homes, and more—of vendors who, analytically speaking, drive drunk. Examples of real companies—possibly including even your own vendor—caught either flunking plausibility tests or simply making up numbers. These examples aren't obscure outcomes reports that we discovered by hacking into someone's voicemail like Rupert Murdoch. In most cases, these violations are right on the vendor's own websites or right in their brochures. It's as if they are daring you to challenge their numbers—and the examples of vendors bragging about their phony numbers epitomize the complete lack of respect that vendors have for you and your consultants.

    Yes, it's true: These companies, carriers, and even states eagerly broadcast their math-defying fantasies with breathless albeit misplaced enthusiasm. One vendor, which we will call Vendor A, used a mass e-mail to proclaim some mathematically impossible results, and urged people to share them. I wrote back and said that I'd happily share those results, as an example of how not to analyze outcomes. The CEO wrote back and said: I want to be VERY clear that the study summary that you received from Vendor A was part of a transmittal to our ‘Friends of Vendor A’ e-mail list. As such, it was not sent to you as a publication or marketing claim, and should not be used by you for any purpose other than to provide feedback to us directly.

    I wrote back and said: If this was private, you shouldn't have posted it on your website. Then the CEO wrote: A published, peer-reviewed article is coming soon. Stay tuned! Almost as if on cue, the New England Journal of Medicine shortly thereafter published a research article showing that utilization reduction using the Vendor A system was 79 percentage points lower than Vendor A's claim.

    Vendor A is a perfect example: Why Nobody Believes the Numbers kicks posterior and takes appellations. If someone from one of the vendors in a case study wants to sue, perhaps on the grounds that their name really is Vendor A, I say, If you sue me, you'll have to sue Newton, Descartes, Pythagoras, and all their buddies, because this isn't about name-calling or accusations. This is about fifth-grade math, and apparently the only thing you know about fifth-grade math is that you can't do it. I can, however. I guarantee it: If any reader can find an invalidating flaw in the math before June 2013, not only will I at my own expense refund your purchase price, but I will even let you still keep the book. That's the kind of warm-hearted, magnanimous guy I am. Plus, I don't want your germs.

    The three subsequent chapters—5 through 8 for those of you keeping score at home—show that it is indeed possible to procure a contract that guarantees valid savings, and have the vendor deliver on those savings. These case studies saved money, for real. The vendors did their job well enough to show savings even though the measurement was valid.

    By the way, those of you keeping score at home need to pay a little bit more attention, even if it means putting down your brewskis for a minute: 5 through 8 is four chapters, not three. It's actually Chapters 5 through 7 that have the case studies. And therein lies my point: Most people will simply accept the math that they read. Why Nobody Believes counsels the opposite: Check every piece of arithmetic you see because in this field most calculations are wrong, often to the point of being impossible.

    Finally, in Chapter 8, you'll be able to synthesize all these insights into a contract or request for proposal (RFP) that lets prospective vendors know that you weren't born yesterday, and that sneaking a phony outcome past you would be like sneaking a Senator past a lobbyist.

    Once you've finished Why Nobody Believes, you'll find that the math is nowhere near as complex as your vendors and consultants make it out to be. Your job will become much easier. True, it still won't be the world's easiest job. That would be: Pip, where the entire job description consists of repeating what Gladys Knight says and occasionally adding train noises.

    You'll find the Pips apropos for another reason, too: When you identify your first measurement fallacy, you'll want to jump up from your chair and shout: Whoo-whoo!

    [a] Or carrier. Unless otherwise indicated in this book or obvious from the context, a vendor is any organization, whether independent or carrier-affiliated, that sells a program where outcomes need to be measured. Whether the vendors in question are independent or carrier-affiliated is indicated in their code names, like Vendor A or Health Plan C.

    [b] PHI encompasses all programs designed to save money by improving health or access to care. See the Glossary for a full definition of this term and others used in this book.

    [c] That really is what he told me. Apparently spiders are so attracted to canister valves in Acuras that the fix is provided free. I am not a vendor so nothing in this book is made up.

    [d] Oscar Wilde noted that there is no need to cheat if you hold the winning cards, so full disclosure: The CCA emphasizes on page 34 of their guidelines that their consensus methodology, even with their many adjustments (a list that somehow leaves out the adjustment suggested in Why Nobody Believes that would actually give their methodology a shot at being valid), does not approach the accuracy of a true randomized controlled trial (RCT). If you could do RCTs in the real world, you'd probably get the numbers right, but half of your members would be pretty miffed at you, which is why few payors ever do them.

    Chapter 1

    Actuaries Behaving Badly

    Let's start by exploring the validity of the most popular pre-post guidelines, as compared to another much less publicized methodology. By then attempting
