What’s behind all these assessments of digital health?

By MATTHEW HOLT

A decent amount of time in recent weeks has been spent hashing out the conflict over data. Who can access it? Who can use it for what? What do the new AI tools and analytics capabilities allow us to do? Of course the idea is that this is all about using data to improve patient care. Anyone who is anybody, from John Halamka at the Mayo Clinic down to the two guys with a dog in a garage building clinical workflows on ChatGPT, thinks they can improve the patient experience and improve outcomes at lower cost using AI.

But if we look at the recent changes to patient care, especially those brought on by digital health companies founded over the past decade and a half, the answer isn’t so clear. Several of those companies, whether they are trying to reinvent primary care (Oak, Iora, One Medical) or change the nature of diabetes care (Livongo, Vida, Virta et al) have now had decent numbers of users, and their impact is starting to be assessed. 

A cottage industry of organizations has sprung up to look at these interventions. Of course the companies concerned have their own studies, in some cases several years’ worth. Their logic always goes something like “XY% of patients used our solution, most of them like it, and after they use it hospital admissions and ER visits go down, and clinical metrics get better”. But organizations like the Validation Institute, ICER, RAND and, more recently, the Peterson Health Technology Institute have declared themselves neutral arbiters and started conducting studies or meta-analyses of their own. (FD: I was for a brief period on the advisory board of the Validation Institute.) In general the answer is that digital health solutions ain’t all they’re cracked up to be.

There is of course a longer history here. Since the 1970s policy wonks have been trying to figure out whether new technologies in health care were cost effective. The discipline is called health technology assessment, and it even has its own journal and society, at a meeting of which in 1996 I gave a keynote about the impact of the internet on health care. I finished my talk by telling them that the internet would have little impact on health care, that it was mostly used for downloading video clips, and that I was going to show them one. I think the audience was relieved when I pulled up a video of Alan Shearer scoring for England against the Netherlands in Euro 96 rather than certain other videos the Internet was used for then (and now)!

But the point is that, particularly in the US, assessment of the cost effectiveness of new tech in health care has been a sideline. So much so that when the Congressional Office of Technology Assessment was closed by Gingrich’s Republicans in 1995, barely anyone noticed. In general, we’ve done clinical trials that were supposed to show whether drugs worked, but we have never really bothered figuring out whether they worked any better than the drugs we already had, or whether they were worth the vast increase in costs that tended to come with them. That doesn’t seem to be stopping Ozempic from making Denmark rich.

Likewise, new surgical procedures get introduced and trialed long before anyone systematically figures out whether we should be doing them or not. My favorite tale here is of general surgeon Eddie Joe Reddick, who discovered some French surgeons doing laparoscopic gallbladder removal in the 1980s and imported it to the US. He traveled around the country charging a pretty penny to teach other surgeons how to do it (and how to bill more for it than the standard open surgery technique). It’s not like there was some big NIH-funded study behind this. Instead an entrepreneurial surgeon changed an entire very common procedure in under five years. The end of the story was that Reddick made so much money teaching surgeons how to do the “lap chole” that he retired and became a country & western singer.

Similarly, in his very entertaining video, Eric Bricker points out that we do more than double the amount of imaging that is common in European countries. Back in 2008 Shannon Brownlee spent a good bit of her great book Overtreated explaining how the rate of imaging skyrocketed with no improvement in our diagnosis rates or outcomes. Shannon, by the way, declared defeat and also got out of health care, although she’s a potter, not a country singer.

You can look at virtually any aspect of health care and find uses of technology that don’t appear to be cost effective, and yet are widespread and paid for.

So why are the knives out for digital health specifically?

And they are out. ICER helped kill the digital therapeutics movement by declaring several solutions for opioid use disorder ineffective, letting several health plans use that as an excuse not to pay for them. Now Peterson, which is using a framework from ICER, has basically said the same thing about diabetes solutions and is moving on to MSK, with presumably more categories to be debunked on deck.

One of the more colorful players in this whole arena is Al Lewis, who is the worst type of true believer–a convert. Back in the 1990s Al Lewis was the head cheerleader for something called Disease Management, which was kind of like “digital health 0.5”. In the mid-2000s CMS put a bunch of these disease management programs into a study called Medicare Health Support. The unpleasant answer was that disease management didn’t work and cost more than it saved. Much of the problem was that these programs were largely phone-based and not integrated with the physician care the patients were receiving. Meanwhile Al Lewis (I’m using his full name so you don’t think Al is AI!) has since taken his analytical sword to disease management, prevention and wellness programs, and now several digital health companies, proving that many of them don’t save the money they claim. He does this usually in a very funny way, along with lots of $100k bets which he never pays out on (and never wins either)!

Which leads me to another skeptical player coming at this from a slightly different angle. Brian Dolan, in his excellent Exits & Outcomes newsletter, pointed out that there was something rather strange about the Peterson study. Dolan noted that Peterson picked one study of Livongo’s A1c reduction (not the one Livongo did itself, which was well critiqued by Al Lewis) and extrapolated the clinical impact from that one study as being the same for all the companies’ solutions–even though Livongo had previously done very few studies compared to, say, Omada Health.

Peterson then pulled a different, random study from the literature to extrapolate the financial impact of that A1c reduction. What it didn’t do is pull the claims data from patients actually using these solutions, even though Peterson’s advisory board is a who’s who of health insurers. So of course we could get better real-world data, but why bother when we can effectively guess and extrapolate? It’s also worth mentioning that many of those insurers, including Aetna & United, have competing diabetes products too.

So you might think that the very well-funded Peterson Institute could or should have done rather more, and certainly might have included some of the solutions being marketed by the health insurers on its advisory board too.

This is not to say that the digital health companies have done great studies. Like everyone else’s in health care, their reporting and studies are all over the map, and plenty of them make claims that push the limits, clearly because they have commercial reasons to do so.

But it’s also true that many haven’t needed those studies to grow commercially. The poster child here is Livongo, which grew from nothing in 2015 to over 600 employer clients and 150,000 patients by the time it went public in 2019–all while publishing only one study, right at the end of that period. The reasons for that growth were that Livongo cost the same as what the employer was already paying for diabetes strips (which it included as a loss leader), it lined up favorable business arrangements with Mercer and CVS to get to employers, and in general the patients liked it. Al Lewis doesn’t agree with that last part (pointing to a few bad Amazon reviews), but Peterson actually noted lots of positive user reviews of the diabetes solutions in its “patient perspective” section–which had no impact on its overall negative evaluation.

My assessment is that, while the individual health services researchers at Peterson et al mean well, we are witnessing another power struggle. The current incumbents have done things one way. Several of these new digital health companies are providing more continuous and more comprehensive approaches to patient care–which some patients seem to like. Of course the incumbent providers and insurers could have tried these approaches over the decades. It’s not as if we had data showing everything was hunky-dory over the last 40 years. But America’s hospitals, doctors and insurers did what they always did, and continued to get rich.

Now there’s a new set of tech-enabled players and there’s a choice that potentially could be made. Should we move to a system with comprehensive, constant monitoring of chronically ill patients, and see how we can improve that? Or should we let the incumbents determine the pace of that change? I think we all know the incumbents’ answer, and to me that puts all these analyses of digital health in perspective.

After all, would those incumbents be happy with similar levels of rigor being applied to their current activities?
