AI discussion is everywhere, currently infusing the whole of the healthcare news and insight ecosystem. And yet, it remains an exceptionally difficult topic for many healthcare leaders to write, read, or talk about in a meaningful way.
To bring clarity, Union's newest research report (which members may access in full here) is aimed at a specific audience and time horizon. As always, we are addressing the healthcare executive generalist: individuals who are conversant in the industry at large but who constantly need updated understanding across the many strategically important domains they track. This includes general management executives, board members, and strategy leaders, among others. We believe that this group is typically best served by an explicit three-to-five year outlook. This timeframe ensures discussion is forward-looking, but not overly futuristic in nature. It also aligns with a typical strategic planning cycle. To further ground the discussion, we emphasize a "commonsense" view and a question/answer structure for the work (more on this in a moment).
To frame this work, we began with the truism that AI is both over-hyped and under-leveraged. It’s traveling along the oft-cited innovation hype cycle, in which we tend to overestimate the short-term impact of an innovation, while underestimating the long-term. Our own primary research corroborated this sentiment: AI was the healthcare innovation vector most often picked as likely to make a meaningful impact in the next five years but was also most frequently cited as overhyped.
Clearly, the question is not whether AI will be adopted in the next five years, but where, how, and by how much. It is equally clear, then, that all executive generalists should have baseline AI knowledge to help guide decision-making about potential AI-based solutions. This report is therefore structured around six commonsense questions that any thoughtful board member or strategy leader might pose—and should have educated answers to.
In this post, we'll share our perspective on questions 1 and 2—members may access the full report here.
Insight #1: Surging interest in generative AI will not prompt immediate widespread adoption, but it will increase organizational AI IQ and momentum for more mature applications, such as predictive AI.
While the first generative AI application was developed in the 1960s, it wasn’t until the recent launch of ChatGPT in November 2022 that generative AI made its way into the mainstream—and touched off an explosion of interest in AI generally. ChatGPT enabled individuals to interact and experiment with AI in a practical and accessible way. It has also demonstrated generative AI’s ability to produce content with some level of human-like nuance, as opposed to the clunkier, more limited-use chatbots that had preceded it.
The spectrum of potential healthcare applications for generative AI is wide, particularly in cases where the inputs can be tightly controlled and trusted.
One particularly clear-cut example that has generated little pushback: generative AI could be tremendously useful in providing language translation services.
Other possible applications are less clear-cut, and also more fraught; for example, early research indicates that patients may find AI-generated responses to be more empathetic than the language of human clinicians. This suggests that there may be interesting applications for tailoring patient communications, but it has also generated significant concern among physicians.
In fact, despite its potential, there are a wide range of concerns and open questions about the use of generative AI in healthcare. These range from challenges around ensuring the accuracy of its generated content, to the general wish to preserve the ‘human touch’ in healthcare, to the desire to protect healthcare professions and employment.
As these ideas, concerns, and questions have played out in both industry and mainstream news, professional interest in AI has skyrocketed. At healthcare organizations, this interest has manifested in several concrete ways:
Mandates from the board: Boards are proactively approaching their organization’s executive teams and IT departments with questions about AI and requests to understand their organization’s approach to AI.
Dedicated budget for AI experimentation: Many organizations have dedicated a (sometimes significant) portion of their innovation budget toward AI specifically.
Increased willingness among frontline workers to try solutions out: Widening familiarity with generative AI has made previously reticent portions of the healthcare workforce—particularly clinical workers—more open to the idea of at least piloting AI applications (for more on this, see insight #2).
Even if generative AI faces significant hurdles before widespread adoption, these larger organizational shifts are a notable development. ChatGPT has increased awareness of AI as a whole, opening the door to a more mature set of predictive AI technologies that we think will see a meaningful uptick in adoption in the coming years.
In some cases, generative AI can even enhance the power of predictive AI; for example, by more easily enabling the creation of synthetic data sets (which can be an efficient means for enhancing the statistical significance of predictive models) or by helping to structure data for input into predictive models, work that is currently done manually.
Insight #2: Clinicians and patients have developed more positive—and more nuanced—views on AI; the probable rate-limiter is no longer ‘whether to use it at all’, so much as ‘in what way’.
A bedrock piece of industry conventional wisdom about AI has long been that the overall adoption rate-limiter will be clinician or patient acceptance, or the lack thereof. This picture has changed meaningfully in a short time, thanks not only to changing views, but also to more and better perception data. The arrival of ChatGPT has spurred more AI opinion research in healthcare, and these studies have illuminated important nuances within clinician and patient views. They show that many clinicians and patients are still concerned about the use of AI in healthcare; however, overall acceptance of AI has grown, especially among clinicians.
In 2019, Medscape reported that nearly half of physicians in the U.S. were uncomfortable with the general idea of AI-powered tools. In contrast, a 2023 Medscape survey of over 1,000 physicians found that only 28% of physicians categorized themselves as “apprehensive”. In fact, the plurality of physicians (42%) characterized themselves as “enthusiastic”, with the remainder (30%) describing themselves as neutral.
A critical nuance in current AI perception data is that surveys now test views on specific applications of AI, not just general sentiment about the technology per se. Currently, while most physicians remain concerned about the potential use of AI to drive diagnoses independently, the majority are enthusiastic about its potential use as an adjunct to diagnosis. And, in a stark contrast to the general public (more on this in a moment), many physicians believe that AI will enable them to spend more time with their patients, not less. This is likely a function of the types of tasks that physicians report being most open to using AI for, with greater enthusiasm for administrative tasks and assistance with diagnosis vs. actual patient communication or treatment.
It's important to note that fresh data on the patient side is less positive than on the physician side; about 60% of patients report feeling uncomfortable with the idea of AI being used in their care. However, public opinion data also shows that higher levels of AI familiarity are correlated with greater comfort with its use in healthcare. In a Pew survey of over 11,000 Americans conducted in December of 2022 (i.e., immediately following ChatGPT’s launch), respondents who reported knowing “a lot” about AI were evenly split on being comfortable vs. not comfortable with its use in healthcare. On the other hand, among those who knew nothing about AI, a full 70% reported being uncomfortable with its use. These results suggest that as familiarity with AI continues to grow, acceptance will likely grow too.
More granular data also now show the public recognizing trade-offs between different AI benefits and risks. The largest concerns are information security and potential compromise to personal relationships with providers (a concern that may eventually be allayed given the types of applications physicians are likely to embrace, as noted previously).
On the flip side, respondents are more optimistic about AI’s ability to reduce medical errors; this is a notable departure from research published just a few years ago, in which increased medical errors were a top concern. The public also believes in AI’s potential to help reduce racial disparities.
Parting thoughts
The rest of this report includes our perspective on the remaining four questions, a simple resource guide for more information on the current regulatory approach to AI in healthcare, and our very own AI application feasibility calculator, which both application developers and enterprise buyers may use to assess the near-term feasibility of a given AI application.
Not yet a member, but interested in the full report? Reach out to us at info@unionhealthcareinsight.com.