The use of Artificial Intelligence in health care is not new. Remote patient monitoring, decision support tools, and imaging detection and analytics have all been used for decades. However, as AI becomes more sophisticated, its applications are extending to the front lines of care, bringing new challenges and opportunities.
The 2023 Futurescan publication, produced by the Society for Health Care Strategy & Market Development and the Foundation of the American College of Healthcare Executives, discussed applications for AI, including:
- Advance Care Planning: Using predictive analytics, providers can identify patients at high risk of mortality within the next six months. This enables them to begin advance care conversations earlier and focus palliative care resources on the highest-risk patients.
- Addressing Patient No-Shows: AI can identify patients at high risk of missing appointments. This allows operational teams to reach out proactively and identify social barriers, such as transportation, that may keep patients from attending.
- Remote Diagnostics: AI can extend providers' reach by delivering expert diagnostics remotely. This is especially important in rural settings and in specialty areas with workforce shortages.
- Predictive Analytics: Using AI, systems can identify high-risk patients based on utilization, admissions, ER visits and readmissions, as illustrated in the sketch after this list. Resources can then be deployed early to ensure patients receive appropriate care management and intervention.
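To make the predictive-analytics idea concrete, here is a minimal sketch of how a utilization-based risk score might be built. It assumes synthetic data and a basic logistic-regression model; the feature set, the synthetic labels and the top-decile outreach cutoff are illustrative assumptions, not details from the Futurescan publication.

```python
# Minimal sketch: ranking patients by predicted risk from utilization data.
# All features, labels and thresholds are synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical per-patient features over the past year:
# inpatient admissions, ER visits, 30-day readmissions.
X = np.column_stack([
    rng.poisson(1.0, n),
    rng.poisson(2.0, n),
    rng.poisson(0.3, n),
])

# Synthetic "high-risk" label that loosely tracks utilization,
# standing in for an outcome such as an unplanned admission.
logits = 0.8 * X[:, 0] + 0.5 * X[:, 1] + 1.2 * X[:, 2] - 3.0
y = rng.random(n) < 1 / (1 + np.exp(-logits))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Risk scores let care-management teams rank patients and direct
# outreach to the highest-risk group first (here, the top decile).
risk = model.predict_proba(X_test)[:, 1]
top_decile = risk >= np.quantile(risk, 0.9)
print(f"Patients flagged for proactive outreach: {top_decile.sum()} of {len(risk)}")
```

In practice, a health system would train on real claims and EHR data, validate the model clinically and choose the outreach cutoff based on available care-management capacity.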
Alongside these advances and benefits, two key challenges emerge:
- Data Security and Integrity
- Consumer Resistance and Human Interaction
Data Security and Integrity
Nearly half (48%) of Futurescan survey respondents said that by 2028 their hospital or health system will have a complete IT infrastructure in place to implement AI that assists and augments clinical decision-making. And 53% of the same respondents believe that by 2028, a federal regulatory body will have determined that AI for augmenting clinical care delivery is safe for use by their hospital or health system.
With the increasing use of AI and health data, health-care organizations need effective measures to keep patient data secure. Experts recommend assembling multidisciplinary teams that include both data science and clinical care experts to assess AI applications.
Shawn Wang, chief AI officer at Elevance Health, spoke at a 2023 HIMSS panel about his system’s experience with AI. Discussing a large project to facilitate informed, novel connections between patients and providers, Wang stressed the importance of interdisciplinary perspectives: the team initially built the algorithm from a purely technical perspective, overlooking the business perspective, and failed to gather provider feedback. Both omissions caused setbacks.
Data integrity is another key issue. Artificial Intelligence is only as good as the data it learns from, which often contains a degree of bias. For example, if AI bases risk identification on health-care claims and large populations of patients are not receiving care, the absence of claims does not necessarily indicate a healthy population. Rather, as with marginalized populations, it may represent patients who lack access to care.
“Many healthcare organizations implement AI technologies to help identify patients who may need closer management during care,” said Jeremy VanderKnyff, Ph.D., chief integration and informatics officer for Proactive MD. “However, because the tools have now been trained on fundamentally flawed datasets, they often underestimate patient risks.”
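A toy simulation can make this failure mode visible. The sketch below is hypothetical: it assumes a synthetic population in which one subgroup rarely generates claims, and shows how a model trained on claims volume then scores that subgroup as low risk even though its underlying need is unchanged. The access flag and all numbers are invented for illustration.

```python
# Minimal sketch of the claims-data bias described above.
# The population, access flag and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# True underlying health need (unobserved by the model).
need = rng.random(n)

# Hypothetical access-to-care flag: a subgroup rarely generates claims.
has_access = rng.random(n) < 0.8

# Observed claims only materialize when a patient can access care,
# so claims counts understate need for the no-access group.
claims = np.where(has_access, rng.poisson(5 * need), 0)

# Train a risk model on claims volume alone, with "high need" as target.
y = need > 0.7
model = LogisticRegression().fit(claims.reshape(-1, 1), y)
scores = model.predict_proba(claims.reshape(-1, 1))[:, 1]

# The model systematically underestimates risk where claims are absent.
print("Mean predicted risk (with access):", round(float(scores[has_access].mean()), 3))
print("Mean predicted risk (no access):  ", round(float(scores[~has_access].mean()), 3))
print("Actual high-need rate (no access):", round(float(y[~has_access].mean()), 3))
```

The no-access group’s predicted risk collapses toward zero even though its actual high-need rate is the same as everyone else’s, which is the underestimation VanderKnyff describes.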
Consumer Resistance and Human Interaction
Consumer preference is a key aspect of AI in health care. A Pew Research Center survey explored public views on the use of AI in clinical care. Sixty percent of respondents said they are uncomfortable with AI being used in their health care.
Resistance stems primarily from a lack of confidence that AI will improve care outcomes. The survey, conducted in December 2022, found that only 38% of Americans believe using AI to help diagnose disease and recommend treatments would lead to better patient health outcomes, and one-third of respondents thought it would lead to worse results.
Respondents are also concerned about the impact of AI on their relationship with their provider: 57% believed the use of AI would worsen the patient-provider relationship.
“Nothing can replace the compassionate care of a provider,” Dr. VanderKnyff said in a Health Leaders editorial. “They can see you, hear your stories, grieve with you, rejoice with you, and offer a human experience that can never be replicated – even with all the knowledge in the world at their fingertips.”
At its core, AI can make it easier for clinicians to make the right decision, but it does not make decisions on behalf of providers. The intent is to reduce administrative burden so providers can spend more time with patients, not to replace sound clinical judgment or personal interaction.
Current decision-support systems can improve patient safety through error detection, risk stratification and medication management. In the Futurescan publication, Juan Rojas, MD, medical director in the analytic interventions unit at UChicago Medicine, said, “One of the main aspirational visions is improving patient safety across the board: identifying diagnoses, problems or risks for an event earlier so that you might change the trajectory of the final outcome for the better.”
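As a simple illustration of one such capability, the sketch below implements a rule-based medication-interaction check, one basic form of decision support for medication management. The two-entry interaction table is a hypothetical stand-in; production systems draw on large, curated clinical drug databases.

```python
# Minimal sketch of a rule-based medication-interaction check.
# The interaction table is a hypothetical stand-in for a real
# clinical knowledge base, included only for illustration.
INTERACTIONS = {
    frozenset({"warfarin", "aspirin"}): "increased bleeding risk",
    frozenset({"lisinopril", "spironolactone"}): "hyperkalemia risk",
}

def check_interactions(med_list: list[str]) -> list[str]:
    """Return warnings for any known pairwise interactions in a med list."""
    meds = [m.lower() for m in med_list]
    warnings = []
    for i, a in enumerate(meds):
        for b in meds[i + 1:]:
            note = INTERACTIONS.get(frozenset({a, b}))
            if note:
                warnings.append(f"{a} + {b}: {note}")
    return warnings

print(check_interactions(["Warfarin", "Aspirin", "Metformin"]))
# -> ['warfarin + aspirin: increased bleeding risk']
```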
The growing applications of AI will profoundly impact health-care delivery and outcomes. This is especially true in value-based care, where predictive analytics can help organizations identify high-risk populations, proactively manage their care, and strengthen the patient experience and health outcomes.