Agentic AI is a hot topic across numerous industries, including healthcare. Everyone seems to be buzzing about AGI, envisioning Jarvis from Iron Man: AI systems with human-level intelligence and decision-making.
The excitement is understandable. Suddenly there is a real opportunity to build models and robots that can perform human tasks and interact and engage with people. Businesses are enthusiastic about this capability, especially in industries facing labor shortages, where the promise is to replace human workers with AI systems that automate tasks more efficiently, with fewer errors and no financial compensation.
But when it comes to healthcare, I think we need to take a minute and look deeper. Healthcare is a highly regulated industry for good reason, and it cannot absorb some of the risks and failures inherent in AI the way other industries can. We need to evaluate agentic AI through the specific lens of the healthcare industry and its workflows, weighing the needs of providers, healthcare organizations, and patients, and recognizing that healthcare is not monolithic: some workflows are far more appropriate than others for implementing AI.
Understanding this requires being honest about the strengths and weaknesses of agentic AI. LLMs excel at creative processes; for example, ChatGPT can instantly write a new song about Taylor Swift’s latest album in the style of Dr. Seuss from a minimal prompt. Rules-based engines, by comparison, are good at structured, deterministic output. Consider a rule for a self-driving car: “WHEN a traffic light is red, THEN stop.” What’s fascinating about agentic AI is that it falls somewhere in the middle, partly rules-based and partly creative. Navigating that middle ground, and judging the risk and reward of when it should be applied in healthcare workflows, is the big challenge.
The most useful heuristic I have found on this topic is risk versus consequence. I define risk as the probability that something will fail and consequence as the cost of that failure.
In workflows where the stakes are high, you don’t want the agentic process to own them. The reality is that every AI model will fail at some point, and when that failure affects healthcare outcomes, the cost can simply be too high.
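To make the heuristic concrete, here is a minimal sketch in Python that scores a few workflows by expected harm (risk multiplied by consequence) and only nominates low-expected-harm work for agentic automation. The workflow names, probabilities, severity scores, and threshold are illustrative assumptions, not measured values.

```python
# A minimal sketch of the risk-versus-consequence heuristic described above.
# The workflows, probabilities, and severity scores are illustrative
# assumptions, not measured values.

from dataclasses import dataclass


@dataclass
class Workflow:
    name: str
    failure_probability: float  # "risk": chance the agent gets it wrong (0.0-1.0)
    failure_severity: float     # "consequence": cost of failure (0 = trivial, 10 = catastrophic)


def expected_harm(w: Workflow) -> float:
    """Expected cost of failure = risk multiplied by consequence."""
    return w.failure_probability * w.failure_severity


def suitable_for_agent(w: Workflow, threshold: float = 1.0) -> bool:
    """Only hand a workflow to an agent when the expected harm is low."""
    return expected_harm(w) < threshold


workflows = [
    Workflow("Recommend a change to a treatment plan", 0.2, 10.0),
    Workflow("Pre-fill an intake form from prior records", 0.1, 1.5),
]

for w in workflows:
    verdict = "agent candidate" if suitable_for_agent(w) else "keep a human in charge"
    print(f"{w.name}: expected harm {expected_harm(w):.2f} -> {verdict}")
```

The point is not the specific numbers but the ordering: any workflow whose failure lands directly on a patient sits above the threshold.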
Here are two examples where agentic workflows would not work:
- Authoring or defining advance directives (end-of-life planning). This is clearly a creative and interpretive workflow that requires empathy, human experience, and judgment about when to guide and when to listen, because the sources of information (people and their situations) are not all alike. It’s also a situation with high consequences that you don’t want to get wrong in any way, shape, or form.
- Managing triage in an ER, a chaotic, fast-moving environment. People are best suited to making quick decisions in that setting; there’s no time to feed data into an agent.
However, here are two examples of where agentic AI would work in healthcare:
- Unlocking EHR data by using an agent to automate a sequence of tasks that requires navigating a user interface. Enterprise software has done this before; it used to be called RPA, or robotic process automation, but agent-powered processes can now do it with much more resilience (a minimal sketch of the pattern follows this list).
- Reviewing patient charts to make sure emerging chronic conditions weren’t missed by clinicians.
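To illustrate what “more resilience” means in practice, here is a minimal sketch of the observe-decide-act loop behind agent-driven UI automation. Every function name in it (read_screen_state, propose_next_action, execute_action, task_complete) is a hypothetical placeholder, not a real RPA or EHR API; the point is that each step is chosen from the current screen state rather than hard-coded, so small interface changes don’t break the flow the way they break a scripted RPA bot.

```python
# A minimal sketch of an agent-driven automation loop. All callables here are
# hypothetical placeholders supplied by the integrator, not a real RPA or EHR API.

from typing import Callable


def run_agent(
    goal: str,
    read_screen_state: Callable[[], str],            # e.g., accessibility tree or OCR of the current screen
    propose_next_action: Callable[[str, str], str],  # an LLM picks the next click/keystroke from goal + state
    execute_action: Callable[[str], None],           # performs the proposed action in the UI
    task_complete: Callable[[str], bool],            # checks whether the goal is satisfied
    max_steps: int = 20,
) -> bool:
    """Observe-decide-act loop: unlike a scripted RPA bot, each step is chosen
    from what is actually on screen, so minor UI changes don't break the flow."""
    for _ in range(max_steps):
        state = read_screen_state()
        if task_complete(state):
            return True
        action = propose_next_action(goal, state)
        execute_action(action)
    return False  # step budget exhausted; escalate to a human rather than guess
```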
Right now, I think agentic AI is at a stage where it will fail if it starts telling doctors what to do and takes over decision making, and, worse, it will erode everyone’s trust in artificial intelligence in medicine by association. When patient safety, empathy, and human judgment take precedence over cost savings and potential efficiency gains, a human must be in the loop. But automating tasks, note taking, and data extraction, the tedious, manual processes in healthcare that do not require human intervention at every step, is a great place to begin implementing AI. The healthcare industry should be looking at how AI can find and present relevant data, with context, so that human clinicians can make informed decisions. That frees up providers’ time, reduces burnout, and lets them offer the more personalized care that improves patient outcomes.
Isaac Park spent his youth tinkering with technology and, in high school, began his formal education in software development. After moving to Durham, North Carolina, he graduated from Duke University with a Bachelor of Science in Computer Science. Isaac began his technology career immediately, working as a software developer building front-end frameworks. He then moved into a product management role, guiding stakeholders and technical teams through a wide variety of projects from inception to final release.
In 2009, he co-founded an innovation and product studio, Pathos Ethos, and guided startups and corporate innovation teams through business-changing digital products in the healthcare and defense verticals, from releasing a multi-million-dollar software product to building a native mobile application used by over 1 million users concurrently. At the end of 2022, he exited Pathos Ethos and joined the Duke Pratt School of Engineering, serving as faculty for the Christensen Family Center for Innovation in Product Management and Innovation. In 2023, he co-founded an AI-native healthcare technology company, Keebler Health, where he currently serves as CEO.