
As the rush toward AI in healthcare continues, explainability is crucial

Explainable AI aims to make models more interpretable and transparent, ensuring their decision-making processes can be understood and trusted by clinicians and others.
By Bill Siwicki
August 20, 2024
12:24 PM

Neeraj Mainkar, vice president of software engineering and advanced technology at Proprio

Photo: Neeraj Mainkar

Artificial intelligence is seeing massive interest in healthcare, and scores of hospitals and health systems have already deployed the technology – more often than not on the administrative side – to great success.

But success with AI in the healthcare setting – especially on the clinical side – can't happen without addressing the growing concerns around models' transparency and explainability. 

In a field where decisions can mean life or death, being able to understand and trust AI decisions isn't just a technical need – it's an ethical must.

Neeraj Mainkar is vice president of software engineering and advanced technology at Proprio, which develops immersive tools for surgeons. He has considerable expertise in applying algorithms in healthcare. Healthcare IT News spoke with him to discuss explainability, and the need for patient safety and trust, error identification, regulatory compliance and ethical standards in AI.

Q. What does explainability mean in the realm of artificial intelligence?

A. Explainability refers to the ability to understand and clearly articulate how an AI model arrives at a particular decision. In simpler AI models, such as decision trees, this process is relatively straightforward because the decision paths can be easily traced and interpreted.
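The traceability of simple models that Mainkar describes can be made concrete with a small sketch. The following is an illustration only, not anything from the interview: a toy decision tree with made-up features and thresholds (not a clinical model), where every prediction carries a human-readable trace of the exact rules that produced it.

```python
# A toy decision tree for illustration only: each internal node tests one
# feature against a threshold, so every prediction comes with a full,
# human-readable trace of the rules that produced it.
# (Hypothetical features and thresholds -- not a real clinical model.)

TREE = {
    "feature": "heart_rate", "threshold": 100,
    "left": {  # heart_rate <= 100
        "feature": "spo2", "threshold": 94,
        "left": {"label": "elevated risk"},   # spo2 <= 94
        "right": {"label": "low risk"},       # spo2 > 94
    },
    "right": {"label": "elevated risk"},      # heart_rate > 100
}

def predict_with_trace(node, patient, trace=None):
    """Walk the tree, recording each threshold test on the way to a leaf."""
    trace = [] if trace is None else trace
    if "label" in node:                       # leaf: return decision + path
        return node["label"], trace
    value = patient[node["feature"]]
    went_left = value <= node["threshold"]
    op = "<=" if went_left else ">"
    trace.append(f"{node['feature']} = {value} {op} {node['threshold']}")
    child = node["left"] if went_left else node["right"]
    return predict_with_trace(child, patient, trace)

label, trace = predict_with_trace(TREE, {"heart_rate": 88, "spo2": 92})
print(label)   # elevated risk
print(trace)   # ['heart_rate = 88 <= 100', 'spo2 = 92 <= 94']
```

A deep neural network offers no analogue of this trace: its "decision path" is a composition of millions of weighted activations, which is exactly the interpretability gap discussed below.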

However, as we move into the realm of complex deep learning models, which consist of numerous layers and intricate neural networks, the challenge of understanding the decision-making process becomes significantly more difficult.

Deep learning models operate with a vast number of parameters and complex architectures, making it nearly impossible to trace their decision paths directly. Reverse engineering these models or examining specific issues within the code is exceedingly challenging.

When a prediction does not align with expectations, pinpointing the exact reason for this discrepancy is difficult due to the model's complexity. This lack of transparency means even the creators of these models can struggle to fully explain their behavior or outputs.

The opacity of complex AI systems presents significant challenges, especially in fields like healthcare, where understanding the rationale behind a decision is critical. As AI continues to integrate further into our lives, the demand for explainable AI is growing. Explainable AI aims to make AI models more interpretable and transparent, ensuring their decision-making processes can be understood and trusted.

Q. What are the technical and ethical implications of AI explainability?

A. Striving for explainability has both technical and ethical implications to consider. On the technical side, simplifying models to enhance explainability can reduce performance, but it also helps AI engineers debug and improve algorithms by giving them a clear understanding of where a model's outputs come from.

Ethically, explainability helps identify biases within AI models and promotes fairness in treatment, guarding against discrimination toward smaller, less represented groups. Explainable AI also ensures end users understand how decisions are made while protecting sensitive information, in keeping with HIPAA.

Q. Please discuss error identification as it relates to explainability.

A. Explainability is an important component of effective identification and correction of errors in AI systems. The ability to understand and interpret how an AI model reaches its decisions or outputs is necessary to pinpoint and rectify errors effectively.

By tracing decision paths, we can determine where the model might have gone wrong, allowing us to understand the "why" behind an incorrect prediction. This understanding is critical for making the necessary adjustments to improve the model.

Continuous improvement of AI models heavily depends on understanding their failures. In healthcare, where patient safety is of utmost importance, the ability to debug and refine models quickly and accurately is vital.

Q. Please elaborate on regulatory compliance regarding explainability.

A. Healthcare is a highly regulated industry with stringent standards and guidelines that AI systems must meet to ensure safety, efficacy and ethical use. Explainability is important for achieving compliance, as it addresses several key requirements, including:

  • Transparency. Explainability ensures every decision made by the AI can be traced back and understood. This transparency is needed for maintaining trust and ensuring AI systems operate within ethical and legal boundaries.
  • Validation. Explainable AI facilitates the demonstration that models have been thoroughly tested and validated to perform as intended across diverse scenarios.
  • Bias mitigation. Explainability allows for the identification and mitigation of biased decision-making patterns, ensuring models do not unfairly disadvantage any particular group.
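One common form a bias check takes is a subgroup audit: disaggregating a model's error rate by group and looking for gaps. The sketch below is a hypothetical illustration with made-up records and group labels, not real data or any method attributed to the interview.

```python
# Hypothetical subgroup audit: compare a model's error rate across groups
# to surface potentially biased decision patterns.
# (Made-up records and group labels -- not real data.)
from collections import defaultdict

records = [
    # (group, model_prediction, actual_outcome)
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 1), ("group_b", 0, 1),
]

totals, errors = defaultdict(int), defaultdict(int)
for group, predicted, actual in records:
    totals[group] += 1
    if predicted != actual:
        errors[group] += 1

error_rates = {g: errors[g] / totals[g] for g in totals}
for group, rate in error_rates.items():
    print(f"{group}: error rate {rate:.0%}")
```

A large gap between groups is a signal to investigate the model and its training data before deployment; explainability methods then help locate which inputs drive the disparity.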

As AI continues to evolve, the emphasis on explainability will continue to be a critical aspect of regulatory frameworks, ensuring these advanced technologies are used responsibly and effectively in healthcare.

Q. And where do ethical standards come in with regard to explainability?

A. Ethical standards play a fundamental role in the development and deployment of responsible AI systems, particularly in sensitive and high-stakes fields such as healthcare. Explainability is inherently tied to these ethical standards, ensuring AI systems operate transparently, fairly and responsibly, aligning with core ethical principles in healthcare.

Responsible AI means operating within ethical boundaries. The push for advanced explainability in AI enhances trust and reliability, ensuring AI decisions are transparent, justifiable and ultimately beneficial to patient care. Ethical standards guide the responsible disclosure of information, protecting user privacy, upholding regulatory requirements like HIPAA and encouraging public trust in AI systems.

Follow Bill's HIT coverage on LinkedIn: Bill Siwicki
Email him: bsiwicki@himss.org
Healthcare IT News is a HIMSS Media publication.

