Should AI models be explainable to clinicians?

Bibliographic Details
Title: Should AI models be explainable to clinicians?
Authors: Abgrall, Gwénolé, Holder, Andre L., Chelly Dagdia, Zaineb, Zeitouni, Karine, Monnet, Xavier
Source: Critical Care; 9/12/2024, Vol. 28 Issue 1, p1-8, 8p
Abstract: In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet defining explainability and standardising its assessment remain ongoing challenges, and trade-offs between performance and explainability may be required, even as XAI continues to grow as a field. [ABSTRACT FROM AUTHOR]
Copyright of Critical Care is the property of BioMed Central and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
More Details
ISSN: 1364-8535
DOI: 10.1186/s13054-024-05005-y
Published in: Critical Care
Language: English