Should AI models be explainable to clinicians?

Bibliographic Details
Title: Should AI models be explainable to clinicians?
Authors: Abgrall, Gwénolé, Holder, Andre L., Chelly Dagdia, Zaineb, Zeitouni, Karine, Monnet, Xavier
Source: Critical Care; 9/12/2024, Vol. 28 Issue 1, p1-8, 8p
Abstract: In the high-stakes realm of critical care, where daily decisions are crucial and clear communication is paramount, comprehending the rationale behind Artificial Intelligence (AI)-driven decisions appears essential. While AI has the potential to improve decision-making, its complexity can hinder comprehension of, and adherence to, its recommendations. "Explainable AI" (XAI) aims to bridge this gap, enhancing confidence among patients and doctors. It also helps to meet regulatory transparency requirements, offers actionable insights, and promotes fairness and safety. Yet, even though XAI is a growing field, defining explainability and standardising its assessment remain ongoing challenges, and a trade-off between performance and explainability may sometimes be required. [ABSTRACT FROM AUTHOR]
Copyright of Critical Care is the property of BioMed Central and its content may not be copied or emailed to multiple sites or posted to a listserv without the copyright holder's express written permission. However, users may print, download, or email articles for individual use. This abstract may be abridged. No warranty is given about the accuracy of the copy. Users should refer to the original published version of the material for the full abstract. (Copyright applies to all Abstracts.)
Database: Complementary Index
FullText Text:
  Availability: 0
CustomLinks:
  – Url: https://resolver.ebsco.com/c/xy5jbn/result?sid=EBSCO:edb&genre=article&issn=13648535&ISBN=&volume=28&issue=1&date=20240912&spage=1&pages=1-8&title=Critical Care&atitle=Should%20AI%20models%20be%20explainable%20to%20clinicians%3F&aulast=Abgrall%2C%20Gw%C3%A9nol%C3%A9&id=DOI:10.1186/s13054-024-05005-y
    Name: Full Text Finder (for New FTF UI) (s8985755)
    Category: fullText
    Text: Find It @ SCU Libraries
    MouseOverText: Find It @ SCU Libraries
Header DbId: edb
DbLabel: Complementary Index
An: 179605074
AccessLevel: 6
PubType: Academic Journal
PubTypeId: academicJournal
PLink https://login.libproxy.scu.edu/login?url=https://search.ebscohost.com/login.aspx?direct=true&site=eds-live&scope=site&db=edb&AN=179605074
RecordInfo BibRecord:
  BibEntity:
    Identifiers:
      – Type: doi
        Value: 10.1186/s13054-024-05005-y
    Languages:
      – Code: eng
        Text: English
    PhysicalDescription:
      Pagination:
        PageCount: 8
        StartPage: 1
    Titles:
      – TitleFull: Should AI models be explainable to clinicians?
        Type: main
  BibRelationships:
    HasContributorRelationships:
      – PersonEntity:
          Name:
            NameFull: Abgrall, Gwénolé
      – PersonEntity:
          Name:
            NameFull: Holder, Andre L.
      – PersonEntity:
          Name:
            NameFull: Chelly Dagdia, Zaineb
      – PersonEntity:
          Name:
            NameFull: Zeitouni, Karine
      – PersonEntity:
          Name:
            NameFull: Monnet, Xavier
    IsPartOfRelationships:
      – BibEntity:
          Dates:
            – D: 12
              M: 09
              Text: 9/12/2024
              Type: published
              Y: 2024
          Identifiers:
            – Type: issn-print
              Value: 13648535
          Numbering:
            – Type: volume
              Value: 28
            – Type: issue
              Value: 1
          Titles:
            – TitleFull: Critical Care
              Type: main