On the practical, ethical, and legal necessity of clinical Artificial Intelligence explainability: an examination of key arguments
Abstract

The necessity for explainability of artificial intelligence technologies in medical applications has been widely discussed and heavily debated within the literature. This paper comprises a systematized review of the arguments supporting and opposing this purported necessity. Both sides of the debate within the literature are quoted to synthesize discourse on common recurring themes and subsequently critically analyze and respond to it.
While the use of autonomous black box algorithms is compellingly discouraged, the same cannot be said for the whole of medical artificial intelligence technologies that lack explainability. We contribute novel comparisons of unexplainable clinical artificial intelligence tools, diagnosis of idiopathy, and diagnoses by exclusion, to analyze implications on patient autonomy and informed consent. Applying a novel approach using comparisons with clinical practice guidelines, we contest the claim that lack of explainability compromises clinician due diligence and undermines epistemological responsibility.
We find it problematic that many arguments in favour of the practical, ethical, or legal necessity of clinical artificial intelligence explainability conflate the use of unexplainable AI with automated decision making, or equate the use of clinical artificial intelligence with its exclusive use.