Should We Trust AI in Medicine?

By: Matthew DeCamp, Jon C Tilburt

George Orwell wrote, “If thought corrupts language, language can also corrupt thought.” Orwell’s worry about the totalitarian regimes of his day offers a broader insight: language can obscure thinking and values, including in contemporary medical applications of artificial intelligence (AI). The potential of AI to revolutionize medicine appears vast. Nevertheless, concerns over the unknown and unknowable “black boxes” of AI have spurred a movement toward building trust in AI.

Although well intentioned, applying trust to AI is a category error: it mistakenly assumes that AI belongs to the category of things that can be trusted. Trust implies entrustment, placing something of value in the responsibility of another in circumstances of vulnerability. The ideal patient–physician relationship reflects this definition: patients entrust their health to physicians’ voluntary responsibility and discretion, and they believe in physicians’ benevolent motives. AI has no voluntary agency and cannot meaningfully be said to have motives or character.

These arguments have significance beyond semantics. Promulgating trust in AI could erode a deeper, moral sense of trust. If we conflate trust with mere reliability and accuracy, then as AI performance improves, trust in physicians, whose technical accuracy might prove inferior to that of machines, could decline.

Trust properly understood involves human thoughts, motives, and actions that lie beyond technical, mechanical characteristics. Sacrificing these elements of trust corrupts our thinking and values, and it limits our imagination about what the patient–physician relationship can mean at a time when humane, personalised care seems ever more crucial.

Embracing trust in AI as if AI were a moral agent also unwittingly fosters a diffusion of responsibility. Absolving physicians of blame when errors occur, while muting praise for wise decisions, takes medicine in the wrong direction. Although AI, like a faulty surgical instrument, might be causally implicated in harm, we cannot rightly assign moral responsibility to it. Whether future versions of AI could be regarded as moral agents remains a matter of speculation.

Although in common parlance we certainly speak of trusting machines—eg, trusting cars to get us places—we ought not confuse these colloquialisms with the true meaning of trust. Preserving precision in the usage of trust strikes at the heart of the identity of medicine. As AI increasingly becomes a part of medicine, its proper role should be in supporting effective, empathic, and ethically attentive care for humans, by humans. Instead of trusting AI, we should aspire for medicine that warrants placing our trust in each other.

We declare no competing interests. Both authors contributed equally to the conception or design of the work; the acquisition, analysis, or interpretation of data for the work; drafting the work or revising it critically for important intellectual content; and final approval of the version to be published; and both agree to be accountable for all aspects of the work.

Source:

https://www.thelancet.com/journals/landig/article/PIIS2589-7500(19)30197-9/fulltext
