Are doctors necessary? Just how far might the automation of medicine go? These are questions posed by Jonathan Cohn in his article “The Robot Will See You Now,” recently published in The Atlantic, in print and online.
The article is about the computer Watson – yep, the same one that set a record playing Jeopardy! and earned $1,000,000 in winnings. The computer is designed to answer questions posed in real-world language instead of “structured” language. And this computer is now “learning” to be a doctor. It is superior at handling the vast amounts of potentially relevant data, whereas humans cannot keep all of those statistics at the forefront of their brains.
The computing power to harness a comprehensive library of diagnostic options, tests, and therapeutic strategies may indeed be a beneficial “disruptive innovation” for physicians and their patients. As you may guess, since the article mentions various biases that I’ve written about before, I love the idea of counterbalancing those limitations with decision support. I also agree that more possibilities could lead to more tests and actually increase the cost of care. And I’m not worried about my job security, despite this disheartening quote: “If technological aids allow us to push more care down to people with less training and fewer skills, more middle-class jobs will be created along the way.” (Is that really the future of medicine?)
Watson will not supplant the patient-doctor relationship because it lacks a key feature of human decision making: preference. Patients have personal preferences, often linked to non-rational (by that I mean, not based purely on statistics) cognitive processes. Regret, framing, certainty, and loss aversion are just some of these processes. As hard-wired parts of thought, these processes result in different preferences for different people. What defines a “good” quality of life? What is a “good” gamble in medicine? What outcomes are absolute deal-breakers for you – the ones so dreaded that you’d rather not seek treatment at all if such a side effect is likely? And how likely? How much money should be spent on testing to rule in or rule out something obscure but awful?
Watson may be able to tell you how likely diseases and outcomes are to occur, but Watson cannot tell you how you will feel about those outcomes, or at what threshold the scales tip toward accepting or declining an option.