“Mark my words. AI (artificial intelligence) is far more dangerous than nukes.”
– Elon Musk.
The other day, as soon as I walked into work, already running my usual ten minutes late, I was hustling from my office to the clinic when I saw one of our physician assistants in the hallway. Otherwise known for her calm composure, she was visibly distressed, which seemed odd. She had forgotten her laptop at home and could not find a loaner. I tried to save her day by offering her my laptop to work on. That would mean I would still have a computer to work on in the clerical area, but I would have to see patients the whole day without a computer in the room. She resisted, but I insisted. We both went our separate ways to move on with our days.
I had not seen patients like this in quite some time. By “like this,” I mean without a computer in the room. In theory, it should not be a big deal, but I realized that the computer has become an entity that is always part of our patient encounters. If you don’t have a stethoscope, you can get by, but you cannot do without a computer. It’s as if a third “being” is present in the room, the first two being the doctor and the patient. We interact with patients briefly but work at the computer constantly, from looking up their records, labs, and scans to placing orders and scheduling instructions. Now that there is the “secure instant chat” feature, it talks back to us all the time. Other providers are messaging us constantly, and in a way, the computer keeps demanding our attention, robbing the patient in the room of it. It gives us alerts if there are medication interactions. It reminds us to change our passwords, prompts us to order tests, and stops us from closing patient charts if certain rules are not adhered to.
On this particular day, since I did not have this third “being” in the room, it was just the patient and me. I felt as if the encounter was incomplete. I kept wondering if the patient felt incomplete too, because patients also feel reassured when they see their medical chart on the computer. “I don’t remember which medications I’m taking or what surgery I had ten years ago. It’s all in the computer, doc!” Patients actually have a relationship with the computer because they now look up their records themselves and try to make sense of things. Sometimes they do a good job at it, and other times they suck at it. They love it for that, but if the doctor spends too much time looking at the computer instead of the patient, they start feeling slighted, as a jealous lover would.
My mind kept telling me that since I would have to document everything on the computer and use it to place orders, I would have to hurry and end the visit with the patient if I wanted to stay on schedule. Then I asked myself, “How did you see patients just a few years ago, when there were no computers in the room? Relax!” A sense of calm came over me. I forgot about having to answer constant messages, place orders, or start documenting the visit. I was able to spend more time evaluating the patient, talking about their life, and sharing my own stories. I felt like the crowd had dissipated. The room returned to the intimate doctor-patient relationship that had existed for centuries.
We talked about how many cows my patient had on her farm. We talked about how many of them they end up eating and how many they give away. We talked about how, twenty years ago, one of my patients walked in on her daughter hanging herself in the closet. Her wound is still so fresh that she truly believes that whoever says time is the best healer is full of crap. We talked about how one of my patients had a robust sex life, but the treatment of his prostate cancer had taken away from him the man he was. We talked about how my patient’s nephew was found dead from a drug overdose. She was sad for him but happy that her own children did not turn out like that. We talked about things that the third “being” in the room, the computer that is, usually does not allow us to talk about, because we are busier with the computer than with the patient.
We also talked about how I thought medicine would be practiced a few hundred years from now. How a patient would walk through a “booth” of some sort. His symptoms would be heard just as Siri hears us, his clinical signs photographed and interpreted. He would be scanned from the skull all the way down to the toes, with all his internal organs anatomically scrutinized. A drop of blood taken by a painless finger prick would yield all sorts of lab tests, and the computer would churn out the most accurate diagnosis and treatment options, and might even inject the veins with the most precise dosage of highly effective drugs against the illness. The patient’s genetic profile would be analyzed instantly, and mutations would be identified and corrected expeditiously. Complex surgical procedures would be performed meticulously by ambidextrous robots. Humans would rely more on these “booths” than on their own clinical judgment. Just as, if you ask me to calculate 89.573 x 74.823, I rely more on a calculator than on my own computing skills.
When Elon Musk warns us about the dangers of artificial intelligence, he is not referring to medicine in particular, but we can certainly analyze his statement in the context of the future of our profession. Will there be a day when this computer, this booth, becomes more intelligent than the physician’s clinical judgment? “Never!” we say. A computer has to be programmed by a human to give results. A computer can never supersede a human’s complex clinical judgment. Well, I would say that if you told a human from five hundred years ago that I would fly from New York to Kuala Lumpur, and that too within a single night, he would laugh at you and ridicule you for wasting his time.
If the computers start treating us more accurately than we treat ourselves, we would be happy to accept that. But what if they start making decisions for us? If that indeed happens, how will they decide when it’s time to stop dialysis and go to comfort care? How will they decide how much pain is too much, when to give narcotics, and when to hold them for fear of addiction? How will they make personal connections with patients, share anecdotes, and discuss hobbies? How will they go to funerals and shed tears together with patients and their families when nothing else could be done? How will these computers learn to give comfort and solace to these patients? And even if they do, will the patients accept it as they accept it from us, human doctors?
What if they turn against us? What if they start selecting which pregnancies to carry and which to terminate? What if they start dictating patients’ advance directives? What if they put a monetary value on the number of years lived? What if they limit the number of children we can have? What if they tell me that my child’s life is not worth living because of her disability? What if they tell me that my grandma is occupying a hospital bed needed for a younger patient and that she will be denied any more life-prolonging treatments? What if they tell me that it’s OK to clone humans and select the “best” ones? What if they make abortion and sexual identity choices for patients?
Some of you might say: Isn’t that what humans are already doing to humans? Yes, you are right. But will we accept it if anyone other than humans, in this case artificial intelligence, imposes these restrictions on us?