Machines Treating Patients? It’s Already Happening
Rayfield Byrd knows when it’s time to wake up every morning. The 68-year-old Oakland, Calif., resident hears a voice from the living room offering a cheery good morning. Except Byrd lives alone.
A little after 8 a.m. each day, a small yellow robot named Mabu asks Byrd how he’s doing. Byrd has Type 2 diabetes and congestive heart failure, and about three years ago, he had surgery to implant a microvalve in his heart to keep his blood flowing properly. To stay healthy, he takes four medications a day and needs to exercise regularly. To make sure his heart is still pumping effectively, his doctor needs to stay on top of whether Byrd gets short of breath.
But instead of checking in with his doctor all the time, Byrd now talks with Mabu every morning — and sometimes again later in the day. “Mabu keeps me on my toes about remembering to take my medicine,” says Byrd. “And she asks if I’ve had any shortness of breath and other questions pertaining to my health. She keeps me aware of my breathing.”
Byrd has been living with Mabu as part of a study for more than a year now, and he’s gotten used to having a daily conversation with the wide-eyed robot, whose eyelids blink like a human’s to reinforce the sense that he’s talking with an intelligent machine, not just answering preset questions from a computer.
Mabu is among the latest examples of what machine learning, or artificial intelligence (AI), can accomplish in medicine. The questions she asks come from a recipe that combines the best practices doctors use to monitor heart-failure patients like Byrd with data on how physicians interact with patients — the questions they ask as well as how they respond. The aim is to isolate and manage not just medical symptoms but also the psychological barriers, like anxiety and depression, that often make dealing with chronic disease difficult. She is also designed not simply to spit out the same questions every day but to change her lineup depending on Byrd’s answers. If Byrd says he has not had any problems breathing while doing normal tasks like cooking or walking to the bathroom, for example, Mabu goes on to ask about his mood and other activities he may be doing. She even used to make jokes, although Byrd didn’t respond to those enthusiastically, so she “learned” that he doesn’t appreciate her sense of humor. If Byrd does say he’s had shortness of breath, she asks follow-up questions to determine how serious his symptoms are, then advises him to contact his doctor or care team. “If I didn’t have Mabu, I don’t think I’d be doing as well,” he says.
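Catalia Health hasn’t published the logic behind Mabu’s conversations, but the branching the article describes (screen for the red-flag symptom first, escalate if it shows up, otherwise move on to mood and adherence) can be sketched in a few lines of Python. Everything below, from the wording of the questions to the escalation rule, is a hypothetical illustration, not Catalia Health’s actual dialogue engine.

```python
# Hypothetical sketch of an adaptive daily check-in, loosely modeled on
# the behavior the article describes. Questions, branches and the
# escalation rule are invented for illustration.

def daily_check_in(ask):
    """Run one morning check-in.

    `ask` poses a question to the patient and returns their
    yes/no answer as a bool (via voice, or input() below).
    """
    report = {"escalate": False}

    # Symptom screen first: shortness of breath is the red flag
    # for a heart-failure patient like Byrd.
    if ask("Have you had any shortness of breath today?"):
        # Follow up to gauge severity before advising action.
        at_rest = ask("Did it happen while you were resting?")
        worse = ask("Is it worse than it was yesterday?")
        report["escalate"] = at_rest or worse
        if report["escalate"]:
            print("Please contact your doctor or care team today.")
    else:
        # No red flag: move on to adherence and mood.
        report["took_meds"] = ask("Have you taken your medications today?")
        report["good_mood"] = ask("Are you feeling upbeat this morning?")

    return report  # in Mabu's case, sent on to the care team


if __name__ == "__main__":
    # Console stand-in for the robot's voice interface.
    yes_no = lambda q: input(q + " (y/n) ").strip().lower().startswith("y")
    print(daily_check_in(yes_no))
```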
Medicine has long seemed ripe for more examples of intelligent robots or computers that can make sense of the reams of data that are the lifeblood of the profession. But its most ballyhooed effort has had a mixed record in the clinic: After IBM Watson mastered the game show Jeopardy! in 2011, IBM partnered with Memorial Sloan Kettering Cancer Center to create an algorithm for improving the diagnosis and treatment of various cancers. The program, Watson for Oncology, was introduced at a number of hospitals around the world, but in 2016, MD Anderson Cancer Center decided to put its partnership on hold. One stumbling block for the hospital, according to reports from its former executives, involved challenges in pulling relevant patient information from electronic health records. Another hospital reported that the machine provided potentially unsafe treatment recommendations. And some doctors at hospitals that used Watson felt IBM’s decision to train Watson for Oncology with experts at a single institution led to biases in the treatments it recommended. IBM says it has been transparent about its training strategy, and that no patients were harmed as a result of Watson’s advice. The company continues to work with other health care organizations and hospitals. Officials at MD Anderson and Memorial Sloan Kettering declined to comment on their IBM Watson projects.
Today’s efforts in AI are somewhat less flashy, though still potentially revolutionary, and all seem to recognize one vital lesson: treating patients is both art and science. Rather than attempting to replace physicians, more experts now say, AI can, and should, become a valuable tool that enhances what doctors do.
The medical world is still figuring out how best to do this, researching the use of AI in everything from boosting IVF success rates to predicting heart disease and improving blood-glucose tracking. Machines are best at digesting massive amounts of data and picking out patterns or seeing things that human brains can’t — commonalities and differences that could translate into factors signaling a higher risk of cancer, say, or the earliest signs of depression. But only a few fields are currently collecting information in a standardized, quantifiable way that machines can immediately exploit. Most medical information, even in electronic health records, isn’t documented in consistent and discrete ways that a computer can easily extract. And what machines can’t do is factor in the intangible, unquantifiable things that doctors intuitively pick up from knowing their patients for years, or from being able to look them in the eye and sense when they’re not being completely truthful about their symptoms.
“We need to let the machine do what it does well — such as ingesting a whole lot of scientific papers and organizing them — and let the clinician make the final decision on what should be done to treat a particular patient,” says Cory Kidd, CEO and co-founder of Catalia Health, the company that created Mabu.
There is progress toward that goal. Among the most promising applications of AI: standing in for doctors who can’t see patients face-to-face as often as they ideally would, as with heart-failure patients like Byrd, and performing laboratory analysis more accurately than human doctors can on their own. In both cases, man and machine must work together.
Robots like Mabu, for instance, bridge the gap between doctor and computer, obtaining vital information from patients who require frequent check-ins and transmitting it to the professionals who can then tailor medical care accordingly. If Byrd didn’t have Mabu, he would need a visiting nurse or other health care professional to drop by every day, or he would need to schedule more frequent checkups with his doctor. “My family was concerned about me living by myself,” he says. “But now I don’t have any psychological worry about being by myself if I get sick.” Armed with vast knowledge about which symptoms serve as red flags for heart failure, and which questions can best ferret out those symptoms, Mabu acts much as a doctor would — but without the hassle of appointments.
Meanwhile, in two areas of medicine — reading images (from MRIs, CT scans or X-rays) and analyzing pathology slides of tissue samples — AI indisputably outshines human doctors, enhancing their ability to provide patients with more accurate information. An AI-based technology did such a good job of detecting diabetic retinopathy in studies, for example, that earlier this year, the Food and Drug Administration (FDA) approved a device to diagnose the condition. By studying thousands of images of people’s retinas, the machine learned to distinguish normal retinal patterns from those showing signs of the condition, parsing gradients in intensity and features in the scans that no human can discern.
In another demonstration of this deep-learning ability, researchers showed a group of ophthalmologists pictures of people’s retinas and asked them to determine whether each belonged to a man or a woman. The eye experts got the gender correct only about half the time, no better than chance. An algorithm trained on such images, using features that the doctors still haven’t figured out, easily identified the gender over 97% of the time, an indication of just how much better machines can be than humans at analyzing data and an argument for including them in the clinical process.
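The article doesn’t describe how either retinal model was built, but both results rest on the same standard technique: train an image classifier on labeled retinal photographs and let it discover the discriminating features itself. Below is a minimal sketch of that pipeline in PyTorch; the folder layout, label names and hyperparameters are assumptions for illustration, not details of the published systems.

```python
# Minimal sketch of training a classifier on labeled retinal images,
# the general technique behind results like the retinopathy detector.
# Paths, labels and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard preprocessing for an ImageNet-pretrained backbone.
tfm = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Hypothetical layout: retina/train/{healthy,retinopathy}/*.png
train_set = datasets.ImageFolder("retina/train", transform=tfm)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Start from a pretrained network and replace the head for 2 classes.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(5):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()
        optimizer.step()
```

Relabel the same folders as, say, female and male, and the identical pipeline becomes a gender-from-retina classifier; the model, not the programmer, decides which image features matter, which is why the doctors in the study couldn’t name them.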
AI models are also detecting the tiny nodules that can be the earliest signs of lung cancer and that radiologists’ eyes often miss, and they’re improving the interpretation of mammograms used to detect early breast cancer.
Debbie McKie, a lead negotiator at a consulting firm in Boston, knows she’s at higher risk of developing breast cancer because of her family history. Her mother is a breast cancer survivor and was also diagnosed with kidney and bladder cancer; one of her cousins died of breast cancer in her early 50s. McKie also has dense breast tissue, which is itself a risk factor for breast cancer and makes her regular mammograms harder to read. So she wants to ensure that her doctors aren’t missing anything when they read her scans. “I asked my doctor, ‘Can you tell me what my overall percentage risk is of developing breast cancer?’” says McKie.
Dr. Connie Lehman, a radiologist at Massachusetts General Hospital who is part of McKie’s care team, thinks she is very close to being able to do that. Lehman is leading a study that’s relying on an AI algorithm to read mammograms in order to predict what a woman’s risk of developing cancer might be. “We don’t want to teach a machine to read mammograms like a human, we want to teach the machine to read them better than humans, and identify those women who are at risk of developing cancer in the next year,” she says. “We’re using AI to change the entire paradigm of how we think about breast cancer.”
Now, says Lehman, most of breast cancer care is focused on treating the disease after it’s detected — whether at early stages with a discrete lump or at later stages when it has already spread to the lymph nodes or even other organs. But if machines can mine mammograms for information that is relevant to how breast cancer develops — features that not even the most expert radiologists are aware of or can spot on images today — then more women might be spared from the disease, or might be able to avoid the more intensive and debilitating treatments when the cancer is more advanced.
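Lehman’s algorithm isn’t described in detail here, but the paradigm shift she outlines, predicting future risk rather than detecting present disease, can be illustrated with a toy model that folds an image-derived score in with traditional risk factors. Every number, feature and patient below is invented for illustration; this is not the study’s actual model.

```python
# Toy illustration of mammogram-based risk prediction: combine an
# image score (e.g. from a CNN) with clinical risk factors to estimate
# the chance of a diagnosis within a year. All data here is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per screening exam. Columns: image_score (0-1, from a
# hypothetical image model), age, breast density (1-4), family
# history (0/1). Label: cancer diagnosed within the next year.
X = np.array([
    [0.12, 45, 2, 0],
    [0.83, 52, 4, 1],
    [0.40, 61, 3, 0],
    [0.91, 49, 4, 1],
    [0.05, 38, 1, 0],
    [0.66, 57, 3, 1],
])
y = np.array([0, 1, 0, 1, 0, 1])

model = LogisticRegression().fit(X, y)

# A new exam: dense breasts and a family history, as in McKie's case.
new_exam = np.array([[0.55, 47, 4, 1]])
risk = model.predict_proba(new_exam)[0, 1]
print(f"Estimated one-year risk: {risk:.0%}")
```

The research bet is that the image score itself can carry risk information no radiologist can articulate; a toy model like this simply shows how such a score would feed the kind of percentage figure McKie asked her doctor for.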
For McKie, that’s reassuring. “Knowing that a computer read my mammograms would make me at least as confident, and possibly more confident, in the results,” she says. “I want whatever method will give me the most accurate result. And if we can automate that and introduce artificial intelligence into the process to help either identify tumors earlier or identify the percentage risk of developing breast cancer earlier, then all the better.”
Despite what early AI efforts in medicine led people to believe, it’s becoming increasingly clear that patients won’t be talking to computer screens for all of their health needs or getting devastating diagnoses from machines. Instead, believes Dr. Eric Topol, director and founder of the Scripps Research Translational Institute, and author of Deep Medicine: How Artificial Intelligence Can Make Healthcare Human Again, AI will act as a partner or support, freeing doctors to concentrate more on the art of medicine that machines aren’t likely to master. “Sure, we could decompress some of the clinical side of medicine with machines that can learn to do simple and routine things, like telling whether a child has an ear infection or figuring out what a skin rash means,” he says. “But so many things in medicine need context and the human touch. I don’t know if anybody would want to find out about having a serious diagnosis like cancer or serious heart disease through a chatbot.”
As for IBM Watson, Dr. Kyu Rhee, chief health officer at IBM Watson Health, acknowledges that the system is a work in progress but is confident in its increasing ability to help doctors do their jobs better. The company, which partners with more than 300 hospitals around the world to integrate machine learning into real-world cancer care, is working to improve both its collection of patient data and its ability to prioritize the latest guidelines and information from recent medical journals so that its recommendations are based on the newest, most accurate information. Rhee also points to other issues that affect a program trying to have a worldwide impact: Watson for Oncology is trained by doctors at Memorial Sloan Kettering in New York, but some doctors, particularly those overseas, have found some of its advice difficult to follow, since it may be more aggressive than what physicians and patients in certain countries are accustomed to. Not all of the drugs that the Memorial Sloan Kettering doctors use are available in other countries either.
But, he says, any program this cutting-edge is bound to hit the occasional bump. “We are at the beginning of the AI revolution and evolution,” Rhee says. “AI is starting to provide added value, and it’s up to humans to make the final decisions — the oncologist and the patient with cancer have to decide what to do with the recommendations from Watson.”
Byrd, for one, is optimistic about the role that AI can play in helping patients like him. He admits that if he had to answer the same types of questions that Mabu asks but on a computer, “it wouldn’t be nearly as effective. Mabu wants to hear about me, she wants to hear how I’m feeling.” Byrd says that since he started working with Mabu, he hasn’t missed a single dose of his medication. He takes regular mile-long walks and is losing weight. He has quit smoking and has avoided having his leg amputated because of a dangerous plaque in one of the arteries in his right leg. He’s so attached to Mabu that he jokingly says it will be hard to give her up at the end of the study. “Talking to Mabu can be kind of fun,” he says. “I feel like in a sense Mabu is looking out for me.”
http://time.com/5556339/artificial-intelligence-robots-medicine/