Your Future Doctor May Not Be Human. This Is the Rise of AI in Medicine.

Diagnosing with “The Stethoscope of the 21st Century”

A new kind of doctor has entered the exam room but doesn’t have a name. In fact, these doctors don’t even have faces. Artificial intelligence has made its way into hospitals around the world. Those wary of a robot takeover have nothing to fear; the introduction of AI into health care is not necessarily about pitting human minds against machines. AI is in the exam room to expand, sharpen, and at times ease the mind of the physician so that doctors are able to do the same for their patients.

Bertalan Meskó, better known as The Medical Futurist, has called artificial intelligence “the stethoscope of the 21st century.” His assessment may prove to be even more accurate than he expected. Various techniques and tests give physicians all the information they need to diagnose and treat patients, but physicians are already overburdened with clinical and administrative responsibilities, and sorting through the massive amount of available information is a daunting, if not impossible, task.

That’s where having the 21st-century stethoscope could make all the difference. The applications for AI in medicine go beyond administrative drudge work, though. From powerful diagnostic algorithms to finely tuned surgical robots, the technology is making its presence known across medical disciplines. Clearly, AI has a place in medicine; what we don’t know yet is its value. To imagine a future in which AI is an established part of a patient’s care team, we’ll first have to better understand how AI measures up to human doctors. How do they compare in terms of accuracy? What specific, or unique, contributions is AI able to make? In what ways will AI be most helpful — and could it be potentially harmful — in the practice of medicine? Only once we’ve answered these questions can we begin to predict, then build, the AI-powered future that we want.

AI vs. Human Doctors

Although we are still in the early stages of its development, AI is already as capable as (if not more capable than) doctors at diagnosing patients. Researchers at the John Radcliffe Hospital in Oxford, England, developed an AI diagnostics system that diagnoses heart disease more accurately than doctors, who get it right about 80 percent of the time. At Harvard University, researchers created a “smart” microscope that can detect potentially lethal blood infections: the AI-assisted tool was trained on a series of 100,000 images garnered from 25,000 slides treated with dye to make the bacteria more visible. The AI system can already sort those bacteria with a 95 percent accuracy rate. A study from Showa University in Yokohama, Japan, revealed that a new computer-aided endoscopic system can detect signs of potentially cancerous growths in the colon with 94 percent sensitivity, 79 percent specificity, and 86 percent accuracy.
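For readers unfamiliar with those three measures, they all fall out of one simple confusion-matrix calculation. In this sketch, the case counts are chosen purely to mirror the reported percentages; they are not the study's actual numbers.

```python
# Sensitivity, specificity, and accuracy from a confusion matrix. The
# counts below are illustrative, not the Showa University case numbers.
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # share of real growths that were flagged
    specificity = tn / (tn + fp)   # share of healthy tissue correctly cleared
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, specificity, accuracy

sens, spec, acc = diagnostic_metrics(tp=94, fn=6, tn=79, fp=21)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.1%}")
# sensitivity=94% specificity=79% accuracy=86.5%
```

Note that sensitivity and specificity pull in opposite directions: a screening tool tuned to miss fewer cancers (higher sensitivity) usually flags more healthy tissue by mistake (lower specificity).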

In some cases, researchers are also finding that AI can outperform human physicians in diagnostic challenges that require a quick judgment call, such as determining if a lesion is cancerous. In one study, published December 2017 in JAMA, deep learning algorithms were able to better diagnose metastatic breast cancer than human radiologists when under a time crunch. While human radiologists may do well when they have unrestricted time to review cases, in the real world (especially in high-volume, quick-turnaround environments like emergency rooms) a rapid diagnosis could make the difference between life and death for patients.

Then, of course, there’s IBM’s Watson: when challenged to glean meaningful insights from the genetic data of tumour cells, human experts took about 160 hours to review the data and provide treatment recommendations based on their findings. Watson took just ten minutes to deliver the same kind of actionable advice. Google recently announced an open-source version of DeepVariant, the company’s AI tool for parsing genetic data, which was the most accurate tool of its kind in last year’s precisionFDA Truth Challenge.

With AI in medicine, doctors will have to spend less time doing this. Image Credit: National Cancer Institute via Wikimedia Commons

AI is also better than humans at predicting health events before they happen. In April, researchers from the University of Nottingham published a study showing that, trained on extensive data from 378,256 patients, a self-taught AI predicted 7.6 percent more cardiovascular events in patients than the current standard of care. To put that figure in perspective, the researchers wrote: “In the test sample of about 83,000 records, that amounts to 355 additional patients whose lives could have been saved.” Perhaps most notably, the neural network also had 1.6 percent fewer “false alarms” — cases in which the risk was overestimated, possibly leading to patients having unnecessary procedures or treatments, many of which are very risky.
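Catching more events while also raising fewer false alarms is what makes that result notable: with a fixed model, you normally trade one for the other by moving a risk threshold. The toy sketch below uses simulated risk scores (nothing here comes from the Nottingham model) to show that trade-off.

```python
# Simulated risk scores only -- not the Nottingham model. Patients who
# will have an event tend to score higher; where the decision threshold
# sits trades caught events against false alarms.
import random

rng = random.Random(0)
events = [rng.gauss(0.7, 0.15) for _ in range(1_000)]      # true events
non_events = [rng.gauss(0.4, 0.15) for _ in range(9_000)]  # no event

def screen(threshold):
    caught = sum(score >= threshold for score in events)
    false_alarms = sum(score >= threshold for score in non_events)
    return caught, false_alarms

for t in (0.6, 0.5):
    caught, fa = screen(t)
    print(f"threshold={t}: caught {caught}/1000 events, {fa} false alarms")
```

Lowering the threshold catches more events but also raises false alarms; only a genuinely better model, like the one in the study, improves both at once.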

AI is perhaps most useful for making sense of huge amounts of data that would be overwhelming to humans. That’s exactly what’s needed in the growing field of precision medicine. Hoping to fill that gap is The Human Diagnosis Project (Human Dx), which is combining machine learning with doctors’ real-life experience. The organization is compiling input from 7,500 doctors and 500 medical institutions in more than 80 countries in order to develop a system that anyone — patient, doctor, organization, device developer, or researcher — can access in order to make more informed clinical decisions.

“You have to design these things with an end user in mind.”

Shantanu Nundy, director of the nonprofit Human Diagnosis Project, told Futurism that, when it comes to developing technology in any industry, the AI should be seamlessly integrated into its function. “You have to design these things with an end user in mind. People use Netflix, but it’s not like ‘AI for watching movies,’ right? People use Amazon, but it’s not like ‘AI for shopping.’” In other words, if the tech is designed well and implemented in a way that people find useful, people don’t even realize they’re using AI at all.

For open-minded, forward-thinking clinicians, the immediate appeal of projects like Human Dx is that it would, counterintuitively, allow them to spend less time engaged with technology. “It’s been well-documented that over 50 percent of our time now is in front of a screen,” Nundy, who is also a practising physician in the D.C. area, told Futurism. AI can give doctors some of that time back by allowing them to offload some of the administrative burdens, like documentation. In this respect, when it comes to healthcare, AI isn’t necessarily about replacing doctors but optimizing and improving their abilities.

Mental Health Care with a Human Touch

“I see the value of AI today as augmenting humans, not as replacing humans,” Skyler Place, the chief behavioural science officer in the mobile health division at Cogito, a Boston-based AI and behavioural analytics company, told Futurism. Cogito has been using AI-powered voice recognition and analysis to improve customer service interactions across many industries. The company’s foray into healthcare has come in the form of Cogito Companion, a mental health app that tracks a patient’s behaviour.

Cogito Companion. Image Credit: Cogito

The app monitors a patient’s phone for both active and passive behavior signals, such as location data that could indicate a patient hasn’t left their home for several days or communication logs that indicate they haven’t texted or spoken on the phone to anyone for several weeks (the company claims the app only knows if a patient is using their phone to call or text — it doesn’t track who a user is calling or what’s being said). The patient’s care team can monitor the subsequent reports for signs that, in turn, may indicate changes to the patient’s overall mental health.

Cogito has teamed up with several healthcare systems throughout the country to test the app, which has found a particular niche in the veteran population. Veterans are at high risk for social isolation and may be reluctant to engage with the healthcare system, particularly mental health resources, often because of social stigma. “What we’ve found is that the app is acting as a way to build trust, in a way to drive engagement in healthcare more broadly,” Place said, adding that the app “effectively acts as a primer for behavioral change,” which seems to help veterans feel empowered and willing to engage with mental health services.

Here’s where the AI comes in: the app also uses machine learning algorithms to analyze “audio check-ins” — voice recordings the patient makes (somewhat akin to an audio diary). The algorithms are designed to pick up on emotional cues, just as two humans talking would. “We’re able to build algorithms that match the patterns in how people are speaking, such as energy, intonation, the dynamism or flow in a conversation,” Place explained.

From there, humans train the algorithm to learn what “trustworthiness” or “competence” sound like, to identify the voice of someone who is depressed, or the differences in the voice of a bipolar patient when they’re manic versus when they’re depressed. While the app provides real-time information for the patient to track their mood, the information also helps clinicians track their patient’s progress over time.
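As an illustration of the kind of raw signal such algorithms start from, here is a minimal sketch that computes short-time energy and its frame-to-frame variation, a crude stand-in for the “energy” and “dynamism” cues Place describes. This is illustrative only, not Cogito's actual feature set.

```python
# Short-time energy and its variation over 25 ms frames: flat, monotone
# speech shows low energy variation, dynamic speech shows high variation.
# Illustrative only -- not Cogito's actual method.
import math

def voice_features(samples, rate=16000, frame_ms=25):
    frame_len = int(rate * frame_ms / 1000)
    frames = [samples[i:i + frame_len]
              for i in range(0, len(samples) - frame_len + 1, frame_len)]
    energy = [sum(x * x for x in f) / frame_len for f in frames]
    mean = sum(energy) / len(energy)
    var = sum((e - mean) ** 2 for e in energy) / len(energy)
    return {"mean_energy": mean, "energy_variation": math.sqrt(var)}

one_second = [i / 16000 for i in range(16000)]
# A flat monotone tone versus the same tone with a varying loudness envelope.
monotone = [math.sin(2 * math.pi * 220 * t) for t in one_second]
modulated = [math.sin(2 * math.pi * 220 * t)
             * (0.2 + abs(math.sin(2 * math.pi * 3 * t))) for t in one_second]
print(voice_features(monotone)["energy_variation"]
      < voice_features(modulated)["energy_variation"])  # True
```

Real systems extract many such features (pitch, pauses, speaking rate) and feed them to a trained classifier; the point here is only that mood-relevant structure is measurable in the audio itself.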

At Cogito, Place has seen the capacity of artificial intelligence to help us “understand the human aspects of conversations and the human aspects of mental health.” Understanding, though, is just the first step. The ultimate goal is finding a treatment that works, and that’s where doctors currently shine in relation to mental health issues. But where do robots stand when it comes to things that are more hands-on?

Under The (Robotic) Knife

Over the last couple of decades, one of the most headline-making applications for AI in medicine has been the development of surgical robots. In most cases to date, surgical robots (the da Vinci is the most well-known) function as an extension of the human surgeon, who controls the device from a nearby console. One of the more ambitious procedures, claimed to be a world first, took place in Montreal in 2010: the first in-tandem performance of a surgical robot and a robot anesthesiologist (cheekily named McSleepy). Data gathered on the procedure reflects the impressive performance of these robotic doctors.

In 2015, more than a decade after the first surgical robots entered the operating room, MIT performed a retrospective analysis of FDA data to assess the safety of robotic surgery. There were 144 patient deaths and 1,391 patient injuries reported during the period of study, mainly caused by technical difficulties or device malfunctions. The report noted that “despite a relatively high number of reports, the vast majority of procedures were successful and did not involve any problems.” But the number of adverse events in more complex surgical areas (like cardiothoracic surgery) was “significantly higher” than in areas like gynaecology and general surgery.

Surgical robot. Image Credit: Getty Images

The takeaway would seem to be that, while robotic surgery performs well in some specialities, the more complex surgeries are best left to human surgeons — at least for now. But this could change quickly, and as surgical robots become able to operate more independently of human surgeons, it will become harder to know who to blame when something goes wrong.

Can a patient sue a robot for malpractice? As the technology is still relatively new, litigation in such cases constitutes something of a legal grey area. Traditionally, experts consider medical malpractice to be the result of negligence on the part of the physician, or the violation of a defined standard of care. The concept of negligence, though, implies an awareness that AI inherently lacks, and while it’s conceivable that robots could be held to performance standards of some kind, those standards have yet to be defined.

So if not the robot, who, or what takes the blame? Can a patient’s family hold the human surgeon overseeing the robot accountable? Or should the company that manufactured the robot shoulder the responsibility? The specific engineer who designed it? This is a question that, at present, has no clear answer — but it will need to be addressed sooner rather than later.

Building, Not Predicting, the Future

In the years to come, AI’s role in medicine will only grow: In a report prepared by Accenture Consulting, the market value of AI in medicine in 2014 was found to be $600 million. By 2021, that figure is projected to reach $6.6 billion.
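Those two figures imply an elevenfold increase over seven years, which works out to roughly 40 percent compound annual growth. A quick check:

```python
# The Accenture figures: $600 million in 2014, a projected $6.6 billion
# by 2021 -- an elevenfold increase over seven years.
start, end, years = 0.6, 6.6, 2021 - 2014
cagr = (end / start) ** (1 / years) - 1
print(f"{cagr:.1%} compound annual growth")  # about 40.9% per year
```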

The industry may be booming, but we shouldn’t integrate AI hurriedly or haphazardly, in part because associations that are obviously spurious to humans are not obvious to machines. Take, for example, an AI trained to determine whether skin lesions were potentially cancerous. Dermatologists often use rulers to measure lesions they suspect to be cancerous, so when the AI was trained on those biopsy images, it became more likely to call a lesion cancerous if a ruler was present in the image, according to The Daily Beast.
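The ruler problem is a textbook spurious correlation. The deliberately simplified sketch below, with invented counts that mimic the dermatology dataset, shows how even the most naive learner latches onto such a feature when it correlates with the label.

```python
# Invented counts that mimic the dermatology dataset: rulers show up far
# more often in images of cancerous (biopsied) lesions. A one-feature
# majority-vote "classifier" then learns the ruler, not the lesion.
from collections import Counter

# (ruler_present, actually_cancerous) training pairs
train = ([(True, True)] * 80 + [(False, True)] * 20
         + [(True, False)] * 10 + [(False, False)] * 90)

majority = {}
for ruler_present in (True, False):
    labels = [label for feat, label in train if feat == ruler_present]
    majority[ruler_present] = Counter(labels).most_common(1)[0][0]

print(majority)        # {True: True, False: False}: "ruler means cancer"
print(majority[True])  # True -- a healthy lesion next to a ruler is misread
```

A real neural network has vastly more capacity than this majority vote, but the failure mode is the same: if the shortcut predicts the training labels, the model has no reason to look past it.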

Algorithms may also inherit our biases, in part because there’s a lack of diversity in the materials used to train AI. In medicine or not, the data the machines are trained on is largely determined by who is conducting the research and where it’s being done. White men still dominate the fields of clinical and academic research, and they also make up most of the patients who participate in clinical trials.

A tenet of medical decision-making is whether the benefits of a procedure or treatment outweigh the risks. When considering whether or not AI is ready to be on equal footing with a human surgeon in the operating room, a little risk-benefit and equality analysis will go a long way. “I think if you build [the technology] with the right stakeholders at the table, and you invest the extra effort to be really inclusive in how you do that, then I think we can change the future,” Nundy, of Human Dx, said. “We’re trying to actually shape what the future holds.”

Though sometimes we fear that robots are leading the charge towards integrating AI in medicine, humans are the ones having these conversations and, ultimately, driving the change. We decide where AI should be applied and what’s best left done the old-fashioned way. Instead of trying to predict what a doctor’s visit will be like in 20 years, physicians can use AI as a tool to start building the future they want — the future that’s best for them and their patients — today.

– Abby Norman


The Technological Future of Surgery

The future of surgery offers an amazing cooperation between humans and technology, one that could elevate the precision and efficiency of surgery to levels we have never seen before.

Will we have Matrix-like small surgical robots? Will they pull organs in and out of patients’ bodies?

The scene is not impossible. It looks like we have come a long way from ancient Egypt, where doctors performed invasive surgeries as far back as 3,500 years ago. Only two years ago, NASA teamed up with American medical company Virtual Incision to develop a robot that can be placed inside a patient’s body and then controlled remotely by a surgeon.

That’s the reason why I strongly believe surgeons have to reconsider their stance towards technology and the future of their profession.

Virtual Incision - Robot - Future of Surgery

Surgeons have to rethink their profession

Surgeons are at the top of the medical food chain. At least that’s the impression the general audience gets from popular medical drama series and their own experiences. No surprise there. Surgeons bear huge responsibilities: a single incision on the patient’s body can cause irreparable damage or work a medical miracle. No wonder that with the rise of digital technologies, operating rooms and surgeons are inundated with new devices aimed at making the fewest cuts possible.

We need to engage with these new surgical technologies in order to make everyone understand that they extend the capabilities of surgeons instead of replacing them.

Surgeons also tend to alienate themselves from patients; the human touch is not necessarily the quintessence of their work. However, as technological solutions find their way into their practice, taking over some of their repetitive tasks, I would advise them to rethink their stance. Treating patients with empathy before and after surgery would ensure their services remain irreplaceable even in the age of robotics and artificial intelligence.

As a first step, though, the community of surgeons has to familiarize itself with the current state of technology affecting the OR and their job. I talked about these future technologies with Dr. Rafael Grossmann, a Venezuelan surgeon who was part of the team performing the first live operation using medical VR, and who was also the first doctor ever to use Google Glass live in surgery.

Future of Surgery

So, I collected the technologies that will have a huge impact on the future of surgery.

1) Virtual reality

For the first time in the history of medicine, in April 2016, cancer surgeon Shafi Ahmed performed an operation using a virtual reality camera at the Royal London Hospital. It is a mind-blowingly huge step for surgery. Everyone could participate in the operation in real time through the Medical Realities website and the VR in OR app. No matter whether a promising medical student from Cape Town, an interested journalist from Seattle, or a worried relative, everyone could follow through two 360-degree cameras how the surgeon removed cancerous tissue from the bowel of the patient.

This opens new horizons for medical education as well as for the training of surgeons. VR could elevate the teaching and learning experience in medicine to a whole new level. Today, only a few students can peek over the shoulder of the surgeon during an operation. This way, it is challenging to learn the tricks of the trade. By using VR, surgeons can stream operations globally and allow medical students to actually be there in the OR using their VR goggles. The team of The Body VR is creating educational VR content as well as simulations aiding the process of traditional medical education for radiologists, surgeons, and physicians. I believe there will be more initiatives like that very soon!

2) Augmented reality

As there is a lot of confusion around VR and AR, let me make it clear: AR differs from VR in two very important ways. Users of AR do not lose touch with reality, and AR puts information into their field of vision as quickly as possible. With these distinctive features, it has huge potential for helping surgeons become more efficient. Whether they are conducting a minimally invasive procedure or locating a tumor in the liver, AR healthcare apps can help save lives and treat patients seamlessly.

As might be expected, the AR market is buzzing, and more and more players are emerging in the field. The promising start-up Atheer develops Android-compatible wearables and the complementary AiR cloud-based application to boost productivity, collaboration, and output. The company Medsights Tech developed software to test the feasibility of using augmented reality to create accurate 3-dimensional reconstructions of tumors: the complex image-reconstruction technology essentially empowers surgeons with X-ray views, without any radiation exposure, in real time. The True 3D medical visualization system of EchoPixel allows doctors to interact with patient-specific organs and tissue in an open 3D space, enabling them to immediately identify, evaluate, and dissect clinically significant structures.

Google Glass - Future of Surgery

Grossmann also told me that HoloAnatomy, which uses HoloLens to display anatomical models built from real data, is a wonderful and rather intuitive use of AR with obvious advantages over traditional methods.

3) Surgical robotics

Surgical robots are the prodigies of surgery. According to market analysis, the industry is about to boom. By 2020, surgical robotics sales are expected to almost double to $6.4 billion.

The most commonly known surgical robot is the da Vinci Surgical System, and believe it or not, it was introduced more than 15 years ago! It features a magnified 3D high-definition vision system and tiny wristed instruments that bend and rotate far more than the human hand can. With the da Vinci Surgical System, surgeons operate through just a few small incisions. The surgeon is 100% in control of the robotic system at all times, and he or she is able to carry out more precise operations than previously thought possible.

Recently, Google announced that it has started working with the pharma giant Johnson & Johnson on a new surgical robot system. I’m excited to see the outcome of the cooperation soon. They are not the only competitors, though. With their AXSIS robot, Cambridge Consultants aim to overcome the limitations of the da Vinci, such as its large size and inability to work with highly detailed and fragile tissues. Their robot instead relies on flexible components and tiny, worm-like arms. The developers believe it could later be used in ophthalmology, e.g. in cataract surgery.

Da-Vinci-Surgical-Robot - Future of Surgery

4) Minimally Invasive Surgery

Throughout the history of surgery, the ultimate goal of medical professionals has been to peek into the workings of the human body and to improve it with the smallest possible incisions and excisions. By the end of the 19th century, after Edison produced his light bulb, a Glasgow physician built a tiny bulb into a tube to be able to look around inside the body.

But it wasn’t until the second half of the 20th century that fiber-optic threads brought brighter light into the caverns of the body, and later, tiny computer-chip cameras started sending images back out. At last, doctors could not only see clearly inside a person’s body without making a long incision, but could use tiny tools to perform surgery inside. One of the techniques revolutionizing surgery was the introduction of laparoscopes.

The medical device start-up Levita aims to refine such procedures with its Magnetic Surgical System, an innovative technological platform utilizing magnetic retraction, designed to grasp and retract the gallbladder during laparoscopic surgery.

The FlexDex company introduced a new control mechanism for minimally invasive tools. It transmits movement from the wrist of the surgeon to the joint of the instrument entirely mechanically and it costs significantly less than surgical robots.

5) 3D Printing and simulations in pre-operative planning and education

Complicated and risky surgeries lasting hours need a lot of careful planning. Existing technologies such as 3D printing or various simulation techniques help a lot in reforming medical practice and learning methods as well as modelling and planning successfully complex surgical procedures.

In March 2016 in China, a team of experienced doctors built a full-sized model of the heart of a small baby born with a heart defect. Their aim was to pre-plan an extremely complicated surgery on the tiny heart; it was the first time this method had been used in China. The team of medical professionals successfully completed the surgery, and the little boy survived with little to no lasting ill effects.

In December 2016, doctors in the United Arab Emirates used 3D printing technology for the first time to help safely remove a cancerous tumour from a 42-year-old woman’s kidney. With the help of the personalized, 3D-printed aid, the team was able to plan the operation carefully and to shorten the procedure by an entire hour!

The technology started to get a foothold also in medical education. To provide surgeons and students with an alternative to a living human being to work on, a pair of physicians at the University of Rochester Medical Center (URMC) have developed a way to use 3D printing to create artificial organs. They look, feel, and even bleed like the real thing. Truly amazing!

To widen the platform of available methods for effectively learning the tricks of the trade, Touch Surgery developed a simulation system. It is basically an app for practicing procedures ranging from heart surgery to carpal tunnel operations.

6) Live diagnostics

The intelligent surgical knife (iKnife) was developed by Zoltan Takats of Imperial College London. It works by using an old technology in which an electrical current heats tissue to make incisions with minimal blood loss. With the iKnife, a mass spectrometer analyzes the vaporized smoke to detect the chemicals in the biological sample, which means it can identify whether the tissue is malignant in real time.

The technology is especially useful in detecting cancer in its early stages and thus shifting cancer treatment towards prevention.

Surgical iKnife - Future of Surgery

7) Artificial Intelligence will team up with surgical robotics

Catherine Mohr, vice president of strategy at Intuitive Surgical and an expert in the field of surgical robotics, believes surgery will be taken to the next level by the combination of surgical robotics and artificial intelligence. She is thrilled to see IBM Watson, Google DeepMind’s AlphaGo, and machine learning algorithms take on a role in surgical procedures. She envisions a tight partnership between humans and machines, with one making up for the weaknesses of the other.

In my view, AI such as the deep learning system Enlitic will soon be able to diagnose diseases and abnormalities, and will give surgeons guidance in their sometimes extremely difficult surgical decisions.

Artificial Intelligence in Surgery - Future of Surgery

I agree with Dr. Mohr inasmuch as I truly believe the future of surgery, just like the future of medicine, means close cooperation between humans and medical technology. I also cannot stress enough that robots and other products of rapid technological development will not replace humans. The two will complement each other’s work more successfully than we have ever seen or dreamed of before. But only if we learn how.


Google Brain chief: Deep learning takes at least 100,000 examples

Jeff Dean, a senior fellow at Google and head of the Google Brain project, speaks at VB Summit 2017 in Berkeley, California on October 23, 2017

While the current class of deep learning techniques is helping fuel the AI wave, one of the frequently cited drawbacks is that they require a lot of data to work. But how much is enough data?

“I would say pretty much any business that has tens or hundreds of thousands of customer interactions has enough scale to start thinking about using these sorts of things,” Jeff Dean, a senior fellow at Google, said in an onstage interview at the VB Summit in Berkeley, California. “If you only have 10 examples of something, it’s going to be hard to make deep learning work. If you have 100,000 things you care about, records or whatever, that’s the kind of scale where you should really start thinking about these kinds of techniques.”

Dean knows a thing or two about deep learning — he’s head of the Google Brain team, a group of researchers focused on a wide-ranging set of problems in computer science and artificial intelligence. He’s been working with neural networks since the 1990s when he wrote his undergraduate thesis on artificial neural networks. In his view, machine learning techniques have an opportunity to impact virtually every industry, though the rate at which that happens will depend on the specific industry.

There are still plenty of hurdles that humans need to tackle before they can take the data they have and turn it into machine intelligence. In order to be useful for machine learning, data needs to be processed, which can take time and require (at least at first) significant human intervention. “There’s a lot of work in machine learning systems that are not actually machine learning,” Dean said. “And so you still have to do a lot of that. You have to get the data together, maybe you have to have humans label examples, and then you have to write some data processing pipeline to produce the dataset that you will then do machine learning on.”
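A minimal sketch of that non-ML plumbing might look like the following: join raw records with human-provided labels, clean out unusable rows, and split into training and test sets, all before any model sees the data. The record and label formats here are invented for illustration.

```python
# A toy data-preparation pipeline: label, clean, shuffle, split.
# Record and label formats are invented for illustration only.
import random

def build_dataset(raw_records, human_labels, test_fraction=0.2, seed=0):
    # 1. Attach human-provided labels, dropping unlabeled records.
    labeled = [(rec, human_labels[rec["id"]])
               for rec in raw_records if rec["id"] in human_labels]
    # 2. Clean: drop records missing the field we train on.
    cleaned = [(rec, y) for rec, y in labeled if rec.get("text")]
    # 3. Shuffle reproducibly and split into train/test sets.
    rng = random.Random(seed)
    rng.shuffle(cleaned)
    cut = int(len(cleaned) * (1 - test_fraction))
    return cleaned[:cut], cleaned[cut:]

raw = [{"id": i, "text": f"record {i}"} for i in range(100)]
labels = {i: i % 2 for i in range(90)}   # humans labeled only 90 of 100
train, test = build_dataset(raw, labels)
print(len(train), len(test))  # 72 18
```

In practice each of these steps, especially human labeling, consumes far more effort than the model training that follows, which is exactly Dean's point.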

In order to simplify the process of creating machine learning systems, Google is turning to machine learning itself to determine the right system for solving a particular problem. It’s a tough task that isn’t anywhere near complete, but Dean said the team’s early work is promising. One encouraging example of how this might work comes from a self-trained network that posted state-of-the-art results identifying images from the ImageNet dataset earlier this year. And Google-owned DeepMind just published a paper about a version of AlphaGo that appeared to have mastered the game solely by playing against itself.

DeepMind, a division of Google focused on advancing artificial intelligence research, recently unveiled a new version of its AlphaGo program that learned the game solely by playing itself. Called AlphaGo Zero, the system works by learning from the outcomes of its self-play games, using a machine learning technique called reinforcement learning. As Zero was continuously trained, the system began learning advanced concepts in the game of Go on its own and picking out certain advantageous positions and sequences.

After three days of training, the system was able to beat AlphaGo Lee, DeepMind’s software that defeated top Korean player Lee Sedol last year, 100 games to zero. After roughly 40 days of training — which translates to 29 million self-play games — AlphaGo Zero was able to defeat AlphaGo Master (which defeated world champion Ke Jie earlier this year) 89 games to 11. The results show that there’s still plenty more to be learned in the field of artificial intelligence when it comes to the effectiveness of different techniques. AlphaGo Master was built using many of the same approaches as AlphaGo Zero, but it began training on human data before moving on to self-play games.
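The core idea of learning purely from self-play outcomes can be shown on a much smaller game. The sketch below applies it to a toy version of Nim (take one or two stones; whoever takes the last stone wins); it illustrates the principle only and is nothing like the scale or method of AlphaGo Zero.

```python
# A tabular learner teaches itself Nim purely from self-play outcomes:
# play many games against yourself, then reinforce the winner's moves.
import random

WINS = {}  # (pile_size, move) -> [wins_for_mover, times_played]

def choose(pile, rng, explore=0.3):
    moves = [m for m in (1, 2) if m <= pile]
    if rng.random() < explore:           # explore: try a random move
        return rng.choice(moves)
    # exploit: pick the move with the best observed win rate (0.5 prior)
    return max(moves, key=lambda m: WINS.get((pile, m), [1, 2])[0]
                                    / WINS.get((pile, m), [1, 2])[1])

def self_play(pile, rng):
    history, player = [], 0
    while pile > 0:
        move = choose(pile, rng)
        history.append((player, pile, move))
        pile -= move
        player ^= 1
    winner = history[-1][0]              # last mover took the last stone
    for who, p, m in history:            # credit moves made by the winner
        w, n = WINS.setdefault((p, m), [0, 0])
        WINS[(p, m)] = [w + (who == winner), n + 1]

rng = random.Random(42)
for _ in range(20_000):
    self_play(10, rng)

def rate(pile, move):
    w, n = WINS[(pile, move)]
    return w / n

# With two stones left, taking both always wins; taking one always loses.
print(rate(2, 2), rate(2, 1))  # 1.0 0.0
```

AlphaGo Zero replaces this win-rate table with a deep neural network and the random move choice with a guided tree search, but the feedback loop, play yourself and reinforce what won, is the same.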

One interesting note is that while AlphaGo Zero picked up on several key concepts during its weeks of training, the system learned differently than many human players who approach the game of Go. Sequences of “laddered” stones, played in a staircase-like pattern across the board, are one of the first things that humans learn when practising the game. Zero only understood that concept later in its training, according to the paper DeepMind published in the journal Nature.

In addition, AlphaGo Zero is far more power-efficient than many of its predecessors. AlphaGo Lee required the use of several machines and 48 of Google’s Tensor Processing Unit machine learning accelerator chips. AlphaGo Fan, an earlier version of the system, required 176 GPUs. AlphaGo Zero and AlphaGo Master each require only a single machine with four TPUs. What remains to be seen is how well these techniques and concepts generalize to problems outside the realm of Go. While AlphaGo’s effectiveness in human games and against itself has shown that there’s room for AI to surpass our capacity at tasks we think are far too difficult, the robot overlords aren’t here yet.