Using artificial intelligence to improve early breast cancer detection

The model developed at MIT’s Computer Science and Artificial Intelligence Laboratory could reduce false positives and unnecessary surgeries.

Pictured, left to right, are Manisha Bahl, director of the Massachusetts General Hospital Breast Imaging Fellowship Program; MIT Professor Regina Barzilay (center); and Constance Lehman, a professor at Harvard Medical School and chief of the Breast Imaging Division at MGH’s Department of Radiology. Image: Jason Dorfman/CSAIL

Every year 40,000 women die from breast cancer in the U.S. alone. When cancers are found early, they can often be cured. Mammograms are the best test available, but they’re still imperfect and often yield false positives that can lead to unnecessary biopsies and surgeries. One common cause of false positives is so-called “high-risk” lesions that appear suspicious on mammograms and have abnormal cells when tested by needle biopsy. In these cases, the patient typically undergoes surgery to have the lesion removed; however, the lesions turn out to be benign at surgery 90 percent of the time. This means that every year thousands of women go through painful, expensive, scar-inducing surgeries that weren’t even necessary.

How, then, can unnecessary surgeries be eliminated while still maintaining the important role of mammography in cancer detection? Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Massachusetts General Hospital, and Harvard Medical School believe that the answer is to turn to artificial intelligence (AI).

As a first project to apply AI to improving detection and diagnosis, the teams collaborated to develop an AI system that uses machine learning to predict if a high-risk lesion identified on needle biopsy after a mammogram will upgrade to cancer at surgery. When tested on 335 high-risk lesions, the model correctly diagnosed 97 percent of the breast cancers as malignant and reduced the number of benign surgeries by more than 30 percent compared to existing approaches.

“Because diagnostic tools are so inexact, there is an understandable tendency for doctors to over-screen for breast cancer,” says Regina Barzilay, MIT’s Delta Electronics Professor of Electrical Engineering and Computer Science and a breast cancer survivor herself. “When there’s this much uncertainty in data, machine learning is exactly the tool that we need to improve detection and prevent over-treatment.” Trained on information about more than 600 existing high-risk lesions, the model looks for patterns among many different data elements that include demographics, family history, past biopsies, and pathology reports.

“To our knowledge, this is the first study to apply machine learning to the task of distinguishing high-risk lesions that need surgery from those that don’t,” says collaborator Constance Lehman, a professor at Harvard Medical School and chief of the Breast Imaging Division at MGH’s Department of Radiology. “We believe this could support women to make more informed decisions about their treatment, and that we could provide more targeted approaches to health care in general.” A recent MacArthur “genius grant” recipient, Barzilay is a co-author of a new journal article describing the results, co-written with Lehman and Manisha Bahl of MGH, as well as CSAIL graduate students Nicholas Locascio, Adam Yedidia, and Lili Yu. The article was published today in the medical journal Radiology.

How it works

When a mammogram detects a suspicious lesion, a needle biopsy is performed to determine if it is cancer. Roughly 70 percent of the lesions are benign, 20 percent are malignant, and 10 percent are high-risk lesions. Doctors manage high-risk lesions in different ways. Some do surgery in all cases, while others perform surgery only for lesions that have higher cancer rates, such as “atypical ductal hyperplasia” (ADH) or a “lobular carcinoma in situ” (LCIS).

The first approach requires that the patient undergo a painful, time-consuming, and expensive surgery that is usually unnecessary; the second approach is imprecise and could result in missing cancers in high-risk lesions other than ADH and LCIS. “The vast majority of patients with high-risk lesions do not have cancer, and we’re trying to find the few that do,” says Bahl, a radiologist in MGH’s Department of Radiology. “In a scenario like this there’s always a risk that when you try to increase the number of cancers you can identify, you’ll also increase the number of false positives you find.”

Using a method known as a “random-forest classifier,” the team’s model resulted in fewer unnecessary surgeries compared to the strategy of always doing surgery, while also being able to diagnose more cancerous lesions than the strategy of only doing surgery on traditional “high-risk lesions.” (Specifically, the new model diagnosed 97 percent of cancers compared to 79 percent.) “This work highlights an example of using cutting-edge machine learning technology to avoid unnecessary surgery,” says Marc Kohli, director of clinical informatics in the Department of Radiology and Biomedical Imaging at the University of California at San Francisco. “This is the first step toward the medical community embracing machine learning as a way to identify patterns and trends that are otherwise invisible to humans.”
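
To make the approach concrete, here is a minimal sketch of the same idea: a random-forest classifier trained on tabular lesion features whose predicted risk decides which lesions get referred for surgery. The feature names, synthetic data, and 0.2 referral threshold are assumptions for illustration, not the study’s actual inputs, code, or operating point.

```python
# Minimal sketch (not the study's code): a random-forest model that predicts
# whether a high-risk lesion will be upgraded to cancer at surgery, trained on
# synthetic tabular features. Feature names, data, and the referral threshold
# are illustrative assumptions.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

np.random.seed(0)
n = 600  # roughly the number of lesions the real model was trained on

# Hypothetical per-lesion records: demographics, history, and biopsy findings.
df = pd.DataFrame({
    "age": np.random.randint(35, 80, n),
    "family_history": np.random.randint(0, 2, n),
    "prior_biopsies": np.random.randint(0, 4, n),
    "lesion_size_mm": np.random.uniform(2.0, 30.0, n),
    "adh_on_biopsy": np.random.randint(0, 2, n),
    "upgraded_to_cancer": np.random.randint(0, 2, n),  # outcome found at surgery
})
features = ["age", "family_history", "prior_biopsies", "lesion_size_mm", "adh_on_biopsy"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["upgraded_to_cancer"], test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

# Rank lesions by predicted upgrade risk; only those above a chosen operating
# point would be referred for surgery, the rest followed with imaging.
risk = model.predict_proba(X_test)[:, 1]
refer_to_surgery = risk >= 0.2  # threshold would be tuned to keep sensitivity high
print(f"fraction referred for surgery: {refer_to_surgery.mean():.2f}")
```

In the real model, the operating point would be chosen on held-out cases so that sensitivity stays near the reported 97 percent while benign surgeries are reduced.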

Lehman says that MGH radiologists will begin incorporating the model into their clinical practice over the next year. “In the past, we might have recommended that all high-risk lesions be surgically excised,” Lehman says. “But now, if the model determines that the lesion has a very low chance of being cancerous in a specific patient, we can have a more informed discussion with our patient about her options. It may be reasonable for some patients to have their lesions followed with imaging rather than surgically excised.”

The team says that they are still working to further hone the model. “In future work, we hope to incorporate the actual images from the mammograms and images of the pathology slides, as well as more extensive patient information from medical records,” says Bahl. Moving forward, the model could also easily be tweaked to be applied to other kinds of cancer and even other diseases entirely.

“A model like this will work anytime you have lots of different factors that correlate with a specific outcome,” says Barzilay. “It hopefully will enable us to start to go beyond a one-size-fits-all approach to medical diagnosis.”

-Adam Conner-Simons | CSAIL

 

AI Can Diagnose Heart Disease and Lung Cancer More Accurately Than Doctors

A pair of recently developed AI systems can diagnose lung cancer and heart disease more accurately than human doctors. These AIs have the potential to save billions of dollars and countless lives if widely adopted.

IMPROVED DIAGNOSIS

Artificial intelligence (AI) has already proven useful in the healthcare industry, and now, two newly developed AI diagnostics systems could change how doctors diagnose heart disease and lung cancer.

Cardiologists are very good at their jobs, but they’re not infallible. To determine whether something is wrong with a patient’s heart, a cardiologist assesses the timing of the heartbeat in scans. According to a report by BBC News, these diagnoses of heart problems are correct about 80 percent of the time; the remaining 20 percent shows the process has room for improvement.

To that end, a team of researchers from the John Radcliffe Hospital in Oxford, England, developed Ultromics, an AI diagnostics system that is more accurate than doctors at diagnosing heart disease.

Ultromics was trained using the heart scans of 1,000 patients treated by the company’s chief medical officer, Paul Leeson, as well as information about whether or not those patients went on to suffer heart problems. The system has been tested in multiple clinical trials, and Leeson told BBC News it has greatly outperformed human cardiologists. The specific results of the Ultromics trials are expected to be published in a journal later this year.
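
The article describes only the training recipe: scans paired with labels recording whether each patient later developed heart problems. As a rough illustration of that recipe (Ultromics’ actual pipeline and features are not described), the sketch below fits a simple classifier on hypothetical scan-derived measurements such as heartbeat timing.

```python
# Rough illustration of the training recipe described above (not Ultromics'
# actual pipeline): scan-derived measurements paired with a follow-up label.
# The features and data here are invented for demonstration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_patients = 1000  # the article reports training on scans from 1,000 patients

# Hypothetical measurements extracted from each heart scan.
X = np.column_stack([
    rng.normal(0.8, 0.1, n_patients),   # heartbeat-timing ratio
    rng.normal(55.0, 8.0, n_patients),  # estimated ejection fraction (%)
    rng.normal(1.0, 0.2, n_patients),   # wall-motion score
])
# Label: did the patient go on to suffer heart problems during follow-up?
y = rng.integers(0, 2, n_patients)

clf = LogisticRegression(max_iter=1000)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```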

Meanwhile, startup Optellum is working to commercialize an AI system that diagnoses lung cancer by analyzing clumps of cells found in scans. That system has also been tested in various trials, and the company’s chief science and technology officer, Timor Kadir, told BBC News that the results suggest it could diagnose as many as 4,000 lung cancer patients per year earlier than doctors can.

SAVING LIVES AND MONEY

Not only could these AI diagnostics systems save lives by providing earlier diagnoses of heart problems and lung cancer, but they could also save money that could then be put toward anything from hiring more doctors, nurses, and hospital staff to buying new equipment.

Kadir told BBC News that Optellum could cut costs by £10bn ($13.5 billion) if both the United States and Europe decided to utilize it. Meanwhile, U.K. healthcare tsar Sir John Bell told BBC News that AI could have a huge positive impact on the National Health Service’s (NHS) bottom line.

“There is about £2.2bn ($2.97 billion) spent on pathology services in the NHS,” said Bell. “You may be able to reduce that by 50 percent. AI may be the thing that saves the NHS.”

Based on the abilities of today’s systems to not only best their human counterparts but also save institutions money, some are concerned that AI could replace doctors altogether. However, given the wide range of tasks a doctor must be capable of handling, it’s more likely that AIs will play a supporting role in the healthcare industry, at least in the near future, serving as a powerful tool that will help human workers do their jobs more efficiently and effectively.

-References: BBC News, Ultromics

Reversing Paralysis

Scientists are making remarkable progress at using brain implants to restore the freedom of movement that spinal cord injuries take away.

The French neuroscientist Grégoire Courtine was watching a macaque monkey as it hunched aggressively at one end of a treadmill. His team had used a blade to slice halfway through the animal’s spinal cord, paralyzing its right leg. Now he wanted to prove he could get the monkey walking again. To do it, he and colleagues had installed a recording device beneath its skull, touching its motor cortex, and sutured a pad of flexible electrodes around the animal’s spinal cord, below the injury. A wireless connection joined the two electronic devices.

The result: a system that read the monkey’s intention to move and then transmitted it immediately in the form of bursts of electrical stimulation to its spine. Soon enough, the monkey’s right leg began to move. Extend and flex. Extend and flex. It hobbled forward. “The monkey was thinking, and then boom, it was walking,” recalls an exultant Courtine, a professor with Switzerland’s École Polytechnique Fédérale de Lausanne.

In recent years, lab animals and a few people have controlled computer cursors or robotic arms with their thoughts, thanks to a brain implant wired to machines. Now researchers are taking a significant next step toward reversing paralysis once and for all. They are wirelessly connecting the brain-reading technology directly to electrical stimulators on the body, creating what Courtine calls a “neural bypass” so that people’s thoughts can again move their limbs.
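
In software terms, such a bypass is a real-time loop: decode recorded cortical activity into an intended movement, map it to a stimulation pattern, and deliver that pattern to electrodes below the injury. The sketch below only illustrates that loop; the interfaces, decoder, and update rate are invented stand-ins rather than details of Courtine’s system.

```python
# Illustrative decode-and-stimulate loop for a "neural bypass". The recording
# and stimulation interfaces and the linear decoder are hypothetical stand-ins,
# not the implants' actual APIs or Courtine's decoder.
import time
import numpy as np

N_CHANNELS = 96    # probes on the cortical recording array
N_STIM_SITES = 16  # electrodes on the spinal pad

# A trained decoder would map firing rates to an intended movement; here it is
# a fixed random linear map purely for illustration.
rng = np.random.default_rng(0)
decoder_weights = rng.normal(0.0, 0.01, (N_STIM_SITES, N_CHANNELS))

def read_spike_counts():
    """Stand-in for reading binned spike counts from the cortical implant."""
    return rng.poisson(5, N_CHANNELS)

def send_stimulation(amplitudes):
    """Stand-in for wirelessly sending stimulation amplitudes to the spine."""
    print("stim:", np.round(amplitudes, 2))

def bypass_loop(duration_s=0.1, dt=0.02):
    """Decode intention from cortex and emit stimulation bursts in real time."""
    for _ in range(int(duration_s / dt)):
        rates = read_spike_counts() / dt          # spikes/s per channel
        command = decoder_weights @ rates          # decoded movement intention
        amplitudes = np.clip(command, 0.0, None)   # stimulation can't be negative
        send_stimulation(amplitudes)
        time.sleep(dt)

bypass_loop()
```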

At Case Western Reserve University, in Cleveland, a middle-aged quadriplegic—he can’t move anything but his head and shoulder—agreed to let doctors place two recording implants in his brain, of the same type Courtine used in the monkeys. Made of silicon, and smaller than a postage stamp, they bristle with a hundred hair-size metal probes that can “listen” as neurons fire off commands.

To complete the bypass, the Case team, led by Robert Kirsch and Bolu Ajiboye, also slid more than 16 fine electrodes into the muscles of the man’s arm and hand. In videos of the experiment, the volunteer can be seen slowly raising his arm with the help of a spring-loaded armrest, and willing his hand to open and close. He even raises a cup with a straw to his lips. Without the system, he can’t do any of that.

Just try sitting on your hands for a day. That will give you an idea of the shattering consequences of spinal cord injury. You can’t scratch your nose or tousle a child’s hair. “But if you have this,” says Courtine, reaching for a red espresso cup and raising it to his mouth with an actor’s exaggerated motion, “it changes your life.”

Grégoire Courtine holds the two main parts of the brain-spine interface. PHOTOGRAPH BY HILLARY SANCTUARY | EPFL

The Case results, pending publication in a medical journal, are a part of a broader effort to use implanted electronics to restore various senses and abilities. Besides treating paralysis, scientists hope to use so-called neural prosthetics to reverse blindness with chips placed in the eye, and maybe restore memories lost to Alzheimer’s disease (see “10 Breakthrough Technologies 2013: Memory Implants”).

And they know it could work. Consider cochlear implants, which use a microphone to relay signals directly to the auditory nerve, routing around non-working parts of the inner ear. Videos of wide-eyed deaf children hearing their mothers for the first time go viral on the Internet every month. More than 250,000 cases of deafness have been treated.

But it’s been harder to turn neural prosthetics into something that helps paralyzed people. A patient first used a brain probe to move a computer cursor across a screen back in 1998. That and several other spectacular brain-control feats haven’t had any broader practical use. The technology remains too radical and too complex to get out of the lab. “Twenty years of work and nothing in the clinic!” Courtine exclaims, brushing his hair back. “We keep pushing the limits, but it is an important question if this entire field will ever have a product.”

Courtine’s laboratory is located in a vertiginous glass-and-steel building in Geneva that also houses a $100 million center that the Swiss billionaire Hansjörg Wyss funded specifically to solve the remaining technical obstacles to neurotechnologies like the spinal cord bypass. It’s hiring experts from medical-device makers and Swiss watch companies and has outfitted clean rooms where gold wires are printed onto rubbery electrodes that can stretch as our bodies do.

A close-up of a brain-reading chip, bristling with electrodes.

Flexible electrodes developed to stimulate the spinal cord.

The head of the center is John Donoghue, an American who led the early development of brain implants in the U.S. (see “Implanting Hope”) and who moved to Geneva two years ago. He is now trying to assemble in one place the enormous technical resources and talent—skilled neuroscientists, technologists, clinicians—needed to create commercially viable systems.

Among Donoghue’s top priorities is a “neurocomm,” an ultra-compact wireless device that can collect data from the brain at Internet speed. “A radio inside your head,” Donoghue calls it, and “the most sophisticated brain communicator in the world.” The matchbox-size prototypes are made of biocompatible titanium with a sapphire window. Courtine used an earlier, bulkier version in his monkey tests.

As complex as they are, and as slow as progress has been, neural bypasses are worth pursuing because patients desire them, Donoghue says. “Ask someone if they would like to move their own arm,” he says. “People would prefer to be restored to their everyday self. They want to be reanimated.”

A model of a wireless neuro communication device sits on a skull.