How Technology is Changing the Outlook of Breast Cancer

The outlook for finding and dealing with breast cancer has improved dramatically and will continue to improve exponentially

Mammomat Fusion, Siemens

The outlook for breast cancer is changing: death rates have been decreasing since 1989, thanks in part to treatment advances, earlier detection through screening and increased awareness. However, more than 40,000 women in the United States are still expected to die from the disease in 2015. According to the American Cancer Society, an estimated 231,840 new cases of invasive breast cancer are expected to be diagnosed in women in the United States this year, along with 60,290 new cases of carcinoma in situ (CIS). For men, more than 2,000 new cases of invasive breast cancer are projected.

A national poll conducted by the Truven Health Analytics-NPR Health Poll1 found that the majority of American women — 57 percent — are still in favor of an annual mammogram, despite the recommendation for biennial screening issued by the U.S. Preventive Services Task Force (USPSTF), which suggests that women get a mammogram every two years starting at age 50, provided they don’t have a family history of breast cancer or find a lump. Only 12 percent of respondents said they believe screening is necessary only every two years, as the USPSTF recommends.

“The proper evaluation and treatment of localized breast cancer is an area of active research. The U.S. Preventive Services Task Force recommends biennial breast cancer screening with mammography in all average risk women between the ages of 50-74,” said Michael Taylor, M.D., a chief medical officer at Truven Health Analytics. “With the Affordable Care Act mandating that insurers provide screening mammograms at no additional charge, patients have no reason not to be diligent about receiving a regular screening.”

On the Legislative Front

The American College of Radiology (ACR) and the Society of Breast Imaging (SBI) are encouraging congressional leaders to pass the Protecting Access to Lifesaving Screenings (PALS) Act (H.R. 3339).2 Passage of the act would ensure that women who want regular mammograms retain insurance coverage with no co-pay, averting thousands of unnecessary deaths.

Both organizations support the PALS Act3 as a way to delay implementation of draft breast cancer screening recommendations from the USPSTF for two years. “The two-year delay allows consideration of recent large studies that showed mammography to be far more effective than the old studies the USPSTF analyzed. It also provides time for Congress to enact separate legislation that mandates a badly needed overhaul of the closed and outdated USPSTF process,” said Debra Monticciolo, M.D., FACR, chair of the American College of Radiology Breast Imaging Commission.

The Affordable Care Act (ACA) requires private insurers to cover, without patient cost-sharing, exams given a grade of “B” or higher by the USPSTF. The Task Force gave routine screening of women ages 40-49 a “C” grade and gave a “B” grade only to biennial screening for women ages 50-74. This means that women ages 40-49 who choose routine screening, and those ages 50-74 who want annual screening, would not be guaranteed coverage, which could have a particular impact in underserved and rural areas.

“The closed USPSTF process does not meet Institute of Medicine (IOM) standards for ‘trustworthy’ guidelines creation and needs updating. These USPSTF mammography recommendations are suspect until ACR and SBI recognized experts are included in a meaningful way in their creation,” said Elizabeth A. Morris, M.D., FACR, president of the Society of Breast Imaging.

The Breast Density Movement

Louisiana recently became the 24th state to enact a breast density reporting law. Additional screening tests besides mammography are reported to increase cancer detection in women with dense breast tissue by up to 100 percent. The invasive cancers missed by mammography are typically small, node-negative and early-stage.

“Universal density reporting will prevent later-stage cancers and give all women access to an early diagnosis — when most treatable and with better survival outcomes. Data show a statistically significant increase in the detection of small, early and invasive cancers invisible by mammogram,” according to Nancy M. Cappello, Ph.D., executive director and founder of Are You Dense Advocacy Inc.

States with breast density inform laws in order of enactment include Connecticut, Texas, Virginia, New York, California, Hawaii, Maryland, Tennessee, Alabama, Nevada, Oregon, North Carolina, Pennsylvania, New Jersey, Arizona, Minnesota, Rhode Island, Massachusetts, Missouri, Ohio, Michigan, North Dakota, Delaware and Louisiana.

Advances in Technology

Of course, detection is only possible through screening, and the past year brought many technological advances that helped push breast screening, and breast density in particular, to the forefront.

The U.S. Food and Drug Administration (FDA) recently approved two systems from Siemens Healthcare. Mammomat Fusion is the latest addition to the Siemens family of full-field digital mammography systems that offers premium product features to address the specific needs of volume screening centres and small to medium-sized hospitals. The system features a new generation caesium-iodide detector — an innovative, layered configuration of the photodiodes within the detector that enables more efficient utilization of the radiation dose. The result is a high image quality at a patient dose that is at or below the range of other full-field digital mammography systems, with an even lower dose delivered in cases where the patient’s breast thickness exceeds 50 mm. The system’s large image matrix of 23 x 30 cm makes it convenient for screening various breast sizes.

The FDA also approved Siemens’ 3-D mammography imaging system with breast tomosynthesis option, marking the company’s tomosynthesis debut in the U.S. market. Siemens’ Mammomat Inspiration with Tomosynthesis Option is a breast tomosynthesis add-on for the Siemens Mammomat Inspiration digital mammography platform. Its breast tomosynthesis algorithm reconstructs multiple 2-D images of the breast into an approximation of a 3-D image to enable detection of tumours that are hidden by overlapping breast tissue. This helps to provide a more accurate diagnosis than standard 2-D digital mammography alone and helps to reduce the number of false-positive findings. In tomosynthesis mode, the X-ray tube of the Mammomat Inspiration rotates in a circular motion around the breast, acquiring an image every two degrees while moving through an angular range of 50 degrees. The resulting 25 projections are reconstructed as three-dimensional digital breast tomosynthesis (DBT) images.
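The sweep described above is simple enough to sketch in a few lines of Python. The parameter names are illustrative, not Siemens’, and the sweep is assumed to be centered on the breast:

```python
# Toy sketch of the tomosynthesis acquisition geometry described above:
# the tube sweeps an angular range, exposing one projection every few degrees.

def projection_angles(angular_range_deg=50.0, step_deg=2.0):
    """Return the tube angles (degrees, centered on 0) at which
    projections are acquired during one tomosynthesis sweep."""
    n = int(angular_range_deg / step_deg)   # 50 / 2 -> 25 projections
    start = -angular_range_deg / 2.0        # sweep centered on the breast
    return [start + step_deg * (i + 0.5) for i in range(n)]

angles = projection_angles()
print(len(angles))              # 25 projections, matching the article
print(angles[0], angles[-1])    # -24.0 24.0
```

Each of those 25 projection images then feeds the reconstruction algorithm that builds the quasi-3-D volume.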

iCAD Inc. launched the iReveal breast density module as the latest addition to its PowerLook Advanced Mammography Platform (AMP). The software is designed to deliver automated, rapid and reproducible assessments of fibroglandular breast density to help identify patients for whom digital mammography may be less sensitive due to dense breast tissue. It assesses breast density according to the categories established by the American College of Radiology’s Breast Imaging Reporting and Data System (BI-RADS), automating the same analytical approach used by many experienced radiologists. In many cases, patients with dense breast tissue may be recommended for additional screening exams, including breast tomosynthesis.

Fujifilm Medical Systems U.S.A. Inc. recently submitted to the FDA the first module of its premarket approval (PMA) application for DBT, which will be offered as an optional upgrade for the Aspire Cristalle mammography system. Fujifilm plans to file the remaining modules of the DBT PMA within the coming year. The optional DBT upgrade for the Aspire Cristalle system, known as Amulet Innovality outside of the United States, has been available since May 2013 in Europe, Asia and Latin America. Aspire Cristalle features Fujifilm’s hexagonal close pattern (HCP) detector pixel design, engineered for higher acquisition efficiency and enhanced detail, for improved low-dose performance compared with a conventional square pixel design. The result is sharper images at a gentler dose to the patient.

Eizo Inc. has received FDA 510(k) clearance for breast tomosynthesis on its 5-megapixel monochrome medical monitor, the RadiForce GX540. The FDA 510(k) clearance includes tomosynthesis, mammography and general radiography. The monitor’s high resolution makes it ideal for viewing the fine details in breast images. To detect the smallest structures, the monitor offers a high contrast ratio of 1,200:1. The deeper black levels distinguish similar shades of grey for sharper monochrome image reproduction.

Novarad has developed its own specialty viewer with advanced hanging protocols and workflows designed especially for digital mammography. While Novarad customers have been using a third-party mammography viewer for years, the new viewer is completely native to the NovaPACS enterprise viewer. The new NovaMG offers radiologists the option to use their picture archiving and communication system (PACS) workstation to read solely mammography and tomosynthesis studies, or the convenience of reading for all patients and all modalities from the same workstation and user interface.

The U.S. Patent and Trademark Office has issued a patent for the technology incorporated into Kubtec’s Mozart System with TomoSpec Technology. The system allows surgeons to bring 3-D tomosynthesis imaging directly into the operating room for lumpectomy and partial mastectomy procedures, giving surgical teams the ability to make faster and more precise intraoperative determinations of the successful excision of tumour margins.

Community Outreach

Education and accessibility play a crucial role in the early detection and treatment of breast cancer. Many organizations have found unique ways to raise public awareness and expand access to screening.

For example, Loyola Medicine in Maywood, Ill., conducted a very successful See, Test & Treat Free Cancer Screening event funded by the College of American Pathologists (CAP) Foundation. The program is offered at select institutions around the United States; however, this was Loyola’s first.

The program helps vulnerable women in underserved communities improve their health. In September, more than 50 women in need received free cervical and breast cancer screenings with same-day results as part of Loyola’s program and health fair held at Loyola University Medical Center. This event was also supported by Hologic Inc., the Community Memorial Foundation, the Coleman Foundation and Quest Diagnostics.

“Many of the women we cared for had not had a Pap test or mammogram in more than 20 years, and thanks to the generous funding by the CAP Foundation, were able to receive care,” said Loyola Chair of Pathology Eva M. Wojcik, M.D., FCAP. “Seeing those signs of relief on the faces of women who were told their screenings came back normal was priceless.”

Future Outlook 

The outlook for finding and dealing with breast cancer has improved dramatically in recent years and will continue to improve exponentially, according to Jim Culley, Ph.D., senior director, corporate communications for Hologic.

Culley cites a study by Sarah Friedewald, M.D., published in the June 2014 issue of JAMA, that looked at nearly 500,000 screening exams, some using 2-D mammography alone and some adding tomosynthesis.4 The objective was to determine whether adding tomosynthesis to mammography improves screening performance, noting that single-institution studies had already shown that tomosynthesis increases cancer detection and reduces false-positive results.

Culley called out some of the study’s most significant findings:

• A 41 percent increase in the detection of invasive breast cancers (p<.001);

• A 29 percent increase in the detection of all breast cancers (p<.001);

• A 15 percent decrease in women recalled for additional imaging (p<.001);

• A 49 percent increase in positive predictive value (PPV) for a recall (p<.001). PPV for recall is a widely used measure of the proportion of women recalled from screening who are found to have breast cancer. The PPV for a recall increased from 4.3 to 6.4 percent;

• A 21 percent increase in PPV for biopsy (p<.001). PPV for biopsy is a widely used measure of the proportion of women having a breast biopsy who are found to have breast cancer. The PPV for a breast biopsy increased from 24.2 to 29.2 percent; and

• No significant change in the detection of ductal carcinoma in situ (DCIS), a non-invasive cancer that has not spread beyond the milk duct into the normal surrounding breast tissue.
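The PPV increases quoted above follow directly from the before-and-after percentages; a quick arithmetic check:

```python
# Sanity-check of the PPV percentage increases reported in the study findings.
def pct_increase(before, after):
    """Relative increase, rounded to the nearest whole percent."""
    return round((after - before) / before * 100)

# PPV for recall rose from 4.3% to 6.4%; PPV for biopsy from 24.2% to 29.2%.
print(pct_increase(4.3, 6.4))    # 49 percent, as reported
print(pct_increase(24.2, 29.2))  # 21 percent, as reported
```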

“In the U.S., almost no one is buying 2-D systems that can’t be upgraded to 3-D anymore,” said Culley. “And if they can afford it, most new purchases are for 3-D mammography.”

The study concluded that adding tomosynthesis to digital mammography was associated with a lower recall rate and a higher cancer detection rate, and that further studies are needed to assess the relationship to clinical outcomes. Culley argues that it’s best to compare apples to apples, and that the USPSTF guidelines are out of step with today’s technology. “There has been so much improvement that you can’t help but question the USPSTF and their new guidelines that look at long series of data to justify their recommendations,” he said. “These organizations haven’t really adjusted for the fact that technology is changing dramatically. In breast cancer screening no one does film mammography anymore, everyone does digital, and since 2011, most are trying to use 3-D mammography.” He gives the analogy of the advent of the digital camera. “Just think about the picture clarity of your phone, which probably had none, or digital camera back in 2000, and how much better images are now. It’s unreasonable to consider mammography today as it was then.”

References

1. Accessed Sept. 21, 2015.
2. Accessed Sept. 21, 2015.
3. Accessed Sept. 21, 2015.
4. Accessed Sept. 21, 2015.

Computers Match Accuracy of Radiologists in Screening for Breast Cancer Risk

Commercial software performs as well as doctors in measuring breast density and assessing breast cancer risk. - Jeremy Hsu

Women with dense breasts have a greater risk of undergoing mammogram screenings that miss signs of breast cancer. That’s why 30 U.S. states legally require that women receive some notification about their breast density. A new study suggests that commercial software for automatically classifying breast density can perform on par with human radiologists: a finding that could encourage wider use of automated breast density assessments. Increased breast density represents “one of the strongest risk factors for breast cancer,” because it makes it more difficult to detect the disease in its early stages, explained Karla Kerlikowske, a physician and breast cancer researcher at the University of California, San Francisco. Dense breast tissue may also carry a higher risk of developing breast cancer.

Breast density refers to the proportion of “dense” tissue, containing milk ducts and glands, relative to “non-dense” fatty tissue within the breast. For women with dense breasts, physicians may recommend supplemental screening or changes to screening frequency in order to detect breast cancer earlier. The new study suggests automated assessments are just as accurate as doctors in determining breast density from a mammogram, and may have other advantages as well. In addition to comparing assessments of breast density, the study, funded by the National Cancer Institute, also compared the automated and human breast density assessments on two measures related to their ability to predict a woman’s risk of developing breast cancer.

First, the study looked at how well the software and clinical assessments by radiologists predicted breast cancer risk through mammography screening. Second, it considered how well they predicted the risk of “interval invasive cancer” that is not caught by mammography screening and is instead diagnosed through direct clinical examination. In both cases, the software assessments compared well with radiologists’ assessments in predicting those cancer risks. “Automated density measures are more reproducible across radiologists and facilities,” said Kerlikowske. “Using automated measures will allow accurate identification of women who have dense breasts and are at high risk of an interval cancer so these women can have appropriate discussions of whether supplemental imaging is right for them.”

To compare automated and human assessments, Kerlikowske and her colleagues combined data from two case-control studies based on the breast imaging databases of the San Francisco Mammography Registry and the Mayo Clinic. Their results are published in the 30 April 2018 online issue of the journal Annals of Internal Medicine. Radiologists estimate the percentage of dense breast tissue based on a subjective visual examination of mammogram images, categorizing the tissue under four classes defined by the Breast Imaging Reporting and Data System (BI-RADS): (a) almost entirely fatty, (b) scattered fibroglandular densities, (c) heterogeneously dense, and (d) extremely dense.
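For illustration, the four BI-RADS composition classes can be sketched as a threshold mapping over an estimated percent density. The quartile cutoffs below come from the older fourth-edition BI-RADS; the current edition relies on the radiologist’s overall visual impression rather than fixed thresholds:

```python
# Illustrative mapping from an estimated percent density to the four
# BI-RADS composition categories listed above (4th-edition quartile
# cutoffs, used here only as an approximation).

def birads_category(percent_dense):
    if percent_dense < 25:
        return "a: almost entirely fatty"
    elif percent_dense < 50:
        return "b: scattered fibroglandular densities"
    elif percent_dense < 75:
        return "c: heterogeneously dense"
    else:
        return "d: extremely dense"

print(birads_category(10))   # a: almost entirely fatty
print(birads_category(60))   # c: heterogeneously dense
```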

But subjective assessments by radiologists can lead to inconsistencies. Previous research has found that 10 percent of women received a different breast density assessment when examined by the same radiologist in consecutive mammograms. That rises to 17 percent when their mammography images are examined by two different radiologists. Commercial software based on machine learning algorithms offers the promise of providing a more reliable and consistent measure of breast density that is not dependent upon an individual radiologist’s judgment.

One example is a program called Volpara that can estimate dense or non-dense tissue volume in each pixel of mammogram images. Its algorithms use that as the basis for calculating overall breast thickness and dense tissue volume in each breast. Volpara represents one of the more popular examples of such software, given that it currently covers about 3.2 percent of U.S. women and is undergoing trials in Europe. For that reason, the new breast density study focused on comparing Volpara’s performance with the performance of radiologists. But researchers may want to perform additional comparative studies for other software.
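Very loosely, the volumetric approach described for Volpara amounts to summing per-pixel thickness estimates into volumes and reporting a percent density. This is not Volpara’s proprietary algorithm; all names and numbers below are invented for illustration:

```python
# Hypothetical sketch of a volumetric density calculation in the spirit
# of what the article describes: estimate dense-tissue thickness at each
# pixel, sum to volumes, and report percent density.

def percent_density(dense_mm_per_pixel, breast_mm_per_pixel, pixel_area_mm2=0.01):
    """dense_mm_per_pixel: estimated dense-tissue thickness at each pixel (mm).
    breast_mm_per_pixel: total breast thickness at each pixel (mm)."""
    dense_volume = sum(d * pixel_area_mm2 for d in dense_mm_per_pixel)
    total_volume = sum(b * pixel_area_mm2 for b in breast_mm_per_pixel)
    return 100.0 * dense_volume / total_volume

# Three-pixel toy image: dense thickness vs. total breast thickness per pixel.
print(round(percent_density([5.0, 10.0, 15.0], [50.0, 50.0, 50.0]), 1))  # 20.0
```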

Another lingering question is how cost-effective the automated approach would be compared with human radiologists. That would require looking at the cost of a radiologist’s time to read and record breast density on mammograms for a year versus the cost of using software, Kerlikowske said. Anecdotally, one radiologist told her that he estimated the software might save him an hour a day. The questions of cost and overall effectiveness also appear in an editorial published in the same journal issue as the new study. Written by Joann Elmore, a physician at the University of California, Los Angeles, and Jill Wruble, a radiologist at the Yale School of Medicine in New Haven, the editorial points to the use of another technology, computer-aided detection (CAD) for highlighting abnormal areas in mammography images, as a cautionary tale for using automated tools in breast cancer screening.

Elmore and Wruble noted that CAD’s value has been questioned despite the fact that it has become widely used at a cost of more than $400 million per year. They cite studies suggesting that CAD’s use either provides no improvement in detecting breast cancer or performs with worse accuracy than the scrutiny of human radiologists. “Like CAD, automated density measurement has the potential to improve reproducibility and workflow efficiency,” Elmore and Wruble write. “However, we are in an era of ‘choosing wisely’ and seeking value in health care. Therefore, we must be cautious before implementing and paying for medical technology.” For now, Kerlikowske and her research team are running additional studies to explore how machine learning software—particularly software based on deep learning algorithms—can help physicians identify women who may need additional imaging beyond mammograms to reduce their breast cancer risk.

Skin Cancer Detection Using Artificial Intelligence

Hoping to deliver free skin cancer screening worldwide, two software developers used artificial intelligence to create an app to detect skin cancer in real-time.

After losing a close friend to cancer last year, Mike Borozdin wanted to do something to stem the tide of cancer-related deaths in the world. At TechCrunch Disrupt’s 2017 hackathon in San Francisco, he and fellow software engineer Peter Ma got inspired to create an artificial intelligence (AI)-powered app that could detect skin cancer by simply analyzing a photo of a mole.

Dubbed Doctor Hazel, the app sorts through a database of images, classifies the mole in question and instantly lets the person know if the mole looks benign or potentially cancerous. If the app raises a red flag, the person is advised to follow up with a dermatologist.
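The workflow described here (compare a new image against a labeled database and report the closest match) resembles nearest-neighbor classification. The sketch below uses invented numeric features; the real app works on learned image features, and the feature names and thresholds are purely illustrative:

```python
# Toy nearest-neighbor sketch of the workflow described above.
import math

# Hypothetical labeled database: (diameter_mm, border_irregularity) -> label.
database = [
    ((3.0, 0.1), "benign"),
    ((2.5, 0.2), "benign"),
    ((7.0, 0.9), "potentially cancerous"),
    ((8.5, 0.8), "potentially cancerous"),
]

def classify(mole):
    """Return the label of the database entry closest to the new mole."""
    _, label = min(database, key=lambda entry: math.dist(entry[0], mole))
    return label

print(classify((6.5, 0.85)))   # potentially cancerous -> advise a dermatologist
```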

“Early detection is critical to the management of skin cancers,” said Rajiv Bhatnagar, a senior staff dermatologist and geographic medical director at Palo Alto Medical Foundation. “Even the most serious skin cancers are often cured if detected early.” Skin cancer is the most common cancer in the U.S., according to the Skin Cancer Foundation. By the age of 65, half of the population will be diagnosed with some form of skin cancer.

Without early detection, melanoma’s five-year survival rate falls to 62 per cent when the disease reaches the lymph nodes and to 18 per cent when it metastasizes to distant organs. When melanoma is found early, however, the survival rate is extremely high.

“Melanoma is a potentially fatal skin cancer, but when it is detected and treated in the early stages, it is quite curable,” said Bhatnagar. Citing recently published data from the American Joint Committee on Cancer (AJCC), Bhatnagar said that the five-year survival rate for early intervention with early-stage melanoma is 98 per cent.

Though there may be challenges in refining the tech and even with dermatologists adapting to this new type of screening, Bhatnagar said he sees tools like Doctor Hazel providing access to care sooner for patients with suspicious lesions. “I think Doctor Hazel and similar tools will be in widespread use and will ultimately save lives,” said Bhatnagar.

Tech Inside

Borozdin and Ma started with a high-powered endoscope camera they found for roughly $30 on Amazon to capture high-resolution images of questionable moles. A fellow in the Intel Software Innovator Program, Ma proposed using the Movidius Neural Compute Stick (NCS) for the project. The Intel program supports independent developers by offering access to the latest technology and open source software. The NCS is a tiny, fanless deep-learning USB accelerator designed for high-performance AI programming, enabling real-time deep neural networks to run directly from the device.

“We’re basically able to do real-time screening for patients without any waiting time,” said Ma. “That’s a huge leap forward in engineering.” Much like an experienced dermatologist sees thousands of moles and other skin lesions over time and can quickly ascertain whether a biopsy may be warranted, Doctor Hazel can scan thousands of images for comparison and analysis, all in the blink of an eye.

This speed was evident when Borozdin and Ma demonstrated their app in their final TechCrunch pitch. Impressive, as well, was the accuracy rate achieved within just 24 hours at the hackathon: 85 per cent. This figure will only increase as more data is collected and the AI becomes more precise, said Ma. The more images the AI can scan and analyze, the more accurate it becomes. “The worst thing would be to tell someone who has cancer that they have nothing to worry about,” said Borozdin.

Digging up the Data

Borozdin and Ma are intent on boosting accuracy. However, getting more data has proved challenging. For the hackathon, Borozdin and Ma used open data from the University of Iowa, including approximately 10,000 photos for comparison and analysis.

“But we really need more data,” said Ma. “That’s part of the AI’s deep learning.”

The more images Doctor Hazel can process, the better it will be at detecting abnormalities, much like a toddler needs to see numerous vehicles before understanding the difference between a car and a truck. The two have reached out to universities, including Stanford’s Artificial Intelligence Laboratory, where similar research has been conducted using a database of 130,000 images. Additionally, they’ve launched a Doctor Hazel beta site for people to submit photos.

Borozdin and Ma said their initial plan is to get Doctor Hazel into the hands of first-level care providers — primary care doctors, nurses, technicians and pharmacists — where patients can get quick and inexpensive diagnoses before pursuing higher-level care. Bhatnagar sees potential applications for primary care clinicians and, eventually, the patients themselves.

“It could certainly be helpful to the primary care team to help with triage, reassurance and referrals, and may provide a good environment for validation of the tool in the early stages,” he said. “In that setting, the patient is already in a care system and next steps are well defined.” Ultimately, Bhatnagar sees value in patients being able to directly access the tools. “For the individual user case, it will be important for the AI tools to be coupled with information on what to do and where to go if the findings are worrisome,” he said.

Peter Ma (left) and Mike Borozdin hope to provide free cancer screening.

Of course, there are other hurdles to get there. Gaining FDA approval through trials and testing of new devices can take years. Borozdin and Ma believe the benefits will be worth overcoming such challenges. Doctor Hazel, for example, could provide medical care to people in remote locales. “Someone in rural Kansas or remote Africa can now get access to screening without having to spend the time or incur the cost of travelling to a bigger city,” said Borozdin.

Time and cost savings are another plus. Because specialists are often busy and schedules are booked out far in advance, patients can get in for an initial screening with their primary care doctor or a lab tech much more quickly.  At the same time, dermatologists will be able to focus on the patients most in need of their expertise, expediting the treatment process for those who have the least time to spare. “Ultimately,” said Ma, “our goal is to deliver free skin cancer screening to everyone in the world.”

- Joyce Riha Linik


Shaokang Wang and his startup, Infervision, build algorithms that read X-ray images and identify early signs of lung cancer. The company’s technology, Wang says, is already running inside four of the largest hospitals in China. Two are merely running tests, but according to Wang, the two others—Shanghai Changzheng and Tongji, both in Shanghai—are installing the technology across their operations. “It’s installed on every doctor’s machine,” he says.

To what extent these doctors are actually using the technology is another question. In the world of healthcare, artificial intelligence is still in its infancy. But the idea is spreading. At two hospitals in India, Google is now testing technology that can identify signs of diabetic blindness in eye scans. And just last week, the data science competition site Kaggle announced the winners of a $1 million contest in which more than 10,000 researchers competed to build machine learning models that could detect lung cancer from CT scans. The winning algorithms will feed work at the National Cancer Institute to more rapidly and effectively diagnose lung cancer, the leading cancer killer in the US among both men and women. “We want to take these solutions further,” says Keyvan Farahani, a program director at the institute.

Deploying such AI on a large scale—across hospitals, for instance—is still enormously difficult, says Dr George Shih, a physician and professor at Weill Cornell Graduate School of Medical Sciences and the co-founder of a company that participated in the Kaggle contest. Aggregating all the necessary data is enormously complicated, not to mention the difficulty of plugging the technology into existing systems and day-to-day operations. But Shih believes that today’s best algorithms are already accurate enough to drive commercial products. “We’re probably only a few years away from more massive deployments,” he says.

The rise of these systems is powered by the rise of deep neural networks, complex mathematical systems that can learn tasks on their own by analyzing vast amounts of data. This is an old idea, dating back to the 1950s, but now that companies like Google and Facebook have access to such enormous amounts of data and computing power, neural networks can achieve far more than they could in the past. Among other things, they can accurately recognize faces and objects in photos. And they can identify signs of disease and illness in medical scans. Just as a neural network can identify a cat in a snapshot of your living room, it can identify tiny aneurysms in eye scans or pinpoint nodules in CT scans of the lungs. Basically, after analyzing thousands of images that contain such nodules, it can learn to identify them on its own. Through the Kaggle contest, run in tandem with the tech-minded consultancy Booz Allen, thousands of data scientists competed to build the most accurate neural networks for the task.
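The "slide a learned filter over the image and fire where it matches" idea in the paragraph above can be shown with a toy center-surround filter. Real systems learn thousands of such filters from labeled scans; the filter and the five-by-five "scan" here are purely illustrative:

```python
# Minimal toy of convolutional detection: a 3x3 filter slides over an
# image and reports positions where its response exceeds a threshold.

def detect_blobs(image, threshold=3.0):
    """Return (row, col) positions where a 3x3 center-surround filter
    responds above threshold."""
    kernel = [[-1, -1, -1],
              [-1,  8, -1],
              [-1, -1, -1]]   # responds to a bright spot on a dark background
    hits = []
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            resp = sum(kernel[i][j] * image[r - 1 + i][c - 1 + j]
                       for i in range(3) for j in range(3))
            if resp > threshold:
                hits.append((r, c))
    return hits

scan = [[0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0],
        [0, 0, 1, 0, 0],   # a single bright pixel, our toy "nodule"
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
print(detect_blobs(scan))   # [(2, 2)]
```

In a trained network the kernel weights are learned from labeled examples rather than fixed by hand, and many such filters are stacked in layers.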

Before a neural network can start learning the task from a collection of images, trained doctors must label them—that is, use their human intelligence and knowledge to identify the images that show signs of lung cancer. But once that’s done, building these systems is more computer science than medicine. Case in point: The winners of the Kaggle prize—Liao Fangzhou and Zhe Li, two researchers at Tsinghua University in China—have no formal medical training.

Physician’s Assistant

Still, these AI technologies won’t completely replace trained doctors. “This is still only a small part of what radiologists or doctors do,” Shih says. “There are dozens of other pathologies that we are still responsible for.” New AI systems will examine scans faster and with greater accuracy before doctors explore the patient’s situation in more detail. These AI assistants will ideally reduce health care costs since screenings require so much time from doctors, who may also make mistakes.

According to Shih and others, doctors don’t make many false negative diagnoses—failing to identify signs of cancer in a scan. But false positives are a problem. Hospitals often end up spending time and money tracking the progress of patients who don’t need such close care. “The issue with lung cancer screening is that it’s very expensive,” Shih says. “The big goal is: How do you minimize that?”

Shih’s company aims to build services for collecting and labelling data that researchers and companies can then use to train neural networks, not just for cancer detection but for many other tasks as well. He acknowledges that this kind of AI is only just getting started. But he believes it will fundamentally change the field of healthcare, particularly in the developing world, where trained doctors aren’t as prevalent. Over the next few years, he says, researchers aren’t likely to build an AI that’s better at detecting lung cancer than the very best doctors. But even if machines can only top the performance of some of them, that could change the way hospitals operate, one scan at a time.

Artificial Intelligence Aids in Cancer Diagnosis

A stained tissue sample from a precision medicine cancer patient under a microscope. Credit: Englander Institute for Precision Medicine Pathology Team

An artificial intelligence program developed by Weill Cornell Medicine and NewYork-Presbyterian researchers can distinguish types of cancer from images of cells with almost 100 per cent accuracy, according to a new study. This new technology has the potential to augment cancer diagnosis techniques that currently require the human eye.

Currently, cancer is diagnosed by visual examination of tissue samples under a microscope. Pathologists consider variables like cell shape, number, mass and appearance when determining whether tissue appears malignant or benign. While accurate analysis is critical to making the right diagnosis, the process can become complicated.

Dr. Olivier Elemento. Photo credit: Travis Curry

“The diversity among cancer cells is very high,” said co-senior author Dr. Olivier Elemento, director of the Caryl and Israel Englander Institute for Precision Medicine at Weill Cornell Medicine, who also leads joint precision medicine efforts at Weill Cornell Medicine and NewYork-Presbyterian/Weill Cornell Medical Center. “Things like the morphology of cells, the arrangement of cells within a tumour and the genetic diversity of cancers all make it difficult to see every variation with the naked eye.”

In order to further improve the accuracy of cancer diagnosis, Dr Elemento and colleagues at Weill Cornell Medicine and NewYork-Presbyterian developed an artificial intelligence computer program that analyzes pathology images and determines whether they are malignant and, if so, what type of cancer is present. Their results were published Dec. 28 in EBioMedicine.

The researchers built their program around a convolutional neural network (CNN), a type of computer program loosely modelled on the human brain. “Just as a human would look at a lot of images and learn to distinguish certain characteristics of a cancer cell, our neural network did the same thing,” said co-senior author Dr Iman Hajirasouliha, an assistant professor of physiology and biophysics at Weill Cornell Medicine. In order to “train” the CNN, the team exposed the computer program to thousands of pathology images of known lung, breast and bladder cancers.
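The core operation a CNN stacks many times is a small filter sliding across an image, followed by a nonlinearity and pooling. The sketch below is a minimal, illustrative Python/NumPy version of one such convolution-plus-pooling stage; the toy image and the edge-detecting filter are invented placeholders, not anything from the study, and real CNNs learn their filter values from labelled training data rather than having them written by hand.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution (really cross-correlation, as in most CNNs)."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Dot product of the filter with one image patch
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Nonlinearity: keep positive responses, zero out the rest."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by taking the strongest response in each size x size block."""
    h, w = x.shape
    h, w = h - h % size, w - w % size  # trim to a multiple of the pool size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# Toy 6x6 "image" with a bright right half; a filter that responds to
# left-to-right brightness jumps (a vertical edge detector).
image = np.zeros((6, 6))
image[:, 3:] = 1.0
kernel = np.array([[-1.0, 1.0], [-1.0, 1.0]])

features = max_pool(relu(conv2d(image, kernel)))
print(features)  # strong responses only where the edge sits
```

Stacking many such stages, each with many learned filters, lets the network respond to progressively more abstract patterns, from edges up to the cell-level structures pathologists look for.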

Dr. Iman Hajirasouliha. Photo credit: Nima Hajirasouliha

In order to test the CNN’s learning, the investigators—including co-first authors Dr. Pegah Khosravi, a postdoctoral associate in computational biomedicine at Weill Cornell Medicine, and Dr. Ehsan Kazemi, a postdoctoral associate at Yale University—obtained more than 13,000 new pathology images of lung, breast and bladder cancers and ran them through the algorithm. They found that the neural network was able to distinguish the type of cancer in the samples with 100 per cent accuracy. In addition, the program was able to discriminate lung cancer subtype with 92 per cent accuracy and detected biomarkers for bladder and breast cancer with 99 and 91 per cent accuracy, respectively.
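The accuracy figures above are computed the same way in essentially every such study: the fraction of held-out images whose predicted label matches the label assigned by pathologists. A minimal sketch of that computation, using invented toy labels rather than the study's data:

```python
def accuracy(predicted, actual):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)

# Invented labels purely to illustrate the computation.
actual    = ["lung", "breast", "bladder", "lung", "breast"]
predicted = ["lung", "breast", "bladder", "lung", "lung"]

print(f"{accuracy(predicted, actual):.0%}")  # → 80%
```

Crucially, the 13,000 test images must be ones the network never saw during training; scoring a model on its own training data would overstate how well it generalizes to new patients.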

“We have demonstrated that an artificial intelligence can have extremely high accuracy and efficiency in distinguishing between cancer types and subtypes,” said Dr. Hajirasouliha, who is also an assistant professor of computational genomics in computational biomedicine in the HRH Prince Alwaleed Bin Talal Bin Abdulaziz Alsaud Institute for Computational Biomedicine at Weill Cornell Medicine and a member of the Englander Institute for Precision Medicine. The team stresses that artificial intelligence will not replace human pathologists any time soon. Rather, they hope that this technology will help pathologists improve accuracy and speed of cancer diagnosis. “The neural network can do a lot of work for pathologists,” said Dr. Elemento, who is also associate director of the Institute for Computational Biomedicine, co-leader of the Genetics, Epigenetics and Systems Biology Program in the Sandra and Edward Meyer Cancer Center, and an associate professor of physiology and biophysics at Weill Cornell Medicine. “It can pick up patterns that are very hard for even highly-trained experts to see. But we still need humans to interpret the data and communicate with patients and their treating physicians. Our hope is that this technology makes pathologists more productive and more accurate.”