Viz.ai’s algorithms analyze CT scans of stroke victims to try to speed care
SINCE ENTREPRENEUR CHRIS Mansi cofounded Viz.ai in 2016, the best-funded wizards of artificial intelligence have taken on board games and created emoji that mirror your facial expressions. Meanwhile, Mansi has been developing algorithms to save the brain cells of stroke patients. This month, the Food and Drug Administration cleared the company to market its algorithms to doctors and hospitals. It was a small breakthrough toward using AI to make healthcare more efficient and powerful.
Someone in the US suffers a stroke every 40 seconds, according to the Centers for Disease Control. Doctors sum up the importance of each successive minute with a pithy and chilling phrase: “Time is brain.” The longer a person waits for treatment, the more brain tissue dies. Time is also disability, or death.
Viz’s first product is designed to help in that race against time by automatically analyzing CT scans of ER patients. The company has trained machine-learning algorithms, similar to those an iPhone uses to spot cats in your photos, to detect blockages in major brain blood vessels. When the software thinks it has found a blockage—suggesting the most common form of stroke—it sends an alert to a brain specialist’s smartphone asking them to review the images. The software also flags the specific images it judges to be most important.
Mansi says this can save precious time—and brain—by bringing in specialists earlier. Usually, the call would only go out after a radiologist had read a patient’s scan. “You’re not cutting anyone’s job out, but this helps create a parallel workflow that can identify these patients faster,” says Mansi, whose company has funding from the venture fund of former Google CEO Eric Schmidt. “A lot of patients are not getting treatment fast enough.”
The FDA approval shows how the world’s most-watched medical regulator is opening the door to AI algorithms in healthcare. In approving Viz’s algorithms, the agency created a new regulatory classification for triage tools that analyze scans and flag the most urgent to a specialist. A spokesperson said the FDA is also adapting its processes so that safe digital health tools can go to market quickly, encouraging innovation. Last year, the agency formed a new unit of experts dedicated to digital health, including AI.
Mansi says the new classification created for his first product will make it easier for him—and competitors—to bring new algorithms to market. “It opens up the door to artificial intelligence in US healthcare,” he says. Viz is training systems to triage images with signs of other urgent problems, such as brain aneurysms, another cause of strokes.
Others are working on AI to tackle different diseases. Stanford and Google have shown that machine-learning software can identify skin cancer or eye disease on par with human experts. San Francisco-based Arterys won FDA approval in January to market scan-scrutinizing machine-learning software for cancer care. Algorithms trained by Arterys can automatically highlight lung nodules and liver lesions, and track them over time to see how they grow or respond to treatment.
Iowa City startup IDx is asking the FDA to approve a product that would represent a bolder use of AI. It submitted results from a clinical trial of software intended to help primary-care clinics diagnose diabetic retinopathy, a complication of diabetes that can lead to blindness, without the presence of a specialist. Non-clinical staff photograph a patient’s retinas, and the software sends its analysis to the primary-care physician.
Michael Abramoff, IDx president and a professor of ophthalmology at the University of Iowa, predicts that analyzing images of the eye will become a hot area for machine learning in medicine. Google is running a trial at several hospitals in India aimed at detecting diabetic retinopathy. “While we expect to be the first, we also expect many to follow on the path we are now clearing,” Abramoff says.
The Viz and Arterys approvals also show the slow pace of integrating AI into practice. Both companies’ algorithms are designed as advisers, not practitioners, and leave the final diagnoses and responsibility to humans. “They are breaking new ground,” says Luke Oakden-Rayner, a radiologist researching machine learning at the University of Adelaide, Australia. “But these companies and the FDA appear to be doing so carefully, taking small steps.”
Keith Dreyer, vice chairman of radiology at Massachusetts General Hospital and a professor at Harvard, says the machine learning tools heading to market are creating a kind of natural experiment. The FDA approved Viz’s product in part by reviewing results of a study that applied the software to CT scans from 300 patients. But Dreyer says that it will take a while to see evidence tools like those of Viz and others actually change patient outcomes.
“We’re not going to know how effective it is until it gets deployed in practice and has an effect on care,” Dreyer says. He doesn’t doubt that Viz’s algorithms can spot evidence of a stroke, but says integrating the tool into hospitals’ existing workflows will be no easy feat.
Radiologists are also grappling with a more personal unknown: what more powerful and pervasive machine-learning tools will do to their jobs.
Geraldine McGinty, chair of the American College of Radiology’s AI advisory group, says the college is worried that hype around AI is deterring medical students from considering radiology as a specialty. She’s fighting back by arguing that algorithms will enhance radiologists’ capabilities, not make them obsolete, so the field is the place to go for anyone who wants to use technology to help patients. “If you want to be in the specialty that’s going to most effectively harness AI for the benefit of patients, that’s going to be radiology,” McGinty says.