Revolutionizing everyday products with artificial intelligence

Mechanical engineering researchers are using AI and machine learning technologies to enhance the products we use in everyday life. By Mary Beth O’Leary

Researchers in MIT's Department of Mechanical Engineering are using artificial intelligence and machine learning technologies to enhance the products we use in everyday life.

“Who is Bram Stoker?” Those four words demonstrated the amazing potential of artificial intelligence. They were the answer to the final question in a particularly memorable 2011 episode of Jeopardy!. The three competitors were former champions Brad Rutter and Ken Jennings, and Watson, a supercomputer developed by IBM. By answering the final question correctly, Watson became the first computer to beat a human on the famous quiz show.

“In a way, Watson winning Jeopardy! seemed unfair to people,” says Jeehwan Kim, the Class ‘47 Career Development Professor and a faculty member of the MIT departments of Mechanical Engineering and Materials Science and Engineering. “At the time, Watson was connected to a supercomputer the size of a room while the human brain is just a few pounds. But the ability to replicate a human brain’s ability to learn is incredibly difficult.”

Kim specializes in machine learning, which relies on algorithms to teach computers how to learn like a human brain. “Machine learning is cognitive computing,” he explains. “Your computer recognizes things without you telling the computer what it’s looking at.”

Machine learning is one example of artificial intelligence in practice. While the phrase “machine learning” often conjures up science fiction typified in shows like “Westworld” or “Battlestar Galactica,” smart systems and devices are already pervasive in the fabric of our daily lives. Computers and phones use face recognition to unlock. Systems sense and adjust the temperature in our homes. Devices answer questions or play our favorite music on demand. Nearly every major car company has entered the race to develop a safe self-driving car.

For any of these products to work, the software and hardware both have to work in perfect synchrony. Cameras, tactile sensors, radar, and light detection all need to function properly to feed information back to computers. Algorithms need to be designed so these machines can process this sensory data and make decisions based on the highest probability of success.

Kim and much of the faculty in MIT’s Department of Mechanical Engineering are creating new software that connects with hardware to create intelligent devices. Rather than building the sentient robots romanticized in popular culture, these researchers are working on projects that improve everyday life and make humans safer, more efficient, and better informed.

Making portable devices smarter

Jeehwan Kim holds up a sheet of paper. If he and his team are successful, one day the power of a supercomputer like IBM’s Watson will be shrunk down to the size of that sheet of paper. “We are trying to build an actual physical neural network on a letter paper size,” explains Kim.

To date, most neural networks have been software-based and built using conventional von Neumann computing methods. Kim, however, has been using neuromorphic computing methods. “Neuromorphic computer means portable AI,” says Kim. “So, you build artificial neurons and synapses on a small-scale wafer.” The result is a so-called ‘brain-on-a-chip.’

Rather than computing information through binary signaling, Kim’s neural network processes information like an analog device. Signals act like artificial neurons and travel across thousands of arrays to particular cross points, which function like synapses. With thousands of arrays connected, vast amounts of information could be processed at once. For the first time, a portable piece of equipment could mimic the processing power of the brain.
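The analog trick such a crossbar performs can be sketched as a toy simulation (an illustrative model only, not Kim’s actual device or data): each cross point stores a conductance, and the physics of Ohm’s and Kirchhoff’s laws turns applied row voltages into column currents that equal a vector-matrix product.

```python
import numpy as np

# Toy model of a neuromorphic crossbar: rows carry input voltages, each
# cross point stores a conductance (the "synaptic weight"), and each column
# wire sums the resulting currents. The summed current per column is exactly
# a vector-matrix product, computed by the physics rather than by a CPU.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
conductances = rng.uniform(0.1, 1.0, size=(n_inputs, n_outputs))  # siemens
voltages = np.array([0.2, 0.0, 0.5, 0.1])                         # volts

# I_j = sum_i V_i * G_ij  (Ohm's law per device, Kirchhoff's law per column)
column_currents = voltages @ conductances

# The ~1 percent device variability reported in Kim's study perturbs every
# weight slightly; a well-controlled synapse keeps the resulting error small.
perturbed = conductances * (1 + rng.normal(0.0, 0.01, conductances.shape))
error = np.abs(voltages @ perturbed - column_currents).max()
print(column_currents, error)
```

The appeal is that the multiply-accumulate, which dominates the cost of neural network inference, happens in a single physical step instead of millions of clocked operations.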

“The key with this method is you really need to control the artificial synapses well. When you’re talking about thousands of cross points, this poses challenges,” says Kim.

According to Kim, the design and materials that have been used to make these artificial synapses thus far have been less than ideal. The amorphous materials used in neuromorphic chips make it incredibly difficult to control the ions once voltage is applied.

In a Nature Materials study published earlier this year, Kim found that when his team made a chip out of silicon germanium they were able to control the current flowing out of the synapse and reduce variability to 1 percent. With control over how the synapses react to stimuli, it was time to put their chip to the test.

“We envision that if we build up the actual neural network with material we can actually do handwriting recognition,” says Kim. In a computer simulation of their new artificial neural network design, they provided thousands of handwriting samples. Their neural network was able to accurately recognize 95 percent of the samples.
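To give a feel for the task, here is a deliberately tiny, self-contained sketch of handwriting-style recognition: a single-layer softmax network trained on noisy synthetic 8x8 glyphs. The glyphs, network size, and noise level are all invented for illustration; the team’s actual network and data set are far richer.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two 8x8 prototype glyphs: a hollow box ("0") and a vertical bar ("1").
zero = np.zeros((8, 8)); zero[1:7, 1:7] = 1; zero[2:6, 2:6] = 0
one = np.zeros((8, 8)); one[1:7, 3:5] = 1
prototypes = np.stack([zero, one]).reshape(2, -1)

def make_samples(n):
    """Noisy 'handwritten' samples: a prototype plus per-pixel jitter."""
    labels = rng.integers(0, 2, n)
    x = prototypes[labels] + rng.normal(0, 0.3, (n, 64))
    return x, labels

x_train, y_train = make_samples(200)
x_test, y_test = make_samples(100)

# One-layer softmax "neural network" trained by gradient descent.
w = np.zeros((64, 2))
for _ in range(300):
    logits = x_train @ w
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(2)[y_train]
    w -= 0.01 * x_train.T @ (p - onehot) / len(x_train)

accuracy = np.mean((x_test @ w).argmax(axis=1) == y_test)
print(f"test accuracy: {accuracy:.0%}")
```

On a real handwriting data set the inputs would be camera images and the network far deeper, but the training loop follows the same pattern.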

“If you have a camera and an algorithm for the handwriting data set connected to our neural network, you can achieve handwriting recognition,” explains Kim.

While building the physical neural network for handwriting recognition is the next step for Kim’s team, the potential of this new technology goes beyond handwriting recognition. “Shrinking the power of a supercomputer down to a portable size could revolutionize the products we use,” says Kim. “The potential is limitless – we can integrate this technology in our phones, computers, and robots to make them substantially smarter.”

Making homes smarter

While Kim is working on making our portable products more intelligent, Professor Sanjay Sarma and Research Scientist Josh Siegel hope to integrate smart devices within the biggest product we own: our homes.

One evening, Sarma was in his home when one of his circuit breakers kept going off. This circuit breaker — known as an arc-fault circuit interrupter (AFCI) — was designed to shut off power when an electric arc is detected to prevent fires. While AFCIs are great at preventing fires, in Sarma’s case there didn’t seem to be an issue. “There was no discernible reason for it to keep going off,” recalls Sarma. “It was incredibly distracting.”

AFCIs are notorious for such ‘nuisance trips,’ which disconnect safe objects unnecessarily. Sarma, who also serves as MIT’s vice president for open learning, turned his frustration into opportunity. If he could embed the AFCI with smart technologies and connect it to the ‘internet of things,’ he could teach the circuit breaker to learn when a product is safe or when a product actually poses a fire risk.

“Think of it like a virus scanner,” explains Siegel. “Virus scanners are connected to a system that updates them with new virus definitions over time.” If Sarma and Siegel could embed similar technology into AFCIs, the circuit breakers could detect exactly what product is being plugged in and learn new object definitions over time.

If, for example, a new vacuum cleaner is plugged into the circuit breaker and the power shuts off without reason, the smart AFCI can learn that the vacuum is safe and add it to a list of known safe objects. The AFCI learns these definitions with the aid of a neural network. But, unlike Jeehwan Kim’s physical neural network, this network is software-based.

The neural network is built by gathering thousands of data points during simulations of arcing. Algorithms are then written to help the network assess its environment, recognize patterns, and make decisions based on the probability of achieving the desired outcome. With the help of a $35 microcomputer and a sound card, the team can cheaply integrate this technology into circuit breakers.
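The flavor of the approach can be sketched with a toy classifier (hypothetical signals and features; the team’s actual system and data are more sophisticated): each load leaves a signature in its current waveform, a known signature stays connected, and an unfamiliar one triggers a trip until it is learned as safe.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 1 / 60, 256, endpoint=False)  # one 60 Hz mains cycle

def waveform(kind):
    """Simulated one-cycle current draw for a few illustrative loads."""
    base = np.sin(2 * np.pi * 60 * t)
    if kind == "lamp":                       # clean resistive load
        w = base
    elif kind == "vacuum":                   # motor adds a 3rd harmonic
        w = base + 0.4 * np.sin(2 * np.pi * 180 * t)
    else:                                    # arcing: broadband noise
        w = base + 0.8 * rng.normal(size=t.size)
    return w + 0.05 * rng.normal(size=t.size)  # sensor noise

def features(w):
    """Two simple features: RMS level and 3rd-harmonic ratio."""
    spectrum = np.abs(np.fft.rfft(w))
    return np.array([w.std(), spectrum[3] / spectrum[1]])

# Known-safe "object definitions": mean feature signature per device.
definitions = {kind: np.mean([features(waveform(kind)) for _ in range(20)], axis=0)
               for kind in ["lamp", "vacuum"]}

def classify(w, threshold=0.2):
    """Match against known-safe signatures; anything unfamiliar trips."""
    sig = features(w)
    name, proto = min(definitions.items(),
                      key=lambda kv: np.linalg.norm(sig - kv[1]))
    return name if np.linalg.norm(sig - proto) < threshold else "unknown: trip"

print(classify(waveform("vacuum")))
print(classify(waveform("arc")))
# Learning would mean adding a newly confirmed-safe signature to
# `definitions`, then sharing it with other homes over the internet of things.
```

A real implementation would replace the hand-picked features and nearest-signature match with a trained neural network, but the learn-and-share loop is the same idea.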

As the smart AFCI learns about the devices it encounters, it can simultaneously distribute its knowledge and definitions to every other home using the internet of things.

“Internet of things could just as well be called ‘intelligence of things,’” says Sarma. “Smart, local technologies with the aid of the cloud can make our environments adaptive and the user experience seamless.”

Circuit breakers are just one of many ways neural networks can be used to make homes smarter. This kind of technology can control the temperature of your house, detect when there’s an anomaly such as an intrusion or burst pipe, and run diagnostics to see when things are in need of repair.

“We’re developing software for monitoring mechanical systems that are self-learned,” explains Siegel. “You don’t teach these devices all the rules, you teach them how to learn the rules.”

Making manufacturing and design smarter

Artificial intelligence can not only improve how users interact with products, devices, and environments. It can also improve the efficiency with which objects are made by optimizing the manufacturing and design process.

“Growth in automation along with complementary technologies including 3-D printing, AI, and machine learning compels us to, in the long run, rethink how we design factories and supply chains,” says Associate Professor A. John Hart.

Hart, who has done extensive research in 3-D printing, sees AI as a way to improve quality assurance in manufacturing. 3-D printers incorporating high-performance sensors that are capable of analyzing data on the fly will help accelerate the adoption of 3-D printing for mass production.

“Having 3-D printers that learn how to create parts with fewer defects and inspect parts as they make them will be a really big deal — especially when the products you’re making have critical properties such as medical devices or parts for aircraft engines,” Hart explains.

The very process of designing the structure of these parts can also benefit from intelligent software. Associate Professor Maria Yang has been looking at how designers can use automation tools to design more efficiently. “We call it hybrid intelligence for design,” says Yang. “The goal is to enable effective collaboration between intelligent tools and human designers.”

In a recent study, Yang and graduate student Edward Burnell tested a design tool with varying levels of automation. Participants used the software to pick nodes for a 2-D truss of either a stop sign or a bridge. The tool would then automatically come up with optimized solutions based on intelligent algorithms for where to connect nodes and the width of each part. “We’re trying to design smart algorithms that fit with the ways designers already think,” says Burnell.
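As a toy illustration of what automatically sizing a truss member can mean (textbook statics with invented numbers, not the actual tool from Yang and Burnell’s study): given the force a member carries, software can pick the smallest cross-section that keeps stress within the material’s allowable limit.

```python
# Toy sketch of automated member sizing. Stress = force / cross-sectional
# area, so the minimum area is force / allowable stress; for a rectangular
# member of fixed thickness, that fixes the minimum width.

def minimum_width_cm(member_force_n: float,
                     allowable_stress_pa: float = 250e6,  # ~mild steel yield
                     thickness_cm: float = 1.0) -> float:
    """Smallest width keeping stress at or below the allowable limit."""
    area_m2 = abs(member_force_n) / allowable_stress_pa
    width_m = area_m2 / (thickness_cm / 100.0)
    return width_m * 100.0

print(f"{minimum_width_cm(50_000):.2f} cm")  # a member carrying 50 kN
```

A real design tool layers optimization and constraint-solving on top of many such rules, which is exactly the repetitive arithmetic worth handing to the machine.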

Making robots smarter

If there is anything on MIT’s campus that most closely resembles the futuristic robots of science fiction, it would be Professor Sangbae Kim’s robotic cheetah. The four-legged creature senses its surrounding environment using LIDAR technologies and moves in response to this information. Much like its namesake, it can run and leap over obstacles.

Kim’s primary focus is on navigation. “We are building a very unique system specially designed for dynamic movement of the robot,” explains Kim. “I believe it is going to reshape the interactive robots in the world. You can think of all kinds of applications — medical, healthcare, factories.”

Kim sees the opportunity to eventually connect his research with the physical neural network his colleague Jeehwan Kim is working on. “If you want the cheetah to recognize people, voice, or gestures, you need a lot of learning and processing,” he says. “Jeehwan’s neural network hardware could possibly enable that someday.”

Combining the power of a portable neural network with a robot capable of skillfully navigating its surroundings could open up a new world of possibilities for human and AI interaction. This is just one example of how researchers in mechanical engineering can one day collaborate to bring AI research to the next level.

While we may be decades away from interacting with intelligent robots, artificial intelligence and machine learning have already found their way into our routines. Whether it’s using face and handwriting recognition to protect our information, tapping into the internet of things to keep our homes safe, or helping engineers build and design more efficiently, the benefits of AI technologies are pervasive.

The science fiction fantasy of a world overtaken by robots is far from the truth. “There’s this romantic notion that everything is going to be automatic,” adds Maria Yang. “But I think the reality is you’re going to have tools that will work with people and help make their daily life a bit easier.”

Computers Match Accuracy of Radiologists in Screening for Breast Cancer Risk

Commercial software performs as well as doctors in measuring breast density and assessing breast cancer risk. By Jeremy Hsu


Women with dense breasts have a greater risk of undergoing mammogram screenings that miss signs of breast cancer. That’s why 30 U.S. states legally require that women receive some notification about their breast density. A new study suggests that commercial software for automatically classifying breast density can perform on par with human radiologists: a finding that could encourage wider use of automated breast density assessments. Increased breast density represents “one of the strongest risk factors for breast cancer,” because it makes it more difficult to detect the disease in its early stages, explained Karla Kerlikowske, a physician and breast cancer researcher at the University of California, San Francisco. Dense breast tissue may also carry a higher risk of developing breast cancer.

Breast density refers to the proportion of “dense” tissue, containing milk ducts and glands, relative to “non-dense” fatty tissue within the breast. For women with dense breasts, physicians may recommend supplemental screening or changes to screening frequency in order to detect breast cancer earlier. The new study suggests automated screenings are just as accurate as doctors in determining breast density from a mammogram, and may have other advantages as well. In addition to comparing assessments of breast density, the study funded by the National Cancer Institute also compared the automated and human breast density assessments on two measures related to their ability to predict a woman’s risk of developing breast cancer.

First, the study looked at how well the software and clinical assessments by radiologists predicted breast cancer risk through mammography screening. Second, it considered how well they predicted the risk of “interval invasive cancer” that is not caught by mammography screening and is instead diagnosed through direct clinical examination. In both cases, the software assessments compared well with radiologists’ assessments in predicting those cancer risks. “Automated density measures are more reproducible across radiologists and facilities,” said Kerlikowske. “Using automated measures will allow accurate identification of women who have dense breasts and are at high risk of an interval cancer so these women can have appropriate discussions of whether supplemental imaging is right for them.”

To compare automated and human assessments, Kerlikowske and her colleagues combined data from two case-control studies based on the breast imaging databases of the San Francisco Mammography Registry and the Mayo Clinic. Their results are published in the 30 April 2018 online issue of the journal Annals of Internal Medicine. Radiologists estimate the percentage of dense breast tissue based on a subjective visual examination of mammogram images. They categorize the breast tissue under four classes defined by the Breast Imaging Reporting and Data System (BI-RADS): (a) almost entirely fatty, (b) scattered fibro-glandular densities, (c) heterogeneously dense, and (d) extremely dense.
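For a rough sense of how a percentage measure maps onto those classes: the BI-RADS 4th edition associated the categories with approximate quartiles of percent dense tissue (the current 5th edition emphasizes visual pattern instead, so treat these cut-offs as illustrative only).

```python
def birads_category(percent_dense: float) -> str:
    """Map a percent-density estimate to a BI-RADS class using the
    approximate 4th-edition quartile cut-offs (illustrative only;
    clinical assignment is more nuanced)."""
    if percent_dense < 25:
        return "a: almost entirely fatty"
    if percent_dense < 50:
        return "b: scattered fibro-glandular densities"
    if percent_dense < 75:
        return "c: heterogeneously dense"
    return "d: extremely dense"

print(birads_category(62))
```

The appeal of software assessment is precisely that a numeric estimate like this is reproducible, whereas two radiologists eyeballing the same mammogram may land in different categories.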

But subjective assessments by radiologists can lead to inconsistencies. Previous research has found that 10 percent of women received a different breast density assessment when examined by the same radiologist in consecutive mammograms. That rises to 17 percent when their mammography images are examined by two different radiologists. Commercial software based on machine learning algorithms offers the promise of providing a more reliable and consistent measure of breast density that is not dependent upon an individual radiologist’s judgment.

One example is a program called Volpara that can estimate dense or non-dense tissue volume in each pixel of mammogram images. Its algorithms use that as the basis for calculating overall breast thickness and dense tissue volume in each breast. Volpara represents one of the more popular examples of such software, given that it currently covers about 3.2 percent of U.S. women and is undergoing trials in Europe. For that reason, the new breast density study focused on comparing Volpara’s performance with the performance of radiologists. But researchers may want to perform additional comparative studies for other software.
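The volumetric arithmetic behind such a per-pixel approach is simple to sketch (all numbers below are hypothetical and Volpara’s actual algorithm is proprietary): sum dense-tissue volume across pixels and divide by total breast volume.

```python
import numpy as np

# Toy volumetric-density calculation: suppose the software has estimated,
# for every mammogram pixel, the total breast thickness and the thickness
# of dense tissue along that ray. Volumetric density is then the ratio of
# summed dense volume to summed total volume.
rng = np.random.default_rng(3)
pixel_area_cm2 = 0.01                                    # hypothetical pixel size
breast_thickness_cm = rng.uniform(3, 5, size=(100, 100))
dense_fraction = rng.uniform(0.0, 0.4, size=(100, 100))  # per-ray dense share
dense_thickness_cm = breast_thickness_cm * dense_fraction

dense_volume = (dense_thickness_cm * pixel_area_cm2).sum()
total_volume = (breast_thickness_cm * pixel_area_cm2).sum()
percent_density = 100 * dense_volume / total_volume
print(f"volumetric breast density: {percent_density:.1f}%")
```

Because every pixel contributes the same deterministic arithmetic, the result does not depend on which radiologist reads the image, which is the reproducibility advantage the study highlights.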

Another lingering question is how cost-effective the automated approach would be compared with human radiologists. That would require looking at the cost of a radiologist’s time to read and record breast density on mammograms for a year versus the cost of using software, Kerlikowske said. Anecdotally, one radiologist told her that he estimated the software might save him an hour a day. The questions of cost and overall effectiveness also appear in an editorial published in the same journal issue as the new study. Written by Joann Elmore, a physician at the University of California, Los Angeles, and Jill Wruble, a radiologist at the Yale School of Medicine in New Haven, the editorial points to the use of another technology, computer-aided detection (CAD) for highlighting abnormal areas in mammography images, as a cautionary tale for using automated tools in breast cancer screening.

Elmore and Wruble noted that CAD’s value has been questioned despite the fact that it has become widely used at a cost of more than $400 million per year. They cite studies suggesting that CAD’s use either provides no improvement in detecting breast cancer or performs with worse accuracy in comparison with the scrutiny of human radiologists. “Like CAD, automated density measurement has the potential to improve reproducibility and workflow efficiency,” Elmore and Wruble write. “However, we are in an era of ‘choosing wisely’ and seeking value in health care. Therefore, we must be cautious before implementing and paying for medical technology.” For now, Kerlikowske and her research team are running additional studies to explore how machine learning software—particularly software based on deep learning algorithms—can help physicians identify women who may need additional imaging beyond mammograms to reduce their breast cancer risk.

The Technological Future of Surgery

The future of surgery offers an amazing cooperation between humans and technology, which could elevate the precision and efficiency of surgery to levels we have never seen before.

Will we have small, Matrix-like surgical robots? Will they be sent into patients’ bodies to operate on organs from within?

The scene is not impossible. We have come a long way from ancient Egypt, where doctors performed invasive surgeries as far back as 3,500 years ago. Only two years ago, NASA teamed up with American medical company Virtual Incision to develop a robot that can be placed inside a patient’s body and then controlled remotely by a surgeon.

That’s the reason why I strongly believe surgeons have to reconsider their stance towards technology and the future of their profession.


Surgeons have to rethink their profession

Surgeons are at the top of the medical food chain. At least that’s the impression the general audience gets from popular medical drama series and their own experiences. No surprise there. Surgeons bear huge responsibilities: with a single incision they might cause irreparable damage or work a medical miracle. No wonder that, with the rise of digital technologies, operating rooms and surgeons are inundated with new devices aimed at making the fewest cuts possible.

We need to engage with these new surgical technologies in order to make everyone understand that they extend the capabilities of surgeons rather than replace them.

Surgeons also tend to distance themselves from patients. The human touch is not necessarily the quintessence of their work. However, as technological solutions find their way into their practice and take over some of their repetitive tasks, I would advise them to rethink their stance. Treating patients with empathy before and after surgery would ensure their services remain irreplaceable even in the age of robotics and artificial intelligence.

As a first step, though, the surgical community has to familiarize itself with the current state of the technologies affecting the OR and their job. I talked about these future technologies with Dr. Rafael Grossmann, a Venezuelan surgeon who was part of the team performing the first live operation using medical VR, and who was also the first doctor ever to use Google Glass live in surgery.

Future of Surgery

Here, then, are the technologies I believe will have a huge impact on the future of surgery.

1) Virtual reality

For the first time in the history of medicine, in April 2016 cancer surgeon Shafi Ahmed performed an operation using a virtual reality camera at the Royal London Hospital. It was a mind-blowingly huge step for surgery. Everyone could participate in the operation in real time through the Medical Realities website and the VR in OR app. Whether a promising medical student from Cape Town, an interested journalist from Seattle, or a worried relative, everyone could follow through two 360-degree cameras as the surgeon removed cancerous tissue from the bowel of the patient.

This opens new horizons for medical education as well as for the training of surgeons. VR could elevate the teaching and learning experience in medicine to a whole new level. Today, only a few students can peek over the shoulder of the surgeon during an operation. This way, it is challenging to learn the tricks of the trade. By using VR, surgeons can stream operations globally and allow medical students to actually be there in the OR using their VR goggles. The team of The Body VR is creating educational VR content as well as simulations aiding the process of traditional medical education for radiologists, surgeons, and physicians. I believe there will be more initiatives like that very soon!

2) Augmented reality

As there is a lot of confusion around VR and AR, let me make it clear: AR differs from VR in two important ways. Users of AR do not lose touch with reality, and AR puts information into their field of vision as quickly as possible. These distinctive features give it huge potential for helping surgeons become more efficient. Whether they are conducting a minimally invasive procedure or locating a tumor in the liver, AR healthcare apps can help save lives and treat patients seamlessly.

As might be expected, the AR market is buzzing, and more and more players are emerging in the field. The promising start-up Atheer develops an Android-compatible wearable and the complementary AiR cloud-based application to boost productivity, collaboration, and output. Medsights Tech developed software to test the feasibility of using augmented reality to create accurate 3-dimensional reconstructions of tumors. This complex image-reconstruction technology essentially empowers surgeons with X-ray views – without any radiation exposure, and in real time. EchoPixel’s True 3D medical visualization system allows doctors to interact with patient-specific organs and tissue in an open 3D space, enabling them to immediately identify, evaluate, and dissect clinically significant structures.


Grossmann also told me that HoloAnatomy, which uses HoloLens to display anatomical models built from real data, is a wonderful and rather intuitive use of AR with obvious advantages over traditional methods.

3) Surgical robotics

Surgical robots are the prodigies of surgery. According to market analysis, the industry is about to boom. By 2020, surgical robotics sales are expected to almost double to $6.4 billion.

The most commonly known surgical robot is the da Vinci Surgical System; believe it or not, it was introduced 15 years ago. It features a magnified 3D high-definition vision system and tiny wristed instruments that bend and rotate far more than the human hand. With the da Vinci system, surgeons operate through just a few small incisions. The surgeon remains 100 percent in control of the robotic system at all times and is able to carry out more precise operations than previously thought possible.

Recently, Google announced that it has started working with the pharmaceutical giant Johnson & Johnson to create a new surgical robot system. I’m excited to see the outcome of the cooperation soon. They are not the only competitors, though. With its AXSIS robot, Cambridge Consultants aims to overcome the limitations of the da Vinci, such as its large size and inability to work with highly detailed and fragile tissues. Their robot instead relies on flexible components and tiny, worm-like arms. The developers believe it could later be used in ophthalmology, e.g. in cataract surgery.


4) Minimally Invasive Surgery

Throughout the history of surgery, the ultimate goal of medical professionals has been to peer into the workings of the human body and to improve it with the smallest possible incisions and excisions. By the end of the 19th century, after Edison produced his lightbulb, a Glasgow physician built a tiny bulb into a tube to be able to look around inside the body.

But it wasn’t until the second half of the 20th century that fiber-optic threads brought brighter light into the caverns of the body. Later, tiny computer-chip cameras started sending images back out. At last, doctors could not only see clearly inside a person’s body without making a long incision, but could also use tiny tools to perform surgery inside. One technique that revolutionized surgery was the introduction of the laparoscope.

The medical device start-up Levita aims to refine such procedures with its Magnetic Surgical System, an innovative technological platform that uses magnetic retraction to grasp and retract the gallbladder during laparoscopic surgery.

The FlexDex company introduced a new control mechanism for minimally invasive tools. It transmits movement from the surgeon’s wrist to the joint of the instrument entirely mechanically, and it costs significantly less than surgical robots.

5) 3D Printing and simulations in pre-operative planning and education

Complicated and risky surgeries lasting hours require a lot of careful planning. Technologies such as 3D printing and various simulation techniques are helping to reform medical practice and learning methods, as well as to successfully model and plan complex surgical procedures.

In March 2016, a team of experienced doctors in China decided to build a full-sized model of the heart of a small baby born with a heart defect. Their aim was to pre-plan an extremely complicated surgery on the tiny heart. This was the first time the method had been used in China. The team of medical professionals successfully completed the surgery, and the little boy survived with little to no lasting ill effects.

In December 2016, doctors in the United Arab Emirates used 3D printing technology for the first time to help safely remove a cancerous tumour from a 42-year-old woman’s kidney. With the help of the personalized 3D-printed aid, the team was able to plan the operation carefully and to shorten the procedure by an entire hour!

The technology has also started to gain a foothold in medical education. To provide surgeons and students with an alternative to practicing on a living human being, a pair of physicians at the University of Rochester Medical Center (URMC) have developed a way to use 3D printing to create artificial organs. They look, feel, and even bleed like the real thing. Truly amazing!

To widen the range of methods available for effectively learning the tricks of the trade, Touch Surgery developed a simulation system. It is basically an app for practicing procedures ranging from heart surgery to carpal tunnel operations.

6) Live diagnostics

The intelligent surgical knife (iKnife) was developed by Zoltan Takats of Imperial College London. It builds on an established technique in which an electrical current heats tissue to make incisions with minimal blood loss. With the iKnife, a mass spectrometer analyzes the vaporized smoke to detect the chemicals in the biological sample. This means it can identify whether the tissue is malignant in real time.
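The classification step can be sketched abstractly (the reference profiles below are invented; the real system matches many spectral features against a curated tissue database): normalize the measured spectrum and report the closest known tissue profile.

```python
import numpy as np

# Toy tissue identification from a mass spectrum: compare the normalized
# ion-intensity profile of the surgical smoke against reference profiles
# of known tissue types and return the nearest match. The four-feature
# "spectra" here are purely illustrative.
reference = {
    "healthy": np.array([0.8, 0.1, 0.05, 0.05]),
    "malignant": np.array([0.3, 0.4, 0.2, 0.1]),
}

def identify(spectrum):
    """Return the reference tissue profile closest to the measurement."""
    spectrum = spectrum / spectrum.sum()  # normalize total ion intensity
    return min(reference,
               key=lambda k: np.linalg.norm(spectrum - reference[k]))

print(identify(np.array([0.35, 0.35, 0.2, 0.1])))  # nearer the malignant profile
```

Because the comparison is just arithmetic on the spectrum, it can keep pace with the surgeon’s cutting, which is what makes the feedback real-time.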

The technology is especially useful in detecting cancer in its early stages and thus shifting cancer treatment towards prevention.


7) Artificial Intelligence will team up with surgical robotics

Catherine Mohr, vice president of strategy at Intuitive Surgical and an expert in the field of surgical robotics, believes surgery will be taken to the next level by the combination of surgical robotics and artificial intelligence. She is thrilled to see IBM Watson, Google DeepMind’s AlphaGo, and machine learning algorithms take a role in surgical procedures. She envisions a tight partnership between humans and machines, with one making up for the weaknesses of the other.

In my view, AI systems such as the deep learning platform Enlitic will soon be able to diagnose diseases and abnormalities. They will also give surgeons guidance on their – sometimes extremely – difficult surgical decisions.


I agree with Dr. Mohr inasmuch as I truly believe the future of surgery, just like the future of medicine, means close cooperation between humans and medical technology. I also cannot stress enough that robots and other products of rapid technological development will not replace humans. The two will complement each other’s work in ways more successful than we have ever seen or dreamed of before. But only if we learn how.


Machine Learning in Surgical Robotics – 4 Applications That Matter

Surgical Robotics and AI - What's Possible Today?

The application of robotics in surgery has steadily grown since it began in the 1980s. In contrast, the integration of artificial intelligence in this sector is still fairly new. As promising applications, predominantly in the research and development phase, begin to surface, we aim to answer the important questions that business leaders are asking today:

What types of AI applications are currently being explored in the field of surgery?

What innovations have the potential to change the industry over the next decade?

What robotic surgery applications are currently showing results?

In this article, we explore current and “near future” examples of artificial intelligence applications in surgical robotics. Based on our research, most related applications fall into the following four sub-categories:

  • Automation of Suturing
  • Machine Learning for Evaluation of Surgical Skills
  • Machine Learning for Improving Surgical Robotic Materials
  • Machine Learning for Surgical Workflow Modeling

Below, we explore examples of these four sub-categories. Each provides a snapshot of how AI and robotics are converging within the surgical specialty.

Automation of Suturing

Raven Robot and PR2 Robot

Suturing – or the process of sewing up an open wound or incision – is an important part of surgery but it can also be a time-consuming aspect of the process. Automation can potentially reduce the length of surgical procedures and surgeon fatigue. This may be particularly significant in remote or telesurgery, where any lag between human surgical commands and robot responses can present complications.

In 2013, a team of researchers at the University of California at Berkeley published research on the application of an algorithm for automated suturing performed by robots. The algorithm was tested and simulated on two robot models: the Raven II robot and the PR2 robot. The Raven robot is designed for laparoscopic surgery while the PR2 platform appears to be adaptable across various robotic applications.

The Berkeley research team reported an overall suturing success rate of 87 percent. However, increased complexity of the suturing scenarios tended to correspond with decreased robot accuracy.

These results are encouraging considering the fact that suturing has been identified as a key factor limiting the use of laparoscopy among surgeons. This is despite the fact that the clinical benefits of laparoscopy have been well documented and include “decreased complications, mortality, and [hospital] readmission rates.”

While partially automated tools are already on the market, the performance limitations of the robots mentioned above (in terms of complex suturing scenarios) strongly suggest that it will be some years before complete automation is achievable in surgeries performed on humans.

STAR Robot

Fast forward to 2016, when Johns Hopkins University announced that one of its researchers was part of a team that developed a robotic surgical system called STAR, or the Smart Tissue Autonomous Robot. The system integrates 3D computer imaging and sensors to help guide the robot through the suturing process.

Using a pig model, the robot’s performance was compared to the work of five human surgeons in three different procedures: “open surgery, laparoscopic and robot-assisted surgery.” Overall, the researchers reported results comparable or superior to standard surgical performance.

An estimated 44.5 million soft tissue surgeries are performed annually in the U.S. In the case of colorectal and abdominal surgeries, complications such as “leakages around the seams” occur in roughly 20 to 30 percent of cases in human surgeries. Researchers are hopeful that innovations like STAR will help to reduce these complications.

Currently, it is uncertain when STAR will be brought into hospital operating rooms. However, the researchers believe that the robotic surgical system can help reduce operation errors and improve patient outcomes. This research presents encouraging potential for automated suturing; however, extensive additional research will be required before we see these applications in human surgeries.

Machine Learning for Evaluation of Surgical Skills

The evaluation of surgical skills has traditionally been a subjective practice often conducted by other trained surgeons. As robotic technology becomes more commonly used in surgeries, researchers are exploring automated methods of measuring surgical technique.

A study presented at the 2016 World Congress on Engineering and Computer Science discussed using machine learning to evaluate surgeon performance in robot-assisted minimally invasive surgery. The research team evaluated data collected from suturing performance and classified surgeons into two categories: novice and expert. The machine learning algorithm was developed to measure the following six features:

  • Completion time
  • Path length
  • Depth perception
  • Speed
  • Smoothness
  • Curvature

The experimental evaluation system reportedly classified surgical skills accurately in roughly 85 percent of trials. This is a promising result offering the possibility of more standardized evaluation methods. The researchers suggest that future research efforts should expand evaluation methods to other surgical techniques and larger data pools.
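The study's classification approach can be pictured as a distance-based classifier over the six features. The sketch below is purely illustrative: the nearest-centroid method, feature values, and thresholds are assumptions for demonstration, not the study's actual algorithm or data.

```python
import math

# The six kinematic features named in the study, in a fixed order.
FEATURES = ["completion_time", "path_length", "depth_perception",
            "speed", "smoothness", "curvature"]

def centroid(trials):
    """Mean feature vector for a list of trials (each a dict of feature -> value)."""
    return [sum(t[f] for t in trials) / len(trials) for f in FEATURES]

def classify(trial, novice_trials, expert_trials):
    """Assign a trial to the class whose centroid is nearer (Euclidean distance)."""
    vec = [trial[f] for f in FEATURES]
    def dist(c):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(vec, c)))
    novice_d = dist(centroid(novice_trials))
    expert_d = dist(centroid(expert_trials))
    return "novice" if novice_d <= expert_d else "expert"

# Hypothetical training data: novices take longer, travel farther, move less smoothly.
novices = [{"completion_time": 95, "path_length": 410, "depth_perception": 30,
            "speed": 4.3, "smoothness": 0.4, "curvature": 0.9},
           {"completion_time": 88, "path_length": 395, "depth_perception": 27,
            "speed": 4.5, "smoothness": 0.5, "curvature": 0.8}]
experts = [{"completion_time": 52, "path_length": 260, "depth_perception": 12,
            "speed": 5.0, "smoothness": 0.9, "curvature": 0.4},
           {"completion_time": 48, "path_length": 250, "depth_perception": 10,
            "speed": 5.2, "smoothness": 0.8, "curvature": 0.5}]

new_trial = {"completion_time": 50, "path_length": 255, "depth_perception": 11,
             "speed": 5.1, "smoothness": 0.85, "curvature": 0.45}
print(classify(new_trial, novices, experts))  # expert
```

A production system would use a trained model (e.g. a support vector machine or neural network) on many labelled trials, but the core idea is the same: map each performance to a point in the six-dimensional feature space and decide which skill class it lies closest to.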

Machine Learning for Improving Surgical Robotic Materials

In cases such as neurosurgery, where particularly sensitive manoeuvring is required, robots often lack the dexterity necessary to operate effectively and prevent injury. Researchers at the University of California, San Diego (UCSD) Advanced Robotics and Controls Lab are exploring machine learning applications to improve surgical robotics.

As depicted in the image below, “continuum robots” are made of flexible robotic material and serve as a core component of minimally invasive surgeries. Automation works particularly well for routine processes. However, the surgical environment is not always predictable, which negatively impacts the reliability of continuum robots. As a result, researchers are exploring ways to help these robots successfully navigate more complex environments.

Machine Learning for Surgical Workflow Modeling

The interest in improving efficiency in surgery extends beyond the operating table to the pre- and post-operative experience, both for the patient and for the surgical team. The rate of complications in surgery ranges from an estimated 3 to 17 percent. One study showed a 119 percent increase, from $19,626 to $36,060, in average hospital costs associated with patients who experienced complications.

Checklists have been suggested as a strategy to help mitigate avoidable errors, and now automation is also being considered as a potential tool for improved surgical workflow. In an effort to improve how clinical reports are processed, a team of researchers developed a clinical information extraction system called IDEAL-X.

Manual report processing is often time-consuming and provides no automatic feedback on how to improve. The IDEAL-X adaptive learning platform uses machine learning to understand how a user generates reports and to predict patterns, improving the speed and efficiency of the process.

In the study, the researchers reported that the system was “highly effective,” achieving an accuracy rate of 95 percent compared with two other methods of clinical information extraction. The researchers conclude that no advanced skills are needed to operate the system, that it will be freely available online, and that it readily adapts to improve its performance. These factors position it well for clinical use.
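IDEAL-X's actual architecture is not detailed here, but the adaptive idea it describes, learning extraction behaviour from how a user works rather than from hand-written rules, can be sketched in toy form. The class, field names, and regex heuristic below are illustrative assumptions, not IDEAL-X's real implementation.

```python
import re

class AdaptiveExtractor:
    """Toy sketch of adaptive information extraction: each time the user
    highlights a value in a report, the extractor memorizes the text
    immediately preceding it as a context pattern, then applies the
    learned patterns to new reports automatically."""

    def __init__(self):
        self.patterns = {}  # field name -> list of compiled context regexes

    def learn(self, field, report, value):
        """Record the context preceding a user-highlighted value."""
        idx = report.find(value)
        if idx == -1:
            return
        context = report[max(0, idx - 20):idx].strip()
        # Turn the literal context into a pattern capturing the next token.
        pat = re.compile(re.escape(context) + r"\s*([\w.]+)")
        self.patterns.setdefault(field, []).append(pat)

    def extract(self, report):
        """Apply every learned pattern to a new report."""
        out = {}
        for field, pats in self.patterns.items():
            for pat in pats:
                m = pat.search(report)
                if m:
                    out[field] = m.group(1)
                    break
        return out

ex = AdaptiveExtractor()
# The user highlights "55" in one report; the system generalizes.
ex.learn("ejection_fraction", "Echo shows ejection fraction of 55 percent.", "55")
print(ex.extract("Echo shows ejection fraction of 62 percent."))
```

A real adaptive system would learn statistical models over many reports and correct itself from ongoing user feedback, but the loop is the same: observe the user, infer a pattern, apply it to future documents.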

Concluding Thoughts

Potential applications of machine learning in the surgical field are diverse and address multiple points along the surgical spectrum including training, operations and clinical data management. Innovations which can prove their worth over the long haul by consistently saving surgeons time and hospitals money will be most successful.

For example, the machine learning-based IDEAL-X clinical information extraction system has the potential for earlier implementation than the other applications covered in this article, due to its gentler learning curve and its usefulness across multiple medical specialities.

In comparison, we would expect automated suturing robots to undergo an extensive testing, review and market approval process which could take years to complete. Costs associated with training surgeons how to use the robots must also be considered.

The current lack of data makes it difficult to predict the length of time that would be required to become fully competent in operating tools such as a suturing robot. In fact, certain industry experts have suggested that it may take up to two decades before we see AI fully integrated into the surgical field.

Among the challenges to consider is how AI would work in the surgical environment. Machine learning thrives on robust, abundant data and leans toward pattern recognition. The complexity of surgery often creates a far less uniform and quite unpredictable environment, the very opposite of the ideal AI situation.

Surgical robots may never have the same degree of repetitive feedback and constant wide-scale use as is seen in industrial robotics. Similarly, unlike surgical robots, industrial robots do not always pose a direct threat to human safety, and so they allow for more open trial and error (another significant learning advantage for industrial applications).

Thus, the research occurring in the continuum robots space is of particular significance in crafting tools that are smart, reliable and adaptable to an unpredictable environment. This approach should also inform the kind of data that are selected to train AI models.


We must make sure AI doesn’t discriminate

When it comes to developing artificial intelligence, President Trump may want a free-market approach. But a number of experts disagree — we need guidelines to protect people from discriminatory algorithms.
Today, a group of human rights organizations, including Human Rights Watch, Amnesty International, The Wikimedia Foundation, Access Now, and others, called on governments and technology companies to adopt guiding principles to protect human rights. As part of today’s RightsCon Toronto symposium, the organizations joined forces to pen the Toronto Declaration on Machine Learning, which can be found in full on Access Now’s website. The declaration calls for engineers to develop and revisit algorithms with the explicit goal of promoting transparency and equality while working to end algorithm-propagated racism and discrimination.

Human biases are feeding into AI.
Image: Visual Capitalist
It’s well known by now that algorithms, as useful as they may be, learn our implicit biases based on the information we feed them. And when we employ them to dictate whom the police should investigate or who should qualify for a loan, they shape the world accordingly. What makes the Toronto Declaration unique is its call for real solutions. The document draws from international human rights laws to argue that those who are discriminated against by artificial intelligence algorithms should have an avenue to seek reparations.
The declaration states:
“Existing patterns of structural discrimination may be reproduced and aggravated in situations that are particular to these technologies – for example, machine learning system goals that create self-fulfilling markers of success and reinforce patterns of inequality, or issues arising from using non-representative or “biased” datasets. All actors, public and private, must prevent and mitigate discrimination risks in the design, development and application of machine learning technologies and ensure that effective remedies are in place before deployment and throughout the lifecycle of these systems.”
Ultimately, the Toronto Declaration is a plea to protect marginalized groups, who often bear the brunt of systemic discrimination. The world of technological development is mostly one of wealthy white men, and there are undoubtedly many who would like to see it stay that way, whether or not they would explicitly say so. It’s critical to call attention to the rights and needs of those who are so often excluded from the conversation. And even though signing onto the Toronto Declaration isn’t legally binding, it’s an important first step in making sure that the future of smart technology is one of equality and inclusion.