Revolutionizing everyday products with artificial intelligence

Mechanical engineering researchers are using AI and machine learning technologies to enhance the products we use in everyday life.

By Mary Beth O’Leary

Researchers in MIT's Department of Mechanical Engineering are using artificial intelligence and machine learning technologies to enhance the products we use in everyday life.

“Who is Bram Stoker?” Those four words demonstrated the amazing potential of artificial intelligence. It was the answer to the final question in a particularly memorable 2011 episode of Jeopardy!. The three competitors were former champions Brad Rutter and Ken Jennings, and Watson, a supercomputer developed by IBM. By answering the final question correctly, Watson became the first computer to beat a human on the famous quiz show.

“In a way, Watson winning Jeopardy! seemed unfair to people,” says Jeehwan Kim, the Class of 1947 Career Development Professor and a faculty member of the MIT departments of Mechanical Engineering and Materials Science and Engineering. “At the time, Watson was connected to a supercomputer the size of a room while the human brain is just a few pounds. But the ability to replicate a human brain’s ability to learn is incredibly difficult.”

Kim specializes in machine learning, which relies on algorithms to teach computers how to learn like a human brain. “Machine learning is cognitive computing,” he explains. “Your computer recognizes things without you telling the computer what it’s looking at.”

Machine learning is one example of artificial intelligence in practice. While the phrase “machine learning” often conjures up science fiction typified in shows like “Westworld” or “Battlestar Galactica,” smart systems and devices are already pervasive in the fabric of our daily lives. Computers and phones use face recognition to unlock. Systems sense and adjust the temperature in our homes. Devices answer questions or play our favourite music on demand. Nearly every major car company has entered the race to develop a safe self-driving car.

For any of these products to work, the software and hardware both have to work in perfect synchrony. Cameras, tactile sensors, radar, and light detection all need to function properly to feed information back to computers. Algorithms need to be designed so these machines can process these sensory data and make decisions based on the highest probability of success.

Kim and much of the faculty at MIT’s Department of Mechanical Engineering are creating new software that connects with hardware to create intelligent devices. Rather than building the sentient robots romanticized in popular culture, these researchers are working on projects that improve everyday life and make humans safer, more efficient, and better informed.

Making portable devices smarter

Jeehwan Kim holds up a sheet of paper. If he and his team are successful, one day the power of a supercomputer like IBM’s Watson will be shrunk down to the size of one sheet of paper. “We are trying to build an actual physical neural network on a letter paper size,” explains Kim.

To date, most neural networks have been software-based and built on the conventional von Neumann computing architecture. Kim, however, has been using neuromorphic computing methods. “Neuromorphic computer means portable AI,” says Kim. “So, you build artificial neurons and synapses on a small-scale wafer.” The result is a so-called ‘brain-on-a-chip.’

Rather than computing with binary signals, Kim’s neural network processes information as an analogue device does. Signals act like artificial neurons and move across thousands of arrays to particular cross points, which function like synapses. With thousands of arrays connected, vast amounts of information could be processed at once. For the first time, a portable piece of equipment could mimic the processing power of the brain.
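The arithmetic behind such a crossbar can be sketched in a few lines. The following is an illustrative model, not Kim’s actual hardware: each cross point stores a conductance (the “synaptic weight”), and by Ohm’s and Kirchhoff’s laws the currents collected on the columns form a matrix-vector product of the conductances and the input voltages, computed in one analogue step.

```python
import numpy as np

# Hypothetical model of an analogue crossbar: G[i, j] is the conductance
# stored at the cross point of row i and column j.  Driving the rows with
# voltages V yields column currents I = G^T V (Ohm's law per cross point,
# Kirchhoff's current law summing down each column).
rng = np.random.default_rng(0)
G = rng.uniform(0.0, 1.0, size=(4, 3))   # 4x3 array of synapse conductances
V = np.array([0.5, 0.0, 1.0, 0.2])       # input voltages applied to the rows

I = G.T @ V                               # each column current is a weighted sum
print(I.shape)                            # one current per output column
```

The point of the analogy is that the crossbar performs this multiply-accumulate physically and in parallel, which is why controlling each synapse’s conductance precisely matters so much.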

“The key with this method is you really need to control the artificial synapses well. When you’re talking about thousands of cross points, this poses challenges,” says Kim.

According to Kim, the design and materials that have been used to make these artificial synapses thus far have been less than ideal. The amorphous materials used in neuromorphic chips make it incredibly difficult to control the ions once voltage is applied.

In a Nature Materials study published earlier this year, Kim found that when his team made a chip out of silicon germanium they were able to control the current flowing out of the synapse and reduce variability to 1 percent. With control over how the synapses react to stimuli, it was time to put their chip to the test.

“We envision that if we build up the actual neural network with material we can actually do handwriting recognition,” says Kim. In a computer simulation of their new artificial neural network design, they provided thousands of handwriting samples. Their neural network was able to accurately recognize 95 percent of the samples.

“If you have a camera and an algorithm for the handwriting data set connected to our neural network, you can achieve handwriting recognition,” explains Kim.

While building the physical neural network for handwriting recognition is the next step for Kim’s team, the potential of this new technology goes beyond handwriting recognition. “Shrinking the power of a supercomputer down to a portable size could revolutionize the products we use,” says Kim. “The potential is limitless – we can integrate this technology in our phones, computers, and robots to make them substantially smarter.”

Making homes smarter

While Kim is working on making our portable products more intelligent, Professor Sanjay Sarma and Research Scientist Josh Siegel hope to integrate smart devices within the biggest product we own: our homes.

One evening, Sarma was in his home when one of his circuit breakers kept going off. This circuit breaker — known as an arc-fault circuit interrupter (AFCI) — was designed to shut off power when an electric arc is detected to prevent fires. While AFCIs are great at preventing fires, in Sarma’s case there didn’t seem to be an issue. “There was no discernible reason for it to keep going off,” recalls Sarma. “It was incredibly distracting.”

AFCIs are notorious for such ‘nuisance trips,’ which disconnect safe objects unnecessarily. Sarma, who also serves as MIT’s vice president for open learning, turned his frustration into opportunity. If he could embed the AFCI with smart technologies and connect it to the ‘internet of things,’ he could teach the circuit breaker to learn when a product is safe or when a product actually poses a fire risk.

“Think of it like a virus scanner,” explains Siegel. “Virus scanners are connected to a system that updates them with new virus definitions over time.” If Sarma and Siegel could embed similar technology into AFCIs, the circuit breakers could detect exactly what product is being plugged in and learn new object definitions over time.

If, for example, a new vacuum cleaner is plugged into the circuit breaker and the power shuts off without reason, the smart AFCI can learn that it’s safe and add it to a list of known safe objects. The AFCI learns these definitions with the aid of a neural network. But, unlike Jeehwan Kim’s physical neural network, this network is software-based.

The neural network is built by gathering thousands of data points during simulations of arcing. Algorithms are then written to help the network assess its environment, recognize patterns, and make decisions based on the probability of achieving the desired outcome. With the help of a $35 microcomputer and a sound card, the team can cheaply integrate this technology into circuit breakers.
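The virus-scanner analogy can be sketched as a small classifier. This is a hypothetical illustration, not the MIT team’s software: the breaker keeps a library of known-safe current signatures (standing in for spectra extracted from the sound card’s sampled waveform), matches each new device against them, and treats anything unmatched as a candidate trip. All names and thresholds below are invented.

```python
import numpy as np

# Illustrative sketch of "virus definitions" for an AFCI: a library of
# known-safe device signatures, matched by distance in feature space.
class SmartAFCI:
    def __init__(self, threshold=0.5):
        self.safe_signatures = {}     # device name -> feature vector
        self.threshold = threshold    # max distance to count as a match

    def add_definition(self, name, signature):
        """Learn a new safe device and remember its signature."""
        self.safe_signatures[name] = np.asarray(signature, dtype=float)

    def classify(self, signature):
        """Return the matching safe device, or None (candidate trip)."""
        sig = np.asarray(signature, dtype=float)
        best_name, best_dist = None, np.inf
        for name, ref in self.safe_signatures.items():
            dist = np.linalg.norm(sig - ref)
            if dist < best_dist:
                best_name, best_dist = name, dist
        return best_name if best_dist <= self.threshold else None

afci = SmartAFCI()
afci.add_definition("vacuum", [0.9, 0.1, 0.4])
print(afci.classify([0.88, 0.12, 0.41]))  # close to the vacuum's signature
print(afci.classify([0.1, 0.9, 0.9]))     # unknown device -> None
```

In the real system a neural network plays the role of this lookup, and confirmed-safe definitions could be pushed to other homes over the internet of things.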

As the smart AFCI learns about the devices it encounters, it can simultaneously distribute its knowledge and definitions to every other home using the internet of things.

“Internet of things could just as well be called ‘intelligence of things,’” says Sarma. “Smart, local technologies with the aid of the cloud can make our environments adaptive and the user experience seamless.”

Circuit breakers are just one of many ways neural networks can be used to make homes smarter. This kind of technology can control the temperature of your house, detect when there’s an anomaly such as an intrusion or burst pipe, and run diagnostics to see when things are in need of repair.

“We’re developing software for monitoring mechanical systems that are self-learned,” explains Siegel. “You don’t teach these devices all the rules, you teach them how to learn the rules.”

Making manufacturing and design smarter

Artificial intelligence can not only improve how users interact with products, devices, and environments; it can also improve the efficiency with which objects are made by optimizing the manufacturing and design process.

“Growth in automation along with complementary technologies including 3-D printing, AI, and machine learning compels us to, in the long run, rethink how we design factories and supply chains,” says Associate Professor A. John Hart.

Hart, who has done extensive research in 3-D printing, sees AI as a way to improve quality assurance in manufacturing. 3-D printers that incorporate high-performance sensors capable of analyzing data on the fly will help accelerate the adoption of 3-D printing for mass production.

“Having 3-D printers that learn how to create parts with fewer defects and inspect parts as they make them will be a really big deal — especially when the products you’re making have critical properties such as medical devices or parts for aircraft engines,” Hart explains.

The very process of designing the structure of these parts can also benefit from intelligent software. Associate Professor Maria Yang has been looking at how designers can use automation tools to design more efficiently. “We call it hybrid intelligence for design,” says Yang. “The goal is to enable effective collaboration between intelligent tools and human designers.”

In a recent study, Yang and graduate student Edward Burnell tested a design tool with varying levels of automation. Participants used the software to pick nodes for a 2-D truss of either a stop sign or a bridge. The tool would then automatically come up with optimized solutions based on intelligent algorithms for where to connect nodes and the width of each part. “We’re trying to design smart algorithms that fit with the ways designers already think,” says Burnell.

Making robots smarter

If there is anything on MIT’s campus that most closely resembles the futuristic robots of science fiction, it would be Professor Sangbae Kim’s robotic cheetah. The four-legged creature senses its surrounding environment using LIDAR technologies and moves in response to this information. Much like its namesake, it can run and leap over obstacles.

Kim’s primary focus is on navigation. “We are building a very unique system specially designed for dynamic movement of the robot,” explains Kim. “I believe it is going to reshape the interactive robots in the world. You can think of all kinds of applications — medical, healthcare, factories.”

Kim sees the opportunity to eventually connect his research with the physical neural network his colleague Jeehwan Kim is working on. “If you want the cheetah to recognize people, voice, or gestures, you need a lot of learning and processing,” he says. “Jeehwan’s neural network hardware could possibly enable that someday.”

Combining the power of a portable neural network with a robot capable of skillfully navigating its surroundings could open up a new world of possibilities for human and AI interaction. This is just one example of how researchers in mechanical engineering can one day collaborate to bring AI research to the next level.

While we may be decades away from interacting with intelligent robots, artificial intelligence and machine learning have already found their way into our routines. Whether it’s using face and handwriting recognition to protect our information, tapping into the internet of things to keep our homes safe, or helping engineers build and design more efficiently, the benefits of AI technologies are pervasive.

The science fiction fantasy of a world overtaken by robots is far from the truth. “There’s this romantic notion that everything is going to be automatic,” adds Maria Yang. “But I think the reality is you’re going to have tools that will work with people and help make their daily life a bit easier.”

Robot sets new Rubik’s Cube record

Solving a Rubik’s Cube in record time

A robot developed by MIT students Ben Katz and Jared Di Carlo can solve a Rubik’s Cube in a record-breaking 0.38 seconds.

By Mary Beth O’Leary

Few toys have captured the public’s imagination quite like the Rubik’s Cube. Rubik’s Cube references have been made in all corners of popular culture — from “The Simpsons” to “Being John Malkovich.” For the better part of four decades, this small handheld object has tormented those who tried to solve it.

Over the years, competitions have been held to see who could solve the Rubik’s Cube the fastest by hand. Engineers then started building robots programmed to solve the cube at lightning speeds. In 2016, a robot broke the world record and solved the cube in 0.637 seconds. Mechanical engineering graduate student Ben Katz and third-year electrical engineering and computer science major Jared Di Carlo thought they could do better.

“We watched the videos of the previous robots, and we noticed that the motors were not the fastest that could be used,” recalls Di Carlo. “We thought we could do better with improved motors and controls.”

The pair met through the MIT Electronics Research Society, MITERS, a student-run hackerspace. Throughout January’s Independent Activities Period, they set out to build a robot that could shatter the world record for solving a Rubik’s Cube.

“The gist is that there is a motor actuating each face of a Rubik’s Cube,” explains Katz, who conducts research at MIT’s Biomimetic Robotics Lab. Custom-built electronics and controls are then used to control each of those motors. The robot also has a pair of webcams pointed at the cube. “When we tell the robot to solve the cube, we use those webcams to identify the different colours on the face of the cube,” says Katz.

Di Carlo wrote software that identifies the colours of each individual part within the cube to determine the cube’s initial state. The team then fed that state to existing cube-solving software, which instructs the robot on exactly how to move the pieces to solve the puzzle.
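The colour-identification step can be sketched as a nearest-colour lookup: sample a pixel from each sticker and assign it to whichever of the six reference colours it is closest to. The reference RGB values and pixel samples below are invented for illustration; a real system would calibrate them against the webcams and the lighting.

```python
# Hypothetical sticker classifier: map a sampled RGB pixel to the nearest
# of six reference colours by squared distance in RGB space.
REFERENCE = {
    "white":  (255, 255, 255),
    "yellow": (255, 213, 0),
    "red":    (196, 30, 58),
    "orange": (255, 88, 0),
    "blue":   (0, 81, 186),
    "green":  (0, 158, 96),
}

def classify_sticker(rgb):
    """Return the reference colour closest (in RGB distance) to the pixel."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(REFERENCE, key=lambda name: dist2(rgb, REFERENCE[name]))

print(classify_sticker((250, 250, 240)))  # prints "white"
print(classify_sticker((10, 150, 100)))   # prints "green"
```

Running this over all 54 stickers yields the cube’s initial state, which can then be handed to a solver.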

The result? They set a new world record. It only took their robot 0.38 seconds to solve the Rubik’s Cube. The team credits the unique skills they brought to the table as the key to their success. “I worked on the computer vision software, while Ben worked on the more mechanical stuff,” adds Di Carlo.


Making a robot that can draw blood faster and more safely than a human can

Veebot’s robot system can find a vein and place a needle at least as well as a human can.

By Tekla S. Perry

You probably know the routine for drawing blood. A medical technician briefly wraps your arm in a tourniquet and looks your veins over, sometimes tapping gently with a gloved finger on your inner elbow. Then the med tech selects a target. Usually, but not always, she gets a decent vein on the first try; sometimes it takes a second (or third) stick. This procedure is fine for the typical blood test at a doctor’s office, but for contract researchers it represents a significant logistics problem. In drug trials it’s not unusual to have to draw blood from dozens of people every hour or so throughout a day. These tests can add up to more than a hundred thousand blood draws a year for just one contract research company.

Veebot, a start-up in Mountain View, Calif., is hoping to automate drawing blood and inserting IVs by combining robotics with image-analysis software. To use the Veebot system, a patient puts his or her arm through an archway over a padded table. Inside the archway, an inflatable cuff tightens around the arm, holding it in place and restricting blood flow to make the veins easier to see. An infrared light illuminates the inner elbow for a camera; software matches the camera’s view against a model of vein anatomy and selects a likely vein. The vein is examined with ultrasound to confirm that it’s large enough and has sufficient blood flowing through it. The robot then aligns the needle and sticks it in. The whole process takes about a minute, and the only thing the technician has to do is attach the appropriate test tube or IV bag.
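The selection logic described above might be sketched as follows. Every field name and threshold here is hypothetical; the article only says a likely vein is chosen from the camera image and then confirmed by ultrasound to be large enough with sufficient flow.

```python
# Hypothetical sketch of the vein-selection step: candidates from the
# image-analysis stage are filtered by the ultrasound checks (size and
# flow), and the best-matching survivor is chosen.  Thresholds invented.
MIN_DIAMETER_MM = 2.0
MIN_FLOW = 0.3   # arbitrary normalised flow units

def select_vein(candidates):
    """Return the best confirmed vein, or None (defer to a human)."""
    confirmed = [
        v for v in candidates
        if v["diameter_mm"] >= MIN_DIAMETER_MM and v["flow"] >= MIN_FLOW
    ]
    if not confirmed:
        return None
    return max(confirmed, key=lambda v: v["match_score"])

veins = [
    {"name": "cephalic", "diameter_mm": 2.5, "flow": 0.6, "match_score": 0.83},
    {"name": "basilic",  "diameter_mm": 1.5, "flow": 0.7, "match_score": 0.91},
]
print(select_vein(veins)["name"])  # basilic fails the size check -> "cephalic"
```

Returning None when no candidate passes is the safe default: the robot simply declines the stick rather than guessing.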

Veebot began in 2009 when Richard Harris, a third-year undergraduate in Princeton’s mechanical engineering department, was trying to come up with a topic for a project. At the same time, his father, Stuart Harris, founder of a company that does pharmaceutical contract research, mentioned that he’d love to see someone come up with a way to automate blood draws. Harris says he was drawn to the idea because “it involved robotics and computer vision, both fields I was interested in, and it had demanding requirements because you’d be fully automating something that is different every time and deals with humans.”

He built a prototype that could find and puncture dots drawn on flexible plastic tubing, and with funding from his father, he cofounded Veebot in 2010. Currently, Veebot’s machine can correctly identify the best vein to target about 83 percent of the time, says Harris, which is about as good as a human. Harris wants to get that rate up to 90 percent before clinical trials. However, while he expects to achieve this in three to five months, he will then have to secure outside funding to cover the expense of those trials.

Harris estimates the market for his technology to be about US $9 billion, noting that “blood is drawn a billion times a year in the U.S. alone; IVs are started 250 million times.” Veebot will initially try to sell to large medical facilities. Thomas Gunderson, managing director and a senior analyst at investment bank Piper Jaffray Companies, believes the time is right for this kind of medical device company. In a difficult case, “doctors today will search all over the hospital for the right person to do a blood draw, and they could still miss three or four times,” he says. “Technology can help from a labor standpoint and make the procedure safer for the patient and for the person drawing the blood.”

The biggest challenge, Harris says, is human psychology. “If people don’t want a robot drawing their blood, then nobody is going to use it. We believe if this machine works better, faster, and cheaper than a person, people will want to use it.” Says Gunderson: “These days we have multimillion-dollar robots doing surgery. I think we passed ‘creepy’ several years ago and moved on.”

Shape-shifting origami robot swaps bodies to roll, swim or walk


Each exoskeleton starts out as a sheet of plastic onto which the robot, known as Primer, rolls. Heat is then applied to cause the exoskeleton to fold around the robot in a motion akin to a piece of origami assembling itself. The folds are created by lines cut into the sheet of plastic, with the depth of each line determining the angle of its fold.

The exoskeletons allow the robot to adapt to different situations. One gives it the ability to roll, meaning it can move twice as fast as without the exoskeleton; another is shaped like a boat, letting it float on water and carry nearly twice its weight. It even has a glider-shaped exoskeleton that allows it to soar when falling from a height.

Primer itself is only a couple of centimetres in size and is controlled by an external magnetic field. Once it has finished with a particular exoskeleton, it can ditch the covering by dipping itself in water.

Mini surgeons and explorers

“In the future, we imagine robots like this could become mini surgeons, squished into a pill that you swallow,” says Daniela Rus at the Massachusetts Institute of Technology. Once in the stomach, the tiny surgeons could use different exoskeletons to cut tissue samples or deliver medicine – applications that are still a long way off but could have many advantages. “Some aspects of surgery could be done without incisions, pain, or infection,” says Rus.

The robots could also be used for exploration tasks, or monitoring abandoned warehouses, says Jamie Paik at the Swiss Federal Institute of Technology in Lausanne. “This is a great example of how origami robots can take on diverse tasks using different clothing, meaning that you can mould the robot to different situations,” she says.


Journal reference: Science Robotics, DOI: 10.1126/scirobotics.aao4369

Ingestible origami robot

The robot unfolds from an ingestible capsule and removes a button battery stuck to the wall of a simulated stomach.


In experiments involving a simulation of the human esophagus and stomach, researchers at MIT, the University of Sheffield, and the Tokyo Institute of Technology have demonstrated a tiny origami robot that can unfold itself from a swallowed capsule and, steered by external magnetic fields, crawl across the stomach wall to remove a swallowed button battery or patch a wound. The new work, which the researchers are presenting this week at the International Conference on Robotics and Automation, builds on a long sequence of papers on origami robots from the research group of Daniela Rus, the Andrew and Erna Viterbi Professor in MIT’s Department of Electrical Engineering and Computer Science.

“It’s really exciting to see our small origami robots doing something with potentially important applications to health care,” says Rus, who also directs MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). “For applications inside the body, we need a small, controllable, untethered robot system. It’s really difficult to control and place a robot inside the body if the robot is attached to a tether.”


Joining Rus on the paper are first author Shuhei Miyashita, who was a postdoc at CSAIL when the work was done and is now a lecturer in electronics at the University of York, in England; Steven Guitron, a graduate student in mechanical engineering; Shuguang Li, a CSAIL postdoc; Kazuhiro Yoshida of Tokyo Institute of Technology, who was visiting MIT on sabbatical when the work was done; and Dana Damian of the University of Sheffield, in England. Although the new robot is a successor to one reported at the same conference last year, the design of its body is significantly different. Like its predecessor, it can propel itself using what’s called a “stick-slip” motion, in which its appendages stick to a surface through friction when it executes a move, but slip free again when its body flexes to change its weight distribution.

Also like its predecessor — and like several other origami robots from the Rus group — the new robot consists of two layers of structural material sandwiching a material that shrinks when heated. A pattern of slits in the outer layers determines how the robot will fold when the middle layer contracts.

Material difference

The robot’s envisioned use also dictated a host of structural modifications. “Stick-slip only works when, one, the robot is small enough and, two, the robot is stiff enough,” says Guitron. “With the original Mylar design, it was much stiffer than the new design, which is based on a biocompatible material.” To compensate for the biocompatible material’s relative malleability, the researchers had to come up with a design that required fewer slits. At the same time, the robot’s folds increase its stiffness along certain axes.

But because the stomach is filled with fluids, the robot doesn’t rely entirely on the stick-slip motion. “In our calculation, 20 percent of forward motion is by propelling water — thrust — and 80 percent is by stick-slip motion,” says Miyashita. “In this regard, we actively introduced and applied the concept and characteristics of the fin to the body design, which you can see in the relatively flat design.” It also had to be possible to compress the robot enough that it could fit inside a capsule for swallowing; similarly, when the capsule dissolved, the forces acting on the robot had to be strong enough to cause it to fully unfold. Through a design process that Guitron describes as “mostly trial and error,” the researchers arrived at a rectangular robot with accordion folds perpendicular to its long axis and pinched corners that act as points of traction.

In the centre of one of the forward accordion folds is a permanent magnet that responds to changing magnetic fields outside the body, which control the robot’s motion. The forces applied to the robot are principally rotational. A quick rotation will make it spin in place, but a slower rotation will cause it to pivot around one of its fixed feet. In the researchers’ experiments, the robot uses the same magnet to pick up the button battery.

Porcine precedents

The researchers tested about a dozen different possibilities for the structural material before settling on the type of dried pig intestine used in sausage casings. “We spent a lot of time at Asian markets and the Chinatown market looking for materials,” Li says. The shrinking layer is a biodegradable shrink wrap called Biolefin. To design their synthetic stomach, the researchers bought a pig stomach and tested its mechanical properties. Their model is an open cross-section of the stomach and oesophagus, moulded from a silicone rubber with the same mechanical profile. A mixture of water and lemon juice simulates the acidic fluids in the stomach.

Every year, 3,500 swallowed button batteries are reported in the U.S. alone. Frequently, the batteries pass through the digestive system without incident, but if they come into prolonged contact with the tissue of the oesophagus or stomach, they can cause an electric current that produces hydroxide, which burns the tissue. Miyashita employed a clever strategy to convince Rus that the removal of swallowed button batteries and the treatment of consequent wounds was a compelling application of their origami robot. “Shuhei bought a piece of ham, and he put the battery on the ham,” Rus says. “Within half an hour, the battery was fully submerged in the ham. So that made me realize that, yes, this is important. If you have a battery in your body, you really want it out as soon as possible.”

“This concept is both highly creative and highly practical, and it addresses a clinical need in an elegant way,” says Bradley Nelson, a professor of robotics at the Swiss Federal Institute of Technology Zurich. “It is one of the most convincing applications of origami robots that I have seen.”