DARPA Wants Your Insect-Scale Robots for a Micro-Olympics

SHRIMP is a new DARPA program to develop insect-scale robots for disaster recovery and high-risk environments - Evan Ackerman

The DARPA Robotics Challenge was a showcase for how very large, very expensive robots could potentially be useful in disaster recovery and high-risk environments. Humanoids are particularly capable in some very specific situations, but the rest of the time, they’re probably overkill, and using smaller, cheaper, more specialized robots is much more efficient. This is especially true when you’re concerned with data collection as opposed to manipulation—for the “search” part of “search and rescue,” for example, you’re better off with lots of very small robots covering as much ground as possible.

Yesterday, DARPA announced a new program called SHRIMP: SHort-Range Independent Microrobotic Platforms. The goal is “to develop and demonstrate multi-functional micro-to-milli robotic platforms for use in natural and critical disaster scenarios.” To enable robots that are both tiny and useful, SHRIMP will support fundamental research in the component parts that are the most difficult to engineer, including actuators, mobility systems, and power storage.

From the DARPA program announcement:

Imagine a natural disaster scenario, such as an earthquake, that inflicts widespread damage to buildings and structures, critical utilities and infrastructure, and threatens human safety. Having the ability to navigate the rubble and enter highly unstable areas could prove invaluable to saving lives or detecting additional hazards among the wreckage. Partnering rescue personnel with robots to evaluate high-risk scenarios and environments can help increase the likelihood of successful search and recovery efforts, or other critical tasks while minimizing the threat to human teams.

Technological advances in microelectromechanical systems (MEMS), additive manufacturing, piezoelectric actuators, and low-power sensors have allowed researchers to expand into the realm of micro-to-milli robotics. However, due to the technical obstacles experienced as the technology shrinks, these platforms lack the power, navigation, and control to accomplish complex tasks proficiently.

To help overcome the challenges of creating extremely SWaP-constrained microrobotics, DARPA is launching a new program called SHort-Range Independent Microrobotic Platforms (SHRIMP). The goal of SHRIMP is to develop and demonstrate multi-functional micro-to-milli robotic platforms for use in natural and critical disaster scenarios. To achieve this mission, SHRIMP will explore fundamental research in actuator materials and mechanisms as well as power storage components, both of which are necessary to create the strength, dexterity, and independence of functional micro-robotics platforms.

That term “SWaP” translates into “size, weight, and power,” which are just some of the constraints that very small robots operate under. Power is probably the biggest one—tiny robots that aren’t tethered either run out of power within just a minute or two or rely on some kind of nifty and exotic source, like lasers or magnets. There’s also control to consider, with truly tiny robots almost always using off-board processors. These sorts of things substantially limit the real-world usefulness of microrobots, which is why DARPA is tackling them directly with SHRIMP.

One of our favorite things about DARPA programs like these is their competitive nature, and SHRIMP is no exception. Both components and integrated robots will compete in “a series of Olympic-themed competitions [for] multi-functional mm-to-cm scale robotic platforms,” performing tasks “associated with maneuverability, dexterity, [and] manipulation.” DARPA will be splitting the competition into two parts: one for actuators and power sources, and the other for complete robots.

Here are the tentative events for the actuator and power source competition; DARPA expects that teams will develop systems that weigh less than one gram and fit into one cubic centimeter.

High Jump: The micro-robotic actuator-power system must propel itself vertically from a stationary starting position, with distance measured only in the vertical direction and survivability as the judging criteria. Expected result: >5cm.

Long Jump: The micro-robotic actuator-power system must propel itself horizontally from a stationary starting position, with the distance measured only in the horizontal direction and survivability as the judging criteria. Expected result: >5cm.

Weightlifting: The micro-robotic actuator-power system must lift a mass, with progressively larger masses until the actuator system fails to lift the weight. Expected result: >10g. 

Shotput: The micro-robotic actuator-power system must propel a mass horizontally, with the distance measured only in the horizontal direction as the judging criteria. Both 1-gram and 5-gram masses must be attempted. Expected result: >10cm @ 1g, >5cm @ 5g.

Tug of War: The micro-robotic actuator-power system will be connected to a load cell to measure the blocking force of the actuator mechanism. Expected result: > 25mN.

Teams competing with entire robots will have a separate set of events, and DARPA is looking for a lot of capability in a very, very small package—in a volume of less than one cubic centimeter and a weight of less than one gram, DARPA wants to see “a micro power source, power converters, actuation mechanism and mechanical transmission and structural elements, computational control, sensors for stability and control, and any necessary sensors and actuators required to improve the maneuverability and dexterity of the platforms.” The robots should be able to move for 3 minutes, with a cost of transport of less than 50. Teams are allowed to develop different robots for different events, but DARPA is hoping that the winning design will be able to compete in at least four events.

Rock Piling: For each attempt, the microrobot must travel to, lift, and stack weights (varying from 0.5 to 2.0 g) in a minimum of two layers without human interaction. Expected result: 2g, 2 layers. 

Steeplechase: Competing teams will be given precise locations and types of obstacles (e.g. hurdle, gap, step, etc.) relative to the starting location. For each attempt, the microrobot must traverse the course without human interaction or recharge between each obstacle. The number of cleared obstacles and total distance will be used as the judging criteria. Expected result: 2 obstacles, 5m.

Biathlon: Competing teams will be given the choice between three beacon types (temperature, light, or sound) or they may choose to use all 3 types of beacons. For each attempt, the microrobot must traverse to a series of beacon waypoints to create an open circuit without human interaction or recharge between each waypoint. Expected result: 2 beacons, 5m.

Vertical Ascent: Microrobots will traverse up two surfaces, one with a shallow incline (10°) and the other with a sharp incline (80°). The total vertical distance traveled will be the judging criteria. Expected result: 10m at 10°, 1m at 80°.
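The “cost of transport” limit mentioned in the full-robot requirements is a standard dimensionless efficiency metric: CoT = P / (m·g·v), i.e., power consumed per unit of weight moved per unit of speed. A quick sketch of the calculation; the robot’s power draw, mass, and speed below are illustrative numbers, not DARPA’s:

```python
# Dimensionless cost of transport (CoT): power consumed divided by
# weight times speed. Lower is more efficient; SHRIMP asks for < 50.
G = 9.81  # gravitational acceleration, m/s^2

def cost_of_transport(power_w, mass_kg, speed_m_s):
    """CoT = P / (m * g * v)."""
    return power_w / (mass_kg * G * speed_m_s)

# A hypothetical 1-gram robot drawing 10 mW while moving at 3 cm/s:
cot = cost_of_transport(0.010, 0.001, 0.03)
print(round(cot, 1))  # -> 34.0, under the stated limit of 50
```

The same formula shows why power is the dominant constraint at this scale: with mass and speed both tiny, even milliwatts of draw push the ratio up quickly.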

DARPA has US $32 million in funding to spread across multiple projects for SHRIMP. Abstracts are due August 10, proposals are due September 26, and the competition could happen as early as March of next year.

Best Programming Language for Robotics

In this post, we’ll look at the top 10 most popular programming languages used in robotics. We’ll discuss their strengths and weaknesses, as well as reasons for and against using them.

“What’s the best programming language for robotics?” is actually a very reasonable question. After all, what’s the point of investing a lot of time and effort in learning a new programming language if it turns out you’re never going to use it? If you are a new roboticist, you want to learn the programming languages that are actually going to be useful for your career.

Why “It Depends” is a Useless Answer

Unfortunately, you will never get a simple answer if you ask “What’s the best programming language for robotics?” to a whole roomful of robotics professionals (or on forums like Stack Overflow, Quora, Trossen, Reddit, or ResearchGate). Electronic engineers will give a different answer from industrial robotic technicians. Computer vision programmers will give a different answer than cognitive roboticists. And everyone will disagree about what is “the best programming language.” In the end, the answer most people would agree with is “it depends.” It may be the most realistic answer, since it does depend on what type of application you want to develop and what system you are using, but it’s a pretty useless answer for the new roboticist trying to decide which language to learn first.

Which Programming Language Should I Learn First?

A better question to ask is: which programming language should you start learning first? You will still get differing opinions, but a lot of roboticists can agree on the key languages. The most important thing for roboticists is to develop “The Programming Mindset” rather than to be proficient in one specific language. In many ways, it doesn’t really matter which programming language you learn first. Each language you learn develops your proficiency with the programming mindset and makes it easier to learn any new language whenever it’s required.

Top 10 Popular Programming Languages in Robotics

There are over 1,500 programming languages in the world, which is far too many to learn. Here are the ten most popular programming languages in robotics at the moment. If your favorite language isn’t on the list, please tell everyone about it in the comments! Each language has different advantages for robotics. I have ordered them roughly in order of importance, from least to most valuable.

10. BASIC / Pascal

BASIC and Pascal were two of the first programming languages that I ever learned. However, that’s not why I’ve included them here: they are the basis for several of the industrial robot languages described below. BASIC was designed for beginners (it stands for Beginner’s All-Purpose Symbolic Instruction Code), which makes it a pretty simple language to start with. Pascal was designed to encourage good programming practices and also introduces constructs like pointers, which makes it a good “stepping stone” from BASIC to a more involved language. These days, both languages are a bit too outdated for “everyday use.” However, it can be useful to learn them if you’re going to be doing a lot of low-level coding or you want to become familiar with other industrial robot languages.

9. Industrial Robot Languages

Almost every robot manufacturer has developed their own proprietary robot programming language, which has been one of the problems in industrial robotics. You can become familiar with several of them by learning Pascal. However, you are still going to have to learn a new language every time you start using a new robot. ABB has its RAPID programming language. Kuka has KRL (Kuka Robot Language). Comau uses PDL2, Yaskawa uses INFORM and Kawasaki uses AS. Then, Fanuc robots use Karel, Stäubli robots use VAL3 and Universal Robots use UR Script. In recent years, programming options like ROS Industrial have started to provide more standardized options for programmers. However, if you are a technician, you are still more likely to have to use the manufacturer’s language.


8. LISP

LISP is the world’s second-oldest programming language (FORTRAN is older, but only by one year). It is not as widely used as many of the other programming languages on this list; however, it is still quite important within Artificial Intelligence programming. Parts of ROS are written in LISP, although you don’t need to know it to use ROS.

7. Hardware Description Languages (HDLs)

Hardware Description Languages are basically a programming way of describing electronics. These languages are quite familiar to some roboticists, because they are used to program Field Programmable Gate Arrays (FPGAs). FPGAs allow you to develop electronic hardware without having to actually produce a silicon chip, which makes them a quicker and easier option for some development. If you don’t prototype electronics, you may never use HDLs. Even so, it is important to know that they exist, as they are quite different from other programming languages. For one thing, all operations are carried out in parallel, rather than sequentially as with processor-based languages.

6. Assembly

Assembly allows you to program at “the level of ones and zeros,” which is programming at (more or less) the lowest level. In the recent past, most low-level electronics required programming in Assembly. With the rise of Arduino and other microcontrollers, you can now program easily at this level using C/C++, which means Assembly is likely to become less necessary for most roboticists.


5. MATLAB

MATLAB and its open-source relatives, such as Octave, are very popular with some robotics engineers for analyzing data and developing control systems. There is also a very popular Robotics Toolbox for MATLAB. I know people who have developed entire robotics systems using MATLAB alone. If you want to analyze data, produce advanced graphs, or implement control systems, you will probably want to learn MATLAB.

4. C#/.NET

C# is a proprietary programming language provided by Microsoft. I include C#/.NET here largely because of the Microsoft Robotics Developer Studio, which uses it as its primary language. If you are going to use this system, you’re probably going to have to use C#. However, learning C/C++ first might be a good option for long-term development of your coding skills.

3. Java

As an electronics engineer, I am always surprised that some computer science degrees teach Java to students as their first programming language. Java “hides” the underlying memory functionality from the programmer, which makes it easier to program than, say, C, but it also means that you have less understanding of what it is actually doing with your code. If you come to robotics from a computer science background (and many people do, especially in research), you will probably already have learned Java. Like C# and MATLAB, Java is an interpreted language: rather than being compiled all the way to machine code, it is compiled to bytecode, which the Java Virtual Machine executes at runtime. The theory is that, thanks to the Java Virtual Machine, you can use the same code on many different machines. In practice, this doesn’t always work out and can sometimes cause code to run slowly. However, Java is quite popular in some parts of robotics, so you might need it.

2. Python

There has been a huge resurgence of Python in recent years, especially in robotics. One of the reasons for this is probably that Python (and C++) are the two main programming languages found in ROS. Like Java, it is an interpreted language. Unlike Java, the language’s prime focus is ease of use, and many people agree that it achieves this very well. Python dispenses with a lot of the things that usually take up time in programming, such as defining and casting variable types. There are also a huge number of free libraries for it, which means you don’t have to “reinvent the wheel” when you need to implement some basic functionality. And since it allows simple bindings with C/C++ code, performance-heavy parts of the code can be implemented in those languages to avoid performance loss. As more electronics start to support Python “out of the box” (as with the Raspberry Pi), we are likely to see a lot more Python in robotics.
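A minimal illustration of the point about variable types: the function below works on any sequence of numbers, with no type declarations or casts required. The function and values here are just an example, not from any robotics library:

```python
# Duck typing in practice: the same function handles ints and floats,
# because Python checks behavior at runtime rather than declared types.
def scale(values, factor):
    """Multiply every element of a sequence by a factor."""
    return [v * factor for v in values]

print(scale([1, 2, 3], 2))     # -> [2, 4, 6]
print(scale([0.5, 1.5], 2.0))  # -> [1.0, 3.0]
```

This is exactly the kind of boilerplate-free code that makes Python quick for prototyping, at the cost of catching type errors only when the code runs.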

1. C/C++

Finally, we reach the number 1 programming language in robotics! Many people agree that C and C++ are a good starting point for new roboticists. Why? Because a lot of hardware libraries use these languages. They allow interaction with low-level hardware, allow for real-time performance, and are very mature programming languages. These days, you’ll probably use C++ more than C, because the language has much more functionality; C++ is basically an extension of C. It can be useful to learn at least a little bit of C first, so that you can recognize it when you find a hardware library written in C. C/C++ are not as simple to use as, say, Python or MATLAB: it can take quite a lot longer to implement the same functionality, and it will require many more lines of code. However, as robotics is very dependent on real-time performance, C and C++ are probably the closest thing we roboticists have to “a standard language.”

Source: Alex Owen-Hill, blog.robotiq.com


The New WALK-MAN: A Look at IIT’s Multi-Faceted Robotic Endeavor

Standing 1.85 meters tall, and made of lightweight metals, iron, and plastics, the WALK-MAN humanoid robot is controlled remotely by a human operator through a virtual interface - Sam Davis

The heart of the Istituto Italiano Di Tecnologia (IIT, Genoa, Italy) robotics strategy has always been the development of state-of-the-art mechatronics systems. This has led to the creation of internationally recognized humanoid robots and pioneering quadrupeds. IIT’s family of cutting-edge robots isn’t limited to legged systems, though. The Mechatronics program has explored completely new designs and operational paradigms, including materials, compliance, soft bodies, and distributed intelligence.

Besides its advanced integrated robot platforms, IIT researchers have developed component-level systems, including novel patented high-performance actuation systems, variable impedance actuators, advanced fingertip as well as large-area tactile sensors, exoskeletons (leg, arm, hand), instrumented haptic devices, novel medical systems, a variety of force/torque sensors, dexterous manipulators (e.g., SoftHand), and advanced industrial end-effectors.

The IIT Mechatronics program is developing new bodies for its integrated robotic systems, particularly for humanoid and legged robots. In these domains, researchers will focus on combining locomotion, manipulation, whole-body capabilities, new materials, and high-dynamics structures. As in most areas of engineering, it will be crucial to optimize energy use. To achieve this, they will use innovative lightweight and sustainable materials, improve mechatronics to better use the available power, and develop robots with more natural gaits and locomotion skills, coupled with enhanced actuator design.

Improvements in ruggedness, robustness, and reliability will require novel kinematics, shock-absorbing materials, and lightweight designs optimized for indoor and outdoor use in dirty and wet environments. They will develop highly integrated actuation solutions and decentralized diagnostics inspired by the new concept of “smart, high-performance mechatronics.”

Looking at the market, these systems have been designed for prompt, affordable market applications. Here, the engineering goals require reduced mechanical complexity (fewer parts, no exposed wires, robust sensors), a better payload-to-weight ratio, and improved manipulation skills (dexterous hands, a wider range of movement in the shoulder and wrist). The reduced complexity will lower the cost of the robots, which is particularly important for so-called companion robots. These systems will undergo extensive field-testing with end users, in line with the Technology Transfer Mission.

Delving Deeper into Locomotion

Advanced dynamical control and whole-body loco-manipulation are vital for complex human-like robots, particularly for locomotion and human-robot collaboration. In robot locomotion, where a flexible control strategy demands step recovery, walking, and running on possibly uneven terrains, advances will require the close integration of engineering (sensing, actuation, and mechanics), gait generation, dynamic modelling, and control.

The Mechatronics program will investigate locomotion, gait generation, and gait control in both bipeds and quadrupeds. With several robust platforms available, they will develop dynamic locomotion profiles. These will advance locomotion and loco-manipulation, particularly for operation in rough, hazardous, and poorly conditioned terrains, where wheeled and tracked vehicles cannot operate. The current locomotion capabilities on flat and moderately rough terrain will be extended to very challenging environments (e.g., soft and unstable terrains).

The locomotion framework will reach higher levels of autonomy, allowing automatic selection of the most suitable gaits/behaviors for the environment. Combinations of machine-learning and optimization methods will be used to plan movements and control the robot.

With complex systems such as humanoids, it’s vital to achieve simultaneous manipulation and control, while maintaining operational parameters such as balance, walking, and reaching. This requires a new advanced approach to control. Torque regulation (through hardware and software) will be critical to success in this domain. IIT robots feature fully integrated torque sensing and controllers. In the near future, exciting developments in controller design will advance the functionality of these robots, and fill a crucial gap in humanoid technology.

IIT research in soft robotics will aim to produce soft, lightweight, sensitive structures, such as manipulators and grippers. They will exploit additive manufacturing technologies and customized sewing machines to generate 3D-fiber-reinforced structural composites that feature large deformation capacity, high load capacity, and variable stiffness. This approach may also influence the design of rigid robots by replacing rigid joints with soft, compliant joints or soft and elastic actuators (e.g., McKibben muscles).

IIT’s Soft Robotics program will focus on developing continuum robots (i.e., with similarities to the elephant trunk and cephalopod arms) that can grow, evolve, self-heal, and biodegrade. The goal is for these continuum robots to traverse confined spaces, manipulate objects, and reach difficult-to-access sites. Potential applications will be in medicine, space, inspection, and search-and-rescue missions. The Soft Robotics program will require an unprecedented multidisciplinary effort combining biology (e.g., the study of plants), materials science (e.g., responsive polymer synthesis), engineering, and biomechanics.

The Walk-MAN robot (Credit: IIT-Istituto Italiano di Tecnologia)

Researchers at IIT successfully tested their new version of the WALK-MAN humanoid robot for supporting emergency response teams in fire incidents (see figure). The robot can locate the fire, walk toward it, and then activate an extinguisher to put it out. During the operation, it collects images and transmits them back to emergency teams, who can evaluate the situation and guide the robot remotely. The new WALK-MAN’s design consists of a lighter upper body and new hands, helping to reduce construction cost and improve performance.

During the final test, WALK-MAN dealt with a scenario representing an industrial plant damaged by an earthquake, with gas leaks and fire—no doubt a dangerous situation for humans. The scenario was recreated in IIT laboratories, where the robot was able to navigate through a damaged room and perform four specific tasks: opening and traversing the door to enter the zone; locating and closing the valve that controls the gas leak; removing debris in its path; and finally identifying the fire and activating a fire extinguisher.

The robot is controlled by a human operator through a virtual interface and a sensorized suit, which lets the operator control the robot’s manipulation and locomotion very naturally, like an avatar. The operator guides the robot from a station located remotely from the accident site, receiving images and other information from the robot’s perception systems.

The first version of WALK-MAN was released in 2015, but researchers wanted to introduce new materials and optimize the design to reduce the fabrication cost and improve its performance. The new version of the WALK-MAN robot has a new, lighter upper body, whose realization took six months and involved a team of about 10 people coordinated by Nikolaos Tsagarakis, a researcher at IIT.

The new WALK-MAN is a humanoid robot 1.85 meters tall, made of lightweight metals such as Ergal (60 percent), magnesium alloys (25 percent), and titanium, along with iron and plastics. Researchers reduced its weight by 31 kilograms—from the original 133 kilograms to 102 kilograms—to make the robot more dynamic: its legs can move faster because they have a lighter upper-body mass to carry. The higher dynamic performance allows the robot to react faster with its legs, which is very important for adapting its pace to rough terrain and variable interaction scenarios. The lighter upper body also reduces energy consumption; WALK-MAN can operate with a smaller battery (1 kWh) for about two hours.
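As a sanity check on those endurance figures, a 1 kWh battery lasting about two hours implies an average power draw of roughly 500 W. The battery capacity and runtime come from the article; the arithmetic below is mine:

```python
# Back-of-the-envelope check of WALK-MAN's stated endurance:
# average power = battery energy / runtime.
battery_wh = 1000.0  # 1 kWh battery, expressed in watt-hours
runtime_h = 2.0      # stated operating time in hours

avg_power_w = battery_wh / runtime_h
print(avg_power_w)  # -> 500.0 watts average draw
```

That half-kilowatt figure gives a sense of why shaving 31 kilograms off the upper body translates directly into longer operation on a smaller battery.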

The lighter upper body is made of magnesium alloys and composite structures and is powered by a new version of lightweight soft actuators. Its performance has been improved, reaching a higher payload (10 kg per arm) than the original (7 kg per arm). Thus, it can carry heavy objects around and hold them up for more than 10 minutes.

The new upper body is also more compact (62-cm shoulder width and 31-cm torso depth), giving the robot greater flexibility to pass through standard doors and narrow passages.

The hands are a new version of SoftHand developed by the Centro Ricerche E. Piaggio of the University of Pisa (a group led by Prof. A. Bicchi) in collaboration with IIT. They are lighter, thanks to the composite material used to construct the fingers, and they have a better finger-to-palm size ratio (more human-like) that allows WALK-MAN to grasp a variety of object shapes. Despite the weight reduction, the hands keep the same strength as the original version, as well as their versatility in handling and physical robustness.

WALK-MAN’s body is driven by 32 motors with control boards, four force/torque sensors at the hands and feet, and two accelerometers for controlling its balance. Its joints are elastic, allowing the robot to be compliant and interact safely with humans and the environment. Its software architecture is based on the XBotCore framework. The WALK-MAN head has cameras, a 3D laser scanner, and microphone sensors; in the future, it can also be equipped with chemical sensors for detecting toxic agents.

The WALK-MAN robot was designed and implemented by IIT within the project WALK-MAN funded by the European Commission. The project started in 2013 and is now at its final validation phase. The project also involved the University of Pisa in Italy, the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, the Karlsruhe Institute of Technology (KIT) in Germany, and the Université Catholique de Louvain (UCL) in Belgium. All partners contributed to different aspects of the robot realization: locomotion control, perception capability, affordances and motion planning, simulation tools, and manipulation control.

The validation scenario was defined in collaboration with the Italian civil protection body in Florence, which participated in the project as an advisor end-user.

Artificial Intelligence & Manufacturing Industry

David Gelernter, artist, writer, and professor of computer science at Yale University, was quoted in an article titled “Artificial intelligence isn’t the scary future. It’s the amazing present,” published in the Chicago Tribune: “The coming of computers with true human-like reasoning remains decades in the future, but when the moment of ‘artificial general intelligence’ arrives, the pause will be brief. Once artificial minds achieve the equivalence of the average human IQ of 100, the next step will be machines with an IQ of 500, and then 5,000. We don’t have the vaguest idea what an IQ of 5,000 would mean. And in time, we will build such machines–which will be unlikely to see much difference between humans and houseplants.”

Artificial Intelligence –It’s everywhere

There was a time when Artificial Intelligence was considered futuristic, and what Professor Gelernter said might have sounded insane. But no more, especially now that we’re using so much AI in our daily lives.

For example, GPS navigation now outperforms our own spatial reasoning, and we have started to rely heavily on Apple’s Siri and Amazon’s Echo. AI has made a lot of progress quickly, thanks to improved processing power, better algorithms, and a lot of data. With the help of machine learning, large amounts of data can be analyzed and critical insights extracted.

Known as the major propellant of the Fourth Industrial Revolution, AI is expected to wipe out nearly half of all human jobs (mostly white-collar jobs) in the next 20 years. Every industry will opt to replace humans for work that can be performed with the help of AI. Algorithms and automation are a major threat because they offer improved efficiency at a lower price.

The American Manufacturing Industry today

The American Manufacturing Industry has suffered for a long time, especially since the Great Recession of the late 2000s. Growth has been sluggish, for two key reasons: goods manufactured in the U.S. have always been more expensive for foreign markets due to the strong dollar, and there have been major cutbacks in the energy sector. In 2010, China replaced the US as the largest manufacturing country in the world.

Some of the leading manufacturing industries in the US are steel, automobiles, chemicals, food processing, consumer goods, aerospace, and mining.

How will AI have an impact on the Manufacturing Industry?

The manufacturing industry has always been open to adopting new technologies; drones and industrial robots have been a part of it since the 1960s. The next automation revolution is just around the corner, and the US manufacturing sector is awaiting this change eagerly. If, with the adoption of AI, companies can keep inventories lean and reduce costs, there is a high likelihood that the American Manufacturing Industry will experience encouraging growth. Having said that, the manufacturing sector has to gear up for networked factories, where the supply chain, design team, production line, and quality control are highly integrated into an intelligent engine that provides actionable insights.

Virtual Reality

Virtual Reality will enable new tools that help perform testing in the virtual world. It allows remotely located people to connect and jointly work on situations that require troubleshooting. Simulation and virtual product creation can reduce manufacturing time drastically.


Automation

Automation will help the manufacturing industry reach a level of accuracy and productivity beyond human ability. It can even work in environments that are otherwise dangerous, tedious, or complicated for humans. Future robots are expected to have capabilities like voice and image recognition, which can be used to re-create complex human tasks.

Internet of Things (IoT)

We have all started to use smart sensors, but it is a little-known fact that IoT functionality will have a huge role in the manufacturing industry. It can track and analyze production quotas and aggregate data from control rooms, and the technology can also help create models for predictive maintenance. When combined with augmented and virtual reality and with analysis of customer feedback, it can yield a number of meaningful insights that drive innovation.
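As a rough sketch of how sensor data can feed a predictive-maintenance model, the snippet below flags a machine whose recent vibration readings drift well outside a healthy baseline. The readings, the threshold, and the function name are all hypothetical, for illustration only:

```python
# Minimal predictive-maintenance check: flag a machine when the average of
# its recent sensor readings deviates more than k standard deviations from
# a baseline recorded while the machine was known to be healthy.
from statistics import mean, stdev

def needs_inspection(baseline, recent, k=3.0):
    """Return True if recent readings drift > k sigma from the baseline mean."""
    mu, sigma = mean(baseline), stdev(baseline)
    return abs(mean(recent) - mu) > k * sigma

healthy = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]   # baseline vibration levels
drifting = [1.6, 1.7, 1.8]                    # readings after wear sets in
print(needs_inspection(healthy, drifting))    # -> True
```

Real deployments use far richer models (trend analysis, learned failure signatures), but the principle is the same: compare live sensor streams against a model of normal behavior and schedule maintenance before failure.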


Robotics

With the promise of increased output, robots are already being used in manufacturing companies, and with their growing intelligence, they will soon replace much of the factory workforce. Every stage can be closely monitored with the help of sensors, and the data can be shared with AI and analytics software. Output increases, defect detection and corrective action become much faster, and the entire production cycle is far more efficient.


Despite widespread fears of human jobs being lost to AI, manufacturing will be driven toward higher productivity and efficiency with its help. The workforce can focus more on innovation and new operations, contributing to the growth and bright future of the American manufacturing industry.

Taking a leap in bioinspired robotics

Mechanical engineer Sangbae Kim builds animal-like machines for use in disaster response.

In the not-so-distant future, first responders to a disaster zone may include four-legged, dog-like robots that can bound through a fire or pick their way through a minefield, rising up on their hind legs to turn a hot door handle or punch through a wall.

Such robot-rescuers may be ready to deploy in the next five to 10 years, says Sangbae Kim, associate professor of mechanical engineering at MIT. He and his team in the Biomimetic Robotics Laboratory are working toward that goal, borrowing principles from biomechanics, human decision-making, and mechanical design to build a service robot that Kim says will eventually do “real, physical work,” such as opening doors, breaking through walls, or closing valves.

“Say there are toxic gases leaking in a building, and you need to close a valve inside, but it’s dangerous to send people in,” Kim says. “Now, there is no single robot that can do this kind of job. I want to create a robotic first responder that can potentially do more than a human and help in our lives.”

To do this, Kim, who was awarded tenure this year, is working to fuse the two main projects in his lab: the MIT Cheetah, a four-legged, 70-pound robot that runs and jumps over obstacles autonomously; and HERMES, a two-legged, teleoperated robot whose movements and balance are controlled remotely by a human operator, much like a marionette or a robotic “Avatar.”

“I imagine a robot that can do some physical, dynamic work,” Kim says. “Everybody is trying to find overlapping areas where you’re excited about what you’re working on, and it’s useful. A lot of people are excited to watch sports because when you watch someone moving explosively, it is hypothesized to trigger the brain’s ‘mirror neurons’ and you feel that excitement at the same time. For me, when my robots perform dynamically and balance, I get really excited. And that feeling has encouraged my research.”

A drill sergeant turns roboticist

Kim was born in Seoul, South Korea, where he says his mother remembers him as a tinkerer. “Everything with a screw, I would take apart,” Kim says. “And she said the first time, almost everything broke. After that, everything started working again.”

He attended Yonsei University in the city, where he studied mechanical engineering. In his second year, as is mandatory in the country, he and other male students joined the South Korean army, where he served as a drill sergeant for two and a half years.

“We taught [new recruits] every single detail about how to be a soldier, like how to wear shirts and pants, buckle your belt, and even how to make a fist when you walk,” Kim recalls. “The day started at 5:30 a.m. and didn’t end until everyone was asleep, around 10:30 p.m., and there were no breaks. Drill sergeants are famous for being mean, and I think there’s a reason for that — they have to keep very tight schedules.”

After fulfilling his military duty, Kim returned to Yonsei University, where he gravitated toward robotics, though there was no formal program in the subject. He ended up participating in a class project that challenged students to build robots to perform specific tasks, such as capturing a flag, and then to compete, bot to bot, in a contest that was similar to MIT’s popular Course 2.007 (Design and Manufacturing), which he now co-teaches.

“[The class] was a really good motivation in my career and made me anchor on the robotic, mechanistic side,” Kim says.

A bioinspired dream

In his last year of college, Kim developed a relatively cheap 3-D scanner, which he and three other students launched commercially through a startup company called Solutionix, which has since expanded on Kim’s design. However, in the early stages of the company’s fundraising efforts, Kim came to a realization.

“As soon as it came out, I lost excitement because I was done figuring things out,” Kim says. “I loved the figuring-out part. And I realized after a year of the startup process, I should be working in the beginning process of development, not so much in the maturation of products.”

After enabling first sales of the product, he left the country and headed for Stanford University, where he enrolled in the mechanical engineering graduate program. There, he experienced his first taste of design freedom.

“That was a life-changing experience,” Kim says. “It was a more free, creativity-respecting environment — way more so than Korea, where it’s a very conservative culture. It was quite a culture shock.”

Kim joined the lab of Mark Cutkosky, an engineering professor who was looking for ways to design bioinspired robotic machines. In particular, the team was trying to develop a climbing robot that mimicked the gecko, which uses tiny hairs on its feet to help it climb vertical surfaces. Kim adapted this hairy mechanism in a robot and found that it worked.

“It was 2:30 a.m. in the lab, and I couldn’t sleep. I had tried many things, and my heart was thumping,” Kim recalls. “On some replacement doors with tall windows, [the robot] climbed up smoothly, using the world’s first directional adhesives that I invented. I was so excited to show it to the others, I sent them all a video that night.”

He and his colleagues launched a startup to develop the gecko robot further, but again, Kim missed the thrill of being in the lab. He left the company soon after, for a postdoc position at Harvard University, where he helped to engineer the Meshworm, a soft, autonomous robot that inched across a surface like an earthworm. But even then, Kim was setting his sights on bigger designs.

“I was moving away from small robots because it’s very difficult for them to do real, physical work,” Kim says. “And so I decided to develop a larger, four-legged robot for human-level physical tasks — a long-term dream.”

Searching for principles

In 2009, Kim accepted an assistant professorship in MIT’s Department of Mechanical Engineering, where he established his Biomimetic Robotics Lab and set a specific research goal: to design and build a four-legged, cheetah-inspired robot.

“We chose the cheetah because it was the fastest of all land animals, so we learned its features the best, but there are many animals with similarities [to cheetahs],” Kim says. “There are some subtle differences, but probably not ones that you can learn the design principles from.”

In fact, Kim quickly learned that in some cases, it may not be the best option to recreate certain animal behaviours in a robot.

“A good example in our case is the galloping gait,” Kim says. “It’s beautiful, and in a galloping horse, you hear a da-da-rump, da-da-rump. We were obsessed to recreate that. But it turns out galloping has very few advantages in the robotics world.”

Animals prefer specific gaits at a given speed due to a complex interaction of muscles, tendons, and bones. However, Kim found that the cheetah robot, powered with electric motors, exhibited very different kinetics from its animal counterpart. For example, with high-power motors, the robot was able to trot at a steady clip of 14 miles per hour — much faster than animals can trot in nature.

“We have to understand what is the governing principle that we need, and ask: Is that a constraint in biological systems, or can we realize it in an engineering domain?” Kim says. “There’s a complex process to find out useful principles overarching the differences between animals and machines. Sometimes obsessing over animal features and characteristics can hinder your progress in robotics.”

A “secret recipe”

In addition to building bots in the lab, Kim teaches several classes at MIT, including 2.007, which he has co-taught for the past five years.

“It’s still my favourite class, where students really get out of this homework-exam mode, and they have this opportunity to throw themselves into the mud and create their own projects,” Kim says. “Students today grew up in the maker movement and with 3-D printing and Legos, and they’ve been waiting for something like 2.007.”

Kim also teaches a class he created in 2013 called Bioinspired Robotics, in which 40 students team up in groups of four to design and build a robot inspired by biomechanics and animal motions. This past year, students showcased their designs in Lobby 7, including a throwing machine, a trajectory-optimizing kicking machine, and a kangaroo machine that hopped on a treadmill.

Outside of the lab and the classroom, Kim is studying another human motion: the tennis swing, which he has sought to perfect for the past 10 years.

“In a lot of human motion, there’s some secret recipe, because muscles have very special properties, and if you don’t know them well, you can perform really poorly and injure yourself,” Kim says. “It’s all based on muscle function, and I’m still figuring out things in that world, and also in the robotics world.”- Jennifer Chu