Intelligent Machines: A team of AI algorithms just crushed humans in a complex computer game - Will Knight

Algorithms capable of collaboration and teamwork can outmanoeuvre human teams.

Five different AI algorithms have teamed up to kick human butt in Dota 2, a popular strategy computer game.

Researchers at OpenAI, a nonprofit based in California, developed the algorithmic A-team, which they call the OpenAI Five. Each algorithm uses a neural network to learn not only how to play the game, but also how to cooperate with its AI teammates. It has started defeating amateur Dota 2 players in testing, OpenAI says.

This is an important and novel direction for AI since algorithms typically operate independently. Approaches that help algorithms cooperate with each other could prove important for commercial uses of the technology. AI algorithms could, for instance, team up to outmanoeuvre opponents in online trading or ad bidding. Collaborative algorithms might also cooperate with humans.

OpenAI previously demonstrated an algorithm capable of competing against top humans at single-player Dota 2. The latest work builds on this using similar algorithms modified to value both individual and team success. The algorithms do not communicate directly except through gameplay.
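
Valuing both individual and team success can be made concrete with a small sketch. This is an illustrative reward-blending function loosely modeled on the "team spirit" idea OpenAI has described publicly; the function and parameter names here are assumptions for illustration, not OpenAI's actual code.

```python
# Hedged sketch: blend each agent's own reward with the team's mean reward.
# The name "team_spirit" follows OpenAI's published description of the idea;
# everything else here is illustrative.

def blended_rewards(individual_rewards, team_spirit=0.5):
    """Mix each agent's own reward with the team's mean reward.

    team_spirit = 0 gives purely selfish agents; team_spirit = 1 makes
    every agent optimize only the shared team-average reward.
    """
    team_mean = sum(individual_rewards) / len(individual_rewards)
    return [
        (1 - team_spirit) * r + team_spirit * team_mean
        for r in individual_rewards
    ]

# One hero earns a reward of 1.0 (say, for a kill); with team_spirit=0.5
# the credit is partly shared with the other four agents.
shared = blended_rewards([1.0, 0.0, 0.0, 0.0, 0.0], team_spirit=0.5)
```

Note that the blend preserves the total reward across the team, so tuning the parameter trades off credit assignment without changing the overall incentive.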

“What we’ve seen implies that coordination and collaboration can emerge very naturally out of the incentives,” says Greg Brockman, one of the founders of OpenAI, which aims to develop artificial intelligence openly and in a way that benefits humanity. He adds that the team has tried substituting a human player for one of the algorithms and found this to work very well. “He described himself as feeling very well supported,” Brockman says.

Dota 2 is a complex strategy game in which teams of five players compete to control a structure within a sprawling landscape. Players have different strengths, weaknesses, and roles, and the game involves collecting items and planning attacks, as well as engaging in real-time combat.

Pitting AI programs against computer games has become a familiar means of measuring progress. DeepMind, a subsidiary of Alphabet, famously developed a program capable of learning to play the notoriously complex and subtle board game Go with superhuman skill. A related program then taught itself from scratch to master Go and then chess simply by playing against itself.

The strategies required for Dota 2 are more defined than in chess or Go, but the game is still difficult to master. It is also challenging for a machine because it isn’t always possible to see what your opponents are up to and because teamwork is required.

The OpenAI Five learn by playing against various versions of themselves. Over time, the programs developed strategies much like the ones humans use—figuring out ways to acquire gold by “farming” it, for instance, as well as adopting a particular strategic role, or “lane,” within the game.

AI experts say the achievement is significant. “Dota 2 is an extremely complicated game, so even beating strong amateurs is truly impressive,” says Noam Brown, a researcher at Carnegie Mellon University in Pittsburgh. “In particular, dealing with hidden information in a game as large as Dota 2 is a major challenge.”

Brown previously worked on an algorithm capable of playing poker, another imperfect-information game, with superhuman skill (see “Why poker is a big deal in AI”). If the OpenAI Five team can consistently beat humans, Brown says, that would be a major achievement in AI. However, he notes that given enough time, humans might be able to figure out weaknesses in the AI team’s playing style.

Other games could also push AI further, Brown says. “The next major challenge would be games involving communication, like Diplomacy or Settlers of Catan, where balancing between cooperation and competition is vital to success.”

Given a satellite image, machine learning creates the view on the ground

Geographers could use the technique to determine how land is used.

Leonardo da Vinci famously created drawings and paintings that showed a bird’s eye view of certain areas of Italy with a level of detail that was not otherwise possible until the invention of photography and flying machines. Indeed, many critics have wondered how he could have imagined these details. But now researchers are working on the inverse problem: given a satellite image of Earth’s surface, what does that area look like from the ground? How clear can such an artificial image be?

Today we get an answer thanks to the work of Xueqing Deng and colleagues at the University of California, Merced. These guys have trained a machine-learning algorithm to create ground-level images simply by looking at satellite pictures from above. The technique is based on a form of machine intelligence known as a generative adversarial network. This consists of two neural networks called a generator and a discriminator.

The generator creates images that the discriminator assesses against some learned criteria, such as how closely they resemble giraffes. By using the output from the discriminator, the generator gradually learns to produce images that look like giraffes.
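
The adversarial setup described above can be written down compactly. This is a minimal, illustrative sketch of the two standard GAN losses, expressed over discriminator scores rather than as a full training loop; it is not the Merced team's code.

```python
import math

# Hedged sketch of the standard GAN objectives. Scores are the
# discriminator's outputs in (0, 1): its estimated probability
# that an image is real.

def discriminator_loss(real_scores, fake_scores):
    """Penalize D for scoring real images below 1 or fakes above 0."""
    loss = -sum(math.log(s) for s in real_scores)        # want D(real) -> 1
    loss -= sum(math.log(1.0 - s) for s in fake_scores)  # want D(fake) -> 0
    return loss / (len(real_scores) + len(fake_scores))

def generator_loss(fake_scores):
    """Non-saturating form: G is rewarded when D(fake) approaches 1."""
    return -sum(math.log(s) for s in fake_scores) / len(fake_scores)
```

As the generator improves, its fakes earn higher discriminator scores and its loss falls; the two networks push each other until the fakes are hard to tell apart from the real images.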

In this case, Deng and co trained the discriminator using real images of the ground as well as satellite images of that location, so it learns how to associate a ground-level image with its overhead view. Of course, the quality of the data set is important. The team used as ground truth the LCM2015 land-cover map, which gives the class of land at a one-kilometre resolution for the entire UK, but limited the data to a 71×71-kilometre grid that includes London and the surrounding countryside. For each location in this grid, they downloaded a ground-level view from an online database called Geograph.

The team then trained the discriminator with 16,000 pairs of overhead and ground-level images. The next step was to start generating ground-level images. The generator was fed a set of 4,000 satellite images of specific locations and had to create ground-level views for each, using feedback from the discriminator. The team tested the system with 4,000 overhead images and compared them with the ground truth images.

The results make for interesting reading. The network produces images that are plausible given the overhead image, if relatively low in quality. The generated images capture basic qualities of the ground, such as whether it shows a road, whether the land is rural or urban, and so on. “The generated ground-level images looked natural although, as expected, they lacked the details of real images,” said Deng and co.

That’s a neat trick, but how useful is it? One important task for geographers is to classify land according to its use, such as whether it is rural or urban. Ground-level images are essential for this. However, existing databases tend to be sparse, particularly in rural locations, so geographers have to interpolate between the images, a process that is little better than guessing.

Now Deng and co’s generative adversarial networks provide an entirely new way to determine land use. When geographers want to know the ground-level view at any location, they can simply create the view with the neural network based on a satellite image. Deng and co even compare the two methods—interpolation versus image generation. The new technique turns out to correctly determine land use 73 per cent of the time, while the interpolation method is correct in just 65 per cent of cases.

That’s interesting work that could make geographers’ lives easier. But Deng and co have greater ambitions. They hope to improve the image generation process so that in future it will produce even more detail in the ground-level images. Leonardo da Vinci would surely be impressed.

Ref: What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks

Digital Agriculture: Farmers in India are using AI to increase crop yields

The fields had been freshly plowed. The furrows ran straight and deep. Yet, thousands of farmers across Andhra Pradesh (AP) and Karnataka waited to get a text message before they sowed the seeds. The SMS, which was delivered in Telugu and Kannada, their native languages, told them when to sow their groundnut crops.

In a few dozen villages in Telangana, Maharashtra and Madhya Pradesh, farmers are receiving automated voice calls that tell them whether their cotton crops are at risk of a pest attack, based on weather conditions and crop stage. Meanwhile, in Karnataka, the state government can get price forecasts for essential commodities such as tur (split red gram) three months in advance to plan for the Minimum Support Price (MSP).

Welcome to digital agriculture, where technologies such as Artificial Intelligence (AI), Cloud Machine Learning, Satellite Imagery and advanced analytics are empowering small-holder farmers to increase their income through higher crop yield and greater price control.

AI-based sowing advisories lead to 30% higher yields

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications,” says Dr. Suhas P. Wani, Director, Asia Region, of the International Crop Research Institute for the Semi-Arid Tropics (ICRISAT), a non-profit, non-political organization that conducts agricultural research for development in Asia and sub-Saharan Africa with a wide array of partners throughout the world.

Microsoft, in collaboration with ICRISAT, developed an AI Sowing App powered by the Microsoft Cortana Intelligence Suite, including Machine Learning and Power BI. The app sends participating farmers sowing advisories with the optimal date to sow. The best part: the farmers don’t need to install any sensors in their fields or incur any capital expenditure. All they need is a feature phone capable of receiving text messages.

Flashback to June 2016. While other farmers were busy sowing their crops in Devanakonda Mandal in Kurnool district in AP, G. Chinnavenkateswarlu, a farmer from Bairavanikunta village, decided to wait. Instead of sowing his groundnut crop during the first week of June, as traditional agricultural wisdom would have dictated, he chose to sow three weeks later, on June 25, based on an advisory he received in a text message.

Chinnavenkateswarlu was part of a pilot program that ICRISAT and Microsoft were running for 175 farmers in the state. The program sent farmers text messages on sowing advisories, such as the sowing date, land preparation, soil test based fertilizer application, and so on.

For centuries, farmers like Chinnavenkateswarlu had been using age-old methods to predict the right sowing date. Mostly, they’d choose to sow in early June to take advantage of the monsoon season, which typically lasted from June to August. But the changing weather patterns in the past decade have led to unpredictable monsoons, causing poor crop yields.

“I have three acres of land and sowed groundnut based on the sowing recommendations provided. My crops were harvested on October 28 last year, and the yield was about 1.35 ton per hectare.  Advisories provided for land preparation, sowing, and need-based plant protection proved to be very useful to me,” says Chinnavenkateswarlu, who along with the 174 others achieved an average of 30% higher yield per hectare last year.

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications.”

– Dr. Suhas P. Wani, Director, Asia Region, ICRISAT

To calculate the crop-sowing period, 30 years of historic climate data (1986 to 2015) for the Devanakonda area in Andhra Pradesh were analyzed using AI. To determine the optimal sowing period, the Moisture Adequacy Index (MAI) was calculated. MAI is the standardized measure used to assess how adequately rainfall and soil moisture meet the potential water requirement of crops.

The real-time MAI is calculated from the daily rainfall recorded and reported by the Andhra Pradesh State Development Planning Society. The future MAI is calculated from weather-forecasting models for the area provided by US-based aWhere Inc. This data is then downscaled to build predictability and to guide farmers in picking the ideal sowing week, which in the pilot program was estimated to start on June 24 that year.
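
A simplified version of the calculation can be sketched in code. MAI is commonly expressed as the ratio of actual to potential evapotranspiration; the threshold, function names, and tiny example data below are illustrative assumptions, and ICRISAT's operational model is more detailed than this.

```python
# Hedged sketch: a simplified moisture adequacy index (MAI) and a
# sowing-week rule built on it. Illustrative only; not ICRISAT's model.

def weekly_mai(actual_et, potential_et):
    """MAI for one week: fraction of crop water demand actually met."""
    if potential_et <= 0:
        raise ValueError("potential evapotranspiration must be positive")
    return min(actual_et / potential_et, 1.0)

def first_sowing_week(actual_et_series, potential_et_series, threshold=0.5):
    """Return the first week index where MAI crosses the threshold, else None."""
    for week, (aet, pet) in enumerate(zip(actual_et_series, potential_et_series)):
        if weekly_mai(aet, pet) >= threshold:
            return week
    return None

# Example: a crop demanding 30 mm/week; rain-driven supply picks up in week 3.
week = first_sowing_week([5, 10, 12, 20, 28], [30, 30, 30, 30, 30])
```

A rule like this is what lets an advisory push the sowing date back by weeks when early-season moisture falls short of crop demand.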

Ten sowing advisories were initiated and disseminated until the harvesting was completed. The advisories contained essential information including the optimal sowing date, soil test based fertilizer application, farm yard manure application, seed treatment, optimum sowing depth, and more. In tandem with the app, a personalized village advisory dashboard provided important insights into soil health, recommended fertilizer, and seven-day weather forecasts.

“Farmers who sowed in the first week of June got meager yields due to a long dry spell in August, while registered farmers who sowed in the last week of June and the first week of July and followed advisories got better yields and are out of loss,” explains C Madhusudhana, President, Chaitanya Youth Association and Watershed Community Association of Devanakonda.

In 2017, the program was expanded to touch more than 3,000 farmers across the states of Andhra Pradesh and Karnataka during the Kharif crop cycle (rainy season) for a host of crops including groundnut, ragi, maize, rice and cotton, among others. The increase in yield ranged from 10% to 30% across crops.

Pest attack prediction enables farmers to plan

Microsoft is now taking AI in agriculture a step further. A collaboration with United Phosphorus (UPL), India’s largest producer of agrochemicals, led to the creation of the Pest Risk Prediction API, which again leverages AI and machine learning to indicate the risk of a pest attack in advance. Common pests such as jassids, thrips, whitefly, and aphids can cause serious damage to crops and reduce crop yield. To help farmers take preventive action, the Pest Risk Prediction App was created to provide guidance on the probability of pest attacks.

“Our collaboration with Microsoft to create a Pest Risk Prediction API enables farmers to get predictive insights on the possibility of pest infestation. This empowers them to plan in advance, reducing crop loss due to pests and thereby helping them to double the farm income.”

– Vikram Shroff, Executive Director, UPL Limited

In the first phase, about 3,000 marginal farmers with less than five acres of land holding in 50 villages across Telangana, Maharashtra and Madhya Pradesh are receiving automated voice calls for their cotton crops. The calls indicate the risk of pest attacks based on weather conditions and crop stage, in addition to the sowing advisories. The risk is classified as High, Medium or Low, specific to each district in each state.
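
The final step of such a pipeline, mapping a predicted probability to the High / Medium / Low classes the voice calls use, is simple to sketch. The cutoffs below are illustrative assumptions; the actual thresholds used by the UPL/Microsoft model are not public.

```python
# Hedged sketch: turn a pest-attack probability into the advisory's
# risk class. The 0.33 / 0.66 cutoffs are illustrative, not UPL's.

def risk_class(probability, low_cutoff=0.33, high_cutoff=0.66):
    """Map a probability in [0, 1] to "High", "Medium" or "Low"."""
    if not 0.0 <= probability <= 1.0:
        raise ValueError("probability must lie in [0, 1]")
    if probability >= high_cutoff:
        return "High"
    if probability >= low_cutoff:
        return "Medium"
    return "Low"
```

In practice, cutoffs like these would themselves be tuned per district and crop stage, which matches the article's note that the classification is district-specific.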

“Our collaboration with Microsoft to create a Pest Risk Prediction API enables farmers to get predictive insights on the possibility of pest infestation. This empowers them to plan in advance, reducing crop loss due to pests and thereby helping them to double the farm income,” says Vikram Shroff, Executive Director, UPL Limited.

Price forecasting model for policy makers

Predictive analysis in agriculture is not limited to crop growing alone. The government of Karnataka will start using price forecasting for agricultural commodities, in addition to sowing advisories for farmers in the state. Commodity prices for items such as tur, of which Karnataka is the second largest producer, will be predicted three months in advance for major markets in the state.

At present, the state government uses price forecasting for agricultural commodities, based on historical data and short-term arrivals, to protect farmers from price crashes and to shield the population from high inflation. However, such accurate data collection is expensive and can be subject to tampering.

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers.”

– Dr. T.N. Prakash Kammardi, Chairman, KAPC, Government of Karnataka

Microsoft has developed a multivariate agricultural commodity price forecasting model to predict future commodity arrival and the corresponding prices. The model uses remote sensing data from geo-stationary satellite images to predict crop yields through every stage of farming.

This data, along with other inputs such as historical sowing area, production, yield and weather, is used in an elastic-net framework to predict both the timing of the arrival of grains in the market and their quantity, which together determine their pricing.
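
The elastic net the article mentions is a regression penalty that mixes L1 (sparsity) and L2 (shrinkage) terms. Below is an illustrative sketch of that objective written out as a plain function, following scikit-learn's common parameterization; the variable names and tiny data are assumptions for illustration, not the forecasting model itself.

```python
# Hedged sketch: the elastic-net objective, i.e. squared error plus a mix
# of L1 and L2 penalties. Mirrors the usual scikit-learn parameterization.

def elastic_net_objective(weights, rows, targets, alpha=1.0, l1_ratio=0.5):
    """1/(2n) * SSE + alpha * (l1_ratio * ||w||_1 + (1 - l1_ratio)/2 * ||w||_2^2)."""
    n = len(rows)
    sse = 0.0
    for x, y in zip(rows, targets):
        pred = sum(w * xi for w, xi in zip(weights, x))
        sse += (pred - y) ** 2
    l1 = sum(abs(w) for w in weights)
    l2 = sum(w * w for w in weights)
    return sse / (2 * n) + alpha * (l1_ratio * l1 + (1 - l1_ratio) * l2 / 2)
```

The appeal for a multivariate price model is that the L1 term zeroes out uninformative inputs (of the many weather and sowing features) while the L2 term keeps the surviving coefficients stable when inputs are correlated.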

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers. We believe that Microsoft’s technology will support these innovative experiments which will help us transform the lives of the farmers in our state,” says Dr. T.N. Prakash Kammardi, Chairman, Karnataka Agricultural Price Commission, Government of Karnataka.

The model currently being used to predict the prices of tur, is scalable, and time efficient and can be generalized to many other regions and crops.

AI in agriculture is just getting started

Shifting weather patterns, such as increases in temperature, changes in precipitation levels, and changes in groundwater density, can affect farmers, especially those who depend on timely rains for their crops. Leveraging the cloud and AI to deliver advisories for sowing, pest control and commodity pricing is a major initiative toward increasing income and providing stability for the agricultural community.

“Indian agriculture has been traditionally rain dependent and climate change has made farmers extremely vulnerable to crop loss. Insights from AI through the agriculture life cycle will help reduce uncertainty and risk in agriculture operations. Use of AI in agriculture can potentially transform the lives of millions of farmers in India and world over,” says Anil Bhansali, CVP C+E and Managing Director, Microsoft India (R&D) Pvt. Ltd.

Taking a leap in bioinspired robotics

Mechanical engineer Sangbae Kim builds animal-like machines for use in disaster response.

In the not so distant future, first responders to a disaster zone may include four-legged, dog-like robots that can bound through a fire or pick their way through a minefield, rising up on their hind legs to turn a hot door handle or punch through a wall.

Such robot-rescuers may be ready to deploy in the next five to 10 years, says Sangbae Kim, associate professor of mechanical engineering at MIT. He and his team in the Biomimetic Robotics Laboratory are working toward that goal, borrowing principles from biomechanics, human decision-making, and mechanical design to build a service robot that Kim says will eventually do “real, physical work,” such as opening doors, breaking through walls, or closing valves.

“Say there are toxic gases leaking in a building, and you need to close a valve inside, but it’s dangerous to send people in,” Kim says. “Now, there is no single robot that can do this kind of job. I want to create a robotic first responder that can potentially do more than a human and help in our lives.”

To do this, Kim, who was awarded tenure this year, is working to fuse the two main projects in his lab: the MIT Cheetah, a four-legged, 70-pound robot that runs and jumps over obstacles autonomously; and HERMES, a two-legged, teleoperated robot, whose movements and balance are controlled remotely by a human operator, much like a marionette or a robotic “Avatar.”

“I imagine a robot that can do some physical, dynamic work,” Kim says. “Everybody is trying to find overlapping areas where you’re excited about what you’re working on, and it’s useful. A lot of people are excited to watch sports because when you watch someone moving explosively, it is hypothesized to trigger the brain’s  ‘mirror neurons’ and you feel that excitement at the same time. For me, when my robots perform dynamically and balance, I get really excited. And that feeling has encouraged my research.”

A drill sergeant turns roboticist

Kim was born in Seoul, South Korea, where he says his mother remembers him as a tinkerer. “Everything with a screw, I would take apart,” Kim says. “And she said the first time, almost everything broke. After that, everything started working again.”

He attended Yonsei University in the city, where he studied mechanical engineering. In his second year, as has been mandatory in the country, he and other male students joined the South Korean army, where he served as a drill sergeant for two and a half years.

“We taught [new recruits] every single detail about how to be a soldier, like how to wear shirts and pants, buckle your belt, and even how to make a fist when you walk,” Kim recalls. “The day started at 5:30 a.m. and didn’t end until everyone was asleep, around 10:30 p.m., and there were no breaks. Drill sergeants are famous for being mean, and I think there’s a reason for that — they have to keep very tight schedules.”

After fulfilling his military duty, Kim returned to Yonsei University, where he gravitated toward robotics, though there was no formal program in the subject. He ended up participating in a class project that challenged students to build robots to perform specific tasks, such as capturing a flag, and then to compete, bot to bot, in a contest that was similar to MIT’s popular Course 2.007 (Design and Manufacturing), which he now co-teaches.

“[The class] was a really good motivation in my career and made me anchor on the robotic, mechanistic side,” Kim says.

A bioinspired dream

In his last year of college, Kim developed a relatively cheap 3-D scanner, which he and three other students launched commercially through a startup company called Solutionix, which has since expanded on Kim’s design. However, in the early stages of the company’s fundraising efforts, Kim came to a realization.

“As soon as it came out, I lost excitement because I was done figuring things out,” Kim says. “I loved the figuring-out part. And I realized after a year of the startup process, I should be working in the beginning process of development, not so much in the maturation of products.”

After enabling first sales of the product, he left the country and headed for Stanford University, where he enrolled in the mechanical engineering graduate program. There, he experienced his first taste of design freedom.

“That was a life-changing experience,” Kim says. “It was a more free, creativity-respecting environment — way more so than Korea, where it’s a very conservative culture. It was quite a culture shock.”

Kim joined the lab of Mark Cutkosky, an engineering professor who was looking for ways to design bioinspired robotic machines. In particular, the team was trying to develop a climbing robot that mimicked the gecko, which uses tiny hairs on its feet to help it climb vertical surfaces. Kim adapted this hairy mechanism in a robot and found that it worked.

“It was 2:30 a.m. in the lab, and I couldn’t sleep. I had tried many things, and my heart was thumping,” Kim recalls. “On some replacement doors with tall windows, [the robot] climbed up smoothly, using the world’s first directional adhesives, which I invented. I was so excited to show it to the others, I sent them all a video that night.”

He and his colleagues launched a startup to develop the gecko robot further, but again, Kim missed the thrill of being in the lab. He left the company soon after, for a postdoc position at Harvard University, where he helped to engineer the Meshworm, a soft, autonomous robot that inched across a surface like an earthworm. But even then, Kim was setting his sights on bigger designs.

“I was moving away from small robots because it’s very difficult for them to do real, physical work,” Kim says. “And so I decided to develop a larger, four-legged robot for human-level physical tasks — a long-term dream.”

Searching for principles

In 2009, Kim accepted an assistant professorship in MIT’s Department of Mechanical Engineering, where he established his Biomimetic Robotics Lab and set a specific research goal: to design and build a four-legged, cheetah-inspired robot.

“We chose the cheetah because it was the fastest of all land animals, so we learned its features the best, but there are many animals with similarities [to cheetahs],” Kim says. “There are some subtle differences, but probably not ones that you can learn the design principles from.”

In fact, Kim quickly learned that in some cases, it may not be the best option to recreate certain animal behaviours in a robot.

“A good example in our case is the galloping gait,” Kim says. “It’s beautiful, and in a galloping horse, you hear a da-da-rump, da-da-rump. We were obsessed to recreate that. But it turns out galloping has very few advantages in the robotics world.”

Animals prefer specific gaits at a given speed due to a complex interaction of muscles, tendons, and bones. However, Kim found that the cheetah robot, powered with electric motors, exhibited very different kinetics from its animal counterpart. For example, with high-power motors, the robot was able to trot at a steady clip of 14 miles per hour — much faster than animals can trot in nature.

“We have to understand what is the governing principle that we need, and ask: Is that a constraint in biological systems, or can we realize it in an engineering domain?” Kim says. “There’s a complex process to find out useful principles overarching the differences between animals and machines. Sometimes obsessing over animal features and characteristics can hinder your progress in robotics.”

A “secret recipe”

In addition to building bots in the lab, Kim teaches several classes at MIT, including 2.007, which he has co-taught for the past five years.

“It’s still my favourite class, where students really get out of this homework-exam mode, and they have this opportunity to throw themselves into the mud and create their own projects,” Kim says. “Students today grew up in the maker movement and with 3-D printing and Legos, and they’ve been waiting for something like 2.007.”

Kim also teaches a class he created in 2013 called Bioinspired Robotics, in which 40 students team up in groups of four to design and build a robot inspired by biomechanics and animal motions. This past year, students showcased their designs in Lobby 7, including a throwing machine, a trajectory-optimizing kicking machine, and a kangaroo machine that hopped on a treadmill.

Outside of the lab and the classroom, Kim is studying another human motion: the tennis swing, which he has sought to perfect for the past 10 years.

“In a lot of human motion, there’s some secret recipe, because muscles have very special properties, and if you don’t know them well, you can perform really poorly and injure yourself,” Kim says. “It’s all based on muscle function, and I’m still figuring out things in that world, and also in the robotics world.” - Jennifer Chu

Bioinspired robots: Examples and the state of the art

Attempts to mimic animal motion have resulted in many technological advances that have revolutionized how man-made machines move through the air, in water, and over land. Despite numerous achievements, engineers and scientists have yet to closely replicate the grace and fluidity of animal movement. This suggests the biological world still has much to teach us about how to build, design, and program robotic systems whose locomotive capabilities will far outpace what is possible today.

The question then becomes: How deeply should we look at biology? Take the transition from snake to snake robot as an example. On the surface, one can see a snake, say, on a hike in the woods and then build an elongated mechanical creature. However, we can go deeper: one can study the fundamental macroscopic principles that can be transferred from muscles and skeleton to conventional motors and mechanical linkages. Going deeper still, one can try to create new muscle-like actuators and controllers based on neural networks in an attempt to accurately copy biological function and control. The right choice of where to focus on this spectrum remains an open question.

To help address these fundamental questions, the biologically inspired robotics community has to date produced many great works, far too many to summarize in one brief article. Instead, we focus this short comment on the works that have specifically inspired our research in the Biorobotics Lab at Carnegie Mellon University over the past 20 years. In this time, we have built a number of different robots but are perhaps best known for our novel snake-like robots.

In our opinion, the single biggest influence in biological inspiration is Bob Full.  His group at Berkeley studies cockroaches, crabs, and geckos, just to name a few.  Full’s research interest is primarily in comparative biomechanics and physiology [1, 2].  He collaborates with a number of different engineers and other scientists to elucidate biological principles that inspire the design of advanced robotic components, control algorithms, and novel system designs.

Full’s work on geckos led to a fundamental understanding of how their feet stick to nearly any surface and yet are not encumbered by dirt and other particles. His collaborator Ron Fearing, also at Berkeley, developed new MEMS manufacturing technology to replicate the capabilities of geckos’ feet. Fearing’s work harnesses features of animal manipulation, locomotion, sensing, actuation, mechanics, dynamics, and control strategies to radically improve robotic capabilities, especially at very small scales. His research ranges from the fundamental understanding of mechanical principles to novel fabrication techniques and system integration for autonomous machines [3, 4].

At Harvard University, Rob Wood also develops novel robotic mechanisms at very small scales [5, 6].  His work uses microfabrication techniques to develop biologically inspired robots with features on the micrometre to centimetre scales. His specific interests include new micro- and mesoscale manufacturing techniques, fluid mechanics of flapping wings, control of sensor-limited and computation-limited systems, active soft materials, and morphable soft-bodied robots.

In addition to novel designs and methods for constructing robot morphologies, biology also inspires us to design improved software that enables robots to better interact with complex environments. Shigeo Hirose is one of the early pioneers in the creation of numerous biologically inspired robotic systems that specialize in weaving their way through complex terrains. He is probably best known for his original work on serpentine locomotion, both analyzing the fundamental physics governing how biological snakes move and employing the lessons learned to create and control numerous mechanisms over the years [7]. His ground-breaking insight into biologically inspired control has naturally influenced his robots’ designs.

Dan Koditschek’s name is synonymous with robot control, especially in the area of dynamic legged locomotion [8, 9].  He has played a major role in several seminal works in the area of bioinspired robots throughout his career (many in collaboration with Bob Full), and he has overseen the construction of biologically inspired robots that have helped roboticists better understand mechanized locomotion and offered biologists better insight into the natural world.  Full’s observations on the role of compliance in mechanisms and control inspired Koditschek’s group to develop the family of RHex robots, and Koditschek and Full together developed the concept of templates and anchors, a now-ubiquitous method for abstracting the motion of complex systems.  Koditschek’s more recent work with Full has begun to explore both the design and the control of aerial acrobatics using tail-like appendages, originally inspired by the observation that geckos control the orientation of their bodies in midair using their tails.
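To first order, tail-assisted reorientation comes down to conservation of angular momentum: with zero net angular momentum in midair, a tail swing produces an opposite body rotation scaled by the ratio of inertias. A toy sketch of that relationship, with the inertias treated as constant for illustration (the models in the literature are more detailed):

```python
def body_reorientation(tail_sweep_rad, inertia_tail, inertia_body):
    """Body rotation produced by a tail sweep during free fall.

    With zero net angular momentum, I_b * w_b + I_t * w_t = 0; integrating
    over the maneuver gives delta_body = -(I_t / I_b) * delta_tail.
    (Inertias are treated as constant -- a first-order idealization.)
    """
    return -(inertia_tail / inertia_body) * tail_sweep_rad

# A tail with 1/10 the body's inertia, swept through 1 rad, rotates the
# body about 0.1 rad in the opposite direction.
delta = body_reorientation(1.0, inertia_tail=0.02, inertia_body=0.2)
```

The inertia ratio is why even a light tail works: a long tail carries a large moment of inertia relative to a compact body.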

Related to the work by Full and Koditschek, the incorporation of compliance in robot mechanisms and control design can also be attributed to Gill Pratt. Pratt, in part, developed a new paradigm for robotic actuation, the series elastic actuator, as well as controllers that employ this technology [10].  This work has directly affected and certainly inspired several generations of robotic devices with different morphologies that move by slithering, crawling, and walking.
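The core idea of the series elastic actuator is to place a spring between the motor and the load: the spring's deflection turns a position measurement into a force measurement, which a controller can then servo. A minimal sketch, with illustrative stiffness and gain values and the load held fixed:

```python
def sea_force(k_spring, motor_pos, load_pos):
    """Spring deflection converts measured positions into an output force."""
    return k_spring * (motor_pos - load_pos)

# Toy force-control loop: servo the motor until the spring force reads 50 N.
# Stiffness, gain, and the fixed load position are illustrative values.
k = 1000.0             # N/m, spring between motor and load
motor, load = 0.0, 0.0
gain = 0.0005          # m of motor travel per N of force error
for _ in range(2000):
    force = sea_force(k, motor, load)
    motor += gain * (50.0 - force)   # proportional force controller
```

Beyond enabling force control, the spring filters shock loads and stores energy, which is much of why compliance matters for legged and slithering machines.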

A. E. Hosoi’s research covers a diverse set of topics, from the fundamentals of materials science and fluid dynamics to the control and practical application of locomotion and manipulation systems [11, 12].  Two projects particularly relevant to the study of biologically inspired robots are the RoboClam and the RoboSnail, both constructed with direct biological inspiration and aimed at practical real-world applications.

George Lauder’s work on fish-like robots has resulted in a series of robotic test platforms that examine fin and body kinematics and hydrodynamics during locomotion [13]. Robotic devices hold a considerable advantage over live fish in that programmable motions permit careful investigation of the discrete components of naturally coupled movements.

The idea of using a robot as a surrogate to study biology is also present in Daniel Goldman’s work, which focuses on systems that locomote on granular media [14-17].  Goldman, a faculty member and director of the CRAB Lab at Georgia Tech, recently coined the term “robophysics,” the practice of using robots as the basis for modelling biological systems in extremely complex terrains.  His team uses robots to help study snakes, lizards, ants, and turtles, to name just a few.  His group specializes in the interaction of physical and biological systems with complex materials, such as granular media, and investigates how organisms like lizards, crabs, and cockroaches generate appropriate musculoskeletal dynamics to scurry rapidly over substrates like sand, bark, leaves, and grass.

Noah Cowan has made novel advances in applying control theory to the study of sensorimotor control of animal movement [18, 19].  He and his collaborators study weakly electric fish as well as cockroach antennae. At Northwestern, Malcolm MacIver, a collaborator of both Cowan and Lauder, pursues a research program in the mechanical and neural basis of animal behaviour, particularly at the intersection of information harvesting and biomechanics [20].

Finally, A. Ijspeert’s work on biologically inspired robots focuses on the computational aspects of locomotion control, sensorimotor coordination, and learning in animals and in robots [21, 22]. His group is interested in using robots and numerical simulation to study the neural mechanisms underlying movement control and learning in animals.
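A recurring computational model in this line of work is the central pattern generator (CPG) built from coupled oscillators. The sketch below is a simplified chain of phase oscillators in the spirit of, but not taken from, Ijspeert's models; all parameter values are illustrative:

```python
import math

def cpg_step(phases, dt=0.01, freq=1.0, coupling=4.0, phase_lag=0.5):
    """One Euler step of a chain of coupled phase oscillators.

    Each oscillator advances at a common frequency and is pulled toward
    a fixed phase offset from its predecessor, so a wave of activity
    travels along the chain -- the essence of a locomotion CPG.
    """
    new = []
    for i, theta in enumerate(phases):
        dtheta = 2 * math.pi * freq
        if i > 0:  # couple to the previous oscillator in the chain
            dtheta += coupling * math.sin(phases[i - 1] - theta + phase_lag)
        new.append(theta + dtheta * dt)
    return new

# Integrate until the phase offsets settle, then read out joint commands.
phases = [0.0] * 6
for _ in range(5000):
    phases = cpg_step(phases)
joint_angles = [0.4 * math.sin(p) for p in phases]
```

The appeal for robotics is the same as in biology: a simple distributed oscillator network produces coordinated rhythmic output, and steering or gait changes reduce to modulating a few parameters such as frequency, amplitude, or phase lag.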

In the Biorobotics Lab at Carnegie Mellon, inspiration has also been drawn from the works of J. Ostrowski and S. D. Kelly, which apply concepts from the field of geometric mechanics to the study of undulatory locomotion [23-26]. In their respective works, Ostrowski and Kelly perform mathematical modeling, analysis, simulation, and control of systems that exhibit nonlinear dynamics. Former CMU student Elie Shammas, now faculty at the American University of Beirut, built on this early work to develop visualization tools that let intuition guide the design of gaits for idealized articulated systems. Ross Hatton, who succeeded Shammas, took this work to the next level, generating results at the interface of robotics and applied mechanics [27, 28]. Hatton, now faculty at Oregon State University, has provided a wealth of analytic tools for studying snake-like locomotion as well as other locomoting systems, and he has recently begun new work on sensing and control in spiders.  Building on the work of Goldman’s and Choset’s previous students, Chaohui Gong has recently created a new approach that brings these analytic tools to bear on both natural and robotic systems that locomote in granular media. Gong’s demonstrations include snake robots locomoting on rocks, on sandy inclines, and in tight spaces.

Matt Travers and Howie Choset

1. T. Libby et al., Tail-assisted pitch control in lizards, robots and dinosaurs. Nature 481, 181 (2012).

2. R. J. Full, T. Kubow, J. Schmitt, P. Holmes, D. Koditschek, Quantifying dynamic stability and maneuverability in legged locomotion. Integr. Comp. Biol. 42, 149 (2002).

3. C. Li et al., Terradynamically streamlined shapes in animals and robots enhance traversability through densely cluttered terrain. Bioinspir. Biomim. 10, 046003 (2015).

4. A. G. Gillies et al., Gecko toe and lamellar shear adhesion on macroscopic, engineered rough surfaces. J. Exp. Biol. 217, 283 (2014).

5. M. A. Graule et al., Perching and takeoff of a robotic insect on natural and artificial overhangs using switchable electrostatic adhesion. Science 352, 978 (2016).

6. J.-S. Koh et al., Jumping on water: Surface tension–dominated jumping of water striders and robotic insects. Science 349, 517 (2015).

7. S. Hirose, Biologically Inspired Robots: Snake-Like Locomotors and Manipulators (Oxford University Press, Oxford, 1993).

8. R. Altendorfer et al., RHex: A biologically inspired hexapod runner. Auton. Robots 11, 207 (2001).

9. G. A. Lynch, J. E. Clark, P.-C. Lin, D. E. Koditschek, A bioinspired dynamical vertical climbing robot. Int. J. Robotics Res. 31, 974 (2012).

10. G. A. Pratt, M. M. Williamson, Series elastic actuators, in Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IEEE, 1995), vol. 1, pp. 399–406.

11. A. G. Winter et al., Razor clam to RoboClam: Burrowing drag reduction mechanisms and their robotic adaptation. Bioinspir. Biomim. 9, 036009 (2014).

12. B. Chan, N. J. Balmforth, A. E. Hosoi, Building a better snail: Lubrication and gastropod locomotion. Phys. Fluids 17, 113101 (2005).

13. G. V. Lauder, E. J. Anderson, J. Tangorra, P. G. A. Madden, Fish biorobotics: Kinematics and hydrodynamics of self-propulsion. J. Exp. Biol. 210, 2767 (2007).

14. B. McInroe et al., Tail use improves soft substrate performance in models of early vertebrate land locomotors. Science 353, 154 (2016).

15. H. C. Astley et al., Modulation of orthogonal body waves enables high maneuverability in sidewinding locomotion. Proc. Natl. Acad. Sci. U.S.A. 112, 6200 (2015).

16. J. Aguilar et al., A review on locomotion robophysics: The study of movement at the intersection of robotics, soft matter and dynamical systems. Rep. Prog. Phys. 79, 110001 (2016).

17. T. Zhang, D. I. Goldman, The effectiveness of resistive force theory in granular locomotion. Phys. Fluids 26, 101308 (2014).

18. J. M. Mongeau, A. Demir, J. Lee, N. J. Cowan, R. J. Full, Locomotion and mechanics mediated tactile sensing: Antenna reconfiguration simplifies control during high-speed navigation in cockroaches. J. Exp. Biol. 216, 4530 (2013).

19. S. Sefati et al., Mutually opposing forces during locomotion can eliminate the tradeoff between maneuverability and stability. Proc. Natl. Acad. Sci. U.S.A. 110, 18798 (2013).

20. Y. Bai, J. B. Snyder, M. A. Peshkin, M. A. MacIver, Finding and identifying simple objects underwater with active electrosense. Int. J. Robotics Res. 34, 1255 (2015).

21. K. Karakasiliotis et al., From cineradiography to biorobots: An approach for designing robots to emulate and study animal locomotion. J. R. Soc. Interface 13, 119 (2016).

22. A. Ijspeert, Biorobotics: Using robots to emulate and investigate agile animal locomotion. Science 346, 196 (2014).

23. J. Ostrowski, J. Burdick, Gait kinematics for a serpentine robot, in Proceedings of the IEEE International Conference on Robotics and Automation (IEEE, 1996).

24. J. Ostrowski, J. Burdick, The geometric mechanics of undulatory robotic locomotion. Int. J. Robotics Res. 17, 683 (1998).

25. S. D. Kelly, H. Xiong, Self-propulsion of a free hydrofoil with localized discrete vortex shedding: Analytical modeling and simulation. Theor. Comput. Fluid Dyn. 24, 45 (2010).

26. P. Tallapragada, S. D. Kelly, Dynamics and self-propulsion of a spherical body shedding coaxial vortex rings in an ideal fluid. Regul. Chaotic Dyn. 18, 21 (2013).

27. H. Faraji, R. L. Hatton, Aiming and vaulting: Spider-inspired leaping for jumping robots, in Proceedings of the IEEE International Conference on Robotics and Automation (IEEE, 2016).

28. H. Faraji et al., Impulse redirection of a tethered projectile, in Proceedings of the ASME Dynamic Systems and Control Conference (ASME, 2015).