Marco Dorigo et al. (2014), Scholarpedia, 9(1):1463.
Swarm robotics is the study of how to design groups of robots that operate without relying on any external infrastructure or on any form of centralized control. In a robot swarm, the collective behaviour of the robots results from local interactions between the robots and between the robots and the environment in which they act. The design of robot swarms is guided by swarm intelligence principles. These principles promote the realization of systems that are fault tolerant, scalable and flexible. Swarm robotics appears to be a promising approach when different activities must be performed concurrently, when high redundancy and the lack of a single point of failure are desired, and when it is technically infeasible to set up the infrastructure required to control the robots in a centralized way. Examples of tasks that could be profitably tackled using swarm robotics are demining, search and rescue, planetary or underwater exploration, and surveillance.
Swarm robotics has its origins in swarm intelligence and, in fact, could be defined as “embodied swarm intelligence”. Initially, the main focus of swarm robotics research was to study and validate biological research (Beni, 2005). Early collaboration between roboticists and biologists helped bootstrap swarm robotics research, which has since become a research field in its own right. In recent years, the focus of swarm robotics has been shifting: from a bio-inspired field of robotics, swarm robotics is increasingly becoming an engineering field whose focus is on the development of tools and methods to solve real problems (Brambilla et al., 2013).
Characteristics of swarm robotics
A robot swarm is a self-organizing multi-robot system characterized by high redundancy. Robots’ sensing and communication capabilities are local and robots do not have access to global information. The collective behaviour of the robot swarm emerges from the interactions of each individual robot with its peers and with the environment. Typically, a robot swarm is composed of homogeneous robots, although some examples of heterogeneous robot swarms do exist (Dorigo et al., 2013).
Desirable properties of swarm robotics systems
The aforementioned characteristics of swarm robotics are deemed to promote the realization of systems that are fault tolerant, scalable and flexible. Swarm robotics promotes the development of systems that are able to cope well with the failure of one or more of their constituent robots: the loss of individual robots does not imply the failure of the whole swarm. Fault tolerance is enabled by the high redundancy of the swarm: the swarm does not rely on any centralized control entity, leaders, or any individual robot playing a predefined role.
Swarm robotics also enables the development of systems that are able to cope well with changes in their group size: ideally, the introduction or removal of individuals does not cause a drastic change in the performance of the swarm. Scalability is enabled by local sensing and communication: provided that the introduction and removal of robots do not dramatically modify the density of the swarm, each individual robot will keep interacting with approximately the same number of peers, those that are in its sensing and communication range.
Finally, swarm robotics promotes the development of systems that are able to deal with a broad spectrum of environments and operating conditions. Flexibility is enabled by the distributed and self-organized nature of a robot swarm: in a swarm, robots dynamically allocate themselves to different tasks to match the requirements of the specific environment and operating conditions; moreover, robots operate on the basis of local sensing and communication and do not rely on pre-existing infrastructure or on any form of global information.
Potential applications of swarm robotics
The properties of swarm robotics systems make them appealing in several potential application domains. The use of robots for tackling dangerous tasks is clearly appealing, as it eliminates or reduces risks for humans. The dangerous nature of these tasks implies a high risk of losing robots, so a fault-tolerant approach is required, making dangerous tasks an ideal application domain for robot swarms. Examples of dangerous tasks that could be tackled using robot swarms are demining, search and rescue, and cleaning up toxic spills.
Other potential applications for robot swarms are those in which it is difficult or even impossible to estimate in advance the resources needed to accomplish the task. For instance, allocating resources to manage an oil leak can be very hard, because it is often difficult to estimate the oil output and to foresee its temporal evolution. In these cases, a scalable and flexible solution is needed, and a robot swarm can provide one: robots can be added or removed over time to supply the appropriate amount of resources and meet the requirements of the specific task. Examples of tasks that might require an a priori unknown amount of resources are search and rescue, tracking, and cleaning.
Another potential application domain for swarm robotics are tasks that have to be accomplished in large or unstructured environments, in which there is no available infrastructure that can be used to control the robots—e.g., no available communication network or global localization system. Robot swarms could be employed for such applications because they are able to act autonomously without the need of any infrastructure or any form of external coordination. Examples of tasks in unstructured and large environments are underwater or extraterrestrial planetary exploration, surveillance, demining, and search and rescue.
Some environments might change rapidly over time. For instance, in a post-earthquake situation, buildings might collapse—thereby changing the layout of the environment and creating new hazards. In these cases, it is necessary to adopt solutions that are flexible and can react quickly to events. Swarm robotics could be used to develop flexible systems that rapidly adapt to new operating conditions. Examples of tasks in environments that change over time are patrolling, disaster recovery, and search and rescue.
Scientific implications of swarm robotics
Besides being relevant to engineering applications, swarm robotics is also a valuable scientific tool. Indeed, several models of natural swarm intelligence systems have been refined and validated using robot swarms. For example, Garnier et al. (2005) validated the model of a collective decision-making behaviour in cockroaches using robot swarms.
Swarm robotics has also been used to investigate, via controlled experiments, the conditions under which some complex social behaviours might emerge from an evolutionary process. For example, robot swarms have been used to study the evolution of communication (Mitri et al., 2009) and collective decision making (Halloy et al., 2007).
In this section, we follow the taxonomy presented in Brambilla et al. (2013).
The design of a robot swarm is a difficult endeavour: requirements are usually expressed at the collective level, but the designer needs to define the hardware and behaviour of the individual robots. The resulting robots should interact in such a way that the global behaviour of the swarm meets the desired requirements. Approaches to the design problem in swarm robotics can be divided into two categories: manual design and automatic design.
In manual design, the designer follows a trial-and-error process in which the behaviours of the individual robots are developed, tested and improved until the desired collective behaviour is obtained. The software architecture most commonly adopted in swarm robotics is the probabilistic finite state machine. Probabilistic finite state machines have been used to obtain several collective behaviours, including aggregation (Soysal and Şahin, 2005), chain formation (Nouyan et al., 2009), and task allocation (Liu and Winfield, 2010). Another common approach is based on virtual physics: robots and environment interact through virtual forces. This approach is particularly suited to spatially organizing collective behaviours, such as pattern formation (Spears et al., 2004) and collective motion (Ferrante et al., 2012). Currently, the main limit of manual design is that it relies entirely on the ingenuity and expertise of the human designer: designing a robot swarm is more of an art than a science. A systematic and general way to design robot swarms is still missing, even though a few preliminary proposals have been made (Hamann and Wörn, 2008; Berman et al., 2011; Brambilla et al., 2012).

In swarm robotics, automatic design has mostly been performed using the evolutionary robotics approach (Nolfi and Floreano, 2000). Typically, individual robots are controlled by a neural network whose parameters are obtained via artificial evolution (Trianni and Nolfi, 2011). Evolutionary robotics has been used to develop several collective behaviours, including collective transport (Groß and Dorigo, 2008) and aerial communication networks (Hauert et al., 2008). One of the main limits of evolutionary robotics is that defining an effective evolutionary setting is often difficult and labour intensive.
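As a minimal illustration of the probabilistic-finite-state-machine pattern used in manual design, the following Python sketch switches a robot between a wandering and a resting state with transition probabilities that depend on the number of perceived neighbours, a common recipe for aggregation. The states, probability functions, and sensing model here are illustrative assumptions, not the controller of any of the cited works.

```python
import random

# Illustrative probabilistic finite state machine (PFSM) for aggregation.
# States, probabilities, and the sensing model are assumptions made for
# the sake of the example.

WANDER, REST = "wander", "rest"

def p_stop(neighbours):
    """Probability of stopping grows with the number of perceived neighbours."""
    return min(1.0, 0.1 + 0.3 * neighbours)

def p_leave(neighbours):
    """Probability of leaving an aggregate shrinks as the aggregate grows."""
    return max(0.01, 0.5 - 0.15 * neighbours)

def step(state, neighbours, rng=random):
    """One control step: draw a random number and possibly fire a transition."""
    if state == WANDER and rng.random() < p_stop(neighbours):
        return REST
    if state == REST and rng.random() < p_leave(neighbours):
        return WANDER
    return state

if __name__ == "__main__":
    state = WANDER
    for t in range(20):
        neighbours = random.randint(0, 4)   # stand-in for local sensing
        state = step(state, neighbours)
        print(t, state, neighbours)
```

Because each robot runs the same small stochastic controller and reacts only to locally sensed neighbours, clusters of resting robots tend to grow, which is the mechanism behind probabilistic aggregation strategies.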
The analysis of a robot swarm usually relies on models. A model of a robot swarm can be realized at two levels: the microscopic level, that is, modelling the behaviours of the individual robots; or the macroscopic level, that is, modelling the collective behaviour of the swarm. Modelling the microscopic level involves forming a detailed representation of each individual robot in the swarm. Unfortunately, microscopic modelling is problematic due to the large number of robots involved. Often, microscopic modelling relies on computer-based simulations (Kramer and Scheutz, 2007; Pinciroli et al., 2012).
Macroscopic models avoid the complexity and scalability issues of modelling each individual robot by considering only the collective behaviour of the swarm. One of the most common macroscopic modelling approaches is the use of rate or differential equations (Martinoli et al., 2004; Lerman et al., 2005). Rate equations describe the time evolution of the fraction of robots in a particular state, that is, of robots that are performing a specific action or are in a specific area of the environment. Rate equations have been used to model many collective behaviours, including object clustering (Martinoli et al., 1999) and adaptive foraging (Liu and Winfield, 2010). Another common approach is the use of Markov chains, which allow researchers to formally verify properties of a robot swarm (Dixon et al., 2012; Konur et al., 2012; Massink et al., 2013). Control theory has also been used to analyze whether a robot swarm eventually converges to a desired macroscopic state (Liu and Passino, 2004; Hsieh et al., 2008). A hybrid way of modelling robot swarms is based on Langevin and Fokker-Planck equations (Hamann and Wörn, 2008; Berman et al., 2009; Prorok et al., 2011). Using these equations, one can model both the behaviour of the individual robot, as a stochastic Langevin process, and the collective behaviour of the swarm, as the deterministic evolution of the swarm's spatial density described by the Fokker-Planck equation.
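To make the rate-equation idea concrete, the following sketch integrates a deliberately simple two-state model in which robots switch between a "searching" and a "resting" state at constant rates. The states, the rates a and b, and the time step are illustrative assumptions, not a model taken from the cited literature.

```python
# Hedged sketch of a macroscopic rate-equation model: the swarm is described
# only by the fraction s of robots in a "searching" state (the rest are
# "resting"). Robots stop searching at an assumed rate a and resume at rate b:
#     ds/dt = -a*s + b*(1 - s)
# The fixed point s* = b / (a + b) is reached from any initial condition.

def simulate(a=0.2, b=0.1, s0=1.0, dt=0.01, steps=5000):
    """Forward-Euler integration of ds/dt = -a*s + b*(1 - s)."""
    s = s0
    for _ in range(steps):
        s += dt * (-a * s + b * (1.0 - s))
    return s

print(simulate())                 # converges towards b/(a+b) = 1/3
```

Note that the model never represents an individual robot: a single scalar tracks the whole swarm, which is why such models scale to arbitrarily many robots. The forward-Euler step is stable here because dt*(a+b) is much smaller than 1.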
A large part of the research effort in swarm robotics is directed towards the study of collective behaviours. Collective behaviours can be categorized into five main groups: spatially organizing behaviours, navigation behaviours, decision-making behaviours, human-interaction behaviours, and other behaviours. Spatially organizing behaviours focus on how to organize and distribute robots and objects in space. Examples of such behaviours are aggregation (Soysal and Şahin, 2005), pattern formation (Spears et al., 2004), chain formation (Nouyan et al., 2009), self-assembly (O’Grady et al., 2010), and object clustering/assembling (Werfel et al., 2011).
Navigation behaviours focus on how to coordinate the movement of a robot swarm. Examples of such behaviours are collective exploration (Ducatelle et al., 2014), collective motion (Turgut et al., 2008), and collective transport (Baldassarre et al., 2006). Collective decision-making behaviours focus on how robots influence each other in making decisions. In particular, collective decision making can be used to achieve consensus on a single alternative (Garnier et al., 2005; Campo et al., 2011) or allocation to different alternatives (Pini et al., 2011). Human-swarm interaction behaviours focus on how a human operator can control a swarm and receive feedback from it. For example, robots can recognize the gestures of a human operator in a distributed fashion (Giusti et al., 2012) or form groups based on visual and vocal inputs (Pourmehr et al., 2013). Other behaviours that do not fall in the previously mentioned categories are collective fault detection (Christensen et al., 2009) and group-size regulation (Pinciroli et al., 2013).
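A generic mechanism by which a swarm can reach consensus on a single alternative is majority rule: small random groups of robots repeatedly meet and adopt their local majority opinion. The sketch below illustrates this mechanism in its simplest well-mixed form; the group size, update rule, and population are illustrative assumptions, not the algorithm of any of the cited papers.

```python
import random

# Illustrative majority-rule opinion dynamics for collective decision making.
# Group size (3), the well-mixed interaction model, and the population split
# are assumptions made for the sake of the example.

def majority(group):
    """Return the opinion held by most members of an odd-sized group."""
    return max(set(group), key=group.count)

def update(opinions, rng=random):
    """One update: three random robots meet and all adopt their local majority."""
    i, j, k = rng.sample(range(len(opinions)), 3)
    m = majority([opinions[i], opinions[j], opinions[k]])
    for idx in (i, j, k):
        opinions[idx] = m
    return opinions

if __name__ == "__main__":
    rng = random.Random(42)
    swarm = ["A"] * 12 + ["B"] * 8        # initial split of 20 robots
    while len(set(swarm)) > 1:            # iterate until consensus is reached
        update(swarm, rng)
    print("consensus on:", swarm[0])
```

Consensus is an absorbing state of this dynamics: once every robot holds the same opinion, no update can change it, and the initially larger faction wins with high probability.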
Despite its potential to promote fault tolerance, scalability and flexibility, swarm robotics has yet to be adopted for solving real-world problems. Various limiting factors are preventing the real-world uptake of swarm robotics systems. Further research on robotic hardware is needed to overcome the shortcomings that limit the functionality of current systems, and further research on behavioural control is needed to discover effective ways for a human operator to interact with a robot swarm. More effort is also required to provide compelling case studies—in particular, to demonstrate swarm robotics in outdoor applications (e.g., waste removal), but also to develop business cases and business models that show how and where swarm robotics can be more effective than other approaches. Finally, swarm robotics still lacks an engineering methodology, which would include standard metrics, performance-assessment testbeds, and formal analysis techniques to verify and guarantee the properties of swarm robotics systems.
G. Baldassarre, D. Parisi, and S. Nolfi. Distributed coordination of simulated robots based on self-organization. Artificial Life, 12(3):289–311, 2006.
G. Beni. From swarm intelligence to swarm robotics. In Swarm Robotics, LNCS 3342, pp. 1–9, 2005. Springer.
S. Berman, Á. M. Halász, M. A. Hsieh, and V. Kumar. Optimized stochastic policies for task allocation in swarms of robots. IEEE Transactions on Robotics, 25(4):927–937, 2009.
S. Berman, V. Kumar, and R. Nagpal. Design of control policies for spatially inhomogeneous robot swarms with application to commercial pollination. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), pp. 378–385, 2011. IEEE Press.
M. Brambilla, C. Pinciroli, M. Birattari, and M. Dorigo. Property-driven design for swarm robotics. In Proceedings of the 11th International Conference on Autonomous Agents and Multiagent Systems (AAMAS), pp 139–146, 2012. IFAAMAS press.
M. Brambilla, E. Ferrante, M. Birattari, and M. Dorigo. Swarm robotics: a review from the swarm engineering perspective. Swarm Intelligence, 7(1):1–41, 2013.
A. Campo, S. Garnier, O. Dédriche, M. Zekkri, and M. Dorigo. Self-organized discrimination of resources. PLoS ONE, 6(5):e19888, 2011.
A. L. Christensen, R. O’Grady, and M. Dorigo. From fireflies to fault-tolerant swarms of robots. IEEE Transactions on Evolutionary Computation, 13(4):754–766, 2009.
C. Dixon, A. F. T. Winfield, M. Fisher, and C. Zheng. Towards temporal verification of swarm robotic systems. Robotics and Autonomous Systems, 60(11):1429–1441, 2012.
M. Dorigo, D. Floreano, L. M. Gambardella, F. Mondada, S. Nolfi, T. Baaboura, M. Birattari, M. Bonani, M. Brambilla, A. Brutschy, D. Burnier, A. Campo, A. Christensen, A. Decugnière, G. A. Di Caro, F. Ducatelle, E. Ferrante, A. Förster, J. Guzzi, V. Longchamp, S. Magnenat, J. Martinez Gonzales, N. Mathews, M. Montes de Oca, R. O’Grady, C. Pinciroli, G. Pini, P. Rétornaz, J. Roberts, V. Sperati, T. Stirling, A. Stranieri, T. Stützle, V. Trianni, E. Tuci, A. E. Turgut, and F. Vaussard. Swarmanoid: A novel concept for the study of heterogeneous robotic swarms. IEEE Robotics & Automation Magazine, 20(4):60–71, 2013.
F. Ducatelle, G. A. Di Caro, C. Pinciroli, F. Mondada, and L. M. Gambardella. Cooperative navigation in robotic swarms. Swarm Intelligence, 8(1), in press, 2014.
E. Ferrante, A. E. Turgut, C. Huepe, A. Stranieri, C. Pinciroli, and M. Dorigo. Self-organized flocking with a mobile robot swarm: a novel motion control method. Adaptive Behavior, 20(6):460–477, 2012.
S. Garnier, C. Jost, R. Jeanson, J. Gautrais, M. Asadpour, G. Caprari, and G. Theraulaz. Aggregation behaviour as a source of collective decision in a group of cockroach-like robots. In Advances in Artificial Life, LNAI 3630, pp. 169–178, 2005. Springer.
A. Giusti, J. Nagi, L. Gambardella, S. Bonardi, and G. A. Di Caro. Human-swarm interaction through distributed cooperative gesture recognition. 7th ACM/IEEE International Conference on Human-Robot Interaction (Video Session), 2012.
R. Groß, and M. Dorigo. Evolution of solitary and group transport behaviors for autonomous robots capable of self-assembling. Adaptive Behavior, 16(5):285–305, 2008.
J. Halloy, G. Sempo, G. Caprari, C. Rivault, M. Asadpour, F. Tâche, I. Said, V. Durier, S. Canonge, J.M. Amé, C. Detrain, N. Correll, A. Martinoli, F. Mondada, R. Siegwart, J.-L. Deneubourg. Social integration of robots into groups of cockroaches to control self-organized choices. Science, 318(5853):1155–1158, 2007.
H. Hamann and H. Wörn. A framework of space-time continuous models for algorithm design in swarm robotics. Swarm Intelligence, 2(2–4):209–239, 2008.
S. Hauert, J.-C. Zufferey, and D. Floreano. Evolved swarming without positioning information: an application in aerial communication relay. Autonomous Robots, 26(1):21–32, 2008.
M. A. Hsieh, Á. Halász, S. Berman, and V. Kumar. Biologically inspired redistribution of a swarm of robots among multiple sites. Swarm Intelligence, 2(2–4):121–141, 2008.
S. Konur, C. Dixon, and M. Fisher. Analysing robot swarm behaviour via probabilistic model checking. Robotics and Autonomous Systems, 60(2):199–213, 2012.
J. Kramer and M. Scheutz. Development environments for autonomous mobile robots: a survey. Autonomous Robots, 22(2):101–132, 2007.
K. Lerman, A. Martinoli, and A. Galstyan. A review of probabilistic macroscopic models for swarm robotic systems. In Swarm Robotics, LNCS 3342, pp 143–152, 2005. Springer.
Y. Liu and K. M. Passino. Stable social foraging swarms in a noisy environment. IEEE Transactions on Automatic Control, 49(1):30–44, 2004.
W. Liu and A. F. T. Winfield. A macroscopic probabilistic model for collective foraging with adaptation. International Journal of Robotics Research, 29(14):1743–1760, 2010.
A. Martinoli, K. Easton, and W. Agassounon. Modeling swarm robotic systems: a case study in collaborative distributed manipulation. The International Journal of Robotics Research, 23(4–5):415–436, 2004.
A. Martinoli, A. J. Ijspeert, and F. Mondada. Understanding collective aggregation mechanisms: from probabilistic modelling to experiments with real robots. Robotics and Autonomous Systems, 29(1):51–63, 1999.
M. Massink, M. Brambilla, D. Latella, M. Dorigo, and M. Birattari. On the use of Bio-PEPA for modelling and analysing collective behaviours in swarm robotics. Swarm Intelligence, 7(2–3):201–228, 2013.
S. Mitri, D. Floreano, and L. Keller. The evolution of information suppression in communicating robots with conflicting interests. PNAS, 106(37):15786–15790, 2009.
S. Nolfi and D. Floreano. Evolutionary robotics: intelligent robots and autonomous agents. MIT Press, 2000.
S. Nouyan, R. Groß, M. Bonani, F. Mondada, and M. Dorigo. Teamwork in self-organized robot colonies. IEEE Transactions on Evolutionary Computation, 13(4):695–711, 2009.
R. O’Grady, R. Groß, A. L. Christensen, and M. Dorigo. Self-assembly strategies in a group of autonomous mobile robots. Autonomous Robots, 28(4):439–455, 2010.
C. Pinciroli, V. Trianni, R. O’Grady, G. Pini, A. Brutschy, M. Brambilla, N. Mathews, E. Ferrante, G. A. Di Caro, F. Ducatelle, M. Birattari, L. M. Gambardella and M. Dorigo. ARGoS: a modular, parallel, multi-engine simulator for multi-robot systems. Swarm Intelligence, 6(4):271–295, 2012.
C. Pinciroli, R. O’Grady, A. L. Christensen, M. Birattari and M. Dorigo. Parallel formation of differently sized groups in a robotic swarm. SICE Journal of the Society of Instrument and Control Engineers, 52(3):213–226, 2013.
G. Pini, A. Brutschy, M. Frison, A. Roli, M. Dorigo, and M. Birattari. Task partitioning in swarms of robots: An adaptive method for strategy selection. Swarm Intelligence, 5(3-4):283–304, 2011.
S. Pourmehr, V. M. Monajjemi, R. T. Vaughan, and G. Mori. “You two! Take off!”: Creating, modifying and commanding groups of robots using face engagement and indirect speech in voice commands. In Proceedings of the IEEE International Conference on Intelligent Robots and Systems (IROS), pp. 137–142, 2013. IEEE press.
A. Prorok, N. Correll, and A. Martinoli. Multi-level spatial modeling for stochastic distributed robotic systems. The International Journal of Robotics Research, 30(5):574–589, 2011.
O. Soysal and E. Şahin. Probabilistic aggregation strategies in swarm robotic systems. In Proceedings of the IEEE Swarm Intelligence Symposium (SIS), pp. 325–332, 2005. IEEE press.
W. M. Spears, D. F. Spears, J. C. Hamann, and R. Heil. Distributed, physics-based control of swarms of vehicles. Autonomous Robots, 17(2–3):137–162, 2004.
V. Trianni and S. Nolfi. Engineering the evolution of self-organizing behaviors in swarm robotics: A case study. Artificial Life, 17(3):183–202, 2011.
A. E. Turgut, H. Çelikkanat, F. Gökçe, and E. Şahin. Self-organized flocking in mobile robot swarms. Swarm Intelligence, 2(2–4):97–120, 2008.
J. Werfel, K. Petersen, and R. Nagpal. Distributed multi-robot algorithms for the TERMES 3D collective construction system. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2011. IEEE press.
Bees do it. Birds do it. Bacteria do it.
But robots cannot do it. They cannot reproduce themselves.
That’s one reason robotics researchers do not believe that robots will displace humans any time soon. Last month organizers of the Humanoids 2000 conference surveyed some of the participants about possible social implications of their work. On a scale of 0, for highly unlikely, to 5, for highly likely, the robotics researchers rated the possibility that robots “will be the next step in the evolution and will eventually displace human beings” a zero. “They are much less euphoric than other people, say, movie producers,” said Dr. Alois Knoll of the University of Bielefeld in Germany, one of the organizers of the conference, which featured reports on research to create humanoid robots.
The survey was conducted before the announcement by Brandeis University researchers that they had built a robotic system that designs and builds other robots. But at the conference, held last week at the Massachusetts Institute of Technology, most participants said robots capable of challenging humanity, as in the movie “Terminator,” remained in the realm of science fiction. Dr. Knoll listed the limitations of present-day robots: “We don’t have the mechanical dexterity. We don’t have the power supply. We don’t have the brains. We don’t have the emotions. We don’t have the autonomy in general to undertake these things to even come close to humans.”
For example, even if intelligent, conniving robots did exist and wanted to take over the world, they would have to act fast: most exhaust their batteries in less than half an hour. “It’s the same problem as electric cars,” Dr. Knoll said. But the most difficult obstacle to building an intelligent, evolving, self-reproducing robot may turn out to be the self-reproducing part. The Brandeis system’s ability to design and build robots with little help from humans helped set off speculation about self-reproducing, evolving robots that could explore the galaxy — or push humans to extinction. Even the Brandeis researchers call that far-fetched. “We’re so far from that, it’s kind of a silly question,” said one of them, Dr. Jordan B. Pollack.
The machines created at Brandeis were little more than toys, far less complex than the system that designed and built them. In the biological world, reproduction is a mundane ability mastered by every creature from the smallest microbe to the largest whale. Scientists have made self-reproducing, evolving organisms of their own — but only within a computer. In 1994, Karl Sims, who was then a research scientist at Thinking Machines, populated a simulated world with animated, evolving creatures. Other researchers, like Dr. Christoph Adami, a research fellow at the California Institute of Technology, and Dr. Thomas S. Ray Jr. of the University of Oklahoma have created self-replicating computer programs that mutate in ways similar to actual organisms like bacteria, fungi and fruit flies. To give machines the ability to reproduce, however, strikes most robotics researchers as an almost impossible task, even more difficult than building an intelligent robot. Like other robotics researchers, Dr. Rodney Brooks, director of the Artificial Intelligence Laboratory at the Massachusetts Institute of Technology, predicts the development of robots that assemble themselves, so to speak, out of ready-made parts. But to build a copy of itself, a robot would have to forage for raw materials, shape them into motors, sensors, computer chips and other parts and then put the pieces together. Just making computer chips — currently manufactured in sophisticated factories that cost up to a billion dollars to build — would be a daunting task for a robot.
“Self-replicating robots would have to possess all that ability in a few cubic feet,” Dr. Brooks said. “I don’t see it on the horizon in any way.” Bill Joy, chief scientist of Sun Microsystems, writing in an article in the April issue of Wired magazine, expressed his concern that self-reproducing robots could displace biological life, and he suggested that scientists ought to avoid developing some technologies. Dr. Pollack disagreed. “I think it’s kind of being used as a bogeyman,” he said. “The question is, will it get out of control? It would take a large industrial, warlike scenario for someone to build a doomsday robot. I don’t think anyone knows how to do that. Could robots themselves figure out how to become a doomsday robot? And the answer is, it’s as far off as a fax machine is from a Star Trek transporter.”
Then there is the optimistic dissenter. Dr. Hans P. Moravec, a principal research scientist at the Robotics Institute at Carnegie Mellon University, sees robots as the future — and welcomes them. He notes that the processing power of computer chips doubles every 12 to 18 months. “By 2040,” said Dr. Moravec, “the robots will be as smart as we are.” By then robots should be skilled enough to design and build automated factories that manufacture improved versions of themselves, he predicted. “Business competition will ensure that robots take over human jobs until 100 percent of industry is automated, from top to bottom,” Dr. Moravec said. “I think we can retire comfortably.” The last significant act of humans, he said, would be the passing of laws to ensure that robot-run companies acted in the interest of humans.
“I’ve been thinking about this for 40 years, and I’ve become very comfortable with this,” Dr. Moravec said. “As you think about these ideas, they gradually become less and less strange.” Dr. Moravec said he would not be disturbed even if intelligent robots eventually displaced humanity. “These things are all our descendants,” he said. “We built them. They were initially, more or less, in our image. The robots are us. The biology is no longer necessary.”
Dr. Frank J. Tipler, a professor of mathematical physics at Tulane University, also believes that robots are the future of life, and he argues that the lack of robotic spacecraft zipping past Earth means there are not any other intelligent species in the galaxy. “Not only is there no intelligent life in the galaxy,” Dr. Tipler said, “there isn’t intelligent life within an order of a billion light-years.” Self-replicating robots, Dr. Tipler argues, would also be an efficient way to explore the galaxy. Several spacecraft could be sent from Earth to scout nearby stars. After transmitting reports about what they found, the spacecraft would then set up factories to build more spacecraft to head for the next nearest stars. Even if each probe traveled at a speed of only a tenth of the speed of light, the ever expanding fleet would be able to visit every star in the galaxy within 10 million years, less than 1/1,000th of the age of the galaxy.
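The quoted timescale can be sanity-checked with back-of-envelope arithmetic, assuming a galactic diameter of roughly 100,000 light-years (an assumed round number, not a figure from the article):

```python
# Back-of-envelope check of the colonization timescale quoted above, under
# assumed round numbers: a galactic diameter of ~100,000 light-years and a
# probe speed of one tenth of the speed of light.

galaxy_diameter_ly = 100_000        # assumed diameter of the Milky Way
probe_speed_c = 0.1                 # probe speed as a fraction of light speed

crossing_time_yr = galaxy_diameter_ly / probe_speed_c
print(crossing_time_yr)             # 1,000,000 years of pure travel time
```

Pure travel time across the galaxy is thus about one million years, so the quoted 10-million-year figure leaves roughly a tenfold margin for the stopovers in which each probe builds factories and new probes.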
Such visions still seem far off for most researchers.
“I would never say never,” Dr. Knoll said. “But the likelihood of these things happening in our lifetime is very little.” He estimated that 90 percent of the Humanoids 2000 participants do not believe Dr. Moravec’s predictions. The pessimism may reflect the many obstacles that researchers face in creating useful robots, much less ones that would displace humanity. Two-legged humanoid robots walk slowly and awkwardly. Robots like Kismet at the M.I.T. Artificial Intelligence Laboratory can display childlike reactions when addressed in different tones of voice, but discussions of what would be deemed conscious and intelligent behavior are still rooted in philosophy, not experiments. However, robots do not have to be humanlike, or even visible, to be useful — or dangerous. In the Aug. 31 issue of Nature, researchers at the University of Lausanne, Switzerland, reported that a group of robots programmed with a few simple rules patterned after ant behavior could efficiently forage their environment.
With the development of nanotechnology, the building of machines out of individual atoms and molecules, Mr. Joy worries about artificial microbes that are better than their biological counterparts. Such minuscule robots, less than 1/25,000th of an inch, are one of the aims of Zyvex, a nanotechnology company in Dallas, but company officials say they will not make anything that could pose any danger. Dr. Ralph C. Merkle, a principal fellow of Zyvex and a consultant to the Foresight Institute, says that by design the robots will not be able to evolve. To minimize risks, the Foresight Institute, which studies nanotechnology, has proposed guidelines for its work, including encrypting the robot’s programming and designing the robots so that they do not function in an uncontrolled environment.
“We’re not interested in evolution,” Dr. Merkle said. “Quite the reverse.” Zyvex’s nanorobots would be mindless machines that followed instructions to build other nanorobots, including ones that could be injected into the bloodstream of a hospital patient. “It could be programmed to remove specific stuff you don’t want,” like cancer cells, blood vessel obstructions or invading germs, Dr. Merkle said. But without the instructions, nanorobots could not reproduce. “If you flush them down the toilet,” Dr. Merkle said, “they stop working.” Adding a built-in ability to replicate would add unnecessary cost and complexity — unless one was trying to create a dangerous nanorobot as a weapon. Dr. Merkle agrees with Mr. Joy that perhaps some technologies should be avoided. But, he added, research in this area needs to continue in order to develop defenses in case an enemy unleashed a nanorobot weapon. “There are certain things we need to think about very carefully,” Dr. Merkle said, referring to nanorobots. “Should we relinquish autonomous, self-replicating devices that can function in a natural environment? The answer is yes, that looks like a fine thing to relinquish.”
Kenneth Chang has been a science reporter at The New York Times since 2000. He covers chemistry, geology, solid state physics, nanotechnology, Pluto, plague and other scientific miscellany.
The concept of mechatronics has long been associated with the robotics industry. The term was coined in 1971 by Tetsuro Mori, an engineer at a robotics company, Yaskawa Electric Corp. He combined the words “mechanical” and “electronic” to describe the electronic control systems that Yaskawa was building for mechanical factory equipment. The term now describes an emerging engineering discipline that includes a coherent background in systems design as well as mechanics and electronics.
The term is now common in many university engineering departments, with many colleges issuing degrees in mechatronic engineering. “Mechatronics is what computer engineering was 15 years ago. People are talking about it and realizing the value of this field of engineering,” said Jim Devaprasad, professor in the School of Engineering and Technology at Lake Superior State University. “Mechatronics encompasses mechanical, electrical, and some manufacturing all put together.” Devaprasad noted that mechatronics is beginning to replace the more amorphous term “systems engineering.” “In the past, we referenced this collection of disciplines as systems engineering, but the term mechatronics is gaining more traction now,” he said. “Now there are some programs that are beginning to appear as associate’s degrees and bachelor’s degrees.”
Just What Is Mechatronics?
The Association of Mechanical Engineers has embraced the concept, stating that mechatronics systems are everywhere, from computer hard drives to robotic assembly systems. They note that even consumer products combine mechanical and electronic systems now, from washing machines and coffee makers to medical devices. The automotive industry leans heavily on mechatronics, as well. Electronics that control mechanical systems account for much of the value of new vehicles. These systems manage everything from stability control and antilock brakes to climate control and memory-adjustment seats. In its essence, mechatronic engineering involves creating smart machines that are aware of their surroundings and can make decisions. While this seems like the perfect definition of a robot, smart machines also involve equipment that does not look robotic yet behaves like a robot in that it can be programmed to conduct specific movements that accomplish goals. A programmed conveyor belt can be a smart programmable machine – a robot.
These smart machines are complex equipment made up of several parts: the mechanical system itself, the sensing and actuation, the control systems, and the software. Developing and operating these intelligent machines involves the full range of disciplines included in mechatronics.
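The interplay of those four parts can be caricatured as a sense–control–actuate loop. The sketch below uses a simple proportional controller driving a simulated motor; every name and constant is illustrative, not drawn from any real product.

```python
# Minimal sketch of the mechatronic loop: software (setpoint), sensing
# (measurement), control (command computation), actuation (motor motion).
# KP and TARGET are made-up values for the illustration.

KP = 0.5            # proportional gain of the control system
TARGET = 100.0      # desired motor position set by the software layer

position = 0.0      # state of the mechanical system
for _ in range(50):
    error = TARGET - position   # sensing: compare setpoint to measurement
    command = KP * error        # control: compute the actuator command
    position += command         # actuation: the motor moves the mechanism
```

After the loop, `position` has converged on `TARGET`: the error shrinks by half on each pass, which is the essence of closed-loop control that mechatronic engineers design across all four disciplines at once.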
What Do Mechatronic Engineers Do?
Mechatronic engineers work in all aspects of the development of the smart machine – from design and testing through to manufacture and, ultimately, deployment and operation. The industries involved include robotics, medical equipment and assistive technology, human-machine interaction, manufacturing, unmanned aerial and ground vehicles, and education. Mechatronic engineers work at companies that require high-tech development of their products. These engineers may work in a laboratory, a processing plant, or an engineering office. Research opportunities for mechatronics engineers abound in emerging fields like bioengineering, nanotechnology, and robotics. These engineers are playing a large role in the development of electric cars and self-driving vehicles.
You will find mechatronic engineers in the defence industry developing futuristic vehicles, and you’ll also find them revolutionizing consumer products. They may work in smaller innovative high-tech companies, designing software, parts, and equipment. You’ll find them in mining as well as the oil and gas industry since the equipment for these industries now includes electronics, mechanical equipment, and systems development.
Robotics Industry Screaming for These Skills
While employers have been seeking this combination of skills in their engineering employees, the term mechatronics to describe these needs is still relatively new. “I don’t think the robotics industry is asking for mechatronics specifically. The term is still new. But they want that type of engineering background,” said Devaprasad. “They are asking for mechanical engineers with experience in electronics and computer science.” Mechatronics emerged as a discipline to meet the changes in industry and manufacturing. “Jobs have been changing since the dawn of the industrial revolution. If you ask 100% of our member companies, they’re having a problem finding the skilled people in robotics and mechatronics,” said Bob Doyle, director of communications at the Association for Advancing Automation, which includes the Robotics Industry Association (RIA). “Our companies are clamouring to hire students who have these skills.”
What’s in a Name?
Devaprasad noted that Lake Superior State University has been careful in choosing the right name for a degree that includes but is not limited to robotics. “Lake Superior State University was the first university to create a bachelor’s in robotics 31 years ago,” he said. “Robotics gained traction in the 1980s. That was good, but we found there was a risk in narrowing down the degree by calling it robotics engineering when actually our graduates were systems engineers.” Even with the growth of the robotics industry, calling an engineering degree “robotics engineering” can be a problem for graduates. “If the robotics industry were slowing down, they wouldn’t hire these graduates. People would say we’re not moving strongly on robots,” said Devaprasad. Robotics work implies mechatronics since it involves mechanical, electronics, and systems design work. “The moment we say industrial robotics, people are able to relate to it right away,” he said. “A lot of companies are looking for people with a background in the skills that make up mechatronics. The robotics industry is seeing record numbers of robot systems being used. That opens up demand for mechatronics engineers.”
The limitations of a “robotics engineering” degree led Lake Superior State University to switch to the term “mechatronics engineering.” “We wanted to offer a degree that included robotics, but we wanted to do it a different way by calling it mechatronics,” said Devaprasad. “That reduces the risk for the graduates. We include the bread-and-butter engineering fields of mechanical engineering and electrical engineering, and we do it with a robotics concentration. Yet mechatronics is a broader and more useful term for graduates.” Advanced manufacturing requires the range of skills encompassed by mechatronics, even if only a portion of that manufacturing involves robots. “How does robotics fit in? There are times when mechatronics is used interchangeably with robotics because robotics involves multiple disciplinary functions,” Devaprasad said. “Universities are offering degrees in robotics engineering, but the engineers coming out of those programs are going to be called mechatronics engineers.”
Rob Spiegel has covered automation and control for 15 years, 12 of them for Design News. Other topics he has covered include supply chain technology, alternative energy, and cybersecurity. For 10 years he was owner and publisher of the food magazine Chile Pepper.
It’s all too easy to imagine nature and technology as being engaged in a centuries-long boxing match, with the 21st century delivering the knockout punch.
Humans never were part of nature. We were always part of technology
Sunsets obscured by selfies. Hundreds of thousands of tonnes of toxic ‘e-waste’ dumped in Ghanaian wetlands each year. Words such as ‘acorn’, ‘adder’ and ‘willow’ excised from the Oxford Junior Dictionary to make way for ‘broadband’, ‘analogue’, and ‘cut and paste’. We complain about the colonisation of our wild places with wifi, yet declare internet access to be a human right. We despair about poaching while helping the culprits track down rare animals with our social media posts. We dream of relaxing on tranquil Maldivian islands, but demand unsustainably cheap flights to get us there.
No wonder we’re so conflicted. As the scientific philosopher Christopher Potter points out in his book How To Make A Human Being, “Humans never were part of nature. We were always part of technology.” Harnessing the power of fire set modern humans on a path to global domination, and we never looked back. Now, from cooking vessels to virtual reality headsets, technology is simply a set of strategies our species has developed in order to cope with being self-conscious creatures on a chaotic and often hostile planet. That makes our drive to innovate just as ‘natural’ as the structure of our brains.
So we find ourselves stuck between a rock and a MySpace; and all too often we sacrifice our native habitat for the short-term exhilaration of change and short-term resolution of economic and political problems. But although many of our digital inventions serve to estrange us from the world they were created to enrich, technology and nature are also continually cross-pollinating in powerfully positive ways.
Consider the field of biomimetics, where natural design elements and processes are used as a model for new materials, devices and tools. One famous example is the invention of Velcro, which was developed by Swiss engineer Georges de Mestral in 1941 after he observed how cockleburs in the mountains caught on his clothing and in his dog’s fur. More recent advances in the field include the creation of a neonatal surgical tape modelled on the structure of spiders’ webs; the imitation of viruses to create self-assembling nanoparticles which can deliver medication straight into cancer cells; and a super-efficient, reflective, colour e-reader screen based on the way butterfly wings gleam in bright light.
Then there’s the use of tech to support conservation and sustainability projects. Technology For Nature is a unique partnership between Zoological Society of London, University College London and Microsoft Research designed “to rapidly scale up our global conservation response” by bringing together technologists and zoologists. Current projects include Fetch Climate, a fast, free, cloud-based service that allows experts to access accurate climate change data from any geographical region around the world, and Mataki, which develops new devices for recording the behaviour of animals in the wild.
Dr Lucas Joppa, one of the founders of the group, admits that there are challenges in bringing together scientists from disciplines traditionally seen as at loggerheads. “Language, terminology, different motivations,” he sighs. “Pretty much everything!” But he also believes that bridging those differences is more than worth the effort. “The conservation issues we most urgently need to tackle right now include the monitoring of protected areas, tracking species of high commercial value, and online detection of the illegal wildlife trade,” he explains. “Technology has impacted most positively on nature in the past ten years through our emerging ability to achieve near constant monitoring of valuable natural assets, such as protected areas and rhinos. We are creating a powerful nexus of information.”
Of course, nature isn’t all puppies and waterfalls, and tech is also helping people manage her crueller side. Hashtagged tweets and geotagged Instagram photos have become a valuable way to share real-time updates as natural disasters unfold. Google’s Person Finder, which was created to reunite relatives during 2011’s Japanese tsunami, is currently live in Nepal. And the Federal Emergency Management Agency’s (FEMA’s) app allows stricken communities to crowdsource crisis relief.
Then there’s “green city” design. Imagine high-rises transformed into vertical farms, with crops carpeting rooftops and walls; spare footage used to cultivate algae-based biofuels; and trees turned into streetlamps, spliced with bioluminescent genes. London’s Garden Bridge project, despite its many detractors, has been presented as a first step towards this vision of a hybrid urban-rural landscape; and with projections showing that Earth’s cities will swell with another 2.5 billion people by 2050, it’s not a moment too soon. Clearly, the news isn’t all bad when it comes to tech and nature on a grand scale. But how is the tug-of-war working out for us personally?
Considering the addictive nature of digital platforms, it is sometimes hard to dispute Potter’s belief that “technology evolves a life indoors”. And when we do venture outside, mobiles and wearables can keep us trapped inside our heads, even on the most glorious of countryside walks. But there is in fact a blossoming ecosystem of software that aims to boost our appreciation of the great outdoors, from Leafsnap, which applies facial recognition technology to leaves in order to help users identify 156 tree species, to mindfulness apps that can help us learn to reconnect with our environment.
And tech empowers each of us to do our bit for conservation too. Car-sharing apps and home energy monitoring devices are just the start. Joppa is currently developing “algorithms to encourage citizen scientists to go out and collect observations of species that are of high value for international policies.” Of course, as Joppa says, “Technology isn’t going to solve all of the conservation problems of today, but it can be a fantastic tool in the toolbox.” Rather than lingering on the mess we’ve got ourselves into, we need to focus on harnessing technology’s potential. Despite the attempts of the Mars One team, it’s unlikely that we will find a new home planet in the near future, and it is even more unlikely that it will be as beautiful as ours, bruised though it may be.
The technologist Kevin Kelly believes that technology is “a force of nature”, evolving along the same principles as any living species. Perhaps he’s right. Or perhaps nature, like humanity, is a sort of mysterious technology. Either way, we must stop seeing tech and nature as sparring partners, and start concentrating on helping them to dance.
With rapidly evolving technology, it is inevitable that the future of humanity lies in machines. Traditionally, there has been a divide in the type of progress for humans to achieve an advanced state of being. On one hand, there are people who advocate the development of artificial intelligence technologies to imbue robots with human cognitive abilities. An alternate approach is one parallel to many science-fiction fantasies: the creation of cyborgs, or human-machine hybrids. The creation of cyborg technology has already been set in motion, and this article will examine its evolution and benefits. Research in this field has achieved some momentous milestones: the basis of connecting human cognitive processes to a computer chip has already been established in multiple ways.
The most famous example is Project Cyborg 1.0, conducted at the University of Reading by Professor Kevin Warwick and his colleagues in the department of cybernetics.1 Warwick was himself the subject of the first study, which required him to undergo an operation to surgically implant a silicon chip transponder in his forearm. This surgery, conducted on 24th August 1998, enabled a computer to monitor Warwick’s activities on the university premises. Together, the computer and Warwick were able to “operate doors, heaters, and other computers without lifting a finger.”1 The next major experiment, also conducted by Warwick, was entitled Project Cyborg 2.0.
In March 2002, Warwick underwent another surgery that implanted a “one hundred electrode array” into his wrist.1 Not only was this second implant able to control more advanced machines such as an electric wheelchair and an artificial hand, it was also able to communicate with a similar implant in the hand of Warwick’s wife, Irene. The implant connected to Irene’s nervous system was able to send signals, through a computer, to the implant in Warwick’s wrist and thus create an artificial sensation–“a sudden shock down his left index finger.”2 In the most general terms, their nervous systems were speaking to each other. An independently run project in 1996 by physician Phillip Kennedy resulted in the creation of the first-ever human cyborg, Johnny Ray.
Before conducting human trials, Kennedy had developed a device called the Neurotrophic Electrode that could amplify neural signals. This device was basically a “tiny glass cone…filled with a mix of nerve growth factors, and two fine gold wires,” which, when inserted into the skull, allowed neural cells to grow through the implant and thus establish a solid electric connection.3 When this same process was applied to Ray, a stroke victim who could no longer operate any part of his body except for some muscles in his face, Kennedy succeeded in connecting Ray’s neural signals to the computer. Ray could essentially control the mouse with his brain and was thus able to spell out his thoughts.3 As these cases show, the development of cyborg technologies has significant medical applications.
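The principle behind Ray's cursor control can be caricatured in a few lines: an amplified neural signal crosses a threshold, and the interface translates that into cursor motion. The sketch below is illustrative only; the threshold, firing rates, and one-dimensional cursor are invented, and the real Neurotrophic Electrode system was far more involved.

```python
# Toy brain-computer interface: map a normalized neural firing rate
# to cursor movement. THRESHOLD and the rate stream are made up.

THRESHOLD = 0.6     # firing rate above which the cursor advances

def cursor_step(firing_rate, x):
    """Advance the cursor one unit when the neuron fires strongly enough."""
    return x + 1 if firing_rate > THRESHOLD else x

x = 0
for rate in [0.2, 0.7, 0.9, 0.4, 0.8]:   # invented stream of firing rates
    x = cursor_step(rate, x)
# x is now 3: the cursor advanced on the three strong bursts
```

By steering a cursor over an on-screen keyboard with signals like these, a patient who can move nothing else can still spell out words.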
One of the main advantages, according to neuroscientist Lee Miller of Northwestern University, is the possibility of “helping the paralyzed walk, reach, and grasp.” The signals emitted by the electrodes could be routed to the paralyzed limb and thus enable it to move again.3 Another comparable technology allows the very same types of electrodes to control mechanized devices. This could allow for the creation of more advanced prosthetic limbs whose behaviour closely matches that of a regular human limb.4 To take it even further, more severely affected individuals with greater brain and motor damage could have the opportunity to reinvent their lives. Beyond the medical applications, cyborg technology will pave the way to more advanced human beings. One of the popular beliefs of the contemporary world is the idea that robots will eventually outsmart humans and take control of the world. Stephen Hawking attests to this fear and suggests that humans need to mechanize as fast as possible “so that artificial brains contribute to human intelligence rather than opposing it.”2 Kennedy supports this path of development, as it would create an entirely new species of human with unlimited memory, unlimited calculation ability, and instant wireless communication ability.
This type of human would have unsurpassable intelligence, he claims. Like his colleagues, Warwick envisions a world in which everything could be remotely controlled by the brain and humans could link themselves to external machines or even each other.2 With the rise of computers and cell phones, the transition to cyborg is already half complete, as humans rely on these machines to the point that they become mere extensions of their bodies.5 The research being conducted now, though, is working to weave machines into human lives even more permanently. This new technology will create more advanced forms of Homo sapiens and thus facilitate the rise of the cyborgs.
- Warwick, Kevin. The University of Reading, “Professor Kevin Warwick.” Accessed November 19, 2012. http://www.kevinwarwick.com/index.asp.
- Stonehouse, David. “The cyborg evolution.” The Sydney Morning Herald, sec. Technology, March 22, 2003.
- Baker, Sherry. “The Rise of the Cyborgs.” Discover Magazine, September 26, 2008.
- Espingardeiro, Antonio. “When Will We Become Cyborgs?” Automaton (blog), March 24, 2010.
- Case, Amber. “We are all cyborgs now.” TED Talks. Recorded January 2011. TED Conferences, LLC. Web.