The New WALK-MAN: A Look at IIT’s Multi-Faceted Robotic Endeavor

Standing 1.85 meters tall and made of lightweight metals, iron, and plastics, the WALK-MAN humanoid robot is controlled remotely by a human operator through a virtual interface. (Sam Davis)


The heart of the Istituto Italiano di Tecnologia (IIT, Genoa, Italy) robotics strategy has always been the development of state-of-the-art mechatronics systems. This has led to the creation of internationally recognized humanoid robots and pioneering quadrupeds. IIT’s family of cutting-edge robots isn’t limited to legged systems, though. The Mechatronics program has explored completely new designs and operational paradigms, including materials, compliance, soft bodies, and distributed intelligence.

Besides its advanced integrated robot platforms, IIT researchers have developed component-level systems, including novel patented high-performance actuation systems, variable impedance actuators, advanced fingertip as well as large-area tactile sensors, exoskeletons (leg, arm, hand), instrumented haptic devices, novel medical systems, a variety of force/torque sensors, dexterous manipulators (e.g., SoftHand), and advanced industrial end-effectors.

The IIT Mechatronics program is developing new bodies for its integrated robotic systems, particularly for humanoid and legged robots. In these domains, researchers will focus on combining locomotion, manipulation, whole-body capabilities, new materials, and high-dynamics structures. As in most areas of engineering, it will be crucial to optimize energy use. To achieve this, they will use innovative lightweight and sustainable materials, improve mechatronics to better use the available power, and develop robots with more natural gaits and locomotion skills, coupled with enhanced actuator design.

Improvements in ruggedness, robustness, and reliability will require novel kinematics, shock-absorbing materials, and lightweight designs optimized for indoor and outdoor use in dirty and wet environments. They will develop highly integrated actuation solutions and decentralized diagnostics inspired by the new concept of “smart, high-performance mechatronics.”

Looking at the market, systems have been designed for prompt, affordable commercial applications. Here, the engineering goals require reduced mechanical complexity (fewer parts, no exposed wires, robust sensors), a better payload-to-weight ratio, and improved manipulation skills (dexterous hands, a wider range of movement in the shoulder and wrist). The reduced complexity will lower the cost of the robots, which is particularly important for so-called companion robots. These systems will undergo extensive field-testing with end users, in line with the Technology Transfer Mission.

Delving Deeper into Locomotion

Advanced dynamical control and whole-body loco-manipulation are vital for complex human-like robots, particularly for locomotion and human-robot collaboration. In robot locomotion, where a flexible control strategy demands step recovery, walking, and running on possibly uneven terrains, advances will require the close integration of engineering (sensing, actuation, and mechanics), gait generation, dynamic modelling, and control.

The Mechatronics program will investigate locomotion, gait generation, and gait control in both bipeds and quadrupeds. With several robust platforms available, they will develop dynamic locomotion profiles. These will advance locomotion and loco-manipulation, particularly for operation in rough, hazardous, and poorly conditioned terrains, where wheeled and tracked vehicles cannot operate. The current locomotion capabilities on flat and moderately rough terrain will be extended to very challenging environments (e.g., soft and unstable terrains).

The locomotion framework will reach higher levels of autonomy, allowing automatic selection of the gaits/behaviours best suited to the environment. Combinations of machine-learning and optimization methods will be used to plan movements and control the robot.
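As a toy illustration of that combination, gait selection can be framed as picking the candidate with the lowest predicted cost for the current terrain. The gait names, terrain features, and weights below are illustrative stand-ins for a model learned from locomotion trials, not IIT's actual planner:

```python
def gait_cost(gait, terrain):
    """Predicted cost of a gait on the given terrain features.

    `terrain` holds roughness and softness scores in [0, 1]; the
    per-gait weights stand in for a learned cost model.
    """
    weights = {
        # gait: (energy per meter, roughness penalty, softness penalty)
        "walk":  (1.0, 0.5, 0.5),
        "trot":  (0.8, 0.6, 0.8),
        "crawl": (1.5, 0.1, 0.1),
    }
    energy, rough_w, soft_w = weights[gait]
    return energy + rough_w * terrain["roughness"] + soft_w * terrain["softness"]

def select_gait(terrain):
    """Optimization step: pick the gait with the lowest predicted cost."""
    return min(("walk", "trot", "crawl"), key=lambda g: gait_cost(g, terrain))

print(select_gait({"roughness": 0.1, "softness": 0.0}))  # fast trot on easy ground
print(select_gait({"roughness": 0.9, "softness": 0.7}))  # cautious crawl on rubble
```

In a real system the weight table would be replaced by a learned model and the fixed candidate set by parameterized gait families, with a trajectory optimizer refining the chosen gait.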

With complex systems such as humanoids, it’s vital to achieve simultaneous manipulation and control, while maintaining operational parameters such as balance, walking, and reaching. This requires a new advanced approach to control. Torque regulation (through hardware and software) will be critical to success in this domain. IIT robots feature fully integrated torque sensing and controllers. In the near future, exciting developments in controller design will advance the functionality of these robots, and fill a crucial gap in humanoid technology.
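At the joint level, torque regulation of this kind is typically implemented as an inner feedback loop around the joint torque sensor. The sketch below is a generic PD torque-tracking loop with assumed gains, not IIT's actual controller:

```python
class TorqueLoop:
    """Minimal PD torque-tracking loop around a joint torque sensor.

    The gains kp and kd are illustrative; real controllers tune them
    per joint and add friction and motor-dynamics compensation.
    """

    def __init__(self, kp=2.0, kd=0.05):
        self.kp, self.kd = kp, kd
        self.prev_err = 0.0

    def step(self, tau_des, tau_meas, dt):
        """One control tick: desired vs. measured joint torque (N*m)."""
        err = tau_des - tau_meas           # torque tracking error
        derr = (err - self.prev_err) / dt  # finite-difference error rate
        self.prev_err = err
        # feedforward the desired torque, then correct with PD feedback
        return tau_des + self.kp * err + self.kd * derr

loop = TorqueLoop()
command = loop.step(tau_des=5.0, tau_meas=4.2, dt=0.001)
print(command > 5.0)  # an under-delivering joint gets a boosted command
```

A whole-body controller would sit above many such loops, distributing desired torques across the joints while respecting balance constraints.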

IIT research in soft robotics will aim to produce soft, lightweight, sensitive structures, such as manipulators and grippers. They will exploit additive manufacturing technologies and customized sewing machines to generate 3D-fiber-reinforced structural composites that feature large deformation capacity, high load capacity, and variable stiffness. This approach may also influence the design of rigid robots by replacing rigid joints with soft, compliant joints or soft and elastic actuators (e.g., McKibben muscles).

IIT’s Soft Robotics program will focus on developing continuum robots (i.e., with similarities to the elephant trunk and cephalopod arms) that can grow, evolve, self-heal, and biodegrade. The goal is for these continuum robots to traverse confined spaces, manipulate objects, and reach difficult-to-access sites. Potential applications will be in medicine, space, inspection, and search-and-rescue missions. The Soft Robotics program will require an unprecedented multidisciplinary effort combining biology (e.g., the study of plants), materials science (e.g., responsive polymer synthesis), engineering, and biomechanics.

The Walk-MAN robot (Credit: IIT-Istituto Italiano di Tecnologia)

Researchers at IIT successfully tested their new version of the WALK-MAN humanoid robot for supporting emergency response teams in fire incidents (see figure). The robot is able to locate the fire, walk toward it, and then activate an extinguisher to put it out. During the operation, it collects images and transmits them back to emergency teams, who can evaluate the situation and guide the robot remotely. The new WALK-MAN design features a lighter upper body and new hands, helping to reduce construction cost and improve performance.

During the final test, WALK-MAN dealt with a scenario representing an industrial plant damaged by an earthquake, with gas leaks and fire, a situation clearly dangerous for humans. The scenario was recreated in IIT laboratories, where the robot navigated through a damaged room and performed four specific tasks: opening and traversing the door to enter the zone; locating the valve that controls the gas leak and closing it; removing debris in its path; and finally identifying the fire and activating a fire extinguisher.

The robot is controlled by a human operator through a virtual interface and a sensorized suit, which let the operator drive its manipulation and locomotion very naturally, like an avatar. The operator guides the robot from a station located away from the accident site, receiving images and other information from the robot's perception systems.

The first version of WALK-MAN was released in 2015, but researchers wanted to introduce new materials and optimize the design to reduce the fabrication cost and improve performance. The new version of the WALK-MAN robot has a lighter upper body, whose realization took six months and involved a team of about 10 people coordinated by Nikolaos Tsagarakis, a researcher at IIT.

The new WALK-MAN is a humanoid robot 1.85 meters tall, made of lightweight metals such as Ergal (60%) and magnesium alloys (25%), along with titanium, iron, and plastics. Researchers reduced its weight by 31 kilos, from the original 133 kilos to 102 kilos, to make the robot more dynamic. As a result, its legs can move faster because they carry a lighter upper-body mass. The higher dynamic performance allows the robot to react faster with its legs, which is very important for adapting its pace to rough terrain and variable interaction scenarios. The lighter upper body also reduces energy consumption; WALK-MAN can operate on a smaller battery (1 kWh) for about two hours.

The lighter upper body is made of magnesium alloys and composite structures and is powered by a new version of lightweight soft actuators. Its performance has been improved, reaching a higher payload (10 kg per arm) than the original (7 kg per arm). Thus, it can carry heavy objects and hold them for more than 10 minutes.

The new upper body is also more compact (62-cm shoulder width and 31-cm torso depth), giving the robot greater flexibility to pass through standard doors and narrow passages.

The hands are a new version of the SoftHand developed by Centro Ricerche E. Piaggio of the University of Pisa (a group led by Prof. A. Bicchi) in collaboration with IIT. They are lighter, thanks to the composite material used to construct the fingers, and they have a more human-like finger-to-palm size ratio that allows WALK-MAN to grasp a variety of object shapes. Despite the weight reduction, the hands keep the same strength as the original version, as well as their versatility in handling and their physical robustness.

The WALK-MAN body is driven by 32 motors and control boards, four force/torque sensors at the hands and feet, and two accelerometers for controlling its balance. Its joints exhibit elastic movement, allowing the robot to be compliant and to interact safely with humans and the environment. Its software architecture is based on the XBotCore framework. The WALK-MAN head has cameras, a 3D laser scanner, and microphone sensors; in the future, it can also be equipped with chemical sensors for detecting toxic agents.

The WALK-MAN robot was designed and implemented by IIT within the project WALK-MAN funded by the European Commission. The project started in 2013 and is now at its final validation phase. The project also involved the University of Pisa in Italy, the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, the Karlsruhe Institute of Technology (KIT) in Germany, and the Université Catholique de Louvain (UCL) in Belgium. All partners contributed to different aspects of the robot realization: locomotion control, perception capability, affordances and motion planning, simulation tools, and manipulation control.

The validation scenario was defined in collaboration with the Italian civil protection body in Florence, which participated in the project as an advisor end-user.

Digital Agriculture: Farmers in India are using AI to increase crop yields

The fields had been freshly plowed. The furrows ran straight and deep. Yet, thousands of farmers across Andhra Pradesh (AP) and Karnataka waited to get a text message before they sowed the seeds. The SMS, which was delivered in Telugu and Kannada, their native languages, told them when to sow their groundnut crops.

In a few dozen villages in Telangana, Maharashtra, and Madhya Pradesh, farmers are receiving automated voice calls that tell them whether their cotton crops are at risk of a pest attack, based on weather conditions and crop stage. Meanwhile, in Karnataka, the state government can get price forecasts for essential commodities such as tur (split red gram) three months in advance to plan for the Minimum Support Price (MSP).

Welcome to digital agriculture, where technologies such as Artificial Intelligence (AI), Cloud Machine Learning, Satellite Imagery and advanced analytics are empowering small-holder farmers to increase their income through higher crop yield and greater price control.

AI-based sowing advisories lead to 30% higher yields

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications,” says Dr. Suhas P. Wani, Director, Asia Region, of the International Crop Research Institute for the Semi-Arid Tropics (ICRISAT), a non-profit, non-political organization that conducts agricultural research for development in Asia and sub-Saharan Africa with a wide array of partners throughout the world.

Microsoft, in collaboration with ICRISAT, developed an AI Sowing App powered by the Microsoft Cortana Intelligence Suite, including Machine Learning and Power BI. The app sends participating farmers advisories on the optimal date to sow. The best part: the farmers don’t need to install any sensors in their fields or incur any capital expenditure. All they need is a feature phone capable of receiving text messages.

Flashback to June 2016. While other farmers were busy sowing their crops in Devanakonda Mandal in Kurnool district in AP, G. Chinnavenkateswarlu, a farmer from Bairavanikunta village, decided to wait. Instead of sowing his groundnut crop during the first week of June, as traditional agricultural wisdom would have dictated, he chose to sow three weeks later, on June 25, based on an advisory he received in a text message.

Chinnavenkateswarlu was part of a pilot program that ICRISAT and Microsoft were running for 175 farmers in the state. The program sent farmers text messages on sowing advisories, such as the sowing date, land preparation, soil test based fertilizer application, and so on.

For centuries, farmers like Chinnavenkateswarlu had been using age-old methods to predict the right sowing date. Mostly, they’d choose to sow in early June to take advantage of the monsoon season, which typically lasted from June to August. But the changing weather patterns in the past decade have led to unpredictable monsoons, causing poor crop yields.

“I have three acres of land and sowed groundnut based on the sowing recommendations provided. My crops were harvested on October 28 last year, and the yield was about 1.35 ton per hectare.  Advisories provided for land preparation, sowing, and need-based plant protection proved to be very useful to me,” says Chinnavenkateswarlu, who along with the 174 others achieved an average of 30% higher yield per hectare last year.

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications.”

– Dr. Suhas P. Wani, Director, Asia Region, ICRISAT

To calculate the crop-sowing period, historic climate data spanning 30 years, from 1986 to 2015, for the Devanakonda area in Andhra Pradesh was analyzed using AI. To determine the optimal sowing period, the Moisture Adequacy Index (MAI) was calculated. MAI is the standardized measure used for assessing the degree of adequacy of rainfall and soil moisture to meet the potential water requirement of crops.

The real-time MAI is calculated from the daily rainfall recorded and reported by the Andhra Pradesh State Development Planning Society. The future MAI is calculated from weather-forecasting models for the area provided by US-based aWhere Inc. This data is then downscaled to build predictability and to guide farmers in picking the ideal sowing week, which in the pilot program was estimated to start on June 24 that year.
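The article does not give the MAI formula, but a common textbook definition is the ratio of actual to potential evapotranspiration, which can be tracked with a simple soil-water "bucket" balance. The sketch below uses that simplified definition and an assumed soil capacity, not the Planning Society's exact procedure:

```python
def weekly_mai(rainfall_mm, pet_mm, soil_capacity_mm=100.0):
    """Per-week MAI = actual / potential evapotranspiration.

    rainfall_mm, pet_mm: weekly rainfall and potential ET (mm).
    A single soil 'bucket' carries unused moisture into later weeks.
    """
    storage = 0.0  # plant-available soil moisture (mm)
    mai = []
    for rain, pet in zip(rainfall_mm, pet_mm):
        available = storage + rain
        actual_et = min(available, pet)  # ET limited by available water
        storage = min(available - actual_et, soil_capacity_mm)
        mai.append(actual_et / pet if pet > 0 else 1.0)
    return mai

# A wet week fully meets crop demand (MAI = 1.0); MAI then decays
# through a dry spell as the soil bucket empties.
print(weekly_mai([40, 0, 0], [30, 30, 30]))
```

A sowing advisory could then trigger when the forecast MAI for the coming weeks stays above a crop-specific threshold.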

Ten sowing advisories were initiated and disseminated until the harvesting was completed. The advisories contained essential information including the optimal sowing date, soil test based fertilizer application, farm yard manure application, seed treatment, optimum sowing depth, and more. In tandem with the app, a personalized village advisory dashboard provided important insights into soil health, recommended fertilizer, and seven-day weather forecasts.

“Farmers who sowed in the first week of June got meager yields due to a long dry spell in August; while registered farmers who sowed in the last week of June and the first week of July and followed advisories got better yields and are out of loss,” explains C Madhusudhana, President, Chaitanya Youth Association and Watershed Community Association of Devanakonda.

In 2017, the program was expanded to touch more than 3,000 farmers across the states of Andhra Pradesh and Karnataka during the Kharif crop cycle (rainy season) for a host of crops including groundnut, ragi, maize, rice and cotton, among others. The increase in yield ranged from 10% to 30% across crops.

Pest attack prediction enables farmers to plan

Microsoft is now taking AI in agriculture a step further. A collaboration with United Phosphorus Limited (UPL), India’s largest producer of agrochemicals, led to the creation of the Pest Risk Prediction API, which again leverages AI and machine learning to indicate the risk of a pest attack in advance. Common pests such as jassids, thrips, whitefly, and aphids can cause serious damage to crops and reduce crop yield. To help farmers take preventive action, the Pest Risk Prediction App, which provides guidance on the probability of pest attacks, was launched.

“Our collaboration with Microsoft to create a Pest Risk Prediction API enables farmers to get predictive insights on the possibility of pest infestation. This empowers them to plan in advance, reducing crop loss due to pests and thereby helping them to double the farm income.”

– Vikram Shroff, Executive Director, UPL Limited

In the first phase, about 3,000 marginal farmers with less than five acres of landholding across 50 villages in Telangana, Maharashtra, and Madhya Pradesh are receiving automated voice calls for their cotton crops. The calls indicate the risk of pest attacks based on weather conditions and crop stage, in addition to the sowing advisories. The risk is classified as High, Medium, or Low, specific to each district in each state.
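The published risk classes can be illustrated as a simple thresholding step on a model's predicted attack probability. The cutoffs, district labels, and probabilities below are invented for illustration; the actual classification logic is not public:

```python
def classify_risk(probability, low_cut=0.3, high_cut=0.6):
    """Bucket a predicted pest-attack probability into a risk class."""
    if probability >= high_cut:
        return "High"
    if probability >= low_cut:
        return "Medium"
    return "Low"

def district_advisories(predictions):
    """Map {district: attack probability} to {district: risk class}."""
    return {district: classify_risk(p) for district, p in predictions.items()}

# Hypothetical model outputs for three districts in one state.
print(district_advisories({"District A": 0.72, "District B": 0.41, "District C": 0.12}))
```

Each district's class would then drive the wording of the automated voice call sent to farmers in that district.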

“Our collaboration with Microsoft to create a Pest Risk Prediction API enables farmers to get predictive insights on the possibility of pest infestation. This empowers them to plan in advance, reducing crop loss due to pests and thereby helping them to double the farm income,” says Vikram Shroff, Executive Director, UPL Limited.

Price forecasting model for policy makers

Predictive analysis in agriculture is not limited to crop growing alone. The government of Karnataka will start using price forecasting for agricultural commodities, in addition to sowing advisories for farmers in the state. Commodity prices for items such as tur, of which Karnataka is the second largest producer, will be predicted three months in advance for major markets in the state.

At present, price forecasting for agricultural commodities, based on historical data and short-term arrivals, is used by the state government to protect farmers from price crashes or to shield the population from high inflation. However, such accurate data collection is expensive and can be subject to tampering.

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers.”

– Dr. T.N. Prakash Kammardi, Chairman, KAPC, Government of Karnataka

Microsoft has developed a multivariate agricultural commodity price forecasting model to predict future commodity arrival and the corresponding prices. The model uses remote sensing data from geo-stationary satellite images to predict crop yields through every stage of farming.

This data, along with other inputs such as historical sowing area, production, yield, and weather, is used in an elastic-net framework to predict the timing of the arrival of grains in the market as well as their quantum, which would determine their pricing.
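An elastic net is a linear model whose loss adds both an L1 (sparsity) and an L2 (shrinkage) penalty to the squared error. The sketch below fits one by plain gradient descent on synthetic numbers; it illustrates the technique only, not Microsoft's actual pipeline or features:

```python
def elastic_net_fit(X, y, alpha=0.1, l1_ratio=0.5, lr=0.01, steps=5000):
    """Minimize MSE + alpha * (l1_ratio * |w|_1 + (1 - l1_ratio) * |w|_2^2 / 2).

    X: list of feature rows, y: targets. Returns the weight vector.
    """
    n, d = len(X), len(X[0])
    w = [0.0] * d
    for _ in range(steps):
        grad = [0.0] * d
        for xi, yi in zip(X, y):
            pred = sum(wj * xj for wj, xj in zip(w, xi))
            for j in range(d):
                grad[j] += 2.0 * (pred - yi) * xi[j] / n  # MSE gradient
        for j in range(d):
            sign = (w[j] > 0) - (w[j] < 0)  # subgradient of the L1 term
            grad[j] += alpha * (l1_ratio * sign + (1.0 - l1_ratio) * w[j])
            w[j] -= lr * grad[j]
    return w

# Toy series: "arrivals" scale linearly with a single input feature.
X = [[1.0], [2.0], [3.0], [4.0]]
y = [3.0, 6.0, 9.0, 12.0]
w = elastic_net_fit(X, y)
print(w)  # close to, but shrunk slightly below, the true slope of 3
```

In practice one would use a library implementation (e.g., scikit-learn's ElasticNet) with cross-validated alpha and l1_ratio over the sowing-area, production, and weather features the article mentions.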

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers. We believe that Microsoft’s technology will support these innovative experiments which will help us transform the lives of the farmers in our state,” says Dr. T.N. Prakash Kammardi, Chairman, Karnataka Agricultural Price Commission, Government of Karnataka.

The model, currently being used to predict the price of tur, is scalable and time-efficient, and can be generalized to many other regions and crops.

AI in agriculture is just getting started

Shifting weather patterns, such as increases in temperature, changes in precipitation levels, and changes in groundwater density, can affect farmers, especially those who depend on timely rains for their crops. Leveraging the cloud and AI to provide advisories for sowing, pest control, and commodity pricing is a major initiative toward increasing income and providing stability for the agricultural community.

“Indian agriculture has been traditionally rain dependent and climate change has made farmers extremely vulnerable to crop loss. Insights from AI through the agriculture life cycle will help reduce uncertainty and risk in agriculture operations. Use of AI in agriculture can potentially transform the lives of millions of farmers in India and world over,” says Anil Bhansali, CVP C+E and Managing Director, Microsoft India (R&D) Pvt. Ltd.

Taking a leap in bioinspired robotics

Mechanical engineer Sangbae Kim builds animal-like machines for use in disaster response.


In the not so distant future, first responders to a disaster zone may include four-legged, dog-like robots that can bound through a fire or pick their way through a minefield, rising up on their hind legs to turn a hot door handle or punch through a wall.

Such robot-rescuers may be ready to deploy in the next five to 10 years, says Sangbae Kim, associate professor of mechanical engineering at MIT. He and his team in the Biomimetic Robotics Laboratory are working toward that goal, borrowing principles from biomechanics, human decision-making, and mechanical design to build a service robot that Kim says will eventually do “real, physical work,” such as opening doors, breaking through walls, or closing valves.

“Say there are toxic gases leaking in a building, and you need to close a valve inside, but it’s dangerous to send people in,” Kim says. “Now, there is no single robot that can do this kind of job. I want to create a robotic first responder that can potentially do more than a human and help in our lives.”

To do this, Kim, who was awarded tenure this year, is working to fuse the two main projects in his lab: the MIT Cheetah, a four-legged, 70-pound robot that runs and jumps over obstacles autonomously; and HERMES, a two-legged, teleoperated robot whose movements and balance are controlled remotely by a human operator, much like a marionette or a robotic “Avatar.”

“I imagine a robot that can do some physical, dynamic work,” Kim says. “Everybody is trying to find overlapping areas where you’re excited about what you’re working on, and it’s useful. A lot of people are excited to watch sports because when you watch someone moving explosively, it is hypothesized to trigger the brain’s  ‘mirror neurons’ and you feel that excitement at the same time. For me, when my robots perform dynamically and balance, I get really excited. And that feeling has encouraged my research.”

A drill sergeant turns roboticist

Kim was born in Seoul, South Korea, where he says his mother remembers him as a tinkerer. “Everything with a screw, I would take apart,” Kim says. “And she said the first time, almost everything broke. After that, everything started working again.”

He attended Yonsei University in the city, where he studied mechanical engineering. In his second year, as is mandatory in the country, he and other male students joined the South Korean army, where he served as a drill sergeant for two and a half years.

“We taught [new recruits] every single detail about how to be a soldier, like how to wear shirts and pants, buckle your belt, and even how to make a fist when you walk,” Kim recalls. “The day started at 5:30 a.m. and didn’t end until everyone was asleep, around 10:30 p.m., and there were no breaks. Drill sergeants are famous for being mean, and I think there’s a reason for that — they have to keep very tight schedules.”

After fulfilling his military duty, Kim returned to Yonsei University, where he gravitated toward robotics, though there was no formal program in the subject. He ended up participating in a class project that challenged students to build robots to perform specific tasks, such as capturing a flag, and then to compete, bot to bot, in a contest that was similar to MIT’s popular Course 2.007 (Design and Manufacturing), which he now co-teaches.

“[The class] was a really good motivation in my career and made me anchor on the robotic, mechanistic side,” Kim says.

A bioinspired dream

In his last year of college, Kim developed a relatively cheap 3-D scanner, which he and three other students launched commercially through a startup company called Solutionix, which has since expanded on Kim’s design. However, in the early stages of the company’s fundraising efforts, Kim came to a realization.

“As soon as it came out, I lost excitement because I was done figuring things out,” Kim says. “I loved the figuring-out part. And I realized after a year of the startup process, I should be working in the beginning process of development, not so much in the maturation of products.”

After enabling the product’s first sales, he left the country and headed for Stanford University, where he enrolled in the mechanical engineering graduate program. There, he experienced his first taste of design freedom.

“That was a life-changing experience,” Kim says. “It was a more free, creativity-respecting environment — way more so than Korea, where it’s a very conservative culture. It was quite a culture shock.”

Kim joined the lab of Mark Cutkosky, an engineering professor who was looking for ways to design bioinspired robotic machines. In particular, the team was trying to develop a climbing robot that mimicked the gecko, which uses tiny hairs on its feet to help it climb vertical surfaces. Kim adapted this hairy mechanism in a robot and found that it worked.

“It was 2:30 a.m. in the lab, and I couldn’t sleep. I had tried many things, and my heart was thumping,” Kim recalls. “On some replacement doors with tall windows, [the robot] climbed up smoothly, using the world’s first directional adhesives, that I invented. I was so excited to show it to the others, I sent them all a video that night.”

He and his colleagues launched a startup to develop the gecko robot further, but again, Kim missed the thrill of being in the lab. He left the company soon after, for a postdoc position at Harvard University, where he helped to engineer the Meshworm, a soft, autonomous robot that inched across a surface like an earthworm. But even then, Kim was setting his sights on bigger designs.

“I was moving away from small robots because it’s very difficult for them to do real, physical work,” Kim says. “And so I decided to develop a larger, four-legged robot for human-level physical tasks — a long-term dream.”

Searching for principles

In 2009, Kim accepted an assistant professorship in MIT’s Department of Mechanical Engineering, where he established his Biomimetic Robotics Lab and set a specific research goal: to design and build a four-legged, cheetah-inspired robot.

“We chose the cheetah because it was the fastest of all land animals, so we learned its features the best, but there are many animals with similarities [to cheetahs],” Kim says. “There are some subtle differences, but probably not ones that you can learn the design principles from.”

In fact, Kim quickly learned that in some cases, it may not be the best option to recreate certain animal behaviours in a robot.

“A good example in our case is the galloping gait,” Kim says. “It’s beautiful, and in a galloping horse, you hear a da-da-rump, da-da-rump. We were obsessed to recreate that. But it turns out galloping has very few advantages in the robotics world.”

Animals prefer specific gaits at a given speed due to a complex interaction of muscles, tendons, and bones. However, Kim found that the cheetah robot, powered with electric motors, exhibited very different kinetics from its animal counterpart. For example, with high-power motors, the robot was able to trot at a steady clip of 14 miles per hour — much faster than animals can trot in nature.

“We have to understand what is the governing principle that we need, and ask: Is that a constraint in biological systems, or can we realize it in an engineering domain?” Kim says. “There’s a complex process to find out useful principles overarching the differences between animals and machines. Sometimes obsessing over animal features and characteristics can hinder your progress in robotics.”

A “secret recipe”

In addition to building bots in the lab, Kim teaches several classes at MIT, including 2.007, which he has co-taught for the past five years.

“It’s still my favourite class, where students really get out of this homework-exam mode, and they have this opportunity to throw themselves into the mud and create their own projects,” Kim says. “Students today grew up in the maker movement and with 3-D printing and Legos, and they’ve been waiting for something like 2.007.”

Kim also teaches a class he created in 2013 called Bioinspired Robotics, in which 40 students team up in groups of four to design and build a robot inspired by biomechanics and animal motions. This past year, students showcased their designs in Lobby 7, including a throwing machine, a trajectory-optimizing kicking machine, and a kangaroo machine that hopped on a treadmill.

Outside of the lab and the classroom, Kim is studying another human motion: the tennis swing, which he has sought to perfect for the past 10 years.

“In a lot of human motion, there’s some secret recipe, because muscles have very special properties, and if you don’t know them well, you can perform really poorly and injure yourself,” Kim says. “It’s all based on muscle function, and I’m still figuring out things in that world, and also in the robotics world.” (Jennifer Chu)

3 Areas Where AI Can Boost Productivity Of Mechanical Engineering

Artificial intelligence (AI) has become a prominent approach among emerging technologies and has now reached the traditional sector of mechanical engineering. In an article published in the German magazine Produktion (“Production”), Accenture gives an overview and describes how large productivity gains can arise and how mechanical engineering companies can open themselves up to them.


Up to 39 percent higher return on sales by 2035: that is the quantified boost that artificial intelligence could give machine and plant construction. According to Accenture, AI not only allows remarkable efficiency gains but also enables new growth.

Algorithms For Workflows And Decisions

Technology is largely ready for the market; initial experience and best practices already exist. Even mechanical engineering companies can tap significant benefits through the use of data, machine learning, and other AI procedures. Algorithms can speed up processes or take them over completely, help employees access knowledge, and relieve them of routine decisions. Used correctly, AI opens up new approaches to data analysis and may enable the development of novel, much-improved industrial products with considerable added value. In practice, according to Accenture, these are the most important scenarios in mechanical and plant engineering:

The three scenarios and their complexity:

  • Smart data evaluation (complexity: low)
  • Smart working (complexity: medium)
  • Smart products (complexity: high)

Known / Common Tools
  • Digitized data management and workflows
  • AI algorithms or services (machine learning, machine vision, speech processing)
  • Chatbots and voice assistants, co-bots
  • AI-based apps, e.g. robotic process automation
  • Platforms such as IBM Watson, Microsoft Cortana, Google Assistant, Amazon Alexa

Prerequisites
  • Smart data evaluation: IT-supported data management; willingness to use cloud technology and Analytics-as-a-Service or comparable on-premise solutions; AI algorithms or services (machine learning, machine vision)
  • Smart working: IT-based data management; digital management of knowledge; digital mapping of work processes; technical requirements for co-bot use
  • Smart products: digitized data management and workflows; availability of analytics, IIoT, digital twins and digital threads; AI algorithms or services (machine learning, machine vision, speech processing); AI-capable IT, i.e. IaaS and PaaS services

Smart Data Evaluation

Algorithms for smart data collection and evaluation: image recognition, big-data analytics, and machine learning. Machine learning is already an integral part of even the simplest IoT solutions, but only targeted exploitation unlocks AI's full value contribution. Data that would otherwise lie unused awaits targeted reuse: the algorithms discover patterns, detect deviations, and improve decisions, significantly increasing effectiveness and efficiency in purchasing, planning, human resources, finance, and sales. One example is helping a large manufacturing company significantly strengthen its own sourcing through big data and artificial intelligence.

Smart Working

AI is used to simplify or completely take over routine tasks. Trained AI helps employees work more independently, for instance through task- and context-specific access to knowledge, and the systems keep improving. "Supporting" AIs are already feasible for internal processes such as manufacturing. Co-bots learn simple manual operations and take them over with great flexibility.

Smart Products

In industrial products and solutions, AI increases utility, opening up access to new revenue streams and business models. Already implementable: AI as a new user interface, with industry solutions that recognize their users and learn their ways of working, preferences, and intentions. A first step is the use of speech recognition in industrial products and for operating industry software. Also feasible today are self-configuring machines that adjust themselves, and self-optimizing machines that let a production line independently improve its own effectiveness and efficiency.

Up next: self-controlling machines, e.g. completely independent warehouse robots and self-propelled forage trucks.
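The "detects deviations" capability described above can be sketched as simple statistical anomaly detection. The sensor readings and threshold below are illustrative assumptions, not anything from the article:

```python
from statistics import mean, stdev

def detect_deviations(readings, threshold=3.0):
    """Flag readings that deviate from the mean by more than
    `threshold` standard deviations (a plain z-score check)."""
    mu = mean(readings)
    sigma = stdev(readings)
    return [x for x in readings if abs(x - mu) / sigma > threshold]

# Illustrative machine-sensor temperatures with one obvious outlier
temps = [70.1, 69.8, 70.3, 70.0, 69.9, 70.2, 95.0, 70.1]
print(detect_deviations(temps, threshold=2.0))  # → [95.0]
```

Production systems would of course use richer models than a z-score, but the principle is the same: learn what "normal" looks like from the data, then surface whatever falls outside it.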

AI Is Ready To Be Used – If A Company Is Prepared

With solutions already available on the market, companies can implement applications for most of these scenarios today. To capture the full benefit of the new processes, however, appropriate strategies and plans are needed: for collecting and using data, for shaping the interaction between employees and AI, and for providing the necessary technology, as well as governance structures that ensure the ethical use of AI.

Next: Certification for Artificial Intelligence Planned

The German Research Center for Artificial Intelligence (Deutsches Forschungszentrum für Künstliche Intelligenz, DFKI) and TÜV Süd have announced a cooperation to certify systems that use artificial intelligence (AI) and thereby develop a 'TÜV for algorithms'. The experts will study the learning behaviour of AI systems in order to be able to control their reactions. To this end, they are focusing on the development of an open digital platform for OEMs, suppliers, and technology companies, named 'Genesis', to validate artificial intelligence and thus create the basis for certification.

AI systems are increasingly used in the electronics of autonomous vehicles so that they can safely master the vast number of possible traffic situations; TÜV Süd experts estimate 100 million situations per fully automated driving function. Such systems do not react deterministically and are therefore not exactly predictable: they instead learn from traffic (deep learning) and draw their own conclusions about the 'right reaction', making autonomous decisions. To ensure they always act as traffic safety demands, TÜV Süd will validate and certify the underlying algorithms. In the future, users of the new Genesis platform will be able to upload their data and the modules they use, and receive, after verification, a corresponding certificate of functional safety.
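Validating a driving function against huge numbers of simulated situations, as described above, amounts to a test harness that replays scenarios and counts safety violations. Everything below is an illustrative toy, not the actual Genesis platform:

```python
import random

def braking_decision(distance_m, speed_mps):
    """Toy driving function under test: brake when the
    time-to-collision drops below a 2-second safety margin."""
    return "brake" if distance_m / max(speed_mps, 0.1) < 2.0 else "cruise"

def validate(decision_fn, n_situations=100_000, seed=42):
    """Replay many randomized traffic situations and count those in
    which the function fails the braking safety requirement."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(n_situations):
        distance = rng.uniform(1.0, 200.0)   # metres to obstacle
        speed = rng.uniform(1.0, 40.0)       # metres per second
        must_brake = distance / speed < 2.0  # the safety requirement
        if must_brake and decision_fn(distance, speed) != "brake":
            failures += 1
    return failures

print(validate(braking_decision))  # → 0 for this deterministic toy function
```

For a learning, non-deterministic system the same harness would report a failure rate rather than a clean zero, which is precisely why a certification body wants to inspect the underlying algorithms and not just sample their outputs.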

 

How AI is helping engineers invent new ideas

Smart, Wi-Fi-enabled lighting has been available for several years now, but there’s still a giddy joy in illuminating a room with a voice command or a tap of a smartphone app            -Amit Katwala

Products such as Philips Hue even allow users to choose exactly which shade of light they want to use.

A new invention takes that one step further, allowing smartphone users to ‘paint’ rooms with the colours they want to see on their walls – choosing different hues and intensities – and then have the lighting system match it as closely as possible.
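Matching a painted colour "as closely as possible" is at heart a nearest-colour search. A minimal sketch, assuming RGB-capable bulbs, a small achievable palette, and plain Euclidean distance (all illustrative choices, not details of the actual invention):

```python
import math

def closest_bulb_setting(target_rgb, achievable_rgbs):
    """Return the achievable bulb colour closest to the colour the
    user 'painted', using Euclidean distance in RGB space."""
    return min(achievable_rgbs, key=lambda c: math.dist(c, target_rgb))

# Illustrative palette of colours a hypothetical bulb can render
palette = [(255, 244, 229), (255, 147, 41), (64, 156, 255), (255, 0, 0)]
print(closest_bulb_setting((250, 140, 50), palette))  # → (255, 147, 41)
```

A real lighting system would likely measure distance in a perceptually uniform colour space rather than raw RGB, but the matching step is the same idea.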

But perhaps the most fascinating thing about this invention – this lightbulb moment – is that it was inspired by artificial intelligence. Swiss technology company Iprova are taking what they call a data-driven approach to research and development and using artificial intelligence to come up with new ideas.

“Traditionally, R&D teams tasked with developing inventions consist of individuals with particular areas of specialist knowledge,” explains Iprova CEO and founder Julian Nolan. “This gives individuals comprehensive insight into specific technologies, but less in-depth information about things outside of that area.”

Augmenting human intelligence

His company’s technology draws on a wide range of information and “uses machine learning and other technologies to augment human intelligence,” says Nolan. He argues that bringing in advances from other unrelated domains can help engineers come up with ideas more quickly, and bring together technologies from seemingly distant areas.

For example, Iprova were able to connect advances in autonomous vehicle technology with a problem that needed to be solved in personal healthcare. Their machine learning algorithms spotted that the advanced sensors built into an autonomous vehicle could be used to take measurements from its human passenger.

By controlling small variations in its motion, the vehicle can generate precise forces that can be used to carry out human health checks. This technology could, for example, be used to assess a passenger’s balance or core stability, or to evaluate the body’s response to certain stimuli. Other AI-aided inventions include a way of incorporating gesture recognition more easily by using a smartphone’s existing display, and a heat-sensitive coating that lets users check they’re wearing a protective face mask correctly.

Futurist Maurice Conti says artificial intelligence is moving from the ‘generative’ stage to the ‘intuitive’. In a TED talk, he highlights the work of Airbus, which has used generative AI to help design lighter and better parts for concept planes.

But this is the next step – as deep-learning systems start to develop intuition, and come up with ideas that their inventors couldn’t have. “Very soon, you’ll literally be able to show something you’ve made, you’ve designed, to a computer, and it will look at it and say, ‘Sorry, homie, that’ll never work. You have to try again,’” says Conti. “Or you could ask it if people are going to like your next song or your next flavour of ice cream. Or, much more important, you could work with a computer to solve a problem that we’ve never faced before.”

The end of the inventor?

He’s worked on a number of projects, including a 3D-printed bridge in Amsterdam, and a car chassis designed based on data collected from a race-car with a nervous system. “We instrumented it with dozens of sensors, put a world-class driver behind the wheel, took it out to the desert and drove the hell out of it for a week,” Conti explains.

“And the car’s nervous system captured everything that was happening to the car. We captured four billion data points; all of the forces that it was subjected to. And then we did something crazy. We took all of that data, and plugged it into a generative-design AI we call ‘Dreamcatcher’.” This was the result – a car chassis designed using machine learning and artificial intelligence, based on data instead of human input.

(Credit: TED/Maurice Conti)

So does this spell the end for would-be Edisons? Is the humble inventor just another job that’s set to be replaced by machines? Some certainly think so. “The use of artificial intelligence to create patentable inventions is the next stage in the evolution of innovation,” says Peter Finnie, managing partner at intellectual property firm Gill, Jennings & Every.

Nolan says that the explosion of information and increasing levels of convergence are making it increasingly difficult for companies to envisage future inventions. “As both the amount of information and the level of convergence are increasing at an ever-accelerating pace, it is possible that data-driven invention will become critical for the success of both product and service-based companies in the coming years.”

Conti talks about technology “amplifying our cognitive abilities so we can imagine and design things that were simply out of our reach as plain-old un-augmented humans.”