AI in CNC machining

Artificial intelligence has found its way into almost every industry. When combined with computer numerical control (CNC) machining, AI can help remove manual labor from redundant tasks. Generally, the software's algorithm can be designed so that, after receiving feedback in a specific situation, a decision can be actualized with or without human consultation. For repetitive tasks requiring no consultation, the software can execute the required steps on its own, eliminating manual labor.

For example, you can design a program to shut off a car that is left running unattended for a few minutes. If you leave the car running in a parking lot or your garage, embedded AI code might send you a message alerting you that the engine is on. If there is no response on your part, the algorithm will shut the engine off after eight minutes.

Similar algorithms can be used in other industries as well, such as smartphones that provide situational alerts, or devices that help you understand the dangers around you.

So, now that you know which industries can use artificial intelligence, let's have a look at how AI can help you speed up your work. Here are some things artificial intelligence can do better than human beings.

Search the Internet

Most of us have heard of Google's RankBrain. Did you know that it was built on artificial intelligence? It is a machine-learning-based AI system that helps handle Google's search queries. Because RankBrain understands words and phrases, it can predict top-ranking pages more effectively than its human counterparts. Although it is still being tweaked, the base algorithm behind RankBrain remains unchanged.

Work In Inhumane Conditions

Robots don't have feelings, which is why they can survive in places without oxygen, where no human being could. This makes artificial intelligence essential for surveillance in deep oceanic trenches, radioactive locations, and even outer space. The main problem with AI in these settings is that it still waits for human intervention on certain crucial decisions. This makes the process not only time-consuming but also, to some extent, useless when the AI needs to function independently.

Act as Digital Assistants

If you own an iPhone or a Windows 10 phone or PC, you'll have come across Siri or Cortana. Both are forms of artificial intelligence that help you achieve what you need. Likewise, if you use Google's voice assistant on an Android device, you'll see how it responds to your commands, even opening your task list and checking off tasks as instructed. While this might seem freaky and surreal, with a proper set of coded instructions, an artificial intelligence can achieve the result you're looking for. All it needs is a decision-making loop.
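To make the idea concrete, here is a minimal sketch of such a decision-making loop, using the unattended-car example from earlier. The function name, inputs, and the eight-minute limit follow the article's scenario; everything else is an illustrative assumption, not a real automotive API.

```python
SHUTOFF_AFTER_SECONDS = 8 * 60  # the eight-minute limit from the example

def idle_decision_loop(engine_on, owner_responded, idle_seconds):
    """Decide what the controller should do on this polling cycle."""
    if not engine_on:
        return "no_action"          # nothing to supervise
    if owner_responded:
        return "no_action"          # owner confirmed; keep the engine running
    if idle_seconds == 0:
        return "send_alert"         # first detection: notify the owner
    if idle_seconds >= SHUTOFF_AFTER_SECONDS:
        return "shut_off_engine"    # no response in time: act autonomously
    return "wait"                   # keep polling until the timeout
```

Run repeatedly, the loop first alerts (`idle_seconds == 0`), waits while the owner has a chance to respond, and only escalates to shutting off the engine once the timeout passes with no response.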

Artificial Intelligence in Medical Science

One of the biggest achievements of artificial intelligence is perhaps its progress in the medical field. Even if a physician has enough exposure to patients, proper and accurate diagnosis can be a problem. With artificial intelligence in place, however, the process becomes not only smoother but also more accurate.

On average, a physician spends 160 hours per month keeping track of the latest medical breakthroughs. Remembering those breakthroughs along with a patient's latest symptoms, and applying them in routine diagnosis, can become problematic for the human brain. By comparison, IBM Watson can produce a diagnosis within a fraction of a second. Additionally, the AI's reported accuracy rate for diagnosing lung cancer is 90%, quite high compared with the roughly 50% rate of veteran human physicians.

At the end of the day, even though AI has found its way into different industries, it cannot totally replace human intelligence because it lacks general reasoning. While a robot or AI may be perfect at each of the tasks outlined above individually, the same might not be true when it comes to completing a different set of tasks for which it has not been programmed. This makes it tough for AI to replace human intelligence unless more research is done.

The New WALK-MAN: A Look at IIT’s Multi-Faceted Robotic Endeavor

Standing 1.85 meters tall and made of lightweight metals, iron, and plastics, the WALK-MAN humanoid robot is controlled remotely by a human operator through a virtual interface. - Sam Davis


The heart of the Istituto Italiano Di Tecnologia (IIT, Genoa, Italy) robotics strategy has always been the development of state-of-the-art mechatronics systems. This has led to the creation of internationally recognized humanoid robots and pioneering quadrupeds. IIT’s family of cutting-edge robots isn’t limited to legged systems, though. The Mechatronics program has explored completely new designs and operational paradigms, including materials, compliance, soft bodies, and distributed intelligence.

Besides its advanced integrated robot platforms, IIT researchers have developed component-level systems, including novel patented high-performance actuation systems, variable impedance actuators, advanced fingertip as well as large-area tactile sensors, exoskeletons (leg, arm, hand), instrumented haptic devices, novel medical systems, a variety of force/torque sensors, dexterous manipulators (e.g., SoftHand), and advanced industrial end-effectors.

The IIT Mechatronics program is developing new bodies for its integrated robotic systems, particularly for humanoid and legged robots. In these domains, researchers will focus on combining locomotion, manipulation, whole-body capabilities, new materials, and high-dynamics structures. As in most areas of engineering, it will be crucial to optimize energy use. To achieve this, they will use innovative lightweight and sustainable materials, improve mechatronics to better use the available power, and develop robots with more natural gaits and locomotion skills, coupled with enhanced actuator design.

Improvements in ruggedness, robustness, and reliability will require novel kinematics, shock-absorbing materials, and lightweight designs optimized for indoor and outdoor use in dirty and wet environments. They will develop highly integrated actuation solutions and decentralized diagnostics inspired by the new concept of “smart, high-performance mechatronics.”

Looking at the market, systems have been designed for prompt, affordable market applications. Here, the engineering goals require that they reduce mechanical complexity (fewer parts, no exposed wires, robust sensors), boost the payload-to-weight ratio, and improve the manipulation skills (dexterous hands, a wider range of movement in the shoulder and wrist). The reduced complexity will lower the cost of the robots, which is particularly important for the so-called companion robots. These systems will undergo extensive field-testing with end users, in line with the Technology Transfer Mission.

Delving Deeper into Locomotion

Advanced dynamical control and whole-body loco-manipulation are vital for complex human-like robots, particularly for locomotion and human-robot collaboration. In robot locomotion, where a flexible control strategy demands step recovery, walking, and running on possibly uneven terrains, advances will require the close integration of engineering (sensing, actuation, and mechanics), gait generation, dynamic modelling, and control.

The Mechatronics program will investigate locomotion, gait generation, and gait control in both bipeds and quadrupeds. With several robust platforms available, they will develop dynamic locomotion profiles. These will advance locomotion and loco-manipulation, particularly for operation in rough, hazardous, and poorly conditioned terrains, where wheeled and tracked vehicles cannot operate. The current locomotion capabilities on flat and moderately rough terrain will be extended to very challenging environments (e.g., soft and unstable terrains).

The locomotion framework will reach higher levels of autonomy, allowing automatic selection of the most suitable gaits/behaviours for the environment. Combinations of machine-learning and optimization methods will be used to plan movements and control the robot.

With complex systems such as humanoids, it’s vital to achieve simultaneous manipulation and control, while maintaining operational parameters such as balance, walking, and reaching. This requires a new advanced approach to control. Torque regulation (through hardware and software) will be critical to success in this domain. IIT robots feature fully integrated torque sensing and controllers. In the near future, exciting developments in controller design will advance the functionality of these robots, and fill a crucial gap in humanoid technology.

IIT research in soft robotics will aim to produce soft, lightweight, sensitive structures, such as manipulators and grippers. They will exploit additive manufacturing technologies and customized sewing machines to generate 3D-fiber-reinforced structural composites that feature large deformation capacity, high load capacity, and variable stiffness. This approach may also influence the design of rigid robots by replacing rigid joints with soft, compliant joints or soft and elastic actuators (e.g., McKibben muscles).

IIT’s Soft Robotics program will focus on developing continuum robots (i.e., with similarities to the elephant trunk and cephalopod arms) that can grow, evolve, self-heal, and biodegrade. The goal is for these continuum robots to traverse confined spaces, manipulate objects, and reach difficult-to-access sites. Potential applications will be in medicine, space, inspection, and search-and-rescue missions. The Soft Robotics program will require an unprecedented multidisciplinary effort combining biology (e.g., the study of plants), materials science (e.g., responsive polymer synthesis), engineering, and biomechanics.

The Walk-MAN robot (Credit: IIT-Istituto Italiano di Tecnologia)

Researchers at IIT successfully tested their new version of a WALK-MAN humanoid robot for supporting emergency response teams in fire incidents (see figure). The robot is able to locate the fire position and walk toward it and then activate an extinguisher to eliminate the fire. During the operation, it collects images and transmits these back to emergency teams, who can evaluate the situation and guide the robot remotely. The new WALK-MAN robot’s design consists of a lighter upper body and new hands, helping to reduce construction cost and improve performance.

During the final test, the WALK-MAN robot dealt with a scenario representing an industrial plant damaged by an earthquake that was experiencing gas leaks and fire, which is no doubt a dangerous situation for humans. The scenario was recreated in IIT laboratories, where the robot was able to navigate through a damaged room and perform four specific tasks: opening and traversing the door to enter the zone; locating the valve that controls the gas leak and closing it; removing debris in its path; and finally identifying the fire and activating a fire extinguisher.

The robot is controlled by a human operator through a virtual interface and a sensorized suit, which lets the operator drive the robot very naturally, effectively controlling its manipulation and locomotion like an avatar. The operator guides the robot from a station located remotely from the accident site, receiving images and other information from the robot's perception systems.

The first version of WALK-MAN was released in 2015, but researchers wanted to introduce new materials and optimize the design to reduce fabrication cost and improve performance. The new version has a lighter upper body, whose realization took six months and involved a team of about 10 people coordinated by Nikolaos Tsagarakis, a researcher at IIT.

The new WALK-MAN is a humanoid robot 1.85 meters tall, made of lightweight metals such as Ergal (60%) and magnesium alloys (25%), along with titanium, iron, and plastics. Researchers reduced its weight by 31 kilos, from the original 133 kilos to 102 kilos, to make the robot more dynamic: its legs can move faster because they have a lighter upper-body mass to carry. The higher dynamic performance allows the robot to react faster with its legs, which is very important for adapting its pace to rough terrain and variable interaction scenarios. The lighter upper body also reduces energy consumption; WALK-MAN can operate on a smaller battery (1 kWh) for about two hours.

The lighter upper body is made of magnesium alloys and composite structures and is powered by a new version of lightweight soft actuators. Its performance has been improved, reaching a higher payload (10 kg per arm) than the original (7 kg per arm). Thus, it can carry heavy objects around and sustain them for more than 10 minutes.

The new upper body is also more compact (62-cm shoulder width and 31-cm torso depth), giving the robot greater flexibility to pass through standard doors and narrow passages.

The hands are a new version of SoftHand developed by Centro Ricerche E. Piaggio of the University of Pisa (a group led by Prof. A. Bicchi) in collaboration with IIT. They are lighter, thanks to the composite material used for the fingers, and they have a better, more human-like finger-to-palm size ratio that allows WALK-MAN to grasp a variety of object shapes. Despite the weight reduction, the hands retain the strength of the original version, as well as its versatility in handling and physical robustness.

The WALK-MAN body is controlled by 32 motors and control boards, four force/torque sensors at the hands and feet, and two accelerometers for controlling its balance. Its joints exhibit elastic movement, allowing the robot to be compliant and interact safely with humans and the environment. Its software architecture is based on the XBotCore framework. The WALK-MAN head carries cameras, a 3D laser scanner, and microphone sensors. In the future, it can also be equipped with chemical sensors for detecting toxic agents.

The WALK-MAN robot was designed and implemented by IIT within the project WALK-MAN funded by the European Commission. The project started in 2013 and is now at its final validation phase. The project also involved the University of Pisa in Italy, the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland, the Karlsruhe Institute of Technology (KIT) in Germany, and the Université Catholique de Louvain (UCL) in Belgium. All partners contributed to different aspects of the robot realization: locomotion control, perception capability, affordances and motion planning, simulation tools, and manipulation control.

The validation scenario was defined in collaboration with the Italian civil protection body in Florence, which participated in the project as an advisor end-user.

An AI Primer For Mechatronics


For 65 years, the Turing Test remained unbeaten, until a computer program called "Eugene Goostman" claimed to pass it in 2014. The chatbot, which simulates a 13-year-old Ukrainian boy, did the unthinkable: it fooled a group of human judges into thinking it was the human rather than the live person on the other side of the screen. Alan Turing's original thesis in developing the test was the premise: "Can machines think?"

According to the competition's organizer, Kevin Warwick of Coventry University, "The words Turing test have been applied to similar competitions around the world. However, this event involved the most simultaneous comparison tests than ever before, was independently verified and, crucially, the conversations were unrestricted. We are therefore proud to declare that Alan Turing's test was passed for the first time." Since then, many have debated whether the threshold was indeed crossed then, before, or ever.

Last December, a sound barrier of sorts for the Turing Test was broken by researchers from MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL). Using a deep-learning algorithm, their AI fooled human viewers into accepting synthesized audio as the real sound of what was shown on screen. As a silent video clip played, the program produced sound realistic enough to fool even the most hardened audiophiles. According to the research paper, the authors envision their algorithms one day being used to automatically produce sound for movies and TV shows, as well as to help robots better understand their environments.

Lead author Andrew Owens said: "Being able to predict sound is an important first step toward being able to predict the consequences of physical interactions with the world. A robot could look at a sidewalk and instinctively know that the cement is hard and the grass is soft, and therefore know what would happen if they stepped on either of them."

Unpacking Owens' statement uncovers a lot of artificial-intelligence theory. The algorithm uses a deep-learning approach to train the computer to match sounds to pictures through experience. According to the paper, the researchers spent several months recording close to 1,000 videos containing approximately 46,000 sounds, each representing unique objects being "hit, scraped and prodded with a drumstick."

Typically, artificial intelligence starts with creating an agent to solve a specific problem. Often agents are symbolic or logical, but deep-learning approaches like the MIT example use sub-symbolic neural networks that attempt to emulate how the human brain learns. Autonomous devices, like robots, use machine-learning approaches to combine algorithms with experience.

AI is very complex and draws on theories from a multitude of disciplines, including computer science, mathematics, psychology, linguistics, philosophy, neuroscience, and statistics. For the purposes of this article, it may be best to group modern approaches into two categories: "supervised" and "unsupervised" learning. In supervised learning, the algorithm is trained on data labeled with target values and learns to predict those targets for new inputs, much as labeled variables are used in statistics. In unsupervised learning, there is no notion of a target value; instead, algorithms cluster the data into classes and then determine relationships between inputs and outputs with numerical regression or other methods. The primary difference between the two approaches is that in unsupervised learning the program must connect and label patterns from streams of inputs on its own. In the example above, the algorithm connects the silent video images to the library of drumstick sounds.
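The distinction can be shown in a few lines of pure Python. Below, the "loudness" readings, labels, and function names are invented for illustration (loosely echoing the drumstick example); this is a toy sketch of each learning mode, not the MIT system.

```python
def supervised_predict(train, query):
    """Supervised: labels are given; predict the label of the nearest example."""
    return min(train, key=lambda pair: abs(pair[0] - query))[1]

def unsupervised_cluster(values, k=2, steps=20):
    """Unsupervised: no labels; group values around k learned centroids."""
    centroids = [min(values), max(values)]            # crude initialisation
    for _ in range(steps):
        groups = [[] for _ in range(k)]
        for v in values:                              # assign each value to
            nearest = min(range(k), key=lambda i: abs(v - centroids[i]))
            groups[nearest].append(v)                 # its nearest centroid
        centroids = [sum(g) / len(g) if g else centroids[i]
                     for i, g in enumerate(groups)]   # recompute centroids
    return centroids

labeled = [(0.2, "scrape"), (0.3, "scrape"), (0.9, "hit"), (1.1, "hit")]
print(supervised_predict(labeled, 1.0))              # -> hit
print(unsupervised_cluster([0.2, 0.3, 0.9, 1.1]))    # -> [0.25, 1.0]
```

The supervised function needs the `"scrape"`/`"hit"` labels up front; the unsupervised one discovers the same two groupings from the raw numbers alone, which is exactly the "automatically connect and label patterns" behaviour described above.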

Now that the connections are made between patterns and inputs, the next step is to control behaviour. The most rudimentary AI applications are divided between classifiers ("if shiny then silver") and controllers ("if shiny then pick up"). It is important to note that controllers also classify conditions before performing actions. Classifiers use pattern matching to determine the closest match. In supervised learning, each pattern belongs to a predefined class. A data set is the labeled collection of classes together with the observations received through experience. The more experience a system accumulates, the more classes, and the more data, it has to draw on.
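The classifier/controller split above fits in a few lines, using the article's own "if shiny then silver" and "if shiny then pick up" rules; the dictionary-based object representation is purely an illustrative assumption.

```python
def classify(obj):
    """Classifier: map a perceived condition to a label."""
    return "silver" if obj.get("shiny") else "unknown"

def control(obj):
    """Controller: classify the condition first, then choose an action."""
    label = classify(obj)
    return "pick_up" if label == "silver" else "ignore"

print(classify({"shiny": True}))    # -> silver
print(control({"shiny": True}))     # -> pick_up
print(control({"shiny": False}))    # -> ignore
```

Note how `control` calls `classify` before acting, which is the point made in the text: a controller contains a classifier.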


In robotics, machine learning is required for object manipulation, navigation, localization, mapping, and motion planning, which become increasingly challenging in unstructured environments such as advanced manufacturing and autonomous driving. As a result, deep learning has given birth to many sub-specialties of AI, such as computer vision, speech recognition, and natural language processing. In 1986, Rodney Brooks professed a new theory of AI that led to the greatest advancement in machine intelligence for robotics: his "subsumption architecture" created a paradigm for real-time learning via live interaction through sensor inputs.

In his own words, Brooks said in a 2015 interview, "The work I did in the 80s on what I called subsumption architecture then led directly to the iRobot Roomba. And there are 14 million of them deployed worldwide. And it was also used in the iRobot PackBot, and there were 4,500 PackBots in Iraq and Afghanistan remediating roadside bombs – and by the way, there are some PackBots and Warriors from iRobot inside Fukushima now, using the subsumption architecture. There's a variation of subsumption inside Baxter – we call it behaviour-based now, but it's a descendant of that subsumption architecture. And that's what lets Baxter be aware of different things in parallel; for instance, it's picked something up, it's put it in a box, something goes wrong, and it drops the object, sadly. A traditional robot would just continue and sort of mime putting the thing in the box, but Baxter is aware of that and changes behaviour – that's using the behaviour-based approach, which is a variation on subsumption. So it is part of Baxter's intelligence."
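The core mechanism Brooks describes can be sketched in a toy form: behaviours sit in a priority-ordered stack, and a higher layer suppresses (subsumes) the ones below it whenever its trigger fires. The layer names and sensor flags below are invented to mirror the Baxter anecdote; real subsumption controllers run such layers concurrently rather than in a simple loop.

```python
LAYERS = [
    # (name, trigger, action): earlier entries have higher priority
    ("avoid",   lambda s: s["obstacle"],     "turn_away"),
    ("recover", lambda s: s["dropped_item"], "regrasp"),
    ("work",    lambda s: True,              "place_in_box"),
]

def subsumption_step(sensors):
    """Return the action of the highest-priority behaviour whose trigger fires."""
    for name, trigger, action in LAYERS:
        if trigger(sensors):
            return action
    return "idle"

print(subsumption_step({"obstacle": False, "dropped_item": False}))  # -> place_in_box
print(subsumption_step({"obstacle": False, "dropped_item": True}))   # -> regrasp
print(subsumption_step({"obstacle": True,  "dropped_item": True}))   # -> turn_away
```

This is exactly the behaviour Brooks contrasts with a "traditional robot": when the `dropped_item` flag goes up, the default `work` layer is subsumed and the robot reacts instead of blindly continuing.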

Many next-generation computer scientists are focusing on designing not just software that acts like the brain, but hardware that is structured like one. Today, famous deep-learning-powered services, like Siri and Google Translate, run on traditional computing platforms that consume a lot of energy because the logic and memory boards are separated. Last year, Google's AlphaGo, the most successful deep-learning program to date, was able to beat the world-champion human Go player only after being trained on a database of thirty million moves, running on approximately one million watts of power.

Asked about Google's project, Dr. Devanand Shenoy, formerly with the U.S. Department of Energy, said: "AlphaGo had to be retrained for every new game (a feature of narrow AI where the machine does one thing very well, even better than humans). Once learning and training algorithms are implemented on neuromorphic hardware that is distributed, asynchronous, perhaps event-driven, and fault-tolerant, the ability to process data with many orders of magnitude improvements in energy-efficiency as well as superhuman speed across multiple applications could be possible in the future. Recent trends in transfer learning with reservoir computing as an example, suggest that artificial network configurations may be trained to learn more than one application."



Artificial Intelligence & Manufacturing Industry

David Gelernter, artist, writer, and professor of computer science at Yale University, was quoted in an article titled "Artificial intelligence isn't the scary future. It's the amazing present," published in the Chicago Tribune: "The coming of computers with true human-like reasoning remains decades in the future, but when the moment of 'artificial general intelligence' arrives, the pause will be brief. Once artificial minds achieve the equivalence of the average human IQ of 100, the next step will be machines with an IQ of 500, and then 5,000. We don't have the vaguest idea what an IQ of 5,000 would mean. And in time, we will build such machines, which will be unlikely to see much difference between humans and houseplants."

Artificial Intelligence – It's Everywhere

There was a time when artificial intelligence was considered futuristic, and what Professor Gelernter said might have sounded insane. But no more, especially now that we use so much AI in our daily lives.

For example, GPS navigation now outperforms our own spatial reasoning, and we have started to rely heavily on Apple's Siri and Amazon's Echo. AI has made a lot of progress quickly, thanks to improved processing power, better algorithms, and a lot of data. With the help of machine learning, that data can be analyzed and critical insights extracted.

Known as the major propellant of the Fourth Industrial Revolution, AI is expected to wipe out nearly half of human jobs (mostly white-collar jobs) in the next 20 years. Every industry will opt to replace humans for work that can be performed with the help of AI. Algorithms and automation are a major threat because they offer improved efficiency at a lower price.

The American Manufacturing Industry today

The American manufacturing industry has struggled for a long time, more so since the Great Recession of the late 2000s. Growth has been sluggish, and the key reasons are that goods manufactured in the U.S. have long been more expensive for foreign markets due to the strong dollar, along with major cutbacks in the energy sector. In 2010, China replaced the U.S. as the largest manufacturing country in the world.

Some of the leading manufacturing industries in the U.S. are steel, automobiles, chemicals, food processing, consumer goods, aerospace, and mining.

How will AI have an impact on the Manufacturing Industry?

The manufacturing industry has always been open to adopting new technologies; drones and industrial robots have been part of it since the 1960s. The next automation revolution is just around the corner, and the U.S. manufacturing sector awaits it eagerly. If, by adopting AI, companies can keep inventories lean and reduce costs, there is a high likelihood that the American manufacturing industry will experience encouraging growth. That said, the sector has to gear up for networked factories where the supply chain, design team, production line, and quality control are integrated into an intelligent engine that provides actionable insights.

Virtual Reality

Virtual reality will enable new tools for performing tests in the virtual world. It allows remotely located people to connect and jointly work on situations that require troubleshooting. Simulation and virtual product creation can reduce manufacturing time drastically.


Automation

Automation will help the manufacturing industry reach a level of accuracy and productivity beyond human ability. It can even operate in environments that are dangerous, tedious, or complicated for humans. Future robots are expected to have capabilities such as voice and image recognition that can be used to re-create complex human tasks.

Internet of Things (IoT)

We have all started to use smart sensors, but it is a little-known fact that IoT functionality will play a huge role in the manufacturing industry. It can track and analyze production quotas and aggregate data in control rooms, and the technology can also help create models for predictive maintenance. Combined with augmented and virtual reality and analysis of customer feedback, it can yield a number of meaningful insights that drive innovation.
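A minimal sketch of the predictive-maintenance idea: flag a machine for service when a smart sensor's recent readings drift away from its healthy baseline. The vibration numbers, window size, and tolerance below are invented for illustration; real systems use far richer models.

```python
def needs_maintenance(readings, window=5, tolerance=0.2):
    """Flag when the recent average drifts more than `tolerance` from baseline."""
    baseline = sum(readings[:window]) / window    # early, known-healthy data
    recent = sum(readings[-window:]) / window     # latest sensor readings
    return abs(recent - baseline) / baseline > tolerance

vibration = [1.0, 1.1, 0.9, 1.0, 1.0,   # normal operation
             1.1, 1.3, 1.5, 1.6, 1.8]   # a bearing beginning to wear
print(needs_maintenance(vibration))  # -> True
```

The appeal of this approach is that the machine is serviced before it fails, rather than on a fixed calendar schedule or after a breakdown.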


Robotics

With the promise of increased output, robots are already being used in manufacturing companies, and with their growing intelligence, they may soon replace much of the factory workforce. Every stage can be closely monitored with sensors, and the data shared with AI and analytics software. Output increases, defect detection and corrective action are much faster, and the entire production cycle is far more efficient.


Despite the widespread fear of human jobs being lost to AI, manufacturing will be driven toward higher productivity and efficiency with its help. The workforce can focus more on innovation and new operations, contributing to the growth and bright future of the American manufacturing industry.

Digital Agriculture: Farmers in India are using AI to increase crop yields

The fields had been freshly plowed. The furrows ran straight and deep. Yet, thousands of farmers across Andhra Pradesh (AP) and Karnataka waited to get a text message before they sowed the seeds. The SMS, which was delivered in Telugu and Kannada, their native languages, told them when to sow their groundnut crops.

In a few dozen villages in Telangana, Maharashtra and Madhya Pradesh, farmers are receiving automated voice calls that tell them whether their cotton crops are at risk of a pest attack, based on weather conditions and crop stage. Meanwhile in Karnataka, the state government can get price forecasts for essential commodities such as tur (split red gram) three months in advance for planning for the Minimum Support Price (MSP).

Welcome to digital agriculture, where technologies such as Artificial Intelligence (AI), Cloud Machine Learning, Satellite Imagery and advanced analytics are empowering small-holder farmers to increase their income through higher crop yield and greater price control.

AI-based sowing advisories lead to 30% higher yields

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications,” says Dr. Suhas P. Wani, Director, Asia Region, of the International Crop Research Institute for the Semi-Arid Tropics (ICRISAT), a non-profit, non-political organization that conducts agricultural research for development in Asia and sub-Saharan Africa with a wide array of partners throughout the world.

Microsoft, in collaboration with ICRISAT, developed an AI Sowing App powered by the Microsoft Cortana Intelligence Suite, including Machine Learning and Power BI. The app sends sowing advisories to participating farmers on the optimal date to sow. The best part: the farmers don't need to install any sensors in their fields or incur any capital expenditure. All they need is a feature phone capable of receiving text messages.

Flashback to June 2016. While other farmers were busy sowing their crops in Devanakonda Mandal in Kurnool district in AP, G. Chinnavenkateswarlu, a farmer from Bairavanikunta village, decided to wait. Instead of sowing his groundnut crop during the first week of June, as traditional agricultural wisdom would have dictated, he chose to sow three weeks later, on June 25, based on an advisory he received in a text message.

Chinnavenkateswarlu was part of a pilot program that ICRISAT and Microsoft were running for 175 farmers in the state. The program sent farmers text messages on sowing advisories, such as the sowing date, land preparation, soil test based fertilizer application, and so on.

For centuries, farmers like Chinnavenkateswarlu had been using age-old methods to predict the right sowing date. Mostly, they’d choose to sow in early June to take advantage of the monsoon season, which typically lasted from June to August. But the changing weather patterns in the past decade have led to unpredictable monsoons, causing poor crop yields.

“I have three acres of land and sowed groundnut based on the sowing recommendations provided. My crops were harvested on October 28 last year, and the yield was about 1.35 ton per hectare.  Advisories provided for land preparation, sowing, and need-based plant protection proved to be very useful to me,” says Chinnavenkateswarlu, who along with the 174 others achieved an average of 30% higher yield per hectare last year.

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications.”

– Dr. Suhas P. Wani, Director, Asia Region, ICRISAT

To calculate the crop-sowing period, historic climate data spanning 30 years (1986 to 2015) for the Devanakonda area in Andhra Pradesh was analyzed using AI. To determine the optimal sowing period, the Moisture Adequacy Index (MAI) was calculated. MAI is the standardized measure used for assessing the degree of adequacy of rainfall and soil moisture to meet the potential water requirement of crops.

The real-time MAI is calculated from the daily rainfall recorded and reported by the Andhra Pradesh State Development Planning Society. The future MAI is calculated from weather forecasting models for the area provided by USA-based aWhere Inc. This data is then downscaled to build predictability and to guide farmers in picking the ideal sowing week, which in the pilot program was estimated to start on June 24 that year.
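The week-selection logic described above can be sketched in a few lines. This is an illustrative simplification, not ICRISAT's actual model: it assumes MAI is the ratio of weekly rainfall to potential evapotranspiration (PET), and that the recommended sowing week is the first week whose MAI crosses a threshold (0.5 here, an assumed value); the real system uses a full soil-water balance and downscaled forecasts.

```python
# Illustrative sketch of a Moisture Adequacy Index (MAI) calculation.
# Assumption: MAI = rainfall-derived available moisture / potential
# evapotranspiration (PET), per week; threshold and data are hypothetical.

def weekly_mai(rainfall_mm, pet_mm):
    """Return MAI for one week, capped at 1.0 (supply cannot exceed demand)."""
    if pet_mm <= 0:
        return 1.0
    return min(rainfall_mm / pet_mm, 1.0)

def pick_sowing_week(weekly_rainfall, weekly_pet, threshold=0.5):
    """Return the index of the first week whose MAI meets the threshold,
    or None if no week in the forecast window qualifies."""
    for week, (rain, pet) in enumerate(zip(weekly_rainfall, weekly_pet)):
        if weekly_mai(rain, pet) >= threshold:
            return week
    return None

# Hypothetical forecast for five weeks starting in early June:
rainfall = [8.0, 12.0, 20.0, 38.0, 45.0]   # mm per week
pet      = [42.0, 41.0, 40.0, 39.0, 38.0]  # mm per week

print(pick_sowing_week(rainfall, pet))  # first week with adequate moisture
```

With these made-up numbers the first two weeks are too dry, so the advisory would point to the third week of the window, mirroring how the pilot pushed sowing from early to late June.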

Ten sowing advisories were initiated and disseminated until the harvesting was completed. The advisories contained essential information including the optimal sowing date, soil test based fertilizer application, farm yard manure application, seed treatment, optimum sowing depth, and more. In tandem with the app, a personalized village advisory dashboard provided important insights into soil health, recommended fertilizer, and seven-day weather forecasts.

“Farmers who sowed in the first week of June got meager yields due to a long dry spell in August; while registered farmers who sowed in the last week of June and the first week of July and followed advisories got better yields and are out of loss,” explains C Madhusudhana, President, Chaitanya Youth Association and Watershed Community Association of Devanakonda.

In 2017, the program was expanded to touch more than 3,000 farmers across the states of Andhra Pradesh and Karnataka during the Kharif crop cycle (rainy season) for a host of crops including groundnut, ragi, maize, rice and cotton, among others. The increase in yield ranged from 10% to 30% across crops.

Pest attack prediction enables farmers to plan

Microsoft is now taking AI in agriculture a step further. A collaboration with United Phosphorous (UPL), India’s largest producer of agrochemicals, led to the creation of the Pest Risk Prediction API, which again leverages AI and machine learning to indicate the risk of a pest attack in advance. Common pests such as jassids, thrips, whitefly, and aphids can cause serious damage to crops and reduce yields. To help farmers take preventive action, the Pest Risk Prediction App was created to provide guidance on the probability of pest attacks.

“Our collaboration with Microsoft to create a Pest Risk Prediction API enables farmers to get predictive insights on the possibility of pest infestation. This empowers them to plan in advance, reducing crop loss due to pests and thereby helping them to double the farm income.”

– Vikram Shroff, Executive Director, UPL Limited

In the first phase, about 3,000 marginal farmers with less than five acres of land holding in 50 villages across Telangana, Maharashtra and Madhya Pradesh are receiving automated voice calls for their cotton crops. The calls indicate the risk of pest attacks based on weather conditions and crop stage, in addition to the sowing advisories. The risk classification is High, Medium and Low, specific to each district in each state.
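The three-tier classification might look something like the sketch below. The features, weights, and cut-offs are assumptions chosen for illustration; the actual API uses a trained machine-learning model whose internals are not public. The only grounded elements are the inputs named in the article (weather conditions, crop stage) and the High/Medium/Low labels.

```python
# Illustrative sketch of a district-level pest-risk classifier.
# The scoring rule (weights, features, thresholds) is hypothetical; the
# real Pest Risk Prediction API is a trained ML model, not hand-set rules.

def pest_risk_score(humidity_pct, temp_c, crop_stage_weeks):
    """Toy 0-1 risk score: sucking pests such as whitefly and jassids
    favour warm, humid weather and young-to-mid-stage cotton."""
    humidity_factor = humidity_pct / 100.0
    temp_factor = 1.0 if 25 <= temp_c <= 35 else 0.5
    stage_factor = 1.0 if 4 <= crop_stage_weeks <= 12 else 0.6
    return humidity_factor * temp_factor * stage_factor

def classify(score):
    """Map a 0-1 score to the High/Medium/Low labels used in the advisories."""
    if score >= 0.7:
        return "High"
    if score >= 0.4:
        return "Medium"
    return "Low"

# A humid, warm week with the cotton crop eight weeks in:
print(classify(pest_risk_score(humidity_pct=85, temp_c=30, crop_stage_weeks=8)))
```

In deployment the label would be computed per district and read out in the automated voice call, so farmers hear only the three-level classification rather than a raw score.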


Price forecasting model for policy makers

Predictive analysis in agriculture is not limited to crop growing alone. The government of Karnataka will start using price forecasting for agricultural commodities, in addition to sowing advisories for farmers in the state. Commodity prices for items such as tur, of which Karnataka is the second largest producer, will be predicted three months in advance for major markets in the state.

At present, the state government forecasts agricultural commodity prices using historical data and short-term arrivals, in order to protect farmers from price crashes and to shield the population from high inflation. However, the data collection this requires is expensive and can be subject to tampering.

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers.”

– Dr. T.N. Prakash Kammardi, Chairman, KAPC, Government of Karnataka

Microsoft has developed a multivariate agricultural commodity price forecasting model to predict future commodity arrival and the corresponding prices. The model uses remote sensing data from geo-stationary satellite images to predict crop yields through every stage of farming.

This data, along with other inputs such as historical sowing area, production, yield, and weather, is used in an elastic-net framework to predict both the timing of grain arrivals in the market and their quantum, which in turn determine pricing.
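An elastic-net regression of this kind can be sketched with scikit-learn. The feature set and data below are synthetic placeholders for the real inputs the article names (sowing area, yield estimates, weather, historical arrivals); the model structure, penalty settings, and numbers are assumptions for illustration only.

```python
# Sketch of an elastic-net commodity price model using scikit-learn.
# All data here is synthetic; features stand in for the real inputs
# (sowing area, remote-sensing yield estimates, weather, arrivals).
import numpy as np
from sklearn.linear_model import ElasticNet

rng = np.random.default_rng(0)

# Synthetic monthly features: [sown_area, est_yield_t_per_ha, rainfall_mm]
X = rng.uniform([50, 0.8, 20], [120, 1.6, 200], size=(60, 3))

# Synthetic target: larger arrivals (area x yield) push prices down.
arrivals = 0.9 * X[:, 0] * X[:, 1] + 0.05 * X[:, 2] + rng.normal(0, 2, 60)
price = 6000 - 25 * (arrivals - arrivals.mean()) + rng.normal(0, 50, 60)

# Elastic net blends L1 and L2 penalties (l1_ratio sets the mix), which
# keeps coefficients stable when inputs like area and yield are correlated.
model = ElasticNet(alpha=1.0, l1_ratio=0.5)
model.fit(X, price)

# Forecast the price for a hypothetical upcoming month.
next_month = np.array([[90.0, 1.2, 120.0]])
print(model.predict(next_month))
```

The practical appeal of the elastic net here is exactly the one hinted at in the text: with many overlapping datasets feeding the model, the combined penalty both selects the informative inputs and tolerates correlation among them.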

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers. We believe that Microsoft’s technology will support these innovative experiments which will help us transform the lives of the farmers in our state,” says Dr. T.N. Prakash Kammardi, Chairman, Karnataka Agricultural Price Commission, Government of Karnataka.

The model, currently used to predict the price of tur, is scalable and time-efficient, and can be generalized to many other regions and crops.

AI in agriculture is just getting started

Shifting weather patterns, such as rising temperatures and changes in precipitation levels and groundwater density, can affect farmers, especially those who depend on timely rains for their crops. Leveraging the cloud and AI to generate advisories for sowing, pest control, and commodity pricing is a major initiative toward increasing incomes and providing stability for the agricultural community.

“Indian agriculture has been traditionally rain dependent and climate change has made farmers extremely vulnerable to crop loss. Insights from AI through the agriculture life cycle will help reduce uncertainty and risk in agriculture operations. Use of AI in agriculture can potentially transform the lives of millions of farmers in India and the world over,” says Anil Bhansali, CVP C+E and Managing Director, Microsoft India (R&D) Pvt. Ltd.