Intelligent Machines: A team of AI algorithms just crushed humans in a complex computer game - Will Knight

Algorithms capable of collaboration and teamwork can outmanoeuvre human teams.

Five different AI algorithms have teamed up to kick human butt in Dota 2, a popular strategy computer game.

Researchers at OpenAI, a nonprofit based in California, developed the algorithmic A-team, which they call the OpenAI Five. Each algorithm uses a neural network to learn not only how to play the game, but also how to cooperate with its AI teammates. The team has started defeating amateur Dota 2 players in testing, OpenAI says.

This is an important and novel direction for AI since algorithms typically operate independently. Approaches that help algorithms cooperate with each other could prove important for commercial uses of the technology. AI algorithms could, for instance, team up to outmanoeuvre opponents in online trading or ad bidding. Collaborative algorithms might also cooperate with humans.

OpenAI previously demonstrated an algorithm capable of competing against top humans at single-player Dota 2. The latest work builds on this using similar algorithms modified to value both individual and team success. The algorithms do not communicate directly except through gameplay.
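The balance between individual and team success can be pictured as a simple reward blend. A minimal sketch under assumed names (OpenAI's actual reward shaping is more involved; the function and the mixing weight `tau` here are hypothetical illustrations, not their code):

```python
# Illustrative sketch: blend each agent's own reward with the team average.
# tau = 0 makes every agent purely selfish; tau = 1 makes every agent
# optimize the team's shared outcome.

def blended_rewards(individual_rewards, tau=0.3):
    """Return each agent's reward mixed with the team mean."""
    team_mean = sum(individual_rewards) / len(individual_rewards)
    return [(1 - tau) * r + tau * team_mean for r in individual_rewards]
```

Tuning a single weight like this is one way cooperation can "emerge out of the incentives" rather than being hand-coded.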

“What we’ve seen implies that coordination and collaboration can emerge very naturally out of the incentives,” says Greg Brockman, one of the founders of OpenAI, which aims to develop artificial intelligence openly and in a way that benefits humanity. He adds that the team has tried substituting a human player for one of the algorithms and found this to work very well. “He described himself as feeling very well supported,” Brockman says.

Dota 2 is a complex strategy game in which teams of five players compete to control a structure within a sprawling landscape. Players have different strengths, weaknesses, and roles, and the game involves collecting items and planning attacks, as well as engaging in real-time combat.

Pitting AI programs against computer games has become a familiar means of measuring progress. DeepMind, a subsidiary of Alphabet, famously developed a program capable of learning to play the notoriously complex and subtle board game Go with superhuman skill. A related program then taught itself from scratch to master Go and then chess simply by playing against itself.

The strategies required for Dota 2 are more defined than in chess or Go, but the game is still difficult to master. It is also challenging for a machine because it isn’t always possible to see what your opponents are up to and because teamwork is required.

The OpenAI Five learn by playing against various versions of themselves. Over time, the programs developed strategies much like the ones humans use—figuring out ways to acquire gold by “farming” it, for instance, as well as adopting a particular strategic role or “lane” within the game.

AI experts say the achievement is significant. “Dota 2 is an extremely complicated game, so even beating strong amateurs is truly impressive,” says Noam Brown, a researcher at Carnegie Mellon University in Pittsburgh. “In particular, dealing with hidden information in a game as large as Dota 2 is a major challenge.”

Brown previously worked on an algorithm capable of playing poker, another imperfect-information game, with superhuman skill (see “Why poker is a big deal in AI”). If the OpenAI Five team can consistently beat humans, Brown says, that would be a major achievement in AI. However, he notes that given enough time, humans might be able to figure out weaknesses in the AI team’s playing style.

Other games could also push AI further, Brown says. “The next major challenge would be games involving communication, like Diplomacy or Settlers of Catan, where balancing between cooperation and competition is vital to success.”

Given a satellite image, machine learning creates the view on the ground

Geographers could use the technique to determine how land is used.

Leonardo da Vinci famously created drawings and paintings that showed a bird’s eye view of certain areas of Italy with a level of detail that was not otherwise possible until the invention of photography and flying machines. Indeed, many critics have wondered how he could have imagined these details. But now researchers are working on the inverse problem: given a satellite image of Earth’s surface, what does that area look like from the ground? How clear can such an artificial image be?

Today we get an answer thanks to the work of Xueqing Deng and colleagues at the University of California, Merced. These guys have trained a machine-learning algorithm to create ground-level images simply by looking at satellite pictures from above. The technique is based on a form of machine intelligence known as a generative adversarial network. This consists of two neural networks called a generator and a discriminator.

The generator creates images that the discriminator assesses against some learned criteria, such as how closely they resemble giraffes. By using the output from the discriminator, the generator gradually learns to produce images that look like giraffes.
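This adversarial setup can be made concrete with the standard GAN losses. A minimal sketch, not the authors' code, where `d_real` and `d_fake` stand for the discriminator's estimated probability that an image is real:

```python
import math

def discriminator_loss(d_real, d_fake):
    # The discriminator is rewarded for scoring real images near 1
    # and generated images near 0.
    return -math.log(d_real) - math.log(1.0 - d_fake)

def generator_loss(d_fake):
    # The generator is rewarded for fooling the discriminator
    # (the common "non-saturating" form of the GAN objective).
    return -math.log(d_fake)
```

Training alternates between the two: the discriminator minimizes its loss on real and generated images, then the generator minimizes its own loss through the discriminator's feedback, which is the loop described above.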

In this case, Deng and co trained the discriminator using real images of the ground as well as satellite images of the same locations, so it learns how to associate a ground-level image with its overhead view. Of course, the quality of the data set is important. The team uses as ground truth the LCM2015 ground-cover map, which gives the class of land at a one-kilometre resolution for the entire UK, but limits the data to a 71×71-kilometre grid that includes London and the surrounding countryside. For each location in this grid, they downloaded a ground-level view from an online database called Geograph.

The team then trained the discriminator with 16,000 pairs of overhead and ground-level images. The next step was to start generating ground-level images. The generator was fed a set of 4,000 satellite images of specific locations and had to create ground-level views for each, using feedback from the discriminator. The team tested the system with 4,000 overhead images and compared them with the ground truth images.

The results make for interesting reading. The network produces images that are plausible given the overhead image, if relatively low in quality. The generated images capture basic qualities of the ground, such as whether it shows a road, whether the land is rural or urban, and so on. “The generated ground-level images looked natural although, as expected, they lacked the details of real images,” said Deng and co.

That’s a neat trick, but how useful is it? One important task for geographers is to classify land according to its use, such as whether it is rural or urban. Ground-level images are essential for this. However, existing databases tend to be sparse, particularly in rural locations, so geographers have to interpolate between the images, a process that is little better than guessing.

Now Deng and co’s generative adversarial networks provide an entirely new way to determine land use. When geographers want to know the ground-level view at any location, they can simply create the view with the neural network based on a satellite image. Deng and co even compare the two methods—interpolation versus image generation. The new technique turns out to correctly determine land use 73 per cent of the time, while the interpolation method is correct in just 65 per cent of cases.

That’s interesting work that could make geographers’ lives easier. But Deng and co have greater ambitions. They hope to improve the image generation process so that in future it will produce even more detail in the ground-level images. Leonardo da Vinci would surely be impressed.

Ref: What Is It Like Down There? Generating Dense Ground-Level Views and Image Features From Overhead Imagery Using Conditional Generative Adversarial Networks

Digital Agriculture: Farmers in India are using AI to increase crop yields

The fields had been freshly plowed. The furrows ran straight and deep. Yet, thousands of farmers across Andhra Pradesh (AP) and Karnataka waited to get a text message before they sowed the seeds. The SMS, which was delivered in Telugu and Kannada, their native languages, told them when to sow their groundnut crops.

In a few dozen villages in Telangana, Maharashtra and Madhya Pradesh, farmers are receiving automated voice calls that tell them whether their cotton crops are at risk of a pest attack, based on weather conditions and crop stage. Meanwhile, in Karnataka, the state government can get price forecasts for essential commodities such as tur (split red gram) three months in advance, to plan for the Minimum Support Price (MSP).

Welcome to digital agriculture, where technologies such as Artificial Intelligence (AI), Cloud Machine Learning, Satellite Imagery and advanced analytics are empowering small-holder farmers to increase their income through higher crop yield and greater price control.

AI-based sowing advisories lead to 30% higher yields

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications,” says Dr. Suhas P. Wani, Director, Asia Region, of the International Crop Research Institute for the Semi-Arid Tropics (ICRISAT), a non-profit, non-political organization that conducts agricultural research for development in Asia and sub-Saharan Africa with a wide array of partners throughout the world.

Microsoft, in collaboration with ICRISAT, developed an AI Sowing App powered by the Microsoft Cortana Intelligence Suite, including Machine Learning and Power BI. The app sends participating farmers advisories on the optimal date to sow. The best part – farmers don’t need to install any sensors in their fields or incur any capital expenditure. All they need is a feature phone capable of receiving text messages.

Flashback to June 2016. While other farmers were busy sowing their crops in Devanakonda Mandal in Kurnool district in AP, G. Chinnavenkateswarlu, a farmer from Bairavanikunta village, decided to wait. Instead of sowing his groundnut crop during the first week of June, as traditional agricultural wisdom would have dictated, he chose to sow three weeks later, on June 25, based on an advisory he received in a text message.

Chinnavenkateswarlu was part of a pilot program that ICRISAT and Microsoft were running for 175 farmers in the state. The program sent farmers text messages on sowing advisories, such as the sowing date, land preparation, soil test based fertilizer application, and so on.

For centuries, farmers like Chinnavenkateswarlu had been using age-old methods to predict the right sowing date. Mostly, they’d choose to sow in early June to take advantage of the monsoon season, which typically lasted from June to August. But the changing weather patterns in the past decade have led to unpredictable monsoons, causing poor crop yields.

“I have three acres of land and sowed groundnut based on the sowing recommendations provided. My crops were harvested on October 28 last year, and the yield was about 1.35 ton per hectare.  Advisories provided for land preparation, sowing, and need-based plant protection proved to be very useful to me,” says Chinnavenkateswarlu, who along with the 174 others achieved an average of 30% higher yield per hectare last year.

“Sowing date as such is very critical to ensure that farmers harvest a good crop. And if it fails, it results in loss as a lot of costs are incurred for seeds, as well as the fertilizer applications.”

– Dr. Suhas P. Wani, Director, Asia Region, ICRISAT

To calculate the crop-sowing period, 30 years of historic climate data, from 1986 to 2015, for the Devanakonda area in Andhra Pradesh was analyzed using AI. To determine the optimal sowing period, the Moisture Adequacy Index (MAI) was calculated. MAI is the standardized measure for assessing how adequately rainfall and soil moisture meet the potential water requirement of crops.

The real-time MAI is calculated from the daily rainfall recorded and reported by the Andhra Pradesh State Development Planning Society. The future MAI is calculated from weather-forecasting models for the area provided by US-based aWhere Inc. This data is then downscaled to build predictability and to guide farmers in picking the ideal sowing week, which in the pilot program was estimated to start on June 24 that year.
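For illustration, a moisture adequacy index is commonly defined as the ratio of water available to water required over a period. A hypothetical sketch (the article does not give ICRISAT's exact formula; the function names, the 0.5 threshold, and the weekly framing are all assumptions):

```python
def moisture_adequacy_index(rainfall_mm, water_requirement_mm):
    """Fraction of the crop's potential water requirement met, capped at 1."""
    if water_requirement_mm <= 0:
        raise ValueError("water requirement must be positive")
    return min(rainfall_mm / water_requirement_mm, 1.0)

def first_adequate_week(weekly_rain_mm, weekly_requirement_mm, threshold=0.5):
    """1-based index of the first week whose MAI clears the threshold, else None."""
    for week, (rain, need) in enumerate(
            zip(weekly_rain_mm, weekly_requirement_mm), start=1):
        if moisture_adequacy_index(rain, need) >= threshold:
            return week
    return None
```

In this picture, recorded rainfall feeds the real-time index while forecast rainfall feeds the future index, and the advisory recommends the first week the index looks adequate.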

Ten sowing advisories were initiated and disseminated until the harvesting was completed. The advisories contained essential information including the optimal sowing date, soil test based fertilizer application, farm yard manure application, seed treatment, optimum sowing depth, and more. In tandem with the app, a personalized village advisory dashboard provided important insights into soil health, recommended fertilizer, and seven-day weather forecasts.

“Farmers who sowed in the first week of June got meager yields due to a long dry spell in August, while registered farmers who sowed in the last week of June and the first week of July and followed advisories got better yields and are out of loss,” explains C Madhusudhana, President, Chaitanya Youth Association and Watershed Community Association of Devanakonda.

In 2017, the program was expanded to touch more than 3,000 farmers across the states of Andhra Pradesh and Karnataka during the Kharif crop cycle (rainy season) for a host of crops including groundnut, ragi, maize, rice and cotton, among others. The increase in yield ranged from 10% to 30% across crops.

Pest attack prediction enables farmers to plan

Microsoft is now taking AI in agriculture a step further. A collaboration with United Phosphorus (UPL), India’s largest producer of agrochemicals, led to the creation of the Pest Risk Prediction API, which again leverages AI and machine learning to indicate the risk of a pest attack in advance. Common pests such as jassids, thrips, whitefly, and aphids can cause serious damage to crops and reduce yield. To help farmers take preventive action, the Pest Risk Prediction App, which provides guidance on the probability of pest attacks, was created.

“Our collaboration with Microsoft to create a Pest Risk Prediction API enables farmers to get predictive insights on the possibility of pest infestation. This empowers them to plan in advance, reducing crop loss due to pests and thereby helping them to double the farm income.”

– Vikram Shroff, Executive Director, UPL Limited

In the first phase, about 3,000 marginal farmers, each holding less than five acres, in 50 villages across Telangana, Maharashtra and Madhya Pradesh are receiving automated voice calls for their cotton crops. The calls indicate the risk of pest attacks based on weather conditions and crop stage, in addition to the sowing advisories. The risk is classified as High, Medium or Low, specific to each district in each state.
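The final banding is a simple thresholding step on the model's predicted probability. A hypothetical sketch (the actual cutoffs used by the service are not given in the article and would likely vary by district and crop stage):

```python
def risk_band(pest_probability, medium_cutoff=0.33, high_cutoff=0.66):
    """Map a predicted pest-attack probability to an advisory risk band."""
    if not 0.0 <= pest_probability <= 1.0:
        raise ValueError("probability must be between 0 and 1")
    if pest_probability >= high_cutoff:
        return "High"
    if pest_probability >= medium_cutoff:
        return "Medium"
    return "Low"
```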

“Our collaboration with Microsoft to create a Pest Risk Prediction API enables farmers to get predictive insights on the possibility of pest infestation. This empowers them to plan in advance, reducing crop loss due to pests and thereby helping them to double the farm income,” says Vikram Shroff, Executive Director, UPL Limited.

Price forecasting model for policy makers

Predictive analysis in agriculture is not limited to crop growing alone. The government of Karnataka will start using price forecasting for agricultural commodities, in addition to sowing advisories for farmers in the state. Commodity prices for items such as tur, of which Karnataka is the second largest producer, will be predicted three months in advance for major markets in the state.

At present, the state government uses price forecasting for agricultural commodities, based on historical data and short-term arrivals, to protect farmers from price crashes or to shield the population from high inflation. However, accurate data collection is expensive and can be subject to tampering.

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers.”

– Dr. T.N. Prakash Kammardi, Chairman, KAPC, Government of Karnataka

Microsoft has developed a multivariate agricultural commodity price forecasting model to predict future commodity arrival and the corresponding prices. The model uses remote sensing data from geo-stationary satellite images to predict crop yields through every stage of farming.

This data along with other inputs such as historical sowing area, production, yield, weather, among other datasets, are used in an elastic-net framework to predict the timing of arrival of grains in the market as well as their quantum, which would determine their pricing.
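The elastic net mentioned here combines the L1 (lasso) and L2 (ridge) penalties in a single regression objective. A minimal sketch of that objective in plain Python (the actual features and hyperparameters in Microsoft's model are not public; this only illustrates the framework's form):

```python
def elastic_net_objective(weights, rows, targets, alpha=1.0, l1_ratio=0.5):
    """Mean squared error of a linear model plus the blended L1/L2 penalty."""
    n = len(targets)
    mse = sum(
        (sum(w * x for w, x in zip(weights, row)) - t) ** 2
        for row, t in zip(rows, targets)
    ) / n
    l1 = sum(abs(w) for w in weights)   # lasso term: encourages sparse weights
    l2 = sum(w * w for w in weights)    # ridge term: shrinks weights smoothly
    return mse + alpha * (l1_ratio * l1 + 0.5 * (1.0 - l1_ratio) * l2)
```

With `l1_ratio=1` this reduces to the lasso and with `l1_ratio=0` to ridge regression; a solver searches for the weights minimizing this objective over inputs such as sowing area, yield and weather.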

“We are certain that digital agriculture supported by advanced technology platforms will truly benefit farmers. We believe that Microsoft’s technology will support these innovative experiments which will help us transform the lives of the farmers in our state,” says Dr. T.N. Prakash Kammardi, Chairman, Karnataka Agricultural Price Commission, Government of Karnataka.

The model, currently used to predict the price of tur, is scalable and time-efficient, and can be generalized to many other regions and crops.

AI in agriculture is just getting started

Shifting weather patterns, such as rising temperatures and changes in precipitation levels and groundwater density, can affect farmers, especially those who depend on timely rains for their crops. Leveraging the cloud and AI to deliver advisories for sowing, pest control and commodity pricing is a major initiative towards increasing income and providing stability for the agricultural community.

“Indian agriculture has been traditionally rain dependent and climate change has made farmers extremely vulnerable to crop loss. Insights from AI through the agriculture life cycle will help reduce uncertainty and risk in agriculture operations. Use of AI in agriculture can potentially transform the lives of millions of farmers in India and world over,” says Anil Bhansali, CVP C+E and Managing Director, Microsoft India (R&D) Pvt. Ltd.

Taking a leap in bioinspired robotics

Mechanical engineer Sangbae Kim builds animal-like machines for use in disaster response.

In the not so distant future, first responders to a disaster zone may include four-legged, dog-like robots that can bound through a fire or pick their way through a minefield, rising up on their hind legs to turn a hot door handle or punch through a wall.

Such robot-rescuers may be ready to deploy in the next five to 10 years, says Sangbae Kim, associate professor of mechanical engineering at MIT. He and his team in the Biomimetic Robotics Laboratory are working toward that goal, borrowing principles from biomechanics, human decision-making, and mechanical design to build a service robot that Kim says will eventually do “real, physical work,” such as opening doors, breaking through walls, or closing valves.

“Say there are toxic gases leaking in a building, and you need to close a valve inside, but it’s dangerous to send people in,” Kim says. “Now, there is no single robot that can do this kind of job. I want to create a robotic first responder that can potentially do more than a human and help in our lives.”

To do this, Kim, who was awarded tenure this year, is working to fuse the two main projects in his lab: the MIT Cheetah, a four-legged, 70-pound robot that runs and jumps over obstacles autonomously; and HERMES, a two-legged, teleoperated robot whose movements and balance are controlled remotely by a human operator, much like a marionette or a robotic “avatar.”

“I imagine a robot that can do some physical, dynamic work,” Kim says. “Everybody is trying to find overlapping areas where you’re excited about what you’re working on, and it’s useful. A lot of people are excited to watch sports because when you watch someone moving explosively, it is hypothesized to trigger the brain’s  ‘mirror neurons’ and you feel that excitement at the same time. For me, when my robots perform dynamically and balance, I get really excited. And that feeling has encouraged my research.”

A drill sergeant turns roboticist

Kim was born in Seoul, South Korea, where he says his mother remembers him as a tinkerer. “Everything with a screw, I would take apart,” Kim says. “And she said the first time, almost everything broke. After that, everything started working again.”

He attended Yonsei University in the city, where he studied mechanical engineering. In his second year, as is mandatory in the country, he and other male students joined the South Korean army, where he served as a drill sergeant for two and a half years.

“We taught [new recruits] every single detail about how to be a soldier, like how to wear shirts and pants, buckle your belt, and even how to make a fist when you walk,” Kim recalls. “The day started at 5:30 a.m. and didn’t end until everyone was asleep, around 10:30 p.m., and there were no breaks. Drill sergeants are famous for being mean, and I think there’s a reason for that — they have to keep very tight schedules.”

After fulfilling his military duty, Kim returned to Yonsei University, where he gravitated toward robotics, though there was no formal program in the subject. He ended up participating in a class project that challenged students to build robots to perform specific tasks, such as capturing a flag, and then to compete, bot to bot, in a contest that was similar to MIT’s popular Course 2.007 (Design and Manufacturing), which he now co-teaches.

“[The class] was a really good motivation in my career and made me anchor on the robotic, mechanistic side,” Kim says.

A bioinspired dream

In his last year of college, Kim developed a relatively cheap 3-D scanner, which he and three other students launched commercially through a startup company called Solutionix, which has since expanded on Kim’s design. However, in the early stages of the company’s fundraising efforts, Kim came to a realization.

“As soon as it came out, I lost excitement because I was done figuring things out,” Kim says. “I loved the figuring-out part. And I realized after a year of the startup process, I should be working in the beginning process of development, not so much in the maturation of products.”

After enabling first sales of the product, he left the country and headed for Stanford University, where he enrolled in the mechanical engineering graduate program. There, he experienced his first taste of design freedom.

“That was a life-changing experience,” Kim says. “It was a more free, creativity-respecting environment — way more so than Korea, where it’s a very conservative culture. It was quite a culture shock.”

Kim joined the lab of Mark Cutkosky, an engineering professor who was looking for ways to design bioinspired robotic machines. In particular, the team was trying to develop a climbing robot that mimicked the gecko, which uses tiny hairs on its feet to help it climb vertical surfaces. Kim adapted this hairy mechanism in a robot and found that it worked.

“It was 2:30 a.m. in the lab, and I couldn’t sleep. I had tried many things, and my heart was thumping,” Kim recalls. “On some replacement doors with tall windows, [the robot] climbed up smoothly, using the world’s first directional adhesives, which I invented. I was so excited to show it to the others, I sent them all a video that night.”

He and his colleagues launched a startup to develop the gecko robot further, but again, Kim missed the thrill of being in the lab. He left the company soon after, for a postdoc position at Harvard University, where he helped to engineer the Meshworm, a soft, autonomous robot that inched across a surface like an earthworm. But even then, Kim was setting his sights on bigger designs.

“I was moving away from small robots because it’s very difficult for them to do real, physical work,” Kim says. “And so I decided to develop a larger, four-legged robot for human-level physical tasks — a long-term dream.”

Searching for principles

In 2009, Kim accepted an assistant professorship in MIT’s Department of Mechanical Engineering, where he established his Biomimetic Robotics Lab and set a specific research goal: to design and build a four-legged, cheetah-inspired robot.

“We chose the cheetah because it was the fastest of all land animals, so we learned its features the best, but there are many animals with similarities [to cheetahs],” Kim says. “There are some subtle differences, but probably not ones that you can learn the design principles from.”

In fact, Kim quickly learned that in some cases, it may not be the best option to recreate certain animal behaviours in a robot.

“A good example in our case is the galloping gait,” Kim says. “It’s beautiful, and in a galloping horse, you hear a da-da-rump, da-da-rump. We were obsessed to recreate that. But it turns out galloping has very few advantages in the robotics world.”

Animals prefer specific gaits at a given speed due to a complex interaction of muscles, tendons, and bones. However, Kim found that the cheetah robot, powered with electric motors, exhibited very different kinetics from its animal counterpart. For example, with high-power motors, the robot was able to trot at a steady clip of 14 miles per hour — much faster than animals can trot in nature.

“We have to understand what is the governing principle that we need, and ask: Is that a constraint in biological systems, or can we realize it in an engineering domain?” Kim says. “There’s a complex process to find out useful principles overarching the differences between animals and machines. Sometimes obsessing over animal features and characteristics can hinder your progress in robotics.”

A “secret recipe”

In addition to building bots in the lab, Kim teaches several classes at MIT, including 2.007, which he has co-taught for the past five years.

“It’s still my favourite class, where students really get out of this homework-exam mode, and they have this opportunity to throw themselves into the mud and create their own projects,” Kim says. “Students today grew up in the maker movement and with 3-D printing and Legos, and they’ve been waiting for something like 2.007.”

Kim also teaches a class he created in 2013 called Bioinspired Robotics, in which 40 students team up in groups of four to design and build a robot inspired by biomechanics and animal motions. This past year, students showcased their designs in Lobby 7, including a throwing machine, a trajectory-optimizing kicking machine, and a kangaroo machine that hopped on a treadmill.

Outside of the lab and the classroom, Kim is studying another human motion: the tennis swing, which he has sought to perfect for the past 10 years.

“In a lot of human motion, there’s some secret recipe, because muscles have very special properties, and if you don’t know them well, you can perform really poorly and injure yourself,” Kim says. “It’s all based on muscle function, and I’m still figuring out things in that world, and also in the robotics world.”

- Jennifer Chu

DARPA’s Brain Chip Implants Could Be the Next Big Mental Health Breakthrough—Or a Total Disaster

How did a Massachusetts woman end up with two electrodes implanted into her brain? Why is the Defense Advanced Research Projects Agency developing a controversial, cutting-edge brain chip technology that could one day treat everything from major depressive disorder to hand cramps? How did we get to deep brain stimulation and where do we go from here?

In 1848, a rail foreman named Phineas Gage was clearing a railroad bend in Vermont when a blast hole exploded, sending the tamping iron he had been using to pack explosives through his left cheek, his brain’s left frontal lobe and finally out the top of his skull before landing 25 yards away, stuck upright in the dirt. Despite his pulverized brain mass, Gage went on to make a full recovery, with the exception of a blinded left eye. It was, by all accounts, miraculous.

But while Gage could walk and talk, those who knew him found that after the accident he seemed, well, different. A local physician who treated him the day of the accident observed that “the equilibrium … between his intellectual faculties and his animal propensities seems to have been destroyed.” His friends put it more simply: Gage, they said, “was no longer Gage.”

Gage’s case was the first to suggest the link between the brain and personality—that the brain is intimately connected to our identity, our sense of self.

A portrait of Phineas Gage holding the tamping iron that injured him. Image: Phyllis Gage Hartley/Creative Commons.

Since then, science has frequently exploited that link in the name of (sometimes misguided) self-improvement. Change the brain, and change the self. Lobotomies, once common but now abhorred, were the first treatment to offer relief from mental illness by disrupting the brain’s circuitry, severing the connections to and from the prefrontal cortex. Electroconvulsive therapy, a once-cutting-edge treatment now reserved for extreme cases, sends a shock of electric current through the brain for a near-instant change in its chemical balance. Antidepressants target neurotransmitters like serotonin to affect mood and emotions. As we have unpacked more of the brain’s mysteries, we have become better able to precisely target the changes we want to effect. This is how Liss Murphy wound up with two 42-centimetre-long electrodes implanted deep within the white matter of her brain.

For years, Murphy had suffered from severe depression that seemed untreatable—rounds of Effexor, Risperdal, Klonopin, Lithium, Cymbalta, Abilify, electroshock therapy and even an adorable new puppy failed to get her out of bed. Then doctors offered her a new option: deep brain stimulation. On June 6, 2006—6/6/06—doctors at Massachusetts General Hospital drilled two holes into Murphy’s skull and implanted two electrodes into a dense bundle of fibres within her brain’s internal capsule. The axons here carry signals to many of the brain’s circuits that have been linked to depression. Those electrodes were connected to two wires running behind her ears and under her skin to her clavicle, where two battery packs just slightly larger than a matchbox were implanted to power them. When turned on, the hope was that the electrical signals emitted by Murphy’s new implants would, in effect, re-wire the circuits in her brain that were causing her to feel depressed.

It worked. Murphy became one of the first people in the world successfully treated for a psychiatric illness using deep brain stimulation, in which electronic neurostimulators are embedded deep within the brain to correct misfiring signals. Like Gage, the experience changed her, but for the better. She got out of bed, had a kid, and went back to work part-time after years of being able to barely leave the house.

“My greatest hope the day of the surgery was that I would die on the table,” Murphy recently told Gizmodo. “I can cobble together a regular day now. It truly gave me my life back.”

Deep brain stimulation is the bleeding edge of mental health treatment. Originally developed to treat the terrible tremors suffered by patients with Parkinson’s disease, it is now viewed by many researchers as a potentially revolutionary method of treating mental illness. For many patients with mental health disorders like depression, therapies like drugs are often insufficient or come with terrible side effects. The numbers are all over the place, but doctors and researchers generally agree that a significant number of people don’t respond adequately to current treatment methods—one often-cited study pegs that number at somewhere around 10%-30%. But what if doctors could simply open up the brain and go directly to the source of a problem, just as a mechanic might pop open the hood of a car and tighten a loose gasket?

Now, the same team that implanted electrodes into Murphy’s brain is halfway through a five-year, $65 million research effort funded by the Defense Advanced Research Projects Agency to use the same technology to tackle some of the trickiest psychiatric disorders on the books. The goal is ambitious. DARPA is betting that the research teams it is funding at Mass. General and UCSF will uncover working therapies for not just one disorder, but many at once. And in developing treatments for schizophrenia, PTSD, traumatic brain injury, borderline personality disorder, anxiety, addiction and depression, their work also aims to completely reframe how we approach mental illness, shedding new light on how it flows through the brain.

“This is a radical departure from traditional neuropsychiatric illness treatment,” said Justin Sanchez, the director of DARPA’s Biological Technologies Office. “We’re talking about being able to go directly to the brain to treat people. That’s transformative.”

Unfortunately, it’s not quite as simple as all that.

For starters, psychiatric illnesses are complicated, and often not all that well understood in terms of where they exist in the brain. For more than a decade, DBS has been used in patients with Parkinson’s disease, but targeting the brain’s motor cortex to manage Parkinson’s violent trembling is a lot less complicated than targeting, say, depression. A diagnosis of major depressive disorder requires that a person exhibit five of nine symptoms, but two people could be depressed and have almost no symptoms in common. That means that for those two people, treating depression with deep brain stimulation might require stimulating entirely different regions of their brain. And there is still disagreement about what those regions even are.

Then there is the array of ethical questions that brain technologies like DBS inspire. Does inserting a chip into someone’s brain to mediate their brain circuitry change their identity? Might it, eventually, lead to the ability to simply treat ourselves when feeling blue, a sort of high-tech take on Aldous Huxley’s soma? Could you use a DBS device to hack into someone’s brain? Or control them? Or enhance them? Is it potentially dangerous in the wrong hands?

Rumours have swirled that DARPA’s real goal in all this research is to create enhanced super soldiers. The agency has several other brain-computer interface projects, which seek not just to use chips to treat mental illness, but also to restore memories and movement to battle-wounded soldiers. A 2015 book about the history of DARPA, “The Pentagon’s Brain,” suggested that government scientists hope that implanting chips in soldiers will eventually unlock the secrets of artificial intelligence, allowing us to give machines the kind of higher-level reasoning that humans can do, or letting soldiers perform feats like waging war using their thoughts alone. DARPA, though, has maintained that its main goal is to develop therapies for the many thousands of soldiers and veterans with wounded brains.

An X-ray of a monkey’s head in which neuroscientist Jose Delgado implanted electrode arrays in the frontal lobes and the thalamus. Image: Physical Control of the Mind by Jose Delgado.

Murphy was among the first mental health patients to be successfully treated using DBS, but the idea that we might use electrical signals to right our sometimes faulty wiring is by no means a new one. Beginning in the 1950s, a Yale University neuroscientist named Jose Delgado implanted radio-equipped electrode arrays—he called them “stimoceivers”—into cats, monkeys, bulls and even humans. His work demonstrated that electrically stimulating the brain could elicit movement and, on occasion, even particular emotions.

In one now-famous experiment, Delgado stimulated the temporal lobe of a young epileptic woman while she calmly played the guitar, prompting her to react by violently smashing the guitar against the wall in rage. Less sensational, but more promising for clinical purposes, was Delgado’s finding that stimulating a part of the human brain’s limbic region called the septum could evoke euphoria strong enough to counteract depression and even pain.

In 1970, The New York Times Magazine hailed Jose Delgado as the “impassioned prophet of a new ‘psychocivilized society’ whose members would influence and alter their own mental functions.” It also called his work “frightening.” That work eventually became engulfed in controversy. Strangers accused him of having secretly implanted stimoceivers into their brains. Delgado, who was Spanish, left the U.S. shortly after Congressional hearings in which he was accused of developing “totalitarian” mind-control devices. His work receded into the archives of history.

More recent forays into deep brain stimulation began in 1987, when a French neurosurgeon named Alim Louis Benabid was preparing to remove a piece of the thalamus in a patient who suffered from severe tremors, a then-common practice known as lesioning that aimed to calm problematic areas of the brain by surgically damaging them. While probing the thalamus to ensure he didn’t accidentally remove something crucial, he inadvertently discovered that jolts of electricity could stop the tremors, no brain damage necessary. A little more than a decade later, the U.S. Food and Drug Administration approved DBS for use in patients with Parkinson’s disease. Today, over 100,000 Parkinson’s patients have tiny chips in their brains to control their symptoms. Parkinson’s is still the most common use of DBS. In 2009, the FDA approved a humanitarian exemption to allow patients with severe obsessive-compulsive disorder to receive implants. All other uses of DBS are considered experimental.

Case studies of patients who have received the treatment have shown that those implants sometimes have severe side effects. In one case study, a 43-year-old man suffering from debilitating Tourette’s syndrome received DBS. His doctors targeted well-known areas of the brain considered safe for treatment in order to relieve his tics. And it worked. But a year after the operation, he began to dissociate from his previous self. Doctors observed that increasing the amount of electrical stimulation in his brain resulted in him “anxiously crouching in a corner, covering his face with his hands” and speaking “with a childish high-pitched voice.” When it was decreased, he went back to normal, with little memory of what had happened. A 2015 review of DBS as a treatment for Tourette’s found that these patients seem more likely to experience post-DBS complications, but ultimately concluded that the treatment still seemed promising, citing its successes.

Another study found that 20% of 29 Parkinson’s patients reported experiencing an altered body image due to a DBS brain implant, telling researchers things like “I feel like a machine.” In some cases, DBS seems to bring on side effects like a decline in word fluency and verbal memory, depression, suicidal tendencies, anxiety and mania. In other cases, like Murphy’s, though, there seem to be no changes in personality at all.

A common argument is that DBS, unlike a lobotomy, can be turned off by switching off the electric current flowing to the brain. A patient could always simply let the battery run out. But some evidence suggests it does cause long-term, irreversible effects, like damage to brain tissue. The full extent of those effects is as yet unknown.

For patients like Murphy, for whom depression was a debilitating life suck, those risks might be a worthy trade-off. But interest in using DBS to treat all manner of conditions is growing. In addition to disorders like depression and Tourette’s syndrome, it has been used to treat chronic pain, headaches, morbid obesity and even writer’s cramp that had not responded to other treatments. The controversial Italian neurosurgeon Sergio Canavero has made the case for using psychosurgery procedures like DBS on criminals and drug addicts, reasoning that “psychopathic behaviour is a purely biological epiphenomenon and can be induced.”

“With any treatment of any brain disease we risk trying to make everyone the same, and treat any variation from the norm as sickness,” Karen Rommelfanger, a neuroethicist at Emory, told Gizmodo. “We want to have magical thinking. But are we going to eradicate depression? No, and we shouldn’t. Being human means the full spectrum of experience.”

Doctors who treat patients with intractable conditions make the case that DBS is a much-needed treatment, used only for patients for whom it is a last resort. Many neuroethicists, though, counter that its negative effects are still poorly studied and often downplayed in both the academic literature and the press. “Is the extreme that we have a kind of neuro-eugenics with only one correct brain? Well, yeah,” said Rommelfanger. “We are already moving towards a right way of being in society at large. That’s kind of what consumer culture relies on.”

Really, at this point, it’s hard to know what might happen. Gage is often trotted out as the cautionary tale of what can go wrong when messing with the brain. But recent historical work has begun to suggest that, after his accident, he eventually returned to a basically normal life, weird personality tics and all. One scientist who studied him throughout his life observed that he “quite recovered in his faculties of body and mind.” A recent book about Gage suggests that, over generations, his story was embellished into the legend of a man who suffered a brain injury and saw his humanity vanish. Instead, it may really be a tale of the brain’s incredible ability to heal itself. Perhaps a more immediate risk, though, is that deep brain stimulation will simply not be as effective as we dream it will be.

A monkey in which neuroscientist Jose Delgado implanted electrode arrays. Image: Physical Control of the Mind by Jose Delgado.

Dr Emad Eskandar, a neurosurgeon at Mass. General and one of the lead researchers on the DARPA project, has been working on using DBS to treat mental illness for over a decade. He was the one who implanted those two electrodes into Liss Murphy back in 2006. But while the treatment seemed to work for Murphy and many other patients, a clinical trial revealed a significant placebo effect. In a study of 30 people conducted in the mid-2000s, participants who received DBS did not improve at a rate much better than those who did not, and the FDA halted the trials.

Eskandar told me that they eventually realized that they were thinking about it all wrong. “Depression is not one thing,” he said. “It sounds obvious in retrospect, but at the time it really wasn’t.” That was the aha moment that moved them to reframe their research entirely. Instead of trying to treat a psychiatric diagnosis, like depression, they decided to focus on treating the particular symptoms that a person exhibited.

“It’s much more tangible for us to measure things like ‘Are you cognitively flexible or rigid? Are you emotionally flat?’” he said. Two years in, their work has identified patterns of activity in certain areas of the brain that seem to correlate with specific traits, but they still need to home in on exactly which frequency band is the right signal to target.

One recent revelation at Mass. General was that cognitive flexibility, decision making and approach-avoidance—traits associated with several disorders—all map to a structure near the centre of the brain known as the striatum. Luckily, it is a region of the brain already known to be safe for electrical stimulation. Some traits, though, are easier to locate in the brain than others. Impulsivity, for example, a major trait in most people with addiction, is easier to pinpoint than symptoms like fatigue or physical pain.

“The 30,000-foot view is that we have pretty good data for the set of domains we are treating,” said Darin Dougherty, a psychiatrist at Mass. General and Eskandar’s long-time collaborator. Ahead, though, are still likely years more of fine-tuning. Their second hurdle, on top of figuring out where in the brain to target, will be to design a plan for how best to stimulate that spot.

Murphy’s implant is what’s known as “open loop”: her electrodes send signals to her brain, but the brain isn’t sending any signals back. Her implant works in some ways much like a drug, delivering a single, constant electrical stimulation, albeit one targeted at a specific area of the brain.

In hopes of targeting the brain more precisely, the Mass. General team has enlisted Boston’s Draper to design a “closed loop” implant to replace the old system. A closed-loop system would work much more like the brain itself, both sending information to and receiving it from multiple sites in the brain in a natural, dynamic fashion. This would allow the electrodes to fire off a signal only when necessary, meaning patients would receive treatment only when their brains are sending out the signal responsible for unwanted behaviour. “What’s turning out to be most important for us is timing,” said Alik Widge, the engineering lead for the DBS project. “If you hit the right region at just the right moment you can nudge a decision. It’s all about knowing when the brain is in the right state.”

Last November, I visited Mass. General, where Widge showed me the fridge-sized machine that housed the algorithms behind the team’s DBS technology. Draper will have to figure out a way to fit those complex algorithms onto a device smaller than a cellphone. With the new system, the entire DBS unit, including rechargeable batteries, will be implanted on the back of the skull. The implant will contain five electrodes, with 64 points of contact allowing them to target the brain with incredible geographic specificity. Those electrodes will gather data from the brain, process it, and then administer the appropriate dose of stimulation accordingly.

In January, the FDA gave the Mass. General team approval to, for the first time, hook a prototype up to a patient. Right now it’s still about the size of a brick, far too big to implant permanently. The plan is to hook it up to a patient and test it temporarily, at first for a few hours, and eventually for a few days. The goal is that by the end of DARPA’s five-year contract, they will have both a device and a protocol ready to be put to the test in an FDA clinical trial.

Widge told me that he imagines their device one day being sophisticated enough that patients could control some settings via an app, giving them control over how much psychiatric assistance they receive on a day-to-day basis. Listening to patients like Murphy describe their experience—a sudden lightness, an immediate surge of warmth—it’s hard not to wonder whether, in tweaking a person’s circuitry, we aren’t also altering something at their core. Murphy, though, disagrees. She actually finds the term “cyborg” offensive. “People think that when you have something implanted, it changes who you are,” she told me. “It’s like another body part. It’s just part of me. The device didn’t change anything about who I am.”– Kristen V. Brown