AI and brain interfaces may be about to change how we make music

When Yamaha demonstrated an AI that let a dancer play a piano with his movements in a Tokyo concert hall in November 2017, it was the latest example of the ways computers are increasingly getting involved in music-making. We’re not talking about the synthesizers and other CPU-based instruments of contemporary music. We’re talking about computers’ potential as composers’ tools, or as composers themselves.

Yamaha’s use of AI is quite different. In the recent performance, world-renowned dancer Kaiji Moriyama was outfitted with electrodes on his back, wrists, and ankles and set free to express himself as AI algorithms converted his movements into musical phrases, transmitted to Yamaha’s Disklavier piano as MIDI messages. (MIDI, the Musical Instrument Digital Interface, is a standard protocol through which electronic instruments can be controlled.)
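
To make the plumbing concrete, here is a minimal sketch, in Python with the mido library, of how motion data might become MIDI note messages for a player piano. The motion-to-note mapping is invented for illustration; Yamaha has not published its algorithm, which draws whole phrases from a database rather than single notes.

```python
# Illustrative only: map a normalized motion-intensity reading to a MIDI
# note and send it to a connected instrument (e.g., a Disklavier exposed
# as a MIDI output). Uses the mido library; the mapping is a made-up example.
import time
import mido

PENTATONIC = [60, 62, 64, 67, 69]  # C-major pentatonic, as MIDI note numbers

def motion_to_note(intensity: float) -> int:
    """Map motion intensity in [0.0, 1.0] to a pitch: bigger gesture, higher note."""
    index = min(int(intensity * len(PENTATONIC)), len(PENTATONIC) - 1)
    return PENTATONIC[index]

with mido.open_output() as port:              # default MIDI output port
    for intensity in (0.1, 0.5, 0.9, 0.3):    # stand-in for live sensor data
        note = motion_to_note(intensity)
        port.send(mido.Message('note_on', note=note, velocity=80))
        time.sleep(0.25)
        port.send(mido.Message('note_off', note=note))
```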

[Diagram: AI music system (Yamaha Corporation)]

Yamaha’s AI, which is still in development, worked from a database of linked musical phrases, selecting melodies based on Moriyama’s motions and sending them to the instrument. Moriyama was accompanied onstage by the Scharoun Ensemble of the Berlin Philharmonic.

We’ve written previously about other musical AI, such as the web-based platform Amper, which composes passages based on descriptors of style, instrumentation, and mood; singer/songwriter Taryn Southern used Amper as her primary collaborator in writing an album. Another avenue being explored is the use of brain-computer interfaces (BCIs) that allow wearers to think music into existence. It’s a fascinating prospect for anyone, but it’s especially promising for people whose physical limitations make creating music difficult or even impossible otherwise.

Certain electroencephalogram (EEG) signals correspond to known brain activities, such as the P300 ERP (for “event-related potential”), which signifies a person’s reaction to a stimulus. The P300 has previously been used in BCI applications for spelling, operating local environmental controls, browsing the web, and painting. In September 2017, researchers led by BCI expert Gernot Müller-Putz of TU Graz’s Institute of Neural Engineering published research in PLOS ONE describing their “Brain Composer” project, which leveraged the P300 to bring musical ideas directly from composers’ minds to notated sheet music. The group works in collaboration with the MoreGrasp and “Feel Your Reach” projects.
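
The Brain Composer’s actual pipeline is more sophisticated, but the core P300 idea can be sketched simply: flash each candidate item repeatedly, average the EEG that follows its flashes, and pick the candidate whose average shows the strongest deflection roughly 300 ms after the stimulus. A toy version, with an assumed sampling rate and window:

```python
# A toy sketch of P300-style selection, not the TU Graz pipeline: each
# candidate (a letter, a note) flashes repeatedly, and the system picks the
# candidate whose flashes evoke the largest average deflection ~300 ms later.
import numpy as np

FS = 250                                      # sampling rate in Hz (assumed)
EPOCH = int(0.8 * FS)                         # 800 ms of EEG after each flash
P300 = slice(int(0.25 * FS), int(0.45 * FS))  # ~250-450 ms window

def score_candidate(eeg: np.ndarray, flash_times: list) -> float:
    """Average the EEG following every flash of one candidate and return the
    mean amplitude in the P300 window. Assumes all flashes end well before
    the end of the recording."""
    epochs = np.stack([eeg[t:t + EPOCH] for t in flash_times])
    erp = epochs.mean(axis=0)                 # averaging suppresses noise
    return float(erp[P300].mean())

def select(eeg: np.ndarray, flashes_by_candidate: dict):
    """Pick the candidate (e.g., a note) with the strongest P300 response."""
    scores = {c: score_candidate(eeg, t) for c, t in flashes_by_candidate.items()}
    return max(scores, key=scores.get)
```

Averaging across repeated flashes is what makes the P300 usable despite EEG noise; more repetitions trade speed for reliability, which is why spelling or composing this way is slow but robust.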

The researchers’ first step was training the BCI application to recognize alphabetical letters before moving on to note pitch and length, as well as notation elements such as rests and ties. In a video accompanying the research, the team demonstrates the successful outcome of just 90 minutes’ work for a motor-impaired subject.

This is all exciting stuff — and a godsend for musical souls with physical limitations — even if the results are a little odd, as in Yamaha’s case. (The BCI example sounds remarkably like the theme from Fringe.) We’ve been augmenting our natural musical capabilities with technology ever since we picked up our first rock — and certainly by the time we were honking steampunk-looking saxophones. We should have no issue adding AI and BCIs to our toolbox.

If AI comes up with music we might not, that’s fine. The workings of music remain pretty mysterious in any event, so here’s an intriguing thought: though many of us basically prefer music that’s catchy and sonorous, the stuff that really gets us in the gut tends to have something of the unexpected to it, a surprising dissonance or rhythm, an odd “hair” out of place, that makes the moment leap from our speakers or the stage and into our lives as more of an experience than a piece of art, leaving us a little startled and even moved. So if AI can beat a flesh-and-blood chess player by not thinking like a human, imagine what we’re about to hear.

– ROBBY BERMAN

Brain-Computer Interfaces Are Already Here

The mind-reading technology touted by Zuckerberg and Musk may soon help quadriplegics move more than a cursor

BrainGate, a consortium of researchers from universities including Stanford, Brown, and Case Western Reserve, has given a dozen patients the ability to control a cursor just by thinking about it. SOURCE: MARK HANLON/STANFORD UNIVERSITY

For the first 54 years of his life, Dennis DeGray was an active guy. In 2007 he was living in Pacific Grove, Calif., not far from the ocean and working at a beachside restaurant. He surfed most mornings. Then, while taking out the trash one rainy night, he slipped, fell, and hit his chin on the pavement, snapping his neck between the second and third vertebrae. DeGray was instantly rendered, as he puts it, “completely nonfunctional from the collarbone south.” He’s since depended on caregivers to feed, clothe, and clean him and meet most any other need. He had every expectation this would be the case for the rest of his life.

“My first six months were really something,” DeGray, now 64, says ruefully from his single room in a Menlo Park nursing facility, decorated with fairy lights, a National Lampoon poster, and a 6-foot-tall plastic alien. “And then the next two years were also something. And, frankly, this morning, it’s still something.” He operates his motorized wheelchair by blowing into a straw. Most of his days consist of TV and trips to the local park, the library, and neighborhood restaurants, where familiar staff help him eat.

For the past year, though, the routine has been broken on Mondays and Wednesdays. Around noon, two or three scientists arrive at the nursing facility. They roll out a rack of computer equipment parked in a corner of DeGray’s room and plug a cable into a socket on the top of his head. Once he’s connected, a 1/6-inch-square silicon chip in his motor cortex allows him to move a cursor on a computer screen just by thinking about it.

This so-called brain-computer interface, or BCI, provides a way to directly measure neuron activity and translate it into information or action. To manipulate the cursor on his screen, DeGray imagines that his hand is resting on a ball on a table and that he’s trying to roll it in one of four directions: left, right, toward, away. When he first tried the system in September 2016, “it was like a bumblebee in the wind, bouncing around,” he says. Soon, though, he got the hang of it, and the researchers used his efforts to teach the computer to better interpret his brain activity. Today, with a keyboard laid out on the screen, DeGray can bang out nine and a half words per minute. If that doesn’t sound speedy in touch-typing terms, well, the Wright Flyer wasn’t a particularly fast airplane.
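
For intuition, here is a deliberately simplified sketch of that calibration step: record firing rates while the user imagines pushing the ball in known directions, fit a linear map from rates to intended cursor velocity, then use the map to drive the cursor. Real BrainGate decoders use more elaborate filtering; the shapes and names below are illustrative.

```python
# A simplified stand-in for BCI decoder calibration: least-squares fit of a
# linear map from neural firing rates to intended 2-D cursor velocity.
import numpy as np

def fit_decoder(rates: np.ndarray, velocities: np.ndarray) -> np.ndarray:
    """rates: (n_samples, n_channels) spike counts per time bin;
    velocities: (n_samples, 2) intended (vx, vy) during calibration,
    e.g., while the user imagines rolling the ball in a cued direction."""
    X = np.hstack([rates, np.ones((len(rates), 1))])   # add a bias column
    weights, *_ = np.linalg.lstsq(X, velocities, rcond=None)
    return weights

def decode(rates_now: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Map the current firing-rate vector to a (vx, vy) cursor command."""
    return np.append(rates_now, 1.0) @ weights
```

In practice this loop runs continuously: the user practices, the decoder is refit, and performance climbs, which is how a “bumblebee in the wind” becomes nine and a half words per minute.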

A cable leads from a socket installed on the top of a patient’s head to a rack of computer equipment that communicates with a nearby monitor.

DeGray has been working with BrainGate, a consortium of researchers from the likes of Stanford, Brown, and Case Western Reserve University that’s successfully treated a dozen patients. The BrainGate team is among a growing set of university scientists, government agencies, and startups trying to give humans the ability to sense, control, and communicate with the outside world through the power of thought.

So far these advances are limited to controlled settings, but there’s big money dedicated to getting them out into the world faster than most people imagined when DeGray broke his neck. Bryan Johnson, the founder of the payments service Braintree, has committed $100 million to a BCI startup called Kernel. Facebook Inc. is developing a skullcap it says will allow users to mentally type their thoughts at 100 words per minute. Tesla Inc. and SpaceX Chief Executive Officer Elon Musk are backing a similar technology from startup Neuralink that he says supports his vision of a “closer merger of biological intelligence and digital intelligence.” The Pentagon’s research and development arm, the Defense Advanced Research Projects Agency (Darpa), is funding nine BCI projects it aims to bring to the U.S. Food and Drug Administration for clinical trials in three to five years. Justin Sanchez, director of Darpa’s Biological Technologies Office, predicts that medical device makers will be able to apply BCI hardware to a wide range of projects. DeGray is focused on one in particular: bypassing damaged nerves to reconnect his brain and body. “Ten years from now,” he says, “a guy is going to fall down just like I did, and in short order, he’ll wake up in the morning, and someone will put his exoskeleton on, and he’ll get up and walk to Starbucks.”

The wiring together of brains and computers is a saddle-worn sci-fi trope. Think of William Gibson’s hacker heroes “jacking in” to cyberspace, or the captive humans plugged into the Matrix, or RoboCop. In practice, though, the brain is a lot tougher to hack. It contains 100 billion microscopic neurons, each connected to thousands of others. While some parts of the motor and sensory cortices correspond to parts of a person’s body, most elements of the brain, including the areas responsible for language and memory, aren’t as intuitively organized. In fact, we hardly understand them at all.

The least invasive tool for measuring brain activity is the electroencephalogram, or EEG, which works through an array of electrodes fastened to the scalp and measures the strength of the electric field in each spot. This kind of gear is safe, cheap, and imprecise, best suited to applications that ask researchers only to distinguish between the brain activity required for sharply contrasting thoughts: left vs. right, up vs. down. To restore function to quadriplegics, BCI devices need vastly better precision and speed. For now, the only way to achieve that is by affixing sensors directly to the cerebral cortex.
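
In the left-vs.-right case, one classic signature is that imagining a hand movement suppresses 8-12 Hz (“mu”) rhythm power over the opposite hemisphere’s motor cortex. A sketch of such a classifier, assuming already-segmented epochs, a sampling rate, and placeholder electrode indices:

```python
# Sketch of a two-class EEG (motor imagery) classifier: compare mu-band
# power over left and right motor cortex. The channel indices and sampling
# rate are assumptions, not a reference implementation.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression

FS = 250        # sampling rate in Hz (assumed)
C3, C4 = 0, 1   # placeholder indices for electrodes over the motor cortex

def mu_power(epoch: np.ndarray) -> np.ndarray:
    """Mean 8-12 Hz power at C3 and C4 for one (n_channels, n_samples) epoch."""
    freqs, psd = welch(epoch, fs=FS)
    band = (freqs >= 8) & (freqs <= 12)
    return np.array([psd[C3, band].mean(), psd[C4, band].mean()])

def train_classifier(epochs: list, labels: list) -> LogisticRegression:
    """epochs: (n_channels, n_samples) arrays; labels: 0 = left, 1 = right."""
    features = np.stack([mu_power(e) for e in epochs])
    return LogisticRegression().fit(features, labels)
```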

Cutting into the brain to insert electrodes is about as tricky and dangerous as you’d think. (Maybe a little more so.) But people have been doing it, albeit with some public outcry, since at least the 1950s, when the controversial neurophysiologist José Rodríguez Delgado experimented with the cortices of epileptics and schizophrenics. In the ’90s, neuroscientist Phil Kennedy implanted electrodes in the brains of subjects suffering from locked-in syndrome, a paralysis of almost all voluntary muscles besides those that control the eyes, so they could type out messages. (The FDA halted Kennedy’s work because of safety concerns.) Around that time, Brown professor John Donoghue began developing neural interfaces to study how the brain turns thought into action, work that ultimately led to the BrainGate project.

“The field is progressing very, very quickly,” says David Borton, head of the Brown Neuromotion Laboratory. The past year has been particularly impressive. Researchers at the University of Pittsburgh Medical Center connected touch sensors from a robot’s fingertips to a paralyzed man’s sensory cortex so he could feel what it was touching. At Case Western, scientists linked a paralyzed man’s motor cortex to a computer that electrically stimulated muscles in his arm, enabling him to bring a forkful of food from a dish to his mouth. At Brown, Borton’s team implanted electrodes and a wireless transmitter in a monkey’s motor cortex and connected it to a receiver wired to the animal’s leg, restoring its walking motion. Taken together, these procedures provide a road map for artificial workarounds of nervous-system malfunctions caused by accident or disease. The minds of quadriplegic patients could be reconnected to their own muscles or patched into machines. Borton says it’s a question of when, not if.

Just as Lasik surgery has gone from Kubrickian nightmare to the sort of thing you get done over a lunch hour, brain implants could come to be a reasonable intervention for conditions such as Parkinson’s, epilepsy, or chronic pain. They might even be used to improve healthy brains by adding memory storage or enabling communication by thought alone. Stare at the ceiling long enough, and it’s easy to worry about the darker possibilities of BCI. The kind of cybernetic fusion that gives us a doorway out of our bodies and minds could also give other people a way in. Once tiny robots can change people’s moods, what can’t they change? What does spam, social media addiction, or hacking look like inside your brain?

That’s a ways off, as DeGray notes. He’s been reading up on BCI research and is convinced that he’ll eventually be able to do a lot more than type nine and a half words a minute. “We’re building the foundation, learning how to directly control things from the cortex,” he says.

A keyboard laid out on the screen allows the patient to type.

Already, several companies, including Raytheon Co. and Lockheed Martin Corp., have developed powered exoskeletons that augment the strength of healthy bodies. If scientists can develop sensors and actuators that allow quadriplegics to feel and manipulate objects, they can integrate human and exoskeleton into a fully functioning cyborg. That will be no small feat—neurologists don’t fully understand how our brains seamlessly coordinate sensation and action—but one day paralysis will effectively be a solved problem.

The technology will also extend the distance between user and machine. Pressure-sensitive pads on a robot’s fingertips could feed into the sensory cortex of a user in the next room, the next state, or half a world away, and motor information traveling the other way could guide the robot hand to act. “It doesn’t really matter where the brain is located,” DeGray says. “I’ll be able to fly like a bird at some point. Literally, the sky’s the limit.”

DeGray is ready for a change. His breath-operated wheelchair can get him the 2½ miles from his bedroom to BrainGate’s Stanford lab and back, but the chair’s best feature, he says, lets him hike himself up an extra 13 inches by blowing into the straw to manipulate a digital menu on the chair’s screen. That 13-inch difference means “I can sit at a bar and watch the soccer game on TV and talk to the guy next to me, just like another guy.” Imagine, he says, stripping away the rest of the social barriers he feels. The potential to reconnect with people one-to-one, I suggest, could be enormous. “Enormous,” DeGray says. “All capital letters, double exclamation points at the back end. Enormous.”

– Jeff Wise

 

Brain-Machine Interfaces, Artificial Intelligence and Neurorights

A New Phase of BMI Research

Progress in neurotechnology is critical to improving our understanding of the human brain and the delivery of neurorehabilitation and mental health services at the global level. We are now entering a new phase of neurotechnology development characterized by higher and more systematic public funding (e.g. through the US BRAIN Initiative, the EU-supported Human Brain Project, or the government-initiated China Brain Project), diversified private-sector investment (among many others, through neuro-focused companies like Kernel and Neuralink), and the increased availability of non-clinical neuro devices. Meanwhile, advances in the interplay between neuroscience and artificial intelligence (AI) are rapidly augmenting the computational resources of neuro devices. As neurotechnology becomes more socially pervasive and computationally powerful, several experts have called for preparing the ethical terrain and charting a route ahead for science and policy [1-3].

Ethical Challenges

Within the neurotechnology spectrum, brain-machine interfaces (BMIs) are of particular relevance from a social and ethical perspective: their capacity to establish a direct connection between human neural processing and artificial computation has been described by experts as “qualitatively different” [1] and is hence believed to raise “unique concerns” [2].

Among these concerns, privacy is paramount. The informational richness of brain recordings means they can encode highly private and sensitive information about individuals, including predictive features of their health status and mental states. Decoding such private information is anticipated to become increasingly easy in the near future, owing to coordinated advances in sensor capability, the spatial resolution of recordings, and machine-learning techniques for pattern recognition and feature extraction [2].

Three major types of privacy risk seem to be associated with BMIs: incidental disclosure of private information, unintended data leakage, and malicious data theft [4]. Given the intimate link between neural recordings, on the one hand, and mental states and predictors of behavior, on the other, scholars have argued that the privacy challenges raised by BMIs are more complex and ethically sensitive than conventional privacy issues in digital technology, and have urged a domain-specific ethical and legal assessment. They have called this domain mental privacy [5].

The increasing use of machine learning and artificial intelligence in BMIs also has implications for the notion of agency. For example, researchers have hypothesized that when BMI control relies partly on AI components, it might become hard to discern whether the resulting behavioral output is genuinely performed by the user [6], possibly affecting the user’s sense of agency and personal identity. This hypothesis has recently obtained preliminary empirical corroboration [7]. It should be noted, however, that while AI could obfuscate subjective aspects of personal agency, an AI-enhanced BMI, considered as a whole, can massively augment the user’s capability to act in a given environment, especially when used for device control by a patient with severe motor impairment.

With the increase in non-clinical uses of BMI, an additional ethical challenge will soon be neuroenhancement. While clinical BMI applications aim to restore function in people with physical or cognitive impairments, such as stroke survivors, neuroenhancement applications could, in the near future, produce higher-than-baseline performance in healthy individuals.

Do We Need Neurorights?

As I stated elsewhere [8], the ethical challenges posed by BMI and other neurotechnologies urge us to address a fundamental societal question: determining whether, or under what conditions, it is legitimate to gain access to or to interfere with another person’s or one’s own neural activity.

This question needs to be asked at various levels, including research ethics and technology governance. In addition, since neural activity is widely seen as the critical substrate of personhood and legal responsibility, ethicists and lawyers have recently proposed addressing this question also at the level of fundamental human rights [5].

A recent comparative analysis on this topic concluded that existing safeguards and protections might be insufficient to adequately address the specific ethical and legal challenges raised by advances at the brain-machine interface [5]. After reviewing international treaties and other human rights instruments, the authors called for the creation of new neuro-specific human rights. In particular, four basic neurorights have been proposed.

First, a right to mental privacy should protect individuals from the three types of privacy risk delineated above. In its positive connotation, this right should allow individuals to seclude their neural information from unconsented access and scrutiny, especially information processed below the threshold of conscious perception. The authors have argued that individuals might be more vulnerable to breaches of mental privacy than to breaches in other domains of information privacy because of their limited voluntary control over their brain signals [5].

Second, a right to psychological continuity might guide the responsible integration of AI into BMI control and preserve people’s sense of agency and personal identity (often defined as the continuity of one’s mental life) from unconsented manipulation. Users of BMIs should retain the right to be in control of their behavior, without experiencing “feelings of loss of control” or even a “rupture” of personal identity [7]. At the same time, the right to psychological continuity is well suited to protect against unconsented interventions by third parties, such as unauthorized neuromodulation. This principle might become particularly important in the context of national security and military research, where personality-altering neuro applications are currently being tested for warfighter enhancement and other strategic purposes [9].

When the unconsented manipulation of neural activity results in physical or psychological harm to the user, a right to mental integrity might be enforced. This right is already recognized by international law (Article 3 of the EU’s Charter of Fundamental Rights) but is codified as a general right to access mental health services, with no specific provision about the use or misuse of neurotechnology. Therefore, a reconceptualization of this basic right should aim not only at protecting from mental illness but also at demarcating the domain of legitimate manipulation of neural processing.

Finally, a right to cognitive liberty should protect the fundamental freedom of individuals to make free and competent decisions regarding the use of BMIs and other neurotechnologies. Based on this principle, competent adults should be free to use BMIs for either clinical or neuroenhancement purposes, as long as they do not infringe on other people’s liberties. At the same time, they should have the right to refuse coercive applications, including implicitly coercive ones [10].

Uncertainty Ahead

This proposal for creating neuro-specific rights has recently been endorsed by impact-analysis experts [11] and by leading neuroscientists and neurotechnology researchers [1], and earlier this year it was echoed by “A Proposal for a Universal Declaration on Neuroscience and Human Rights” submitted to the UNESCO Chair of Bioethics [12].

However, many questions still need to be addressed. First, it remains open whether neurorights should be seen as brand-new legal provisions or as evolutionary interpretations of existing human rights. Similarly, it is unclear to which actor neurorights should be ascribed: the brain itself, as was recently proposed [11], or the whole person. Finally, grey zones of legal and ethical provision should be further explored. For example, while a right to cognitive liberty might protect the free choices of competent adults, it is questionable whether parents should have the right to neuroenhance their children, or whether family representatives should have the right to refuse clinically beneficial neurointerventions on behalf of cognitively disabled patients. Addressing these kinds of moral dilemmas will require an open and public debate involving not only scientists and ethicists but also ordinary citizens.

In addition, responsible ethical and legal impact assessments should be based on scientific evidence and realistic time frames, avoiding fear-mongering narratives that might delay scientific innovation and obliterate the benefits of BMI for the population in need.

A Roadmap for Responsible Neuroengineering

When addressing these fundamental questions, ethical evaluation should be proactive rather than reactive. Instead of simply reacting to ethical conflicts raised by new products, ethicists have a duty to work together with neuroscientists, neuroengineers, and clinicians to anticipate ethical challenges and promptly develop proactive solutions. A framework for proactive ethical design in neuroengineering has recently been proposed [13] and could be applied to various areas of neurotechnology research.

In addition, calibrated policy responses should take into account issues of fairness and equality. BMIs should be fairly distributed and should not exacerbate pre-existing socioeconomic inequalities. Access to BMI-mediated health solutions should be as widespread as possible, and open-development initiatives, including hackathons, open-source platforms (e.g. OpenBCI), and citizen-led data-sharing projects, should be incentivized. In parallel, the growing involvement of for-profit corporations in BMI development urges us to assess the democratic accountability of company-driven technology development. In a not-too-distant future where BMIs will likely be widespread, there will be an increasing need to maintain citizens’ trust in data donation. This could be achieved through clear rules for data collection and secondary use, enhanced data-protection infrastructures, public engagement, and neurorights enforcement.

References

  1. Yuste, R., Goering, S., Bi, G., Carmena, J.M., Carter, A., Fins, J.J., Friesen, P., Gallant, J., Huggins, J.E., and Illes, J.: ‘Four ethical priorities for neurotechnologies and AI’, Nature News, 2017, 551, (7679), pp. 159
  2. Clausen, J., Fetz, E., Donoghue, J., Ushiba, J., Spörhase, U., Chandler, J., Birbaumer, N., and Soekadar, S.R.: ‘Help, hope, and hype: Ethical dimensions of neuroprosthetics’, Science, 2017, 356, (6345), pp. 1338-1339
  3. Ienca, M.: ‘Neuroprivacy, neurosecurity and brain-hacking: Emerging issues in neural engineering’ (Schwabe, 2015), pp. 51-53
  4. Ienca, M., and Haselager, P.: ‘Hacking the brain: brain-computer interfacing technology and the ethics of neurosecurity’, Ethics and Information Technology, 2016, 18, (2), pp. 117-129
  5. Ienca, M., and Andorno, R.: ‘Towards new human rights in the age of neuroscience and neurotechnology’, Life Sciences, Society and Policy, 2017, 13, (1), pp. 5
  6. Haselager, P.: ‘Did I Do That? Brain-Computer Interfacing and the Sense of Agency’, Minds and Machines, 2013, 23, (3), pp. 405-418
  7. Gilbert, F., Cook, M., O’Brien, T., and Illes, J.: ‘Embodiment and Estrangement: Results from a First-in-Human “Intelligent BCI” Trial’, Science and Engineering Ethics, 2017
  8. Ienca, M.: ‘The Right to Cognitive Liberty’, Sci. Am., 2017, 317, (2), p. 10
  9. Tennison, M.N., and Moreno, J.D.: ‘Neuroscience, Ethics, and National Security: The State of the Art’, PLoS Biol., 2012, 10, (3), pp. e1001289
  10. Hyman, S.E.: ‘Cognitive enhancement: promises and perils’, Neuron, 2011, 69, (4), pp. 595-598
  11. Cascio, J.: ‘Do brains need rights?’, New Scientist, 2017, 234, (3130), pp. 24-25
  12. Pizzetti, F.: ‘A Proposal for a: “Universal Declaration on Neuroscience and Human Rights”’, Bioethical Voices (Newsletter of the UNESCO Chair of Bioethics), 2017, 6, (10), pp. 3-6
  13. Ienca, M., Kressig, R.W., Jotterand, F., and Elger, B.: ‘Proactive Ethical Design for Neuroengineering, Assistive and Rehabilitation Technologies: the Cybathlon Lesson’, J. Neuroeng. Rehabil., 2017, 14, pp. 115

Marcello Ienca is a postdoctoral researcher at the Health Ethics & Policy Lab (Dept. of Health Sciences & Technology), ETH Zurich. His research focuses on the ethics of human-machine interaction as well as on responsible innovation in neurotechnology, big-data-driven research, and artificial intelligence. He was awarded the Pato de Carvalho Prize for Social Responsibility in Neuroscience and the Schotsmans Prize of the European Association of Centres of Medical Ethics (EACME). He is a Board Member of the International Neuroethics Society (INS).

Artificial Intelligence Could Hijack Brain-Computer Interfaces

Brain-computer interfaces have been around for a good while. However, recent developments have shown how BCIs could do more than just help people with disabilities. These brain-hacking devices could make us better humans in the future.

BRAIN HACKING

Ever since Tesla CEO and founder Elon Musk announced his plans to develop a brain-computer interface (BCI) through his Neuralink startup, BCI technologies have received more attention. Musk, however, wasn’t the first to propose the possibility of enhancing human capabilities through brain-computer interfacing. A number of other startups are working on a similar goal, including Braintree founder Bryan Johnson with Kernel. Even the U.S. Defense Department’s Defense Advanced Research Projects Agency (DARPA) is working on one.

Now, according to a collaboration of 27 experts—neuroscientists, neurotechnologists, clinicians, ethicists and machine-intelligence engineers—calling themselves the Morningside Group, BCIs present a unique and rather disturbing conundrum in the realm of artificial intelligence (AI). Essentially designed to hack the brain, BCIs themselves run the risk of being hacked by AI.

“Such advances could revolutionize the treatment of many conditions, from brain injury and paralysis to epilepsy and schizophrenia, and transform the human experience for the better,” the experts wrote in a comment piece in the journal Nature. “But the technology could also exacerbate social inequalities and offer corporations, hackers, governments or anyone else new ways to exploit and manipulate people. And it could profoundly alter some core human characteristics: private mental life, individual agency and an understanding of individuals as entities bound by their bodies.”

The experts used the analogy of a paralyzed man who participates in a BCI trial but isn’t fond of the research team working with him. An artificial intelligence could then read his thoughts and (mis)interpret his dislike for the researchers as a command to cause them harm, despite the man not having given such a command explicitly.

They explained it further:

Technological developments mean that we are on a path to a world in which it will be possible to decode people’s mental processes and directly manipulate the brain mechanisms underlying their intentions, emotions and decisions; where individuals can communicate with others simply by thinking; and where powerful computational systems linked directly to people’s brains facilitate their interactions with the world such that their mental and physical abilities are greatly enhanced.

CONCERNS OF ETHICS IN ARTIFICIAL INTELLIGENCE

In order to prepare for this eventuality, the Morningside Group proposed four ethical considerations that need to be addressed: privacy and consent, agency and identity, augmentation, and bias. “For neurotechnologies to take off in general consumer markets, the devices would have to be non-invasive, of minimal risk, and require much less expense to deploy than current neurosurgical procedures,” they wrote.

“Nonetheless, even now, companies that are developing devices must be held accountable for their products, and be guided by certain standards, best practices and ethical norms.” These norms become even more crucial given that, as human history suggests, “profit hunting will often trump social responsibility” in the pursuit of technology.

One of the potential uses for BCIs is in the workplace. As Luke Tang, the general manager for AI technologies accelerator TechCode, noted in a commentary sent to Futurism: “I believe the biggest vertical in which this technology has a play is in the business setting – the brain-machine will shape our future workplaces.” Concretely, BCI technologies could improve remote collaboration, increase knowledge, and enhance communication.

For the latter, Tang said: “[T]echnology that can translate your thoughts into speech or actions will no doubt prove transformative to today’s tech-enabled communication methods. Brain-machine technology can lead to a faster and more accurate flow of communication.”

It’s precisely this ability to delve into a person’s thoughts that could present a challenge for BCIs as technologies like artificial intelligence become significantly more advanced. If we’re not to lose all the potential that BCIs can offer, it’s important to weigh these considerations now. “The possible clinical and societal benefits of neurotechnologies are vast,” the Morningside researchers concluded. “To reap them, we must guide their development in a way that respects, protects and enables what is best in humanity.”

– Bryan Johnson

An Artificially Intelligent Baby Could Unlock the Secrets of Human Nature

Life Lessons

BabyX, the virtual, artificially intelligent creation of Mark Sagar and his new company, Soul Machines Ltd., looks, sounds, and acts so much like a real baby that interacting with her produces a genuine emotional response — just like the kind you get when a real baby coos and giggles at you. That’s exactly the point: BabyX makes it appealing for humans to interact with an AI, and each interaction teaches her more about what it’s like to be human.

Sagar is a force for the humanization of AI, which he believes may be important to establishing a symbiotic relationship between humans and AIs. Many AI experts argue that robots and AI systems can only realize their full potential if they become more like humans, with emotions and memories informing their behavior and decisions; those are the things that motivate us to seek out new experiences.

Sagar’s techniques in this area are radically innovative in that his detailed, artistically rendered faces mask biological models and simulations of unprecedented complexity. For example, each time BabyX smiles, she has perceived something with her “senses” that has triggered her simulated brain to release virtual endorphins, dopamine, and serotonin into her AI system. One layer of her visualized self reveals glows in the areas of her brain connected to language and pleasure when she sees words and receives praise.
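
Soul Machines has not published BabyX’s internals, but the loop described above (stimulus in, virtual neurotransmitter released, expression out) can be caricatured in a few lines. Every name, constant, and threshold here is invented for illustration:

```python
# A toy caricature, not Soul Machines' model: a rewarding stimulus raises a
# virtual "dopamine" level, the level decays over time, and crossing a
# threshold gates an expressive response such as a smile.
class VirtualNeuromodulator:
    def __init__(self, decay: float = 0.7):
        self.level = 0.0
        self.decay = decay            # fraction of the level retained per step

    def step(self, reward: float) -> None:
        """Decay the current level, then add this step's reward signal."""
        self.level = self.level * self.decay + reward

dopamine = VirtualNeuromodulator()
for stimulus in ["praise", None, None, "praise"]:
    dopamine.step(reward=1.0 if stimulus == "praise" else 0.0)
    if dopamine.level > 0.8:          # invented threshold gating the expression
        print("BabyX smiles")
```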

“Researchers have built lots of computational models of cognition and pieces of this, but no one has stuck them together,” Sagar told Bloomberg. “This is what we’re trying to do: wire them together and put them in an animated body. We are trying to make a central nervous system for human computing.” The team has begun this in earnest, creating what it describes as the world’s most detailed map of the human brain, all of it part of the larger effort of reverse-engineering the inner life of the human.

More Human Than Human

Soul Machines debuted its first AI face, Nadia, in February. Nadia, who speaks with Cate Blanchett’s voice, will work for Australia’s National Disability Insurance Agency, interacting with customers full-time on the agency’s website by early next year. The goal is to be more usable and personable than the typical text-based chatbots we encounter online. Soul Machines has 10 other trials underway with airlines, financial-services firms, and health-care providers.

Image Credit: Soul Machines, Ltd.

As the technology improves, it will have broader applications that are less reliant upon users’ proximity to a computer screen. These kinds of personable AIs will likely be part of autonomous cars, and Google parent Alphabet, Amazon, and Apple will probably want their virtual assistants to have faces. While the research could lead to far more likable, believable virtual assistants and other wonderful breakthroughs, it does raise questions about the nature of free will and what it means to enslave an intelligent being, regardless of its origins.

Would an AI stuck with customer service duty grow weary of pushing that proverbial rock up the logistical hill? Could an AI toddler be traumatized by the collective human fear of the uncanny likeness of nonhuman reactions to human emotions? Do virtual babies dream of electric cradles?

As BabyX learns to play the piano, laugh at jokes, and interact with humans, it’s easy to anthropomorphize her and to wonder whether she is self-aware. And who are we to say she isn’t? To proponents of the simulation hypothesis, a virtual baby is perhaps the only kind of baby there ever was.