Electric shock to the brain boosts memory ‘15 per cent’

The device noticed when patients struggled to learn a word

A device that gives the brain electric shocks has prompted hope for Alzheimer’s patients after an initial trial showed it could improve memory by 15 per cent. The US study was described as “promising” by British experts, but they cautioned it was too early to say whether it will lead to a treatment for dementia. The machine works by monitoring the brain for signs that a patient is not learning words properly and giving short bursts of electricity to neurons. Stimulating a specific area that processes language “reliably and significantly” boosted patients’ ability to remember words, the researchers said.

Other examples of neurostimulation technology have been used to improve sleep and curb anxiety. The experiment involved 25 patients and focused on the medial temporal lobe, which is understood to play a significant role in the formation and consolidation of new memories. The lobe was stimulated when readings from the device indicated that the participant was struggling to memorise a particular word. The patients were asked to memorise 12 common words, which appeared on a screen for 1.6 seconds.

They were then given a number of arithmetical tasks to distract them before being asked to recite the words back. Dr David Reynolds, chief scientific officer at Alzheimer’s Research UK, said: “While dementia involves a range of complex symptoms, memory problems are among the most common and can have a devastating impact on many people’s lives. Brain function depends on electrical as well as chemical signals, and as technology advances, research is beginning to investigate whether direct electrical stimulation of certain areas of the brain could help improve aspects of memory and thinking.” He added: “Finding ways to improve memory and thinking skills is a key goal in dementia research, but it has now been over 15 years since researchers developed a new drug that is able to do this.” The study, carried out at the University of Pennsylvania, was published in Nature Communications.

 

Stimulating the Brain’s Emotional Center Enhances Memory

Brief electrical stimulation of the amygdala augments links to other memory regions in the brain, raising hope for treating memory loss – Elie Dolgin

Medical illustration representing activation of the amygdala in the brain.

Cory Inman has fond memories of tending campfires at Boy Scout camp in the north Georgia mountains. A cognitive neuroscientist by training, he knew he had his amygdala to thank for those strong emotional recollections. As Inman started his postdoctoral fellowship at Emory University, in Atlanta, he began to wonder: If a warm-and-fuzzy event in his distant past could stimulate this particular almond-shaped cluster of neurons in his brain and strengthen memory formation, could an electrode implanted into the brain of someone with cognitive deficits do effectively the same thing?

MRI of the brain showing implanted electrodes.

A merge of MRI and CT scans shows electrodes targeted to the amygdala (top), anterior hippocampus (middle), and middle hippocampus (bottom).

To find out, Inman and his colleagues at Emory University, in Atlanta—including neurosurgeon Jon Willie and memory researcher Joseph Manns—recruited 14 patients with epilepsy who already had electrodes placed in their brains to detect and monitor seizures. The researchers sat each person in front of a computer screen and displayed images of everyday objects: celery stalk, clipboard, roller skate, butterfly, skeleton key, telescope, ship-in-a-bottle, basketball, park bench—160 in total.

Half of the time, at random and immediately after seeing the picture for three seconds, participants received a one-second zap of low-amplitude electrical stimulation directly to their amygdalas, delivered in eight short bursts, each 50 Hertz in frequency and at a current of 0.5 milliamps. That jolt made all the difference on memory tests administered one day later. As the researchers report today in the Proceedings of the National Academy of Sciences, participants were about 10 percent more likely, on average, to recognize an image they’d seen before if that picture had been followed by the electrical stimulation—and the effect was not linked to any emotional responses to the treatment.

This confirms the finding, previously shown only in animal models, that the amygdala plays a central role in helping convert short-term memories into long-term ones, even in the absence of emotional input. “Emotional arousal is not essential, but activation of the amygdala is,” says James McGaugh, a memory researcher from the University of California, Irvine, who was not involved in the study. “These are the first findings to show that in human subjects.”

Notably, the electrical stimulation only had an effect on patients’ recollection skills the next day, and not on their memory when it was tested immediately after they first saw the photos. According to Inman, that’s likely because of the amygdala’s central role in working with the hippocampus to make significant memories last. That process of ‘memory consolidation’ takes time—and it was something Inman’s team saw experimentally when they measured electrical signals in the brain and found that network interactions between the amygdala and other memory regions of the brain were stronger on the day-later test but not the same-day one.

Schematic representation of oscillatory activity during the one-day recognition test in the basolateral complex of the amygdala (blue), hippocampus (orange), and perirhinal cortex (purple) for objects in the stimulation condition. The oscillations depict increased theta interactions between the three regions and gamma power in perirhinal cortex modulated by those theta oscillations.

One day after brain stimulation, electrical signals in the amygdala (blue), hippocampus (orange), and perirhinal cortex (magenta) appear more in sync during memory retrieval on an image recognition test.

The benefit of the amygdala stimulation was also greatest for those who had the worst memory skills to begin with—and thus the most to gain from treatment. “This is encouraging,” says Inman, “because those are the kinds of patients we want to help.” One participant, for example, could hardly remember anything from one day to the next. Without any intervention, she could only recall around 5 percent of the images she had seen 24 hours earlier. With the electrical stimulation, her image retention went up to 37 percent—more than a seven-fold improvement.

Other research teams, including one led by neurosurgeon Itzhak Fried at the University of California, Los Angeles, had previously shown that electrical stimulation to other parts of the brain could improve people’s short-term recollections—over the course of 30 seconds to an hour after initial learning. But, says Inman, “no study has shown memory enhancement for specific events or images at longer time scales, like 24 hours later,” as his team now has. The Emory researchers are now testing whether they can make memories last even longer, perhaps even forever, by changing the way they stimulate the amygdala. If they succeed, this type of electrical stimulation therapy could conceivably one day help people with memory loss of all sorts, including Alzheimer’s disease and other kinds of dementia.

Unfortunately, “at this point,” notes Inman, “there’s no technology that exists that can stimulate the amygdala non-invasively.” But he hopes that, by studying memory in people with epilepsy today and with the advent of new technologies tomorrow, the research will open a new path to memory enhancement for those with cognitive problems of all kinds.

Artist Creates a “Factory of the Future” With Machines Controlled by Brain Waves

A new exhibit by experimental philosopher Jonathon Keats challenges participants to think about the future of work – Eliza Strickland

A woman wearing a brain-scanning headset controls a shiny aluminum machine with her neural signals.

A participant in the Mental Work exhibit uses a brain-scanning headset to make a machine crank into action.

When you sign up to labour in the “Mental Work” factory, you’re equipped with a brain-scanning headset and taught how to use it. The headset uses EEG electrodes to record your brainwaves, and the associated software can pick out specific patterns. The factory overseer explains that this brain-computer interface has been programmed to respond to a neural pattern that occurs when you imagine squeezing a ball in your hand.
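For readers curious what “picking out a pattern” from EEG might look like in practice, here is a minimal, purely illustrative sketch. It assumes the motor-imagery event (an imagined hand squeeze) is detected as a drop in mu-band (8–12 Hz) power, which is one common approach; the sampling rate, threshold, and function names are assumptions, not details of the exhibit’s actual software.

```python
import numpy as np
from scipy.signal import welch

FS = 256  # assumed EEG sampling rate in Hz

def mu_band_power(eeg_window: np.ndarray) -> float:
    """Average 8-12 Hz power of a short, single-channel EEG window."""
    freqs, psd = welch(eeg_window, fs=FS, nperseg=FS)
    band = (freqs >= 8) & (freqs <= 12)
    return float(psd[band].mean())

def imagined_squeeze(eeg_window: np.ndarray, baseline_power: float) -> bool:
    # Motor imagery typically suppresses mu power relative to rest;
    # trigger the machine only when power drops well below baseline.
    # The 0.6 threshold is an illustrative assumption.
    return mu_band_power(eeg_window) < 0.6 * baseline_power
```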

Then you’re introduced to the machines you’ll be controlling. They are things of beauty, made of lightweight aluminium and finished in chrome. At three stations of increasing complexity, you’ll use your brain signals to manage the machines’ operations, causing them to manufacture… deep thoughts? The future? That part isn’t entirely clear.

Photo shows the entrance to a building. Above the doors are written the words "Mental Work: The Cognitive Revolution Starts Here."
Photo: Adrien Baraka

The Mental Work factory is a participatory art installation by the provocateur and “experimental philosopher” Jonathon Keats, made in collaboration with neuroscientists at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland. The exhibit just opened at EPFL’s ArtLab, where the factory will be up and running through January. After that, eager would-be workers can take part in the experience at Swissnex San Francisco and then Swissnex Boston.

Experimental philosophers apparently don’t spend their time in hushed libraries writing scholarly articles. Instead, Keats takes the questions he’s wrestling with and plunks them down in the real world. “We need an open space in which to encounter possible futures,” he says, “where we can physically and experientially grapple with them.” Keats’ inspiration for the Mental Work project is rooted in the Industrial Revolution, when machines of iron and steel replaced human sweat and sinews. Now, Keats argues, we’re in the middle of a Cognitive Revolution, and artificial intelligence may replace our human brainpower.

The upheaval caused by the Industrial Revolution showed that “a lot of people can get hurt and be displaced even as society is being improved by a new technology,” Keats says. “We need to have foresight: We need to think about what relationship we want to have with these new technologies before they have the power to determine what our society becomes.” Keats hopes the people who labour in the Mental Work factory will come out with ideas about what kind of technological future they want and will work to bring it about.

Does artificial intelligence truly work like the human brain?

In 1739, Parisians flocked to see an exhibition of automata by the French inventor Jacques de Vaucanson performing feats assumed impossible by machines. In addition to human-like flute and drum players, the collection contained a golden duck, standing on a pedestal, quacking and defecating. It was, in fact, a digesting duck. When offered pellets by the exhibitor, it would pick them out of his hand and consume them with a gulp. Later, it would excrete a gritty green waste from its back end to the amazement of audience members.

Vaucanson died in 1782 with his reputation as a trailblazer in artificial digestion intact. Sixty years later, the French magician Jean-Eugène Robert-Houdin gained possession of the famous duck and set about repairing it. Taking it apart, however, he realized that the duck had no digestive tract. Rather than breaking down the food, the pellets the duck was fed went into one container, and pre-loaded green-dyed breadcrumbs came out of another.

The field of artificial intelligence is currently exploding, with computers able to perform at near- or above-human level on tasks as diverse as video games, language translation, trivia and facial identification. Like the French exhibit-goers, any observer would be rightly impressed by these results. What might be less clear, however, is how these results are being achieved. Does modern AI reach these feats by functioning the way that biological brains do, and how can we know?

In the realm of replication, definitions are important. An intuitive response to hearing about Vaucanson’s cheat is not to say that the duck is doing digestion differently but rather that it’s not doing digestion at all. But a similar trend appears in AI. Checkers? Chess? Go? All were considered formidable tests of intelligence until they were solved by increasingly complex algorithms. Learning how a magic trick works makes it no longer magic, and discovering how a test of intelligence can be solved makes it no longer a test of intelligence.

So let’s look to a well-defined task: identifying objects in an image. Our ability to recognize, for example, a school bus, feels simple and immediate. But given the infinite combinations of individual school buses, lighting conditions and angles from which they can be viewed, turning the information that enters our retina into an object label is an incredibly complex task — one out of reach for computers for decades. In recent years, however, computers have come to identify certain objects with up to 95 percent accuracy, higher than the average individual human.

Like many areas of modern AI, the success of computer vision can be attributed to artificial neural networks. As their name suggests, these algorithms are inspired by how the brain works. They use as their base unit a simple formula meant to replicate what a neuron does. This formula takes in a set of numbers as inputs, multiplies them by another set of numbers (the “weights,” which determine how much influence a given input has) and sums them all up. That sum determines how active the artificial neuron is, in the same way that a real neuron’s activity is determined by the activity of other neurons that connect to it. Modern artificial neural networks gain abilities by connecting such units together and learning the right weight for each.
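The formula described above fits in a few lines of code. The sketch below uses NumPy and, for completeness, adds a simple nonlinearity (ReLU) that most modern networks apply to the weighted sum; the function names and shapes are illustrative.

```python
import numpy as np

def artificial_neuron(inputs: np.ndarray, weights: np.ndarray, bias: float = 0.0) -> float:
    """Multiply inputs by weights, sum them up, and apply a simple nonlinearity."""
    weighted_sum = np.dot(inputs, weights) + bias
    return max(0.0, float(weighted_sum))  # how "active" the unit is (ReLU)

def layer(inputs: np.ndarray, weight_matrix: np.ndarray, biases: np.ndarray) -> np.ndarray:
    """A layer is many such units sharing the same inputs."""
    return np.maximum(0.0, weight_matrix @ inputs + biases)

# Example: three inputs feeding a single unit.
print(artificial_neuron(np.array([0.2, 0.5, 0.1]), np.array([1.0, -0.5, 2.0])))
```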

The networks used for visual object recognition were inspired by the mammalian visual system, a structure whose basic components were discovered in cats nearly 60 years ago. The first important component of the brain’s visual system is its spatial map: Neurons are active only when something is in their preferred spatial location, and different neurons have different preferred locations. Different neurons also tend to respond to different types of objects. In brain areas closer to the retina, neurons respond to simple dots and lines. As the signal gets processed through more and more brain areas, neurons start to prefer more complex objects such as clocks, houses, and faces.

The first of these properties — the spatial map — is replicated in artificial networks by constraining the inputs that an artificial neuron can get. For example, a neuron in the first layer of a network might receive input only from the top left corner of an image. A neuron in the second layer gets input only from those top-left-corner neurons in the first layer, and so on.

The second property — representing increasingly complex objects — comes from stacking layers in a “deep” network. Neurons in the first layer respond to simple patterns, while those in the second layer — getting input from those in the first — respond to more complex patterns, and so on.
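Together, these two properties are what a convolutional neural network encodes. The following sketch, written in a PyTorch style, is one plausible arrangement rather than any specific published model; the layer sizes and the assumed 224×224 input are illustrative.

```python
import torch.nn as nn

# Each convolutional unit sees only a small local patch of the layer below
# (the spatial map), and stacking layers lets later units respond to
# increasingly complex patterns.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # units see 3x3 patches of the image
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # units see patches of first-layer outputs
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 56 * 56, 1000),                # e.g. 1,000 object labels for 224x224 inputs
)
```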

These networks clearly aren’t cheating in the way that the digesting duck was. But does all this biological inspiration mean that they work like the brain? One way to approach this question is to look more closely at their performance. To this end, scientists are studying “adversarial examples” — real images that programmers alter so that the machine makes a mistake. Very small tweaks to an image can be catastrophic: changing a few pixels on an image of a teapot, for example, can make the network label it an ostrich. It’s a mistake a human would never make, and it suggests that something about these networks is functioning differently from the human brain.
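The article does not name a specific attack, but one widely used way to generate such adversarial images is the fast gradient sign method: nudge every pixel a tiny amount in whichever direction most increases the network’s error. A minimal sketch, assuming a PyTorch classifier:

```python
import torch
import torch.nn.functional as F

def fgsm_adversarial(model, image, true_label, eps=0.01):
    """Return a slightly perturbed copy of `image` that raises the model's loss.

    The change is imperceptible to a person but can flip the predicted label
    (e.g. "teapot" to "ostrich"). `eps` controls the size of the per-pixel tweak.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)
    loss.backward()
    perturbed = image + eps * image.grad.sign()
    return perturbed.clamp(0.0, 1.0).detach()
```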

Studying networks this way, however, is akin to the early days of psychology. Measuring only environment and behaviour — in other words, input and output — is limited without direct measurements of the brain connecting them. But neural-network algorithms are frequently criticized (especially among watchdog groups concerned about their widespread use in the real world) for being impenetrable black boxes. To overcome the limitations of this techno-behaviourism, we need a way to understand these networks and compare them with the brain.

An ever-growing population of scientists is tackling this problem. In one approach, researchers presented the same images to a monkey and to an artificial network. They found that the activity of the real neurons could be predicted by the activity of the artificial ones, with deeper layers in the network more similar to later areas of the visual system. But, while these predictions are better than those made by other models, they are still not 100 percent accurate. This is leading researchers to explore what other biological details can be added to the models to make them more similar to the brain.
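In practice, this kind of comparison is often made by fitting a simple linear mapping from a network layer’s activations to a recorded neuron’s responses and measuring how well it predicts held-out images. The sketch below illustrates the idea with scikit-learn; the data files, shapes, and variable names are hypothetical, not taken from the study described.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

# Hypothetical data: one row per image shown to both the animal and the network.
layer_activations = np.load("layer_activations.npy")  # (n_images, n_units), assumed file
neuron_responses = np.load("neuron_responses.npy")    # (n_images,), assumed file

X_train, X_test, y_train, y_test = train_test_split(
    layer_activations, neuron_responses, test_size=0.2, random_state=0
)

mapping = Ridge(alpha=1.0).fit(X_train, y_train)
print("Held-out prediction accuracy (R^2):", mapping.score(X_test, y_test))
# Repeating this for each network layer shows which layer best predicts which brain area.
```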

Grace Lindsay

Emotion AI Overview: What is it and how does it work?

Overview

Artificial emotional intelligence, or Emotion AI, is also known as emotion recognition or emotion detection technology. In market research, this is commonly referred to as facial coding.

Humans use a lot of non-verbal cues, such as facial expressions, gestures, body language and tone of voice, to communicate their emotions. Our vision is to develop Emotion AI that can detect emotion just the way humans do: from multiple channels. Our long-term goal is to develop “Multimodal Emotion AI”, which combines analysis of both face and speech as complementary signals to provide richer insight into the human expression of emotion. For several years now, Affectiva has been offering industry-leading technology for the analysis of facial expressions of emotion. Most recently, Affectiva has added speech capabilities, now available to select beta testers.

Emotion detection – Face

Our Emotion AI unobtrusively measures unfiltered and unbiased facial expressions of emotion, using an optical sensor or just a standard webcam. Our technology first identifies a human face in real time or in an image or video. Computer vision algorithms identify key landmarks on the face – for example, the corners of your eyebrows, the tip of your nose, the corners of your mouth. Deep learning algorithms then analyze pixels in those regions to classify facial expressions. Combinations of these facial expressions are then mapped to emotions.
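As a rough illustration of that three-stage pipeline (find the face, locate landmarks, classify expressions from the pixels around them), here is a hedged sketch. The detector, landmark model, and expression network are stand-in objects, not Affectiva’s actual components or API.

```python
import numpy as np

def analyze_frame(frame: np.ndarray, face_detector, landmark_model, expression_net):
    """Hypothetical sketch of the pipeline described above."""
    faces = face_detector.detect(frame)                # 1. find faces in the frame
    results = []
    for box in faces:
        landmarks = landmark_model.locate(frame, box)  # 2. eyebrow corners, nose tip, mouth corners, ...
        patches = [crop_around(frame, point) for point in landmarks]
        expressions = expression_net.classify(patches) # 3. deep net scores each facial expression
        results.append(expressions)                    # expressions are later mapped to emotions
    return results

def crop_around(frame: np.ndarray, point, size: int = 32) -> np.ndarray:
    """Cut a small pixel region centred on a landmark."""
    x, y = point
    return frame[y - size // 2 : y + size // 2, x - size // 2 : x + size // 2]
```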

In our products, we measure 7 emotion metrics: anger, contempt, disgust, fear, joy, sadness and surprise. In addition, we provide 20 facial expression metrics.  In our SDK and API, we also provide emojis, gender, age, ethnicity and a number of other metrics. Learn more about our metrics.

The face provides a rich canvas of emotion. Humans are innately programmed to express and communicate emotion through facial expressions. Affdex scientifically measures and reports the emotions and facial expressions using sophisticated computer vision and machine learning techniques.

Here are some links to other areas of interest:

  • Determining Accuracy
  • Mapping Expressions to Emotions
  • Obtaining Optimal Results

When you use the Affdex SDK in your applications, you will receive facial expression output in the form of Affdex metrics: seven emotion metrics, 20 facial expression metrics, 13 emojis, and four appearance metrics.

Emotions

Anger

Contempt

Disgust

Fear

Joy

Sadness

Surprise

Furthermore, the SDK allows for measuring valence and engagement, as alternative metrics for measuring the emotional experience.

Engagement: A measure of facial muscle activation that illustrates the subject’s expressiveness. The range of values is from 0 to 100.

Valence: A measure of the positive or negative nature of the recorded person’s experience. The range of values is from -100 to 100.

How do we map facial expressions to emotions?

The Emotion predictors use the observed facial expressions as input to calculate the likelihood of an emotion.
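Affectiva does not publish the predictor weights, but the idea can be illustrated with a toy example: each emotion’s likelihood is driven by a weighted combination of observed expression scores, clamped to the 0–100 metric range. The weights below are invented purely for illustration.

```python
# Illustrative only: the real predictor weights are learned and not published.
EMOTION_WEIGHTS = {
    "joy":      {"smile": 0.8, "cheek_raise": 0.3, "brow_furrow": -0.4},
    "surprise": {"brow_raise": 0.6, "eye_widen": 0.5, "jaw_drop": 0.4},
}

def emotion_likelihood(emotion: str, expression_scores: dict) -> float:
    """Combine 0-100 expression scores into a 0-100 emotion likelihood."""
    weights = EMOTION_WEIGHTS[emotion]
    raw = sum(w * expression_scores.get(name, 0.0) for name, w in weights.items())
    return max(0.0, min(100.0, raw))

print(emotion_likelihood("joy", {"smile": 90, "cheek_raise": 60, "brow_furrow": 0}))  # 90.0
```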


Facial Expressions

Attention – Measure of focus based on the head orientation

Brow Furrow – Both eyebrows moved lower and closer together

Brow Raise – Both eyebrows moved upwards

Cheek Raise – Lifting of the cheeks, often accompanied by “crow’s feet” wrinkles at the eye corners

Chin Raise – The chin boss and the lower lip pushed upwards

Dimpler – The lip corners tightened and pulled inwards

Eye Closure – Both eyelids closed

Eye Widen – The upper lid raised sufficiently to expose the entire iris

Inner Brow Raise – The inner corners of eyebrows are raised

Jaw Drop – The jaw pulled downwards

Lid Tighten – The eye aperture narrowed and the eyelids tightened

Lip Corner Depressor – Lip corners dropping downwards (frown)

Lip Press – Pressing the lips together without pushing up the chin boss

Lip Pucker – The lips pushed forward

Lip Stretch – The lips pulled back laterally

Lip Suck – Pull of the lips and the adjacent skin into the mouth

Mouth Open – Lower lip dropped downwards

Nose Wrinkle – Wrinkles appear along the sides and across the root of the nose due to skin pulled upwards

Smile – Lip corners pulling outwards and upwards towards the ears, combined with other indicators from around the face

Smirk – Left or right lip corner pulled upwards and outwards

Upper Lip Raise – The upper lip moved upwards


Emoji Expressions

Laughing – Mouth opened and both eyes closed

Smiley – Smiling, mouth opened and both eyes opened

Relaxed – Smiling and both eyes opened

Wink – Either of the eyes closed

Kissing – The lips puckered and both eyes opened

Stuck Out Tongue – The tongue clearly visible

Stuck Out Tongue and Winking Eye – The tongue clearly visible and either of the eyes closed

Scream – The eyebrows raised and the mouth opened

Flushed – The eyebrows raised and both eyes widened

Smirk – Left or right lip corner pulled upwards and outwards

Disappointed – Frowning, with both lip corners pulled downwards

Rage – The brows furrowed, and the lips tightened and pressed

Neutral – Neutral face without any facial expressions


Using the Metrics

Emotion, Expression and Emoji metrics scores indicate when users show a specific emotion or expression (e.g., a smile) along with the degree of confidence. The metrics can be thought of as detectors: as the emotion or facial expression occurs and intensifies, the score rises from 0 (no expression) to 100 (expression fully present).

In addition, we expose a composite emotional metric called valence, which gives feedback on the overall experience. Valence values from 0 to 100 indicate a neutral to positive experience, while values from -100 to 0 indicate a negative to neutral experience.
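A small, hypothetical helper shows how an application might consume these scores, treating each 0–100 metric as a detector with a threshold and bucketing valence into the ranges described above; the function names and threshold are assumptions, not part of the SDK.

```python
def expression_present(score: float, threshold: float = 50.0) -> bool:
    """Treat a 0-100 metric as a detector: present once the score passes a threshold."""
    return score >= threshold

def describe_valence(valence: float) -> str:
    """Valence runs from -100 (negative) through 0 (neutral) to 100 (positive)."""
    if valence > 0:
        return "neutral-to-positive experience"
    if valence < 0:
        return "negative-to-neutral experience"
    return "neutral experience"

print(expression_present(72.5))   # True: e.g. a clear smile
print(describe_valence(-35.0))    # "negative-to-neutral experience"
```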


Appearance

Our SDKs also provide the following metrics about the physical appearance:

Age

The age classifier attempts to estimate the age range. Supported ranges: Under 18, 18 to 24, 25 to 34, 35 to 44, 45 to 54, 55 to 64, and 65 plus.

Ethnicity

The ethnicity classifier attempts to identify the person’s ethnicity. Supported classes: Caucasian, Black African, South Asian, East Asian and Hispanic.

At the current level of accuracy, the ethnicity and age classifiers are more useful as a quantitative measure of demographics than as a way to correctly identify age and ethnicity on an individual basis. We are always looking to diversify the data sources included in training those metrics to improve their accuracy levels.

Gender

The gender classifier attempts to identify the human perception of gender expression.

In the case of video or live feeds, the Gender, Age and Ethnicity classifiers track a face for a window of time to build confidence in their decision. If the classifier is unable to reach a decision, the classifier value is reported as “Unknown”.

Glasses

A confidence level of whether the subject in the image is wearing eyeglasses or sunglasses.


Face Tracking and Head Angle Estimation

The SDKs include our latest face tracker which calculates the following metrics:

Facial Landmarks Estimation

The tracking of the cartesian coordinates for the facial landmarks. See the facial landmark mapping here.

Head Orientation Estimation

Estimation of the head position in a 3-D space in Euler angles (pitch, yaw, roll).

Interocular Distance

The distance between the two outer eye corners.
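As a simple illustration of the tracker outputs above, interocular distance is just the Euclidean distance between the two outer-eye-corner landmarks; the coordinates and parameter names below are placeholders, not the SDK’s actual identifiers.

```python
import math

def interocular_distance(outer_left_eye, outer_right_eye) -> float:
    """Distance between the two outer eye corners, given as (x, y) pixel coordinates."""
    dx = outer_right_eye[0] - outer_left_eye[0]
    dy = outer_right_eye[1] - outer_left_eye[1]
    return math.hypot(dx, dy)

print(interocular_distance((120, 210), (188, 212)))  # distance in pixels
```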

Emotion detection – Speech

Our speech capability analyzes not what is said but how it is said, observing changes in speech paralinguistics, tone, loudness, tempo, and voice quality to distinguish speech events, emotions, and gender. The underlying low-latency approach is key to enabling the development of real-time emotion-aware apps and devices.

Our first speech-based product is a cloud-based API that analyzes a pre-recorded audio segment, such as an MP3 file. The output provides analysis of the speech events occurring in the audio segment every few hundred milliseconds, not just at the end of the entire utterance. An Emotion SDK that analyzes speech in real time will be available in the near future.
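To make the workflow concrete, here is a hypothetical sketch of posting a pre-recorded MP3 to such a cloud API and iterating over the per-segment results. The endpoint URL, authentication scheme, and response fields are placeholders, not Affectiva’s documented API.

```python
import requests

API_URL = "https://api.example.com/v1/speech-emotion"  # placeholder endpoint, not the real API

def analyze_recording(mp3_path: str, api_key: str) -> list:
    """Upload a pre-recorded audio file and return per-segment analysis results."""
    with open(mp3_path, "rb") as audio:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            files={"audio": audio},
        )
    response.raise_for_status()
    # The response is assumed to contain one entry every few hundred milliseconds,
    # rather than a single label for the whole utterance.
    return response.json()["segments"]

for segment in analyze_recording("interview.mp3", api_key="YOUR_KEY"):
    # Field names are illustrative placeholders.
    print(segment["start_ms"], segment["emotion"], segment["gender"])
```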

Data and accuracy

Our algorithms are trained using our emotion data repository, which has now grown to nearly 6 million faces analyzed in 87 countries. We continuously test our algorithms to provide the most reliable and accurate emotion metrics. Now, also using deep learning approaches, we can very quickly tune our algorithms for high performance and accuracy. Our key emotions achieve accuracy in the high 90th percentile. Our test set, sampled from our data repository, comprises hundreds of thousands of emotion events. This data has been gathered to represent real-world, spontaneous facial expressions and vocal utterances, made under challenging conditions such as changes in lighting and background noise, and variances due to ethnicity, age, and gender. You can find more information on how we measure our accuracy here.

How to get it

Our emotion recognition technology is available in several products, ranging from an easy-to-use SDK and API for developers to robust solutions for market research and advertising.