Now playing: a movie you control with your mind

Richard Ramchurn’s “The Moment” lets you play film director, using just your brainwaves. By Rachel Metz

Usually, when you watch a film, you sit back in your chair, eyes trained on a screen, as the story unfolds. It’s a lot different when you watch one of Richard Ramchurn’s latest films. Ramchurn, a graduate student at the University of Nottingham in Nottingham, England, is an artist and director who has spent the last several years creating films that you can control with your mind—simply by putting on a $100 headset that detects electrical activity in your brain. With this EEG headset on, scenes, music, and animation change every time you watch, depending on the meanderings of your mind.

Ramchurn’s latest work, a 27-minute avant-garde tale called The Moment that (no surprise) explores a dark future where brain-computer interfaces are the norm, is nearly complete. While finishing up editing work, Ramchurn has started screening it in a small trailer around Nottingham, where six to eight people can sit and view it at once. (Just one of them controls it while the others observe.) He will also show it at a film festival in Sheffield, England, in June.

If you’re wearing the headset, a NeuroSky MindWave, while watching The Moment, it will track your level of attention by measuring electrical activity within a frequency range believed to correspond with attentiveness (though it should be noted that there are doubts about how well devices like this can actually do such tracking). The continually computed score is sent wirelessly to a laptop, where Ramchurn’s specially built software uses it to alter the editing of the scenes, the flow of the background music, and more. You don’t have to move a muscle.
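
For the technically curious, here is a minimal sketch of the kind of control loop the article describes. It is not Ramchurn’s actual software: `read_attention` is a hypothetical stand-in for the headset’s wireless feed, under the assumption that one attention value between 0 and 100 arrives roughly once per second.

```python
# A minimal sketch (not Ramchurn's software) of a headset-driven playback loop.
import random
import time

def read_attention():
    """Hypothetical stand-in for the headset's wireless attention feed (0-100)."""
    return random.randint(0, 100)

def on_attention(score, history):
    """React to each new attention value, e.g. retune the edit and the music."""
    history.append(score)
    print(f"attention={score:3d} -> adjust editing/music parameters here")

history = []
for _ in range(10):            # ten seconds of a 27-minute film
    on_attention(read_attention(), history)
    time.sleep(1)
```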

Simply getting all this to work is exciting to Ramchurn. But beyond that, he says, allowing the viewer to effectively edit the film—either by consciously thinking about it or by naturally responding to what’s happening on screen—creates a sort of two-way feedback loop. The film changes because of how you feel, and the way you feel changes because of the film. “It almost becomes part of the system of your mind,” he says.

BCI beginnings

Ramchurn, 39, spent years making short films, documentaries, and music videos, and experimenting with ways of incorporating technology into his work. He started toying with the idea of a brain-computer interface for the film in 2013 when he first tried a NeuroSky headset. He ultimately used it to help make his first brain-controlled film, The Disadvantages of Time Travel, in 2014 and 2015.

That first film is more abstract than The Moment, flitting between the main character’s dream state and reality. The headset monitored the viewer’s blinking to figure out when to cut from one shot to the next, and their attention and meditation (this is another range of brainwave frequencies that the headset can log and score) to determine when and how to switch between fantasy and real-life modes.

Looking back, Ramchurn says, The Disadvantages of Time Travel was too busy. Blinking-based control actually removed people from the interactive experience by making them aware of their own physiology. Having viewed the director’s cut, a version he manipulated by watching himself, I can confirm it is, at the least, demanding to watch.

Trillions of possibilities

For The Moment, Ramchurn dropped blinking and focused on attention data. It tends to rise and fall like a sine wave as your focus shifts, ebbing about every six seconds. So he used these natural dips to signal a cut to a new shot. At any given point, the film is switching back and forth between two of its three narrative threads, which follow three characters who interact throughout. With all the possibilities for mind-directed changes, Ramchurn thinks there are about 101 trillion different versions of the film that you could see. To make this possible in a 27-minute film, he had to create three times as much footage as he would have normally, and gather six times as much audio.
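
As an illustration of that cut-on-dip idea, the toy sketch below treats a local minimum in the attention signal as a candidate cut point and alternates between two visible threads. The real editing engine is certainly more elaborate; the numbers here are made up.

```python
# Toy illustration of cutting on dips in an attention signal (one value per second).
def cut_points(attention):
    """Indices where attention hits a local minimum: candidate cut moments."""
    return [i for i in range(1, len(attention) - 1)
            if attention[i - 1] > attention[i] <= attention[i + 1]]

signal = [62, 55, 41, 38, 47, 60, 71, 66, 52, 49, 58, 70]
active, standby = "thread_A", "thread_B"   # two of the three narrative threads
for t in cut_points(signal):
    active, standby = standby, active      # cut to the other visible thread
    print(f"t={t}s: cut to {active}")
```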

Since I couldn’t get to the United Kingdom to take charge of the film myself, Ramchurn sent me the next best thing: two recordings of The Moment controlled by two different people. The differences were mostly subtle, such as variations in the music and in the animation interspersed between shots of real-life actors. But there were also some clear differences: one version let me take a peek inside a notebook that one of the main characters was writing and drawing in, and included more dialogue that helped flesh out the story.

The overall effect of watching a film whose trajectory was controlled somewhat by previous viewers was strange and compelling. I kept wondering what, exactly, they did (or didn’t) have control over, and how much they were thinking about this while they watched. And how did they (or I) know for sure that they were controlling anything at all? I put this question to Steve Benford, a computer science professor at the University of Nottingham and Ramchurn’s advisor. He agreed that while viewers knew their blinks lined up with film cuts in The Disadvantages of Time Travel, your role in directing The Moment with your brain is fuzzier.

With interactive art like this, Benford explains, “you don’t always know what’s going on. You have to interpret what happens, and the artist has a choice about to what extent they want to make it more or less explicit.”

Audience participation

Ramchurn is not the first person to try to get audiences to interact with movies—the history of cinema is filled with efforts ranging from sing-alongs to smartphone apps meant to be used while watching.

Jacob Gaboury, an assistant professor of film and media at the University of California, Berkeley, remembers sitting in a theatre in the 1990s and using a joystick to choose between two different film endings. Making films that respond to brain activity might lead filmmakers to create different kinds of stories, images, and sounds than they normally would, he says.

“Often, you get bogged down in telling stories in a particular way in the cinema, so it could be interesting to see how that would progress from a director’s perspective,” he says.

But because it’s controlled by a single person, he doesn’t imagine it being the kind of thing you’d watch at a movie theatre. Ramchurn says that he has experimented with ways these films could work in front of a larger audience, such as by letting three people compete to be the main controller (by blinking more and earning higher meditation scores), or by taking an average of the reactions to determine what happened on the screen.

In the end, he says, a cooperative mode that made each person responsible for an element of the film—the soundtrack, the cutting of shots, the blending of layers—worked the best. “The films they made flowed better,” he says.

AI and music: will we be slaves to the algorithm?

Tech firms have developed AI that can learn how to write music. So will machines soon be composing symphonies, hit singles and bespoke soundtracks?

Pioneers of sound (left to right): George Philip Wright, Jon Eades and Siavash Mahdavi at Abbey Road Studios, London. Photograph: Sonja Horsman for the Observer

From Elgar to Adele, and the Beatles or Pink Floyd to Kanye West, London’s Abbey Road Studios has hosted a storied list of musical stars since opening in 1931. But the man playing a melody on the piano in the complex’s Gatehouse studio when the Observer visits isn’t one of them. The man sitting at the keyboard where John Lennon may have finessed A Day in the Life is Siavash Mahdavi, CEO of AI Music, a British tech startup exploring the intersection of artificial intelligence and music.

His company is one of two AI firms currently taking part in Abbey Road Red, a startup incubator run by the studios that aims to forge links between new tech companies and the music industry. It’s not alone: Los Angeles-based startup accelerator Techstars Music, part-funded by major labels Sony Music and Warner Music Group, included two AI startups in its programme earlier this year: Amper Music and Popgun.

This is definitely a burgeoning sector. Other companies in the field include Jukedeck in London, Melodrive in Berlin, Humtap in San Francisco and Groov.AI in Google’s hometown, Mountain View. Meanwhile, Google has its own AI music research project called Magenta, while Sony’s Computer Science Laboratories (CSL) in Paris has a similar project called Flow Machines.

Whether businesses or researchers, these teams are trying to answer the same question: can machines create music, using AI technologies such as neural networks trained on a catalogue of human-made music before producing their own? But these companies’ work poses another question too: if machines can create music, what does that mean for professional human musicians?

The aim is not, ‘Will this get better than X?’ It’s about whether this will be useful for people. Will it help them?

“I’ve always been fascinated by the concept that we could automate, or intelligently do, what humans think is only theirs to do. We always look at creativity as the last bastion of humanity,” says Mahdavi. However, he quickly decided not to pursue his first idea: “Could you press a button and write a symphony?”

Why not? “It’s very difficult to do, and I don’t know how useful it is. Musicians are queuing up to have their music listened to: to get signed and to get on stage. The last thing they need is for this button to exist,” he says.

The button already exists, in fact. Visit Jukedeck’s website, and you can have a song created for you simply by telling it what genre, mood, tempo, instruments and track length you want. Amper Music offers a similar service. This isn’t about trying to make a chart hit; it’s about providing “production music” to be used as the soundtrack for anything from YouTube videos to games and corporate presentations.

Once you’ve created your (for example) two-minute uplifting folk track using a ukulele at a tempo of 80 beats per minute, Jukedeck’s system gives it a name (“Furtive Road” in this case) and will sell you a royalty-free licence to use it for $0.99 if you’re an individual or small business, or $21.99 if you’re a larger company. You can buy the copyright to own the track outright for $199.
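
Purely as a hypothetical sketch (the field names below are illustrative and do not come from Jukedeck’s real interface or API), the parameters and price tiers described above amount to something like this:

```python
# Illustrative only: hypothetical request parameters and the licence tiers
# quoted in the article; not Jukedeck's actual API.
track_request = {
    "genre": "folk",
    "mood": "uplifting",
    "tempo_bpm": 80,
    "instruments": ["ukulele"],
    "length_seconds": 120,
}

PRICING_USD = {
    "individual_or_small_business": 0.99,
    "larger_company": 21.99,
    "full_copyright": 199.00,
}

buyer = "individual_or_small_business"
print(f"licence for this {track_request['genre']} track: ${PRICING_USD[buyer]:.2f}")
```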

“A couple of years ago, AI wasn’t at the stage where it could write a piece of music good enough for anyone. Now it’s good enough for some use cases,” says Ed Newton-Rex, Jukedeck’s CEO.

“It doesn’t need to be better than Adele or Ed Sheeran. There’s no desire for that, and what would that even mean? Music is so subjective. It’s a bit of a false competition: there is no agreed-upon measure of how ‘good’ a piece of music is. The aim [for AI music] is not ‘will this get better than X?’ but ‘will it be useful for people?’. Will it help them?”

The phrase “good enough” crops up regularly during interviews with people in this world: AI music doesn’t have to be better than the best tracks made by humans to suit a particular purpose, especially for people on a tight budget.

“Christopher Nolan isn’t going to stop working with Hans Zimmer any time soon,” says Cliff Fluet, a partner at London law firm Lewis Silkin, who works with several AI music startups. “But for people who are making short films or YouTubers who don’t want their video taken down for copyright reasons, you can see how a purely composed bit of AI music could be very useful.”

Striking a more downbeat note, music industry consultant Mark Mulligan suggests that this strand of AI music is about “sonic quality” rather than music quality. “As long as the piece has got the right sort of balance of desired instrumentation, has enough pleasing chord progressions and has an appropriate quantity of builds and breaks then it is good enough,” he says.

“AI music is nowhere near being good enough to be a ‘hit’, but that’s not the point. It is creating 21st-century muzak. In the same way that 95% of people will not complain about the quality of the music in a lift, so most people will find AI music perfectly palatable in the background of a video.”

Not every AI-music startup is targeting production music. AI Music (the company) is working on a tool that will “shape-change” existing songs to match the context they are being listened to in. This can range from a subtle adjustment of its tempo to match someone’s walking pace through to what are essentially automated remixes created on the fly.

“Maybe you listen to a song and in the morning it might be a little bit more of an acoustic version. Maybe that same song, when you play it as you’re about to go to the gym, it’s a deep house or drum’n’bass version. And in the evening it’s a bit jazzier. The song can actually shift itself,” says Mahdavi.
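
A minimal sketch of that “shape-changing” idea: pick a target tempo from the listening context, then derive the time-stretch ratio to apply to the original recording. The context rules and BPM values below are assumptions for illustration, not AI Music’s actual product logic.

```python
# Sketch of context-driven tempo adaptation; all values are illustrative.
def target_bpm(context, walking_cadence_spm=None):
    """Pick a target tempo; match steps per minute when the user is walking."""
    if walking_cadence_spm:
        return walking_cadence_spm
    return {"morning": 90, "gym": 174, "evening": 70}.get(context, 120)

original_bpm = 120
for ctx in ["morning", "gym", "evening"]:
    ratio = target_bpm(ctx) / original_bpm   # feed this to a time-stretcher
    print(f"{ctx}: time-stretch ratio {ratio:.2f}")
```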


Australian startup Popgun has a different approach again. Its AI – called “Alice” – is learning to play the piano like a child would, by listening to thousands of songs and watching how more experienced pianists play them. In its current form, you play a few notes to Alice, and it will guess what might come next and play it, resulting in a back-and-forth human/AI duet. The next step will be to get her to accompany a human in real-time. “It’s a new, fun way to interact with music. My 10-year-old daughter is playing the piano, and it’s the bane of our existence to get her to practice! But with Alice she plays for hours: it’s a game, and you’re playing with somebody else,” says CEO Stephen Phillips.

Vochlea, which is the other AI startup in the Abbey Road Red incubator, is in a similar space to Popgun. Beatbox into its VM Apollo microphone and its software will turn your vocals into drum samples. Approximate the sound of a guitar or trumpet with your mouth, and it will whip up a riff or brass section using that melody.

“It’s a little bit like speech recognition, but it’s non-verbal,” says CEO George Philip Wright. “I’m focusing on using machine-learning and AI to reward the creative input rather than taking away from it. It came from thinking if you’ve got all these ideas for music in your head, what if you had a device to help you express and capture those ideas?”

Many of the current debates about AI are framed around its threat to humans, from driverless trucks and taxis putting millions of people out of work, to Tesla boss Elon Musk warning that if not properly regulated, AI could be “a fundamental risk to the existence of civilisation”.

AI music companies are keen to tell a more positive story. AI Music hopes its technology will help fans fall in love with songs because those songs adapt to their context, while Popgun and Vochlea think AI could become a creative foil for musicians.

We will always value sitting with another person and making art. It’s part of what we are as humans

Jon Eades, who runs the Abbey Road Red incubator, suggests that AI will be a double-edged sword, much like the last technology to shake up the music industry and its creative community.

“I think there will be collateral damage, just like the internet. It created huge opportunity and completely adjusted the landscape. But depending on where you sat in the pre-internet ecosystem, you either called it an opportunity or a threat,” he says.

“It was the same change, but depending on how much you had to gain or lose, your commentary was different. I think the same thing is occurring here. AI is going to be as much of a fundamental factor in how the businesses around music are going to evolve as the internet was.”

That may include the businesses having the biggest impact on how we listen to music, and how the industry and creators make money from it: streaming services. They already use one subset of AI – machine learning – to provide their music recommendations: for example in personalised playlists like Spotify’s Discover Weekly and Apple’s My New Music Mix.

The songs on those playlists are made by humans, though. Could Spotify find a use for AI-composed music? Recently, the company poached François Pachet from Sony CSL, where he’d been in charge of the Flow Machines project.

It was under Pachet that in September 2016 Sony released two songs created by AI, although with lyrics and production polish from humans. Daddy’s Car was composed in the style of the Beatles, while The Ballad of Mr Shadow took its cues from American songwriters like Irving Berlin, Duke Ellington, George Gershwin and Cole Porter. You wouldn’t mistake either for their influences, but nor would you likely realise they weren’t 100% the work of humans.

Now Pachet is working for Spotify, amid speculation within the industry that he could build a team there to continue his previous line of work. For example, exploring whether AI can create music for Spotify’s mood-based playlists for relaxing, focusing and falling asleep.

For now, Spotify is declining to say what Pachet will be doing. “I have no idea,” admits Jukedeck’s Newton-Rex. “But to the question ‘One day, will a piece of software that knows you be able to compose music that puts you to sleep?’ Absolutely. That’s exactly the kind of field in which AI can be useful.”

What’s also unclear is the question of authorship. Can an AI legally be the creator of a track? Can it be sued for copyright infringement? Might artists one day have “intelligence rights” written into their contracts to prepare for a time when AIs can be trained on their songwriting and then let loose to compose original material?

AI Music’s plans for automated, personalised remixes may bring their own complications. “If an app allows you to shape-change a song to the extent that you can’t even hear the original, does it break away and become its own instance?” says Mahdavi.

“If you stretch something to a point where you can’t recognise it, does that become yours, because you’ve added enough original content to it? And how do you then measure the point at which it no longer belongs to the original?”

The answers to these questions? Mahdavi pauses to choose his words carefully. “What we’re learning is that a lot of this is really quite grey.” It’s also really quite philosophical, with all these startups and research teams grappling with fundamental issues of creativity and humanity.

“The most interesting thing about all this is that it might give us an insight into how the human composition process works. We don’t really know how composition works: it’s hard to define it,” says Newton-Rex. “But building these systems starts to ask questions about how [the same] system works in the human brain.”

Will more of those human brains be in danger of being replaced by machines? Even as he boldly predicts that “at some point soon, AI music will be indistinguishable from human-created music”, Amper Music’s CEO, Drew Silverstein, claims that it’s the process rather than the results that will favour the humans.

“Even when the artistic output of AI and human-created music is indistinguishable, we as humans will always value sitting in a room with another person and making art. It’s part of what we are as humans. That will never go away,” he says.

Mark Mulligan agrees. “AI may never be able to make music good enough to move us in the way human music does. Why not? Because making music that moves people – to jump up and dance, to cry, to smile – requires triggering emotions and it takes an understanding of emotions to trigger them,” he says.

“If AI can learn to at least mimic human emotions then that final frontier may be breached. But that is a long, long way off.”

These startups all hope AI music will inspire human musicians rather than threaten them. “Maybe this won’t make human music. Maybe it’ll make some music we’ve never heard before,” says Phillips. “That doesn’t threaten human music. If anything, it shows there’s new human music yet to be developed.”

Cliff Fluet brings the topic back to the current home for two of these startups, Abbey Road, and the level of the musician it has traditionally attracted.

“Every artist I’ve told about this technology sees it as a whole new box of tricks to play with. Would a young Brian Wilson or Paul McCartney be using this technology? Absolutely,” he says.

“I’ll say it now: Bowie would be working with an AI collaborator if he was still alive. I’m 100% sure of that. It’d sound better than Tin Machine, that’s for sure…”

Try it out

You can experiment with AI music and its close cousin generative music already. Here are some examples.

Jukedeck

As mentioned in this feature, you can visit Jukedeck’s website and get its AI to create tracks based on your inputs.

AI Duet 

Launched by Google this year, this gets you to play some piano notes, then the AI responds to you with its own melody.

Scape

Brian Eno was involved in this app, where you combine shapes to start music that then generates itself as your soundtrack.

Humtap Music

A little like Vochlea in this feature, Humtap’s AI analyses your vocals to create an instrumental to accompany you.

Weav Run

This is part running app and part music app, using “adaptive” technology to modify the tempo of the song to match your pace.

Advanced AI Aims to Predict the Mood of a Conversation

MIT is developing a wearable AI system that can predict the mood of a conversation. The way a person articulates a sentence, its mood and tone, can significantly alter the meaning of what is said, and ultimately the interpretation is left to the listener. Being able to distinguish the emotions a person is conveying is a critical component of conversation, yet not everyone is able to make those distinctions between tones.

Some people, especially those who suffer from anxiety or Asperger’s, may interpret a conversation differently from the way it was intended. That miscommunication can make social interactions extremely stressful. Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute of Medical Engineering and Science (IMES) say they may have the solution: a wearable AI device capable of distinguishing whether a conversation is happy, sad, or neutral by actively monitoring the way a person speaks.

“Imagine if, at the end of a conversation, you could rewind it and see the moments when the people around you felt the most anxious,” says graduate student Tuka Alhanai. “Our work is a step in this direction, suggesting that we may not be that far away from a world where people can have an AI social coach right in their pocket.”

The mood-predicting wearables actively analyze a person’s speech patterns and physiological signals to determine the tones and moods expressed in a conversation with 83 percent accuracy. The system is programmed to record a “sentiment score” every five seconds during a conversation.

“As far as we know, this is the first experiment that collects both physical data and speech data in a passive but robust way, even while subjects are having natural, unstructured interactions,” says researcher Mohammad Ghassemi. “Our results show that it’s possible to classify the emotional tone of conversations in real-time.”

Deep-learning techniques should continue to improve the system’s performance as more people use it, creating more data for the algorithms to analyze. To protect users’ privacy, the data is processed locally on the device. Still, there may be privacy concerns, since the device can potentially record the conversations of unsuspecting bystanders.

How the device operates

Previous studies examining the emotion of a conversation required a participant to artificially act out a specific emotion. In an attempt to create more organic emotions, MIT researchers instead had participants tell a happy or sad story.


Participants in the study wore a Samsung Simband, a device able to capture high-resolution physiological waveforms measuring many attributes, including heart rate, blood pressure, blood flow, and skin temperature. The device also simultaneously records audio data, which is then analyzed to determine tone, pitch, energy, and vocabulary.

“The team’s usage of consumer market devices for collecting physiological data and speech data shows how close we are to having such tools in everyday devices,” says Björn Schuller, professor and chair of Complex and Intelligent Systems at the University of Passau in Germany. “Technology could soon feel much more emotionally intelligent, or even ‘emotional’ itself.”

MIT researchers recorded 31 conversations, then used the data to train two separate algorithms. The first classifies a conversation as a whole as either happy or sad. The second determines whether each 5-second interval of the conversation is positive, negative, or neutral.
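
A rough sketch of that second, interval-level stage is below. The features and classifier are generic stand-ins for illustration only; the team’s actual models fuse audio properties with Simband physiology.

```python
# Sketch of per-interval mood scoring; features and data are toy stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression

WINDOW_S = 5                     # the article's 5-second scoring interval

def featurize(window):
    """Hypothetical per-window features (the real system uses tone, pitch,
    energy, vocabulary and physiological signals such as heart rate)."""
    return [np.mean(window), np.std(window)]

# toy training rows: per-window feature vectors with interval labels
X = np.array([[0.2, 0.1], [0.8, 0.3], [0.5, 0.2], [0.9, 0.4]])
y = ["negative", "positive", "neutral", "positive"]
interval_model = LogisticRegression(max_iter=1000).fit(X, y)

conversation = [np.random.rand(40) for _ in range(6)]   # six 5-second windows
labels = interval_model.predict([featurize(w) for w in conversation])
for i, lab in enumerate(labels):
    print(f"{i * WINDOW_S:3d}s-{(i + 1) * WINDOW_S}s: {lab}")
```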

“The system picks up on how, for example, the sentiment in the text transcription was more abstract than the raw accelerometer data,” says Alhanai. “It’s quite remarkable that a machine could approximate how we humans perceive these interactions, without significant input from us as researchers.”

Does it work?

Surprisingly, the algorithms successfully identified most of the emotions a human would perceive in a conversation, though the model’s results were only 18 percent above chance. Despite that modest margin, the new technique is a full 7.5 percent more accurate than existing approaches.

Unfortunately, the model is still too underdeveloped to be of practical use as a social coach. However, the researchers plan to scale up data collection by enabling the system to run on commercial devices like the Apple Watch.

“Our next step is to improve the algorithm’s emotional granularity so that it is more accurate at calling out boring, tense, and excited moments, rather than just labeling interactions as ‘positive’ or ‘negative,’” says Alhanai. “Developing technology that can take the pulse of human emotions has the potential to dramatically improve how we communicate with each other.”

Music & Human Emotions: Heuristic Avenues in Research



Introduction

We are often emotionally moved by music, whether we are listening to recordings or attending live performances. Music is thus closely associated with emotion, an association that prevails among humans and other animals, and it draws growing attention from scientists working in neuroscience and the applied sciences. The theme also spans much of the research in computational neuroscience, behavioural science, music research and music therapy. Various studies confirm that music pervades everyday life and serves functions such as mood change and emotion regulation. Research on how music produces emotion has been hindered by a lack of appropriate research paradigms and methods, and by a scarcity of conceptual and theoretical analysis of the processes underlying emotion production via music. The three main approaches to describing emotion are lists of basic emotions, dimensional models of arousal and valence, and diversified emotion inventories.

Fixed lists of basic emotions attach little importance to the subtler forms of emotional process in humans, above all the intuitive feeling states generated by music, which serve no robust behavioural function. Limiting musical emotion to a balance of valence and arousal, in turn, prevents the kind of qualitative discrimination that is essential for studying the emotional effects of music. Researchers have also generated assorted lists of emotions tailored to the needs of specific studies, which undermines validity and makes results difficult to compare. A feeling can be treated as a hypothesis about the central component of an emotion, serving to integrate the representation of the emotional episode; musical affect should therefore be studied as emotion that combines cognitive and physiological effects. Typically, research on musical emotions proceeds by presenting contrasting pieces selected for their assumed emotional content, and subjects are asked to record their emotional reactions to each piece. Sometimes listeners are asked to express their emotions in words on questionnaires, which mainly measures the subjective perception of the emotion expressed rather than the emotion felt. The drawback is that the response format is fixed: listeners can only report the emotions that appear on the supplied list. Two main theories govern this work: discrete emotion theory, which advocates measuring a small number of fundamental emotions (anger, fear, joy and sadness), and dimensional theory, which suggests ratings of valence and activation.

Psychophysiological effects of music

Amid competitive and hectic schedules, music generally enhances positive affect and vigilance in everyday life (Sloboda, 2001). It offers opportunities for disengaging from daily concerns and for engaging intensely with the music itself (DeNora, 1999). Bharucha et al. (2006) enumerated distinct kinds of emotional experience of music: “self-motion”, because music stimulates the vestibular system; “perceptual description and simulation”, in which music appears to be triggered by an external source whose properties we categorize as if “a real object were moving in the real world”, and in which our efforts to mimic the sounds and percepts of the music account for the simulation category; and “rules and preferences of music”, which constitute a psychological space in which change can craft a sense of moving musical objects. Music is therefore tied to the generation and regulation of emotions. Some scientists believe that music heightens expression by deactivating a negative control system, producing the behavioural and physiological changes characteristic of a mental state. Juslin (2008) presented a theoretical framework highlighting six mechanisms by which listening to music may induce emotions: “brain stem reflexes, evaluative conditioning, emotional contagion, visual imagery, episodic memory, and musical expectancy”. The authors proposed that these mechanisms differ in characteristics such as the information they operate on, their ontogenetic development, the key brain regions involved, cultural impact, speed of induction, degree of volitional influence, modularity, and dependence on musical structure. They conclude that since the mechanisms that induce musical emotions are not exclusive to music, the study of musical emotions could benefit the emotion field as a whole by providing unique paradigms for emotional induction.


There are various mechanisms, such as linguistic association and emotional contagion via observed facial and vocal expressions, by which music conveys emotion (Bezdek, 2008). Studies by Nyklicek (1997) show that a large cluster of cardiovascular, respiratory and electrodermal responses is directly related to emotions induced by music. Nyklicek (1997) classified emotions by physiological responses such as respiratory sinus arrhythmia (RSA) and cardiac inter-beat intervals (IBI): sadness, for instance, correlated positively with IBI and with systolic (SBP) and diastolic (DBP) blood pressure, and negatively with skin conductance level (SCL). Krumhansl (1997) analysed the parallels between physiology and emotion judgments, which revealed substantial differences among musical excerpts. The largest changes in “heart rate, blood pressure, skin conductance and temperature” were found in sad excerpts, the largest changes in “blood transit time and amplitude” in fear excerpts, and the largest changes in measures of respiration in happy excerpts. These observed physiological effects of music inform theories of emotion and music. Physiological studies mainly examine the congruence between self-reported feelings and the physiological components of music-induced emotion. Skin conductance responses (SCRs) were found to be greater for two arousing emotions, fear and happiness, than for calmer emotions such as sadness and peacefulness; the results also illustrated that SCRs can be evoked and modified by musical arousal of emotions, but are not sensitive to emotional valence (Khalfa, 2002). Lundqvist (2009) likewise reported an association between music-induced happiness and greater skin conductance level. A critical concern in music and emotion research is whether music evokes genuine emotional responses in listeners or listeners simply perceive the emotions expressed by the music. It was found that happy music generated “more zygomatic facial muscle activity, greater skin conductance, lower finger temperature, more happiness and less sadness” than sad music. The emotion induced in the listener matched the emotion expressed in the music, which is highly consistent with the idea that music arouses emotions via emotional contagion.


Grewe et al. (2007a) proposed that no musical pattern can by itself compel an emotion; music does not impose its stimulus directly on the components of emotion. If the orchestrated physiological responses and the arousal component are interpreted as orienting reflexes, these universal reactions serve as a starting point for the appraisal process. We can therefore suppose that an orienting reflex need not itself be an emotion, but it can initiate an emotional appraisal process and act as a prerequisite for emotion. It has been argued that aesthetic emotions (those felt during aesthetic activity, such as fear, wonder and sympathy) are attenuated relative to other, more utilitarian emotions (Scherer, 2001). A nine-factor model best fitted the emotional descriptors chosen by music listeners attending a classical music festival (Zentner, 2008). The Geneva Emotional Music Scale (GEMS) is the first instrument specifically designed to measure musically evoked emotions; GEMS-45 contains 45 labels that proved to be consistently chosen for describing musically evoked emotive states across a relatively wide range of music and listener samples (Zentner, 2008). The GEMS accounts for ratings of music-evoked emotion more powerfully than general-purpose scales based on non-musical areas of emotion research. Chills are physiological sensations, involuntary shivers of the body or limbs associated with fear, weakness or excitement, that music readily induces. Sloboda (1991) and Panksepp (1995) suggested that sadness and melancholy in music give rise to chills. It is often said that “music is a language of emotions”, so what we experience when music plays is other people’s emotions, and this matters greatly to us. When our expectations and hopes about the music are violated, our attention is heightened, and we begin to understand and enjoy the new pattern. This learning process is accompanied by an emotional process, driven by motivation or by empathy. A chill may result when we concentrate intensely, sustain a strong emotional process, or focus our senses entirely on the music.


Grewe et al. (2007b) thus proposed that music is not a physical stimulus that directly drives our moods; rather, it is an open, communicative offer that affects our feelings through a re-creative process. Rickard (2004) shows that music-induced chills are accompanied by rises in SCL and heart rate. People’s emotional responses to music vary considerably, and this variability can be explained by differences in absorption, defined as a person’s capacity and willingness to be emotionally drawn in by a stimulus. Sandstrom (2013) developed a measuring instrument, the Absorption in Music Scale (AIMS), to assess a person’s absorption in music. It shows good internal consistency and temporal reliability, and it correlates with measures of general absorption, musical engagement and empathy; it can predict the strength of emotional responses to music. Their analysis suggests that absorption is a broad-spectrum trait, and the authors conclude that individuals with a high absorption index for music will readily engage with many forms of music. Gebhardt (2014) examined associations between emotion-modulation strategies pursued through music and personality dimensions, suggesting that traits such as timidity and nonchalance are coupled with increased use of music to cope with negative emotional states. Better knowledge of music’s influence on psychological disorders might give patients substantial relief from perceived anguish through the deliberate use of music.

Emotional content modeling

Humans can store and retrieve information about music based on its emotional content, alongside information such as artist names, albums, songs, genres and composers. Feng (2003), Li (2003) and Liu (2003) attempted to classify musical emotions across genres into groups of four to thirteen emotions. Li (2003) used an SVM-based multi-label classification method to test two main problems: classification into the thirteen adjective groups, and classification into six super-groups, with descriptors including cheerful, gay, happy, fanciful, light, delicate, graceful, dreamy, leisurely, longing, pathetic, dark, depressing, sacred, spiritual, dramatic, emphatic, agitated, exciting, frustrated, mysterious and spooky. The authors built a super-group classifier for each of four styles, Ambient, Classical, Fusion, and Jazz, showing that genre information can be used to improve the results of emotion detection. They stressed that emotion detection is a difficult problem and that progress in performance should come from: “escalating the sound data sets, collecting labeling in multiple rounds to ensure confidence in labeling, using different sets of adjectives, incorporating style and genre information, and using different types of features.” Such studies rest on emotion-recognition algorithms whose inputs are musical features representing properties like tempo, articulation, timbre and rhythm, which are essential to train the classifier.
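
A minimal sketch of such SVM-based multi-label emotion tagging is shown below, using scikit-learn. The features and labels are toy stand-ins, not the original dataset; the point is that each track can carry several adjective labels at once, with one binary SVM per adjective.

```python
# Sketch of multi-label emotion tagging with one SVM per adjective label.
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.svm import SVC

# toy per-track features (e.g. tempo, articulation, timbre, rhythm statistics)
X = np.array([[0.9, 0.2, 0.4], [0.1, 0.8, 0.7], [0.5, 0.5, 0.5], [0.8, 0.3, 0.6]])
labels = [["cheerful", "happy"], ["dark", "depressing"], ["dreamy"], ["exciting"]]

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                 # one binary column per adjective
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, Y)

new_track = np.array([[0.85, 0.25, 0.5]])
print(mlb.inverse_transform(clf.predict(new_track)))  # predicted adjective set
```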


Emotion usually varies across a musical selection over time. It is therefore necessary to design a time-varying method for measuring and analyzing emotion, rather than describing a piece with a single emotion. To capture the varying emotional content of a musical selection, Liu (2003) used Gaussian Mixture Models (GMMs) over feature sets for the moods contentment, depression, exuberance and anxiety. In building each GMM, the Expectation Maximization (EM) algorithm is used to estimate the parameters of the Gaussian components and the mixture weights, with initialization performed by the K-means algorithm. The authors presented a mood-detection methodology for classical music from acoustic data: Thayer’s model of mood is adopted as the mood taxonomy, and three feature sets representing “intensity, timbre and rhythm” are extracted directly from the acoustic data. A hierarchical framework is used to detect the mood of a music clip, and a segmentation scheme is provided for mood tracking across a piece. The algorithm achieved adequate accuracy in the evaluation experiments.
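
A compact sketch of this scheme, with toy feature vectors standing in for the intensity/timbre/rhythm features of the original work: one GMM is fitted per mood by EM with K-means initialization, and a clip is assigned the mood whose mixture explains its frames best.

```python
# Sketch: one GMM per Thayer-quadrant mood, fitted by EM; toy features only.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = {  # per-mood toy feature matrices (rows: frames, cols: features)
    "contentment": rng.normal(0.2, 0.05, (50, 3)),
    "depression":  rng.normal(0.4, 0.05, (50, 3)),
    "exuberance":  rng.normal(0.8, 0.05, (50, 3)),
    "anxious":     rng.normal(0.6, 0.05, (50, 3)),
}

# EM fitting; init_params="kmeans" mirrors the K-means initialization above
models = {mood: GaussianMixture(n_components=2, init_params="kmeans",
                                random_state=0).fit(X)
          for mood, X in train.items()}

clip = rng.normal(0.78, 0.05, (20, 3))       # frames from an unseen clip
scores = {mood: m.score(clip) for mood, m in models.items()}
print(max(scores, key=scores.get))           # highest-likelihood mood
```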


Schubert (1999) analyses emotion as a continuous function of time. Music is considered a language of emotions and is an everyday activity, and continuous analysis requires reliable, suitable emotion-based classification algorithms. If people are asked to pick a favourite song, they choose according to their mood, so mood strongly shapes music selection, and retrieving musical information by emotion is an intriguing task. Most people connect with the words of a song more readily than with its musical elements. Musical elements undoubtedly play an important role in depicting the emotion of a song, but the verses, composed around a lyrical theme, often express more emotion than the musical elements do. Researchers in music retrieval have accordingly focused either on genre classification using musical elements and song metadata, or on low-level feature analysis such as pitch, tempo or rhythm.


Ansdell (2014) considered how musical engagement and its outcomes can or cannot be related to types of well-being. The authors frame their concern as one with flourishing, as opposed to the more orthodox dichotomy of “health versus illness”, which can help to illuminate some otherwise invisible processes by which music benefits people. An emphasis on flourishing counts more everyday experiences as evidence of the role of music in health and well-being. Some of these affordances are stated in DeNora and Ansdell’s “What can’t music do?”: “provide a pretext for social relating”, “provide opportunities for demonstrating skill”, “provide opportunities in which to receive praise”, “provide metaphors and subject matter for personal and group-historical narrative”, “provide means for shifting mood, individually and collectively”, “provide opportunities for bodily movement and bodily display, including dance and quasi-dance”, “provide opportunities for doing other things (eating and drinking, dressing up, making noise, getting out of the house or ward)”, “develop skills that are transferrable to things other than musical activity”, “provide a means for renegotiating one’s identity and/or role within group culture or organization”, “provide a set of events that can be recalled and thus contribute to a sense of accumulating identity”, “provide opportunities for interaction with others”, and so on.


It is commonly claimed that listening to thrilling, extreme music breeds anger and expressions of anger such as aggression and criminal behaviour. Sharman (2015) conducted a study in which 39 extreme-music listeners aged 18–34 underwent an anger induction, followed by random assignment to 10 minutes of listening to extreme music from their own playlist or to 10 minutes of silence (control). Emotion was measured by heart rate and by “subjective ratings on the Positive and Negative Affect Scale (PANAS)”. Ratings of PANAS hostility, irritability and stress increased during the anger induction and decreased after the music or silence. Heart rate likewise increased during the anger induction; it remained steady (did not increase further) in the music condition and decreased in the silence condition. These findings indicate that extreme music did not make the participants angrier; on the contrary, it appeared to match their physiological arousal and resulted in increased positive emotions.

Measuring the mechanisms of emotions induced by music

Emotion here is a notional and operational designation of a core phenomenon that constitutes the object of theory and research. On the componential approach, an emotion is an episode of changes in several components. The three major response elements of emotion are physiological changes, motor expression and subjective feeling. Physiological changes, such as temperature sensations, respiratory and cardiovascular acceleration and deceleration, trembling and muscle spasms, are part of the description of emotion (Stemmler, 2004). Frijda (1986) considers the motivational and neurophysiological prerequisites for emotions and the ways in which emotions are regulated by the individual. Considering the kinds of events that cause emotions, he maintains that emotions arise because people evaluate events as satisfying or harmful to their own concerns. He also takes an “information-processing perspective” in which emotions are consequences of appraising the world in terms of one’s own interests, which in turn modifies action readiness.


Research focused on the finding of “emotion-differentiated autonomic activity” asked how that activity is generated. It emerged that producing “emotion-prototypic patterns of facial muscle action” gives rise to autonomic changes of large magnitude. The experiments also suggested that understanding of the emotion labels implied by the facial-movement instructions was directly or indirectly responsible for the effect. The authors proposed that contraction of the facial muscles responsible for universal emotion signals produces emotion-specific autonomic activity, either through peripheral feedback from the facial muscle movements or through a direct link between the motor cortex and the hypothalamus that translates “emotion-prototypic expression” in the face into “emotion-specific patterning” in the ANS (Ekman, 1983). The neurophysiological changes of an emotional episode are attributed to arousal that supports stable, smooth behavioural coordination and the development of adaptive responses, generating the actions and energy needed for phenomena such as fight or flight. The three main elements of the central motor system during emotional episodes are facial and vocal expressions together with gestures and postures (Ekman, 1984; Ekman, 1994; Izard, 1971). Darwin (1871) treated expression as a vestige of adaptive behaviour, such as clenching one’s teeth as an aspect of a biting response. Scherer (2001) suggested that emotional reactions are driven by a subjective appraisal of events in terms of their importance for the flourishing and goal attainment of individuals.


Emotions have an enduring effect on perception and on cognitive processes such as attention, reasoning, consciousness, problem-solving, decision-making and judgement (Dalgleish, 1999). Scherer (2001), applying approaches suggested in Scherer (2000), distinguished the focal points of four psychological models of emotion: “dimensional models on subjective feeling, discrete emotion model on motor expression or adaptive behavior patterns, meaning models on verbal descriptors of subjective feelings and componential model on the link between emotion antecedent evaluation and differentiated reaction patterns”. Among these, the componential model predicts the appraisal-response link in a categorical and meticulous way, covering typical intensity and duration, the degree of coordination or synchronization of different organismic systems during the state, the rapidity of change in the nature of the state, and the degree to which the state affects behaviour. These distinctions are needed in order to understand the non-cognitive effects of music; a serious effort in this direction requires closer consideration of aesthetic emotions.


Pragmatic (utilitarian) emotions such as anger, fear, joy, disgust, sadness, shame and guilt are the ones usually investigated in emotion research. They play a critical role in the adaptation of individuals and have important consequences for safety (fight or flight), adjustment and assimilation, and motivation enhancement. The functionality of pragmatic emotions rests on a prior analysis of the behavioural needs and goals of the individual. Scherer (2001) explained that pragmatic emotions have the high urgency of emergency reactions, involving a combination of many organismic systems at once, including changes in the endocrine and hormonal systems and in the autonomic, somatic and central nervous systems.

Brain correlates of music-induced emotion

Understanding the biology and neurochemistry of music lets people use it more effectively in curative, healing and other settings, where evidence indicates that music yields benefits beyond entertainment. After bypass surgery, patients often experience intermittent, irregular swings in blood pressure, which are normally treated with medication. Various studies show that patients in ICUs where background music is played require lower doses of these drugs than patients in units without it. Panksepp (1995) shows that the value people place on music lies in the emotional richness it adds to their lives: music can evoke strong emotional responses that are frequently felt as highly pleasurable and can produce chill sensations. Altenmüller (2013) and Salimpoor (2011) suggested, on the basis of various brain-imaging studies, that this emotional arousal is linked to activation of central nervous reward circuits and dopaminergic mechanisms, which directly influence cognitive abilities and memory formation.


The results also indicate that intense, pervasive pleasure in response to music can result in the release of dopamine in the striatal system. The researchers also observed that the frontal lobe is activated in both semantic and episodic musical memory tasks. The left hemisphere is activated, including inferior frontal regions and the angular gyrus, together with bilateral medial frontal regions; right middle frontal regions and the precuneus are activated predominantly in episodic and control tasks. During familiar-episode versus control tasks, the right precuneus and frontal gyrus are activated, and during unfamiliar-episode versus control tasks, the superior and middle frontal gyri and the medial frontal cortex. Altenmüller (2013) explained that long-term music training and the associated sensorimotor skill learning can be a strong driver of neuroplastic change in the developing and the adult brain, affecting both white and grey matter as well as cortical and subcortical structures. Making music, as in singing and dancing, produces a strong coupling of perception and action mediated by sensorimotor and multimodal brain regions, and influences, in both top-down and bottom-up fashion, important sound relay stations in the brain stem and thalamus. Listening to and making music provokes motion and emotion, increases communication between people, and is mediated by neurohormones such as serotonin and dopamine, making it a joyous and rewarding activity through changes in activity in the amygdala, ventral striatum and other components of the limbic system. All of this makes rehabilitation more pleasurable and can re-mediate weakened neural connections by coupling brain regions with one another.


Watanabe (2008) shows that listening to unfamiliar musical phrases is associated with activity in the right hippocampus, the left inferior frontal gyrus, bilateral lateral temporal regions and the left precuneus. Plailly (2007) stated that the feeling of familiarity can be activated by stimuli from all sensory modalities, suggesting a “multimodal nature of its neural bases”. The authors examined this hypothesis by analyzing the neural bases of familiarity processing for odours and music, focusing in particular on familiarity grounded in the participants’ life experiences. Items were categorized as familiar or unfamiliar according to participants’ individual responses, and activation patterns evoked by familiar items were compared with those evoked by unfamiliar items. “For the feeling of familiarity, a bimodal activation pattern was perceived in the left hemisphere, unambiguously the superior and inferior frontal gyri, the precuneus, the angular gyrus, the parahippocampal gyrus, and the hippocampus”. “The feeling of unfamiliarity was related to a smaller bimodal activation pattern principally to be found in the right insula and likely related to the detection of novelty”. There is an enormous release of dopamine in the striatum at the peak of emotional arousal while listening to music, as found by Salimpoor (2011), who combined measurements of dopamine release with psychophysiological measures of autonomic nervous system activity. There is also a decrease in blood flow in the amygdala, hippocampus, precuneus and ventromedial prefrontal cortex (Blood, 2001).

Characterising and classifying the emotions induced by the sound of music

Neuroimaging techniques have paved an interesting way toward identifying an overlapping complex of brain regions whose activity is modulated by mood and cognition. Various studies show changes in perception, attention, memory and decision-making in depressed individuals, which indicates that cognitive functions are directly linked with moods and emotions; the processing of thought is largely governed by one’s frame of mind. Cabeza (2000) shows that, consistent with this intuition, the functional neuroanatomy of mood and that of cognition involve overlapping networks of cortical and subcortical brain regions. The authors reviewed 275 PET and fMRI studies of “attention, perception, imagery, language, working memory, semantic memory retrieval, episodic memory encoding, episodic memory retrieval, priming, and procedural memory”. To detect consistent activation patterns associated with these cognitive activities, data from 412 contrasts were summarized at the level of cortical Brodmann’s areas, the insula, thalamus, medial temporal lobe, basal ganglia and cerebellum. It was found that activation patterns for perception and imagery included primary and secondary regions in the dorsal and ventral pathways. For attention and working memory, activations were usually found in prefrontal and parietal regions; for language and semantic memory retrieval, typical regions included left prefrontal and temporal regions; for episodic memory encoding, consistently activated regions included left prefrontal and medial temporal regions; and for episodic memory retrieval, activation patterns included prefrontal, medial temporal and posterior midline regions. For priming, deactivations in prefrontal (conceptual) or extrastriate (perceptual) regions were regularly seen; for procedural memory, activations were found in motor as well as non-motor brain areas.

When we pay attention to music, we tend toward self-forgetfulness and seem detached from the present scene. Current studies describe this detachment as an emotional response to music that leads to a distinct virtual phenomenon called “dreams”.

(Levinson, 1990) explained that when people encounter music, feelings are generated that draw us into a kind of isolation and are limited in duration. He likened the feelings generated while listening to music to "wine tasting" and the sampling of the delights of various vintages; the metaphor applies equally to positive and negative emotions. (Sloboda J. &., 2001) explained that when a person listens to music, he or she is actually recollecting past events that synchronise with that particular piece of music. A striking related finding is that "nostalgia" lies within the spectrum of music-induced feelings. (Darwin, 1871) noted that one of the most influential musically induced emotions is "love", which appears in two different forms, affection and tenderness. These feelings are accompanied by qualities of music such as feeling enchanted, charmed, dazzled and amazed; feelings of inspiration and admiration also fall under this category. (Clayton, 2004) explained that the happiness evoked while listening to music takes the form of enchantment, joy, or a combination of both, which is considered a "uniform affordance of music" able to enhance motor entrainment associated with joyful activation. (Meyer, 1956) and (Huron, 2006) suggested that surprise, tension and relief are the principal emotions related to music, the main reason being that harmonic, rhythmic and melodic progressions create expectations that are then fulfilled or violated. It was also found that when people are exposed to music they cannot understand, a new feeling of irritation and frustration arises. (Gowensmith, 1997) examined the arousal caused when a person is exposed to heavy metal music. It is common to find people becoming more aggressive while listening to heavy metal, yet the music does not create or reinforce anger in listeners who are familiar with it; only new listeners show elevated levels of anger.

Signal transduction in listening to music

(Brown D., 1991) shows that music supports good health and well-being, mainly through the engagement of neurotransmitter and neurochemical systems responsible for reward, motivation, pleasure, stress and arousal, immunity and social affiliation. Music can evoke a wide spectrum of states, from exhilaration to relaxation, joy to sadness, fear to comfort, and sometimes combinations of these. Music is used in many fields: neurosurgeons use it to improve concentration, armies to coordinate movements and enhance cooperation, workers to improve attention and vigilance, and athletes to increase stamina and motivation. It also plays an important role in pain management, relaxation, psychotherapy and personal growth. (A.C North and Hargreaves, 1996) and (Sloboda J. &., 2001) revealed that music helps evoke a strong and wide variety of emotions such as joy, sadness, fear and peacefulness or tranquillity.

(Quintin, 2011) conducted an experiment in which participants with autism spectrum disorder (ASD) were group-matched with typically developing (TD) participants, so that performance and full-scale IQ scores obtained with the Wechsler Abbreviated Scale of Intelligence differed by less than one standard deviation between groups. The core experimental task tested the amygdala theory of autism at the perceptual level in the domain of music, using ratings of emotional intensity to evaluate the theory. The study yielded several striking results: high-functioning adolescents with ASD can identify basic emotions in music, and this is precisely the case for happy, sad and scary music. These findings suggest varying the types of stimuli used to test emotion recognition in ASD, and the outcomes can also be applied in the context of music therapy or other intervention programmes that target social-expressive and emotional skills. The authors endorse the view that music perception is a comparative strength for individuals with ASD within a profile characterised by both strengths and weaknesses. (Panksepp, 1995) and (Sloboda J., 1991) explained that when exposed to certain kinds of music, a listener can experience intense pleasure or euphoria, sometimes felt as "thrills" or "chills" down the spine. (Goldstein, 1980) described a thrill as a slight tremor, chill or tingling sensation, confined to a small area at the back of the neck and transient; it must be presumed that this is a reported perception arising from a central neural focus. A high-intensity thrill lasts longer and propagates from the "point of origin, up over the scalp, downward along the spine and forward over the chest, abdomen, thighs and legs".

Mostly it is accompanied by visible "gooseflesh", particularly on the arms. The perceived thrill reflects spreading electrical activity in brain areas with somatotopic organisation, with neurological links to the limbic system and to central autonomic regulation. (Dube', 2003) noted that music confers no direct survival benefit, unlike food, drink or sex, nor is it addictive in the way drugs of abuse are; yet listening to music is one of the most rewarding activities, and people devote considerable time to it over their lifetimes. (Blood, 2001), (Brown S. e., 2004) and (Menon, 2005) presented music pieces in alternating 24 s experimental and control epochs using a standard fMRI block design. The enhanced functional and effective connectivity observed between brain regions responsible for reward, autonomic and cognitive processing helps explain the experiences produced by listening to music. The authors performed statistical analyses using general linear models and the theory of Gaussian random fields as implemented in SPM99. Various musical stimuli were used to probe the integrity of the mesolimbic reward system and to examine whether deficits in these regions are associated with clinical indicators. Singing and speaking are associated with lateralised differences in cerebral activity: right-hemisphere regions are more active during singing, and left-hemisphere regions during speaking.
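
The block-design analysis mentioned above can be illustrated with a short sketch: a boxcar of alternating 24 s music and control epochs is convolved with a canonical haemodynamic response function and fitted to a voxel time series with an ordinary least-squares GLM. This is a minimal stand-in for the SPM99 pipeline the cited studies used; the TR, scan count and double-gamma HRF parameters are common defaults assumed here, not values from those papers.

```python
# Minimal block-design GLM sketch (assumed TR, scan count and HRF shape).
import numpy as np
from scipy.stats import gamma

TR = 2.0                   # repetition time in seconds (assumed)
n_scans = 240              # number of volumes (assumed)
block_len = int(24 / TR)   # 24 s epochs -> 12 scans per block

# Boxcar: alternating music (1) and control (0) epochs
boxcar = np.tile(np.r_[np.ones(block_len), np.zeros(block_len)],
                 n_scans // (2 * block_len))

# Canonical double-gamma haemodynamic response function
t = np.arange(0, 32, TR)
hrf = gamma.pdf(t, 6) - 0.35 * gamma.pdf(t, 16)
hrf /= hrf.sum()

# Convolve the boxcar with the HRF to obtain the task regressor
regressor = np.convolve(boxcar, hrf)[:n_scans]

# Design matrix: task regressor plus an intercept
X = np.column_stack([regressor, np.ones(n_scans)])

# Fit the GLM to one simulated voxel time series
rng = np.random.default_rng(0)
y = 2.0 * regressor + 100 + rng.normal(0, 1, n_scans)  # toy BOLD signal
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated task effect: {beta[0]:.2f}")
```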

Comparable increases are observed in homologous portions of the left and right hemispheres in regions usually linked with sensorimotor function. This asymmetry supports the idea that "oral laryngeal motor activity" is more directly controlled by regions in the right hemisphere during singing. Such a pattern provides insight into the nature of neural conditions such as stammering and aphasia, in which singing can induce fluency. It was also found that speaking is linked with "greater activity in left perisylvian regions" and singing with "augmented activity in right anterior temporal, prefrontal, and para-limbic cortices", which are structurally interconnected; these areas support the fluency-inducing effects of words produced in melody. Advances in neuroimaging technology have shed light on functional activation, network connectivity and dopamine release during the perception of pleasurable music (Jeffries, 2003). (Koelsch, 2006) discussed studies using Positron Emission Tomography (PET) to analyse cerebral blood flow during pleasurable music, in which self-selected music that induced chills down the spine was compared with neutral music. Chill-inducing music enhances cerebral blood flow within a network of interconnected structures comprising the mesocorticolimbic system, regions critical to reward and reinforcement, such as the ventral striatum including the nucleus accumbens and midbrain, as well as the thalamus, cerebellum, insula, anterior cingulate cortex and orbitofrontal cortex (Blood, 2001). The nucleus accumbens is also activated when a person is exposed to unfamiliar pleasurable music, and during singing compared with speech (Brown S. e., 2004) (Jeffries, 2003). These studies indicate that the reward of listening to music involves activation of the nucleus accumbens and of opioid-rich midbrain nuclei known to mediate morphine analgesia, which may underlie the almost addictive pull of music.

(Menon, 2005), (Koelsch, 2006) and (Salimpoor, 2011) used higher-resolution functional magnetic resonance imaging (fMRI) to investigate the neural correlates of musical pleasure. (Janata, 2009) reported that the medial prefrontal cortex (MPFC) is one of the key brain regions supporting self-referential processes, integrating "sensory information with self-knowledge and the retrieval of autobiographical information". The author used fMRI and a novel procedure for eliciting autobiographical memories with excerpts of popular music, in order to test the hypothesis that music and autobiographical memories are integrated in the medial prefrontal cortex. Dorsal regions of the MPFC (Brodmann area 8/9) responded parametrically to the number of autobiographical features experienced over the course of individual 30 s excerpts. The author concluded that the dorsal MPFC links music and memories during the experience of emotionally salient episodic memories elicited by familiar songs from our personal past. (Craig, 2002) found that listening to pleasurable music is associated with activation of the nucleus accumbens and with ventral-tegmental-area-mediated interactions between the nucleus accumbens and other brain structures known to regulate autonomic, emotional and cognitive functions. (Swanson, 1982) and (Wise, 2004) found that dopaminergic neurons originating in the ventral tegmental area, with major projections to the nucleus accumbens and forebrain regions, are necessary for the efficacy of rewarding stimuli. (Cepeda, 2006), (Dileo, 2007), (Nilsson, 2008), (Knight, 2001), (Pittman, 2011), (Tam, 2008) and (Spintage, 2012) explained that exposure to relaxing and soothing music, characterised by slow tempo, low pitch and no lyrics, helps reduce stress and depression in healthy individuals.

Studies also show that music helps patients undergoing invasive procedures such as surgery, colonoscopy and dental procedures, and assists patients suffering from coronary heart disease. Researchers have also reported that music reduces the need for sedation as well as pain and analgesic requirements. (Bradt, 2009) reported that physiological data indicate patients experience less apprehension during medical procedures when they listen to music. Because physiological reactions are continuously monitored during procedures with coronary heart disease (CHD) patients, a music intervention can easily be stopped if the patient experiences no favourable effects. Listening to music is well established as an "anxiety management intervention" before and during procedures. The authors of this review provided evidence that listening to pre-recorded music yields health benefits for individuals with CHD; however, other music interventions, such as "music improvisation, singing, experiencing live music, songwriting", were not properly examined. (McKinney, 1997) described a music therapy that combines relaxation techniques with listening to classical music, named "Guided Imagery & Music" (GIM), which reduces hypothalamic-pituitary-adrenal (HPA) axis activation in healthy subjects. Two markers of HPA activation, cortisol and β-endorphin, were found to be reduced by GIM.

Results and discussion

Building on the above theories, it is evident that scholars and scientists are trying to bridge the gap between musical and emotional research, since doing so helps augment positive effects and attentiveness in humans. This would help develop a constructive attitude among disheartened and depressed individuals. The literature also shows that listening to music may influence emotions through brain stem reflexes, evaluative conditioning, emotional contagion, visual imagery, episodic memory and musical expectancy. This places the core focus on hereditary factors, on enhancing the functioning of vital brain structures, and on the cultural influences shaping the structure of music. Music also helps modulate emotions through a number of mechanisms, such as semantic associations and emotional contagion via facial and vocal expressions. Most physiological and psychological responses, including cardiovascular, respiratory and electrodermal responses, RSA and CIBI, are closely interconnected with music. There are strong fluctuations in heart rate, blood pressure, skin conductance and temperature in the course of emotions such as sadness and happiness. These illustrations clearly indicate a comprehensive parallelism between music-induced happiness and other responses such as skin conductance level. The emotional arousal induced in listeners through music is more reliable than that produced by many other activities.
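
The claimed parallelism between felt emotion and skin conductance can be made concrete with a toy check: correlate a continuously sampled emotion rating with the skin conductance level (SCL) recorded during the same excerpt. Both series below are simulated for illustration; in a real study they would come from a rating dial and an electrodermal amplifier, and the sampling rate and noise levels are assumptions.

```python
# Toy parallelism check on simulated rating and skin-conductance traces.
import numpy as np

rng = np.random.default_rng(42)
n = 300                                    # e.g. a 5-minute excerpt at 1 Hz (assumed)
arousal = np.cumsum(rng.normal(size=n))    # toy continuous felt-emotion rating
scl = 0.8 * arousal + rng.normal(scale=2.0, size=n)  # toy SCL recording

# Pearson correlation between the two traces quantifies the parallelism
r = np.corrcoef(arousal, scl)[0, 1]
print(f"rating-SCL correlation: r = {r:.2f}")
```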

Aesthetic emotions touched by aesthetic activities, such as fear, wonder and sympathy, are attenuated during music listening. People with fear-related problems can be treated with music, which helps them modify their social behaviour. The research also establishes that chills elicited by fear, weakness and excitement can be induced through music. Musical genres are considered important because each genre helps induce different types of emotions in humans. Most techniques used for recognising emotions lack consistency, owing to the complexity of emotion-recognition algorithms in which the signals must be mined from the brain. These algorithms require various features of the music, such as the lyrics or verses of the song, along with properties such as tempo, articulation, timbre and rhythm, since these serve as important parameters for training the classifier, as sketched below. Ratings on the PANAS indicate that hostility, irritability and anxiety increase during anger induction and decrease after music or silence.
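
A minimal sketch of such a pipeline follows, assuming a hypothetical labelled corpus of audio clips: tempo and timbre descriptors are extracted with librosa and used to train a support-vector classifier. The feature set, file paths and emotion labels are illustrative assumptions, not the method of any particular study reviewed here.

```python
# Sketch: audio features (tempo, timbre) -> SVM emotion classifier.
import numpy as np
import librosa
from sklearn.svm import SVC

def extract_features(path):
    """Summarise one clip by tempo, MFCC (timbre) and spectral centroid."""
    y, sr = librosa.load(path, duration=30.0)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)             # rhythm
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)         # timbre
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)   # brightness
    return np.hstack([np.atleast_1d(tempo).ravel()[:1],
                      mfcc.mean(axis=1), mfcc.std(axis=1),
                      [centroid.mean(), centroid.std()]])

# Hypothetical labelled corpus of (clip path, induced-emotion label) pairs
corpus = [("clips/joy_01.wav", "joy"), ("clips/sad_01.wav", "sadness")]

X = np.vstack([extract_features(path) for path, _ in corpus])
y_labels = [label for _, label in corpus]

clf = SVC(kernel="rbf").fit(X, y_labels)  # train on the tiny toy corpus
print(clf.predict(X[:1]))                 # sanity-check prediction
```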

Emotions are largely responsible for enduring effects on perception and on cognitive processes such as attention, reasoning, awareness, problem-solving, decision-making and judgement. Most emotional research has explored pragmatic emotions, which play a vital role in the adaptation of individuals and carry serious correlations and consequences for safety, such as "fight/flight, reformation and assimilation, impetus enhancement". A recent study shows that post-surgical patients experience greater relaxation and less pain when they encounter music; it is advisable to play calming, soothing music in postoperative care to help patients recover within a short span. It has also been reported that long-term music learning and training produce considerable neuroplastic changes in the developing and adult brain, affecting both white and grey matter as well as cortical and subcortical structures. Listening to and composing music stirs emotions, intensifies communication between subjects, and is mediated by neural hormones such as serotonin and dopamine, chiefly responsible for joyful and rewarding experiences, through changes in the amygdala, ventral striatum and other components of the limbic system. This makes rehabilitation a pleasurable process that can re-establish weakened neural connections by engaging brain regions with one another. Love is one of the most influential musically induced emotions, appearing in different forms such as affection and tenderness, and is accompanied by qualities of music such as feeling enchanted, charmed, dazzled and amazed. It was found that high-functioning adolescents with autism spectrum disorder (ASD) can categorise basic emotions in music. This notable finding suggests varying the types of stimuli used to test emotion recognition in ASD.

The outcomes can be applied in the context of music therapy or other intervention programmes targeting social-expressive and emotional skills. Singing and speaking are associated with lateralised differences in cerebral activity: right- and left-hemisphere regions are differentially engaged during singing and speaking, respectively. Music provides insight into the nature of neural conditions such as stammering and aphasia, in which singing can improve fluency. Music-induced chills enhance cerebral blood flow within the network of interconnected structures comprising the mesocorticolimbic system, regions critical to reward and reinforcement. When a person experiences unfamiliar pleasurable music, there is activation of the nucleus accumbens and ventral-tegmental-area-mediated interactions between the nucleus accumbens and other brain structures known to regulate autonomic, emotional and cognitive functions. Music helps relieve the stress and pain of patients undergoing invasive procedures such as surgery, colonoscopy and dental procedures, and of those with coronary heart ailments. Researchers have also shown that music reduces the need for sedation as well as pain and analgesic requirements.

Concluding remarks

Although emotions can be evoked through music, recognising them through it in practical settings is not yet possible. The main reason is that interpreting the signals formed inside the brain is a tedious task: the brain is composed of many millions of active neurons, and when these neurons act together, biochemical feedback triggers the electrical impulses that can be measured with EEG devices. Most of the brain's functional machinery is spread across its outermost layer, the cortex, which is extremely pleated and folded; this folding is a significant obstacle to understanding the signals. Each cortex is folded in a unique way, much like a fingerprint, so the physical locations of the signals differ between people; even monozygotic twins show no uniformity in surface signals. The daunting task is to craft an algorithm that unfolds and reveals the information in the cortex, maps the surface signal back to its source, and then works across the bulk of the population. Research on emotions and music has grown considerably over the past two decades, with numerous areas contributing, including "psychology, neuroscience, endocrinology, medicine, history, sociology and even computer science". The plentiful theories attempting to elucidate the origin, neurobiology, experience and purpose of emotions have only fostered further penetrating research on this subject. Current areas of research on emotions include the study of stimuli that arouse and bring about emotions. Additionally, PET and fMRI scans support the study of emotional processes in the brain.
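
To make the first step of such an EEG pipeline concrete, the sketch below turns a raw EEG trace into band-power features of the kind consumer headsets summarise as an attention score. The sampling rate, band edges and beta/(alpha+theta) engagement ratio are common conventions assumed for illustration only; they are not taken from any study cited here or from any specific headset.

```python
# Sketch: raw EEG samples -> band-power features -> crude attention index.
import numpy as np
from scipy.signal import welch

FS = 512  # sampling rate in Hz (assumed)

def band_power(signal, lo, hi, fs=FS):
    """Mean spectral power of `signal` between lo and hi Hz (Welch PSD)."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs * 2)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

eeg = np.random.default_rng(1).normal(size=FS * 10)  # 10 s of toy EEG

theta = band_power(eeg, 4, 8)    # drowsiness / inattention
alpha = band_power(eeg, 8, 13)   # relaxed wakefulness
beta  = band_power(eeg, 13, 30)  # active concentration

engagement = beta / (alpha + theta)  # a crude attention-like score
print(f"engagement index: {engagement:.3f}")
```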

References

  • A.C North and Hargreaves, D. J. (1996). Responses to music in aerobic exercise and yogic relaxation classes. Br. J. Psychol., 87, 535-547.
  • Altenmuller, E. a. (n.d.). Neurologic music therapy: the beneficial effects of music making on neuro-rehabilitation. Acoust. Sci. Tech., 34(1). doi:10.1250/ast.34.5
  • Altenmuller, E. K. (2013). A contribution to the evolutionary basis of music: lessons from the chill response. In S. S. E. Altenmuller (Ed.), Evolution of Emotional Communication: From Sounds in Non-human Mammals to Speech and Music in Man, Series in Affective Sciences (pp. 313-335). Oxford University Press. doi:10.1093/9780199583560.001.0001
  • Ansdell, D. a. (2014). What can’t music do? Psychology of Well-Being: Theory, Research and Practice, 1, 23.
  • Bezdek, M. &. (2008). Musical emotions in the context of the narrative film. Behavioural and Brain Sciences, 31, 578.
  • Bharucha, J. C. (2006). Varieties of musical experience. Cognition, 100(1), 131-172.
  • Blood, A. a. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences, USA, 98, 11818-11823.
  • Bradt, J. a. (2009). Music for stress and anxiety reduction in coronary heart disease patients. Cochrane Database of Systematic Reviews, 2. CD006577.
  • Brown, D. (1991). Human Universals. New York: McGraw-Hill.
  • Brown, S. e. (2004). Passive music listening spontaneously engages limbic and paralimbic systems. NeuroReport, 15, 2033-2037.
  • Cabeza, R. &. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12, 1-47.
  • Cepeda, M. e. (2006). Music for pain relief. Cochrane Database of Systematic Reviews, 2. CD004843.
  • Clayton, M. (2004). The concept of entrainment and its significance for ethnomusicology. ESEM CounterPoint, 1, 1-82.
  • Craig, A. (2002). How do you feel? Interoception: the sense of the physiological condition of the body. Nat. Rev. Neurosci., 3, 655-666.
  • Liu, D. (2003). Automatic mood detection from acoustic music data. Proceedings of the 5th Int. Symp. Music Information Retrieval, 81-87.
  • Dalgleish, T. &. (1999). Handbook of Cognition and Emotion. Chichester: Wiley.
  • Darwin, C. (1871). The descent of man, and selection in relation to sex (2 volumes). London: Murray.
  • DeNora, T. (1999). Music as a technology of the self. Poetics, 27, 31-36.
  • Dileo, C. a. (2007). Music therapy: application to stress management. In P. e. Lehrer, Principles and Practices of Stress Management (pp. 519-544). Guilford Press.
  • Dube’, L. a. (2003). The categorical structure of pleasure. Cogn. Emot., 17, 263-297.
  • Schubert, E. (1999). Measurement and time series analysis of emotion in music. PhD dissertation, School of Music and Music Education, University of New South Wales, Sydney.
  • Ekman, P. L. (1983). Autonomic nervous system activity distinguishes among emotions. Science, 221, 1208-1210.
  • Ekman, P. (1984). Expressions and the nature of emotions. In &. P. K.R. Scherer, Approaches to Emotion (pp. 319-344). Hillsdale: Erlbaum.
  • Ekman, P. (1994). Strong evidence for universals in facial expressions: A reply to Russell’s mistaken critique. Psychological Bulletin, 115, 268-287.
  • Feng, Y. Z. (2003). Popular music retrieval by detecting mood. Proc. 26th Annual International ACM SIGIR Conf. Research and Development in Information Retrieval (SIGIR) (pp. 375-376). Toronto, ON, Canada.
  • Frijda, N. (1986). The Emotions. Cambridge and New York: Cambridge University Press.
  • Goldstein, A. (1980). Thrills in response to music and other stimuli. Physiol. Psychol, 8, 126-129.
  • Gowensmith, W. N. (1997). The effects of heavy metal music on arousal and anger. Journal of Music Therapy, 1, 33-45.
  • Grewe, O. N. (2007a). Emotions over time: Synchronicity and development of subjective, physiological, and facial affective reactions to music. Emotion, 7(4), 774-788.
  • Grewe, O. N. (2007b). Listening to music as a re-creative process: Physiological, psychological, and psychoacoustical correlates of chills and strong emotions. Music Perception, 24, 297-314.
  • Huron, D. (2006). Sweet anticipation. In Music and the Psychology of Expectation. Cambridge: MIT Press.
  • Izard, C. (1971). The Face of Emotion. New York: Appleton-Century-Crofts.
  • Janata, P. (2009). The neural architecture of music-evoked autobiographical memories. Cereb. Cortex, 19, 2579-2594.
  • Jeffries, K. a. (2003). Words in melody: an H2(15)O PET study of brain activation during singing and speaking. NeuroReport, 14, 749-754.
  • Juslin, P. &. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioural and Brain Sciences, 31(5), 559-575 (discussion 575-621).
  • Khalfa, S. P. (2002). Event-related skin conductance responses to musical emotions in humans. Neuroscience Letters, 328(2), 145-149.
  • Knight, W. &. (2001). Relaxing music prevents stress-induced increases in subjective anxiety, systolic blood pressure and heart rate in healthy males and females. Journal of Music Therapy, 38, 254-272.
  • Koelsch, S. e. (2006). Investigating emotion with music: an fMRI study. Human Brain Mapping, 27, 239-250.
  • Krumhansl, C. (1997). An exploratory study of music emotions and psychophysiology. Canadian Journal of Experimental Psychology, 51(4), 336-353.
  • Levinson, J. (1990). Music and the negative emotions. In J. Levinson, Music, Art & Metaphysics: Essays in Philosophical Aesthetics (pp. 306-335). Ithaca, NY: Cornell University Press.
  • Li, T. &. (2003). Detecting emotions in music. Proc. 5th Int. Symp. Music Information Retrieval, (pp. 239-240). Baltimore MD.
  • Lundqvist, L. C. (2009). Emotional responses to music: Experience, expression and physiology. Psychology of Music, 37, 61-90.
  • McKinney, C. e. (1997). Effects of guided imagery and music (GIM) therapy on mood and cortisol in healthy adults. Health Psychology, 16, 390-400.
  • McKinney, C. e. (1997). The effect of selected classical music and spontaneous imagery on plasma beta-endorphin. Journal of Behav. Med., 20, 85-99.
  • Menon, V. a. (2005). The rewards of music listening: response and physiological connectivity of the mesolimbic system. NeuroImage, 28, 175-184.
  • Meyer, L. (1956). Emotion and meaning in music. Chicago: University of Chicago Press.
  • Nilsson, U. (2008). The anxiety and pain reducing effects of music interventions: a systematic review. AORN Journal, 87, 780-807.
  • Nyklicek, I. T. (1997). Cardiorespiratory differentiation of musically induced emotions. Journal of Psychophysiology, 11, 304-321.
  • Panksepp, J. (1995). The emotional sources of chills induced by music. Music Perception, 13, 171-207. doi:10.2307/40285693
  • Pittman, S. a. (2011). Music intervention and preoperative anxiety: an integrative review. Int. Nurs. Rev., 58, 157-163.
  • Plailly, J. T. (2007). The feeling of familiarity of music and odours: the same neural signature? Cereb. Cortex, 17, 2650-2658. doi:10.1093/cercor/bhl173
  • Quintin, E. e. (2011). Emotion perception in music in high functioning adolescents with autism spectrum disorders. Journal of Autism Dev. Disord, 41, 1240-1255.
  • Rickard, N. (2004). Intense emotional responses to music: A test of the physiological arousal hypothesis. Psychology of Music, 32, 371-388.
  • Salimpoor, V. B. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience, 14, 257-262. doi:10.1038/nn.2726
  • Sandstrom, G. M. (2013). Absorption in music: A scale to identify individuals with strong emotional responses to music. Psychology of Music, 41, 216-228.
  • Scherer, K. (2001). Appraisal considered as a process of multilevel sequential checking. In K. S. Johnstone, Appraisal Processes in Emotion: Theory, Methods, Research (pp. 92-120). Oxford, New York: Oxford University Press.
  • Scherer, K. R. (2000). Psychological models of emotion. In J. Borod, The Neuropsychology of Emotion (pp. 137-162). Oxford/ New York: Oxford University Press.
  • Scherer, K. R. (2001). Emotional effects of music: Production rules. In P. J. Sloboda, Music and emotion: Theory and research (pp. 361-392). Oxford; New York: Oxford University Press.
  • Sharman, L. a. (2015). Extreme metal music and anger processing. Frontiers in Human Neuroscience, 9, 272. doi:10.3389/fnhum.2015.00272
  • Sloboda, J. &. (2001). Emotions in everyday listening to music. In P. J. Sloboda, Music and emotion: Theory and research (pp. 415-429). Oxford, England: Oxford University Press.
  • Sloboda, J. (1991). Music structure and emotional response: Some empirical findings. Psychology of Music, 19, 110-120.
  • Sloboda, J. O. (2001). Functions of music in everyday life: An exploratory study using the experience sampling method. Musicae Scientiae, 9-32.
  • Spintage, R. (2012). Clinical use of music in operating theatres. In R. e. MacDonald, Music, Health and Wellbeing. Oxford University Press.
  • Gebhart, S. M. K. (2014). The use of music for emotion modulation in mental disorders: the role of personality dimensions. Journal of Integrative Psychology and Therapeutics.
  • Stemmler, G. (2004). Physiological processes during emotions. In &. P. R.S. Feldman, The regulation of emotion. Mahwah, NJ: Erlbaum.
  • Swanson, L. (1982). The projections of the ventral tegmental area and adjacent regions: a combined fluorescent retrograde tracer and immunofluorescence study in the rat. Brain Res. Bull., 9, 321-353.
  • Tam, W. e. (2008). Effects of Music on procedure time and sedation during colonoscopy: A meta-analysis. World J. Gastroenterol, 14, 5336-5343.
  • Watanabe, T. Y. (2008, Jan 1). The memory of music: roles of right hippocampus and left inferior frontal gyrus. Neuroimage, 39(1), 483-91.
  • Wise, R. (2004). Dopamine, learning and motivation. Nature Rev. Neuroscience, 5, 483-494.
  • Zentner, M. (2008). Emotions evoked by the sound of music: Characterization, classification and measurement. Emotion, 8(4), 494-521.

Basim A (2017), Deciphering Human Emotions through Music -A Heuristic Avenues in Research, International Journal of Indian Psychology, Volume 4, (3), DIP: 18.01.100/20170403