Emotion AI Overview: What is it and how does it work?


Artificial emotional intelligence, or Emotion AI, is also known as emotion recognition or emotion detection technology. In market research, this is commonly referred to as facial coding.

Humans use a lot of non-verbal cues, such as facial expressions, gestures, body language and tone of voice, to communicate their emotions. Our vision is to develop Emotion AI that can detect emotion just the way humans do: from multiple channels. Our long-term goal is to develop "Multimodal Emotion AI", which combines analysis of face and speech as complementary signals to provide richer insight into the human expression of emotion. For several years now, Affectiva has been offering industry-leading technology for the analysis of facial expressions of emotion. Most recently, Affectiva has added speech capabilities, now available to select beta testers.

Emotion detection – Face

Our Emotion AI unobtrusively measures unfiltered and unbiased facial expressions of emotion, using an optical sensor or just a standard webcam. Our technology first identifies a human face, in real time or in an image or video. Computer vision algorithms then identify key landmarks on the face – for example, the corners of your eyebrows, the tip of your nose, the corners of your mouth. Deep learning algorithms analyze pixels in those regions to classify facial expressions, and combinations of these facial expressions are then mapped to emotions.
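The three-stage pipeline described above (detect a face, locate landmarks, classify expressions) can be sketched in a few lines. Everything below is illustrative: the landmark positions and the scoring rule are invented stand-ins, not Affectiva's trained models.

```python
# Illustrative sketch of the three-stage pipeline described above.
# The landmark positions and scoring rules here are invented for
# demonstration; a real system uses trained computer-vision models.

def detect_face(frame):
    """Stage 1: return a bounding box for the face (stubbed)."""
    return {"x": 0, "y": 0, "w": frame["width"], "h": frame["height"]}

def locate_landmarks(face_box):
    """Stage 2: key points such as mouth corners (stubbed coordinates)."""
    return {
        "left_mouth_corner": (30, 70),
        "right_mouth_corner": (70, 70),
        "mouth_center": (50, 74),
    }

def classify_expressions(landmarks):
    """Stage 3: map landmark geometry to expression scores (0-100).
    A toy rule: mouth corners above the mouth center suggest a smile."""
    lift = landmarks["mouth_center"][1] - landmarks["left_mouth_corner"][1]
    smile = max(0, min(100, lift * 25))
    return {"smile": smile}

frame = {"width": 100, "height": 100}
scores = classify_expressions(locate_landmarks(detect_face(frame)))
print(scores)  # {'smile': 100}
```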

In our products, we measure seven emotion metrics: anger, contempt, disgust, fear, joy, sadness and surprise. In addition, we provide 20 facial expression metrics. In our SDK and API, we also provide emojis, gender, age, ethnicity and a number of other metrics. Learn more about our metrics.

The face provides a rich canvas of emotion. Humans are innately programmed to express and communicate emotion through facial expressions. Affdex scientifically measures and reports the emotions and facial expressions using sophisticated computer vision and machine learning techniques.

Here are some links to other areas of interest:

  • Determining Accuracy
  • Mapping Expressions to Emotions
  • Obtaining Optimal Results

When you use the Affdex SDK in your applications, you will receive facial expression output in the form of Affdex metrics: seven emotion metrics, 20 facial expression metrics, 13 emojis, and four appearance metrics.

Furthermore, the SDK allows for measuring valence and engagement, as alternative metrics for measuring the emotional experience.

Engagement: A measure of facial muscle activation that illustrates the subject’s expressiveness. The range of values is from 0 to 100.

Valence: A measure of the positive or negative nature of the recorded person’s experience. The range of values is from -100 to 100.
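As a toy illustration of how such composite metrics might be derived from the underlying expression scores (the expression names and weights below are assumptions for the sketch, not the SDK's actual formulas):

```python
# Toy derivation of engagement and valence from expression scores.
# Which expressions count as positive/negative, and their weights,
# are illustrative assumptions, not the SDK's real model.

POSITIVE = {"smile": 1.0, "cheek_raise": 0.6}
NEGATIVE = {"brow_furrow": 0.8, "lip_corner_depressor": 1.0, "nose_wrinkle": 0.7}

def engagement(scores):
    """0-100: the strongest facial muscle activation observed."""
    return max(scores.values(), default=0)

def valence(scores):
    """-100..100: positive evidence minus negative evidence, clamped."""
    pos = max((scores.get(k, 0) * w for k, w in POSITIVE.items()), default=0)
    neg = max((scores.get(k, 0) * w for k, w in NEGATIVE.items()), default=0)
    return max(-100, min(100, pos - neg))

scores = {"smile": 80, "brow_furrow": 10}
print(engagement(scores), valence(scores))  # 80 72.0
```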

How do we map facial expressions to emotions?

The Emotion predictors use the observed facial expressions as input to calculate the likelihood of an emotion.
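A minimal sketch of that idea: each emotion accumulates evidence from the expressions that typically signal it. The weights below are illustrative guesses, not the trained predictors.

```python
# Sketch of expression-to-emotion mapping: each emotion accumulates
# evidence from expressions known to signal it. The weight table is
# an illustrative assumption, not Affectiva's model.

EMOTION_WEIGHTS = {
    "joy":      {"smile": 1.0, "cheek_raise": 0.5},
    "anger":    {"brow_furrow": 0.7, "lid_tighten": 0.4, "lip_press": 0.4},
    "surprise": {"brow_raise": 0.6, "eye_widen": 0.5, "jaw_drop": 0.4},
}

def emotion_likelihoods(expr_scores):
    """Return each emotion's evidence, normalized to 0-100."""
    out = {}
    for emotion, weights in EMOTION_WEIGHTS.items():
        raw = sum(expr_scores.get(e, 0) * w for e, w in weights.items())
        cap = sum(100 * w for w in weights.values())  # max possible evidence
        out[emotion] = round(100 * raw / cap, 1)
    return out

print(emotion_likelihoods({"smile": 90, "cheek_raise": 60}))
# {'joy': 80.0, 'anger': 0.0, 'surprise': 0.0}
```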

Facial Expressions

Attention – Measure of focus based on the head orientation

Brow Furrow – Both eyebrows moved lower and closer together

Brow Raise – Both eyebrows moved upwards

Cheek Raise – Lifting of the cheeks, often accompanied by “crow’s feet” wrinkles at the eye corners

Chin Raise – The chin boss and the lower lip pushed upwards

Dimpler – The lip corners tightened and pulled inwards

Eye Closure – Both eyelids closed

Eye Widen – The upper lid raised sufficiently to expose the entire iris

Inner Brow Raise – The inner corners of eyebrows are raised

Jaw Drop – The jaw pulled downwards

Lid Tighten – The eye aperture narrowed and the eyelids tightened

Lip Corner Depressor – Lip corners dropping downwards (frown)

Lip Press – Pressing the lips together without pushing up the chin boss

Lip Pucker – The lips pushed forward

Lip Stretch – The lips pulled back laterally

Lip Suck – Pull of the lips and the adjacent skin into the mouth

Mouth Open – Lower lip dropped downwards

Nose Wrinkle – Wrinkles appear along the sides and across the root of the nose due to skin pulled upwards

Smile – Lip corners pulling outwards and upwards towards the ears, combined with other indicators from around the face

Smirk – Left or right lip corner pulled upwards and outwards

Upper Lip Raise – The upper lip moved upwards

Emoji Expressions

Laughing – Mouth opened and both eyes closed

Smiley – Smiling, mouth opened and both eyes opened

Relaxed – Smiling and both eyes opened

Wink – Either of the eyes closed

Kissing – The lips puckered and both eyes opened

Stuck Out Tongue – The tongue clearly visible

Stuck Out Tongue and Winking Eye – The tongue clearly visible and either of the eyes closed

Scream – The eyebrows raised and the mouth opened

Flushed – The eyebrows raised and both eyes widened

Smirk – Left or right lip corner pulled upwards and outwards

Disappointed – Frowning, with both lip corners pulled downwards

Rage – The brows furrowed, and the lips tightened and pressed

Neutral – Neutral face without any facial expressions

Using the Metrics

Emotion, Expression and Emoji metrics scores indicate when users show a specific emotion or expression (e.g., a smile) along with the degree of confidence. The metrics can be thought of as detectors: as the emotion or facial expression occurs and intensifies, the score rises from 0 (no expression) to 100 (expression fully present).

In addition, we also expose a composite emotional metric called valence, which gives feedback on the overall experience. Valence values from 0 to 100 indicate a neutral to positive experience, while values from -100 to 0 indicate a negative to neutral experience.
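Because each metric behaves like a detector whose score rises from 0 to 100, a simple way to turn a frame-by-frame score stream into discrete events (e.g., "a smile occurred here") is to threshold it with a little hysteresis, so jitter around the threshold does not create spurious events. The thresholds here are arbitrary choices for the sketch, not SDK defaults.

```python
# Turn a per-frame metric stream into discrete events using two
# thresholds (hysteresis): the event starts when the score crosses
# `on` and ends only when it falls below `off`.

def detect_events(scores, on=50, off=30):
    """Yield (start, end) frame indices where the metric is active."""
    events, start = [], None
    for i, s in enumerate(scores):
        if start is None and s >= on:
            start = i
        elif start is not None and s < off:
            events.append((start, i))
            start = None
    if start is not None:
        events.append((start, len(scores)))
    return events

smile = [0, 10, 60, 80, 40, 35, 20, 0, 70, 90]
print(detect_events(smile))  # [(2, 6), (8, 10)]
```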


Our SDKs also provide the following metrics about the physical appearance:


Age: The age classifier attempts to estimate the age range. Supported ranges: Under 18, 18–24, 25–34, 35–44, 45–54, 55–64, and 65 Plus.


Ethnicity: The ethnicity classifier attempts to identify the person’s ethnicity. Supported classes: Caucasian, Black African, South Asian, East Asian and Hispanic.

At the current level of accuracy, the ethnicity and age classifiers are more useful as a quantitative measure of demographics than to correctly identify the age and ethnicity on an individual basis. We are always looking to diversify the data sources included in training those metrics to improve their accuracy levels.


Gender: The gender classifier attempts to identify the human perception of gender expression.

In the case of video or live feeds, the Gender, Age and Ethnicity classifiers track a face for a window of time to build confidence in their decision. If the classifier is unable to reach a decision, the classifier value is reported as “Unknown”.
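The windowed-decision idea can be sketched as follows; the window size and confidence ratio are assumptions for illustration, not the SDK's actual parameters.

```python
# Sketch of the windowed decision: collect per-frame predictions and
# commit to a label only once one class clearly dominates a full
# window; otherwise report "Unknown". The window size and confidence
# share are illustrative assumptions.

from collections import Counter

def windowed_decision(frame_labels, min_frames=10, min_share=0.7):
    if len(frame_labels) < min_frames:
        return "Unknown"                       # not enough evidence yet
    label, count = Counter(frame_labels).most_common(1)[0]
    return label if count / len(frame_labels) >= min_share else "Unknown"

frames = ["female"] * 9 + ["male"] * 3
print(windowed_decision(frames))       # female (9/12 = 75% agreement)
print(windowed_decision(frames[:5]))   # Unknown (too few frames)
```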


Glasses: A confidence level indicating whether the subject in the image is wearing eyeglasses or sunglasses.

Face Tracking and Head Angle Estimation

The SDKs include our latest face tracker which calculates the following metrics:

Facial Landmarks Estimation

The tracking of the Cartesian coordinates of the facial landmarks. See the facial landmark mapping here.

Head Orientation Estimation

Estimation of the head position in 3-D space, expressed as Euler angles (pitch, yaw, roll).
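The pitch/yaw/roll output can be converted into a rotation matrix to reason about where the head is pointing. This is standard Euler-angle math; a Z-Y-X rotation order is assumed here, and the SDK's exact convention may differ.

```python
# Build a rotation matrix from head-pose Euler angles (degrees).
# Rotation order assumed: R = Rz(roll) @ Ry(yaw) @ Rx(pitch).

import math

def head_rotation(pitch, yaw, roll):
    p, y, r = (math.radians(a) for a in (pitch, yaw, roll))
    cp, sp = math.cos(p), math.sin(p)
    cy, sy = math.cos(y), math.sin(y)
    cr, sr = math.cos(r), math.sin(r)
    return [
        [cr * cy, cr * sy * sp - sr * cp, cr * sy * cp + sr * sp],
        [sr * cy, sr * sy * sp + cr * cp, sr * sy * cp - cr * sp],
        [-sy,     cy * sp,                cy * cp],
    ]

# A frontal face (all angles zero) yields the identity matrix.
R = head_rotation(0, 0, 0)
print(R[0][0], R[1][1], R[2][2])  # 1.0 1.0 1.0
```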

Interocular Distance

The distance between the two outer eye corners.

Emotion detection – Speech

Our speech capability analyzes not what is said, but how it is said, observing changes in speech paralinguistics, tone, loudness, tempo, and voice quality to distinguish speech events, emotions, and gender. The underlying low latency approach is key to enabling the development of real-time emotion-aware apps and devices.

Our first speech-based product is a cloud-based API that analyzes a pre-recorded audio segment, such as an MP3 file. The output file provides analysis of the speech events occurring in the audio segment every few hundred milliseconds, not just at the end of the entire utterance. An Emotion SDK that analyzes speech in real time will be available in the near future.
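The "analysis every few hundred milliseconds" idea can be sketched as slicing the audio into fixed windows and computing a per-window feature, here just RMS loudness; a real system would feed much richer paralinguistic features into trained models.

```python
# Slice an audio signal into fixed windows and compute per-window RMS
# loudness, a stand-in for the per-window analysis described above.

import math

def rms_per_window(samples, sample_rate, window_ms=500):
    step = int(sample_rate * window_ms / 1000)
    out = []
    for start in range(0, len(samples), step):
        chunk = samples[start:start + step]
        out.append(math.sqrt(sum(s * s for s in chunk) / len(chunk)))
    return out

# One second of a quiet half followed by a loud half, at 1 kHz.
samples = [0.1] * 500 + [0.8] * 500
print(rms_per_window(samples, sample_rate=1000))  # approx [0.1, 0.8]
```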

Data and accuracy

Our algorithms are trained using our emotion data repository, which has now grown to nearly 6 million faces analyzed in 87 countries. We continuously test our algorithms to provide the most reliable and accurate emotion metrics. Now, also using deep learning approaches, we can very quickly tune our algorithms for high performance and accuracy. Our key emotions achieve accuracy in the high 90s. We sampled our test set, comprising hundreds of thousands of emotion events, from our data repository. This data represents real-world, spontaneous facial expressions and vocal utterances, made under challenging conditions such as changes in lighting and background noise, and variances due to ethnicity, age, and gender. You can find more information on how we measure our accuracy here.

How to get it

Our emotion recognition technology is available in several products, from an easy-to-use SDK and API for developers to robust solutions for market research and advertising.

Music and Human Emotions: Avenues in Research



We are often emotionally moved by music, whether listening to recordings or attending live performances. This suggests that music is closely associated with emotion, an association also observed in other mammals, and it has drawn growing attention from scientists working in neuroscience and the applied sciences. The theme spans research in computational neuroscience, behavioural science, music research and music therapy. Various studies confirm that music pervades everyday life and serves functions such as mood change and emotion regulation. Research on the emotional effects of music has been hindered by a lack of appropriate paradigms and methods, and by a scarcity of theoretical analysis of the processes underlying emotion production through music. The three main approaches to describing emotion are basic-emotion models, dimensional models of arousal and valence, and diversified emotion inventories.

Fixed lists of basic emotions attach little importance to the subtler forms of emotional processing in humans, especially the intuitive states of feeling generated by music that serve no obvious behavioural function. Restricting the measurement of musical emotions to valence and arousal, on the other hand, prevents the kind of qualitative discrimination that is essential for studying the emotional effects of music. Researchers have also generated assorted lists of emotions tailored to the needs of specific studies, which undermines validity and makes results difficult to compare. A feeling can be regarded as the central component of an emotion episode, serving to integrate the representation of how the emotion unfolds, so musical affect should be studied as emotion combining cognitive and physiological effects. Typically, research on musical emotions presents contrasting pieces selected to induce assumed emotions, and subjects are asked to record their emotional reactions to each piece. Sometimes listeners are asked to express their emotions in words via questionnaires, which mainly measures the subjective perception of the emotion expressed by the music rather than the emotion felt. A limitation is that the response format is fixed: listeners can only report the emotions listed on the form. Two main theories govern this work: discrete emotion theory, which advocates measuring a small number of fundamental emotions (anger, fear, joy and sadness), and dimensional theory, which suggests ratings of valence and activation.

Psychophysiological effects of music

Amid competitive and hectic schedules, music generally enhances positive affect and vigilance in everyday life (Sloboda, 2001). It provides opportunities for disengagement and lends intensity to experience (DeNora, 1999). Bharucha et al. (2006) enumerated three kinds of emotional experience of musical motion. The first is "self-motion", because music stimulates the vestibular system. The second is "perceptual description and simulation": in perceptual description, music appears to be triggered by an external source whose properties we categorize as if "a real object were moving in the real world", while our attempts to mimic the sounds and percepts of music during such experience account for the simulation category. The third is the "rules and preferences of music", which constitute a psychological space in which change can craft a sense of moving musical objects. Music is therefore related to the induction and regulation of emotions. Some scientists believe that music increases expressiveness by deactivating a negative control system, producing the behavioural and physiological changes characteristic of a mental state. Juslin (2008) presented a theoretical framework highlighting six mechanisms by which listening to music may induce emotions: "brain stem reflexes, evaluative conditioning, emotional contagion, visual imagery, episodic memory, and musical expectancy". The authors proposed that these mechanisms differ with respect to characteristics such as the information they focus on, ontogenetic development, key brain regions, cultural impact, induction speed, degree of volitional influence, modularity, and dependence on musical structure.
The authors thus conclude that, because the mechanisms underlying music-induced emotions are not exclusive to music, the study of musical emotions could benefit the emotion field as a whole by providing unique paradigms for emotional stimulation.


Various mechanisms, such as linguistic association and emotional contagion via observed facial and vocal expressions, allow music to induce emotions (Bezdek, 2008). Studies by Nyklicek (1997) show that a large cluster of cardiovascular, respiratory and electrodermal responses is directly related to emotions induced by music. Nyklicek (1997) classified emotions using physiological responses such as respiratory sinus arrhythmia (RSA) and cardiac inter-beat intervals (IBI): sadness, for instance, correlated positively with IBI and with systolic (SBP) and diastolic (DBP) blood pressure, and negatively with skin conductance level (SCL). Krumhansl (1997) analysed the parallels between physiology and emotion judgments, which revealed substantial differences among musical excerpts. The largest changes in heart rate, blood pressure, skin conductance and temperature occurred with sad excerpts; the largest changes in blood transit time and amplitude with fear excerpts; and the largest changes in respiration measures with happy excerpts. These observed physiological effects of music inform theoretical views of emotion and music. Physiological studies focus mainly on the congruence between self-reported moods, attitudes and opinions and the physiological components of music-induced emotions. Skin conductance responses (SCRs) were found to be greater for two arousing emotions, fear and happiness, than for calmer emotions such as sadness and peacefulness.
The results also showed that SCRs can be aroused and modified by musical arousal of emotions but are not sensitive to emotional valence, and reports show a correlation between music-induced happiness and enhanced skin conductance level (Khalfa, 2002). Lundqvist (2009) likewise reported an association between music-induced happiness and greater skin conductance level. A critical concern in music and emotion research, these authors note, is whether music evokes genuine emotional responses in listeners or whether listeners simply perceive the emotions expressed by the music. It was found that happy music generated "more zygomatic facial muscle activity, greater skin conductance, lower finger temperature, more happiness and less sadness" than sad music. The findings reveal that the emotion induced in the listener matched the emotion expressed in the music, consistent with the idea that music may arouse emotions via emotional contagion.


(Grewe O. N., 2007a) put forward the theory that no musical pattern can directly induce emotions; music does not impose its stimulus on the components of emotion. If the orchestrated responses of feeling and physiological arousal are interpreted as orienting reflexes, these universal reactions can act as a starting point for the evaluation process. An orienting reflex, then, need not itself be an emotion, but it can initiate an emotional assessment process and serve as a prerequisite for an emotion. It has been argued that aesthetic emotions (those felt during aesthetic activity, such as fear, wonder and sympathy) are attenuated relative to everyday emotions (Scherer K. R., 2001). A nine-factor model best fitted the emotional descriptors chosen by music listeners attending a classical music festival (Zentner, 2008). The Geneva Emotional Music Scale (GEMS) is the first instrument specifically designed to measure musically evoked emotions; GEMS-45 contains 45 labels that proved to be consistently chosen for describing musically evoked emotive states across a relatively wide range of music and listener samples (Zentner, 2008). GEMS accounts for ratings of music-evoked emotions more powerfully than general-purpose scales derived from non-musical areas of emotion research. Chills are physiological sensations, involuntary shivers of the body or limbs associated with fear, weakness or excitement, that are readily induced by music. (Sloboda J., 1991) and (Panksepp, 1995) suggested that sadness and melancholy in music are particularly effective at inducing chills. It is often said that "music is a language of emotions": what matters to us when music is played is the emotion we experience.
When our expectations about the music are violated, our attention is heightened, and we begin to understand and enjoy the new pattern. This learning process is accompanied by an emotional process driven by motivation or empathy. A chill may result when we concentrate intensely, sustain a strong emotional process, or have our senses utterly focused on the music.


(Grewe O. N., 2007b) thus proposed that music is not a physical stimulus that directly dictates our moods; rather, it is an open, communicative offering whose impact on our feelings arises through a re-creative process. (Rickard, 2004) showed that music-induced chills are associated with rises in SCL and heart rate. People's emotional responses to music vary considerably, and this variability can be explained by differences in absorption, defined as a person's capacity and willingness to be emotionally drawn in by a stimulus. (Sandstrom, 2013) developed a new measure, the Absorption in Music Scale (AIMS), to assess a listener's absorption in music. It shows good internal consistency and temporal reliability, correlates with measures of general absorption, musical engagement and empathy, and can predict the strength of emotional responses to music. Their findings indicate that absorption is a broad-spectrum trait, and the authors suggest that individuals with a high absorption index in music would willingly engage with many forms of music. (Stefan Gebhart, 2014) explored the association between emotion-modulation strategies using music and personality dimensions, suggesting that timidity and nonchalance are coupled with increased use of music to cope with negative emotional states. Increasing knowledge about music's influence on psychological disorders might bring patients substantial relief from perceptual anguish through the deliberate use of music.

Emotional content modeling

Humans can store and retrieve information about music based on its emotional content, alongside metadata such as artist, album, song, genre and composer. (Feng, 2003), (Li, 2003) and (D. Liu, 2003) attempted to classify musical emotions across genres into four and thirteen emotion categories. They used an SVM-based multi-label classification method to test two problems: classification into the thirteen adjective groups and classification into the six super-groups, drawing on adjectives such as cheerful, gay, happy, fanciful, light, delicate, graceful, dreamy, leisurely, longing, pathetic, dark, depressing, sacred, spiritual, dramatic, emphatic, agitated, exciting, frustrated, mysterious and spooky. The authors built a super-group classifier for each of four styles, Ambient, Classical, Fusion, and Jazz, showing that genre information can be used to improve emotion detection results. They noted that emotion detection is a difficult problem, and that performance should be improved by "escalating the sound data sets, collecting labeling in multiple rounds to ensure confidence in labeling, using different sets of adjectives, incorporating style and genre information, and using different types of features." Their studies rest on emotion recognition algorithms trained on musical features representing properties such as tempo, articulation, timbre and rhythm.
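The multi-label idea, one independent classifier per mood label so that a song can carry several labels at once, can be sketched with a tiny one-vs-rest perceptron standing in for the SVMs; the features and labels below are toy stand-ins.

```python
# One-vs-rest multi-label mood tagging: one tiny linear classifier per
# mood label, trained independently. A perceptron stands in for the
# SVMs of the cited work; features and labels are toy stand-ins.

def train_perceptron(X, y, epochs=20):
    w = [0.0] * len(X[0]); b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            pred = 1 if sum(wj * xj for wj, xj in zip(w, xi)) + b > 0 else 0
            if pred != yi:                       # mistake-driven update
                step = 1 if yi else -1
                w = [wj + step * xj for wj, xj in zip(w, xi)]
                b += step
    return w, b

def one_vs_rest(X, Y, labels):
    return {lab: train_perceptron(X, [y[i] for y in Y])
            for i, lab in enumerate(labels)}

def predict(models, x):
    return [lab for lab, (w, b) in models.items()
            if sum(wj * xj for wj, xj in zip(w, x)) + b > 0]

# Toy features: (tempo, brightness); toy labels: (cheerful, agitated).
X = [(0.9, 0.8), (0.8, 0.9), (0.2, 0.1), (0.9, 0.2)]
Y = [(1, 0), (1, 0), (0, 0), (1, 1)]
models = one_vs_rest(X, Y, ["cheerful", "agitated"])
print(predict(models, (0.85, 0.85)))  # ['cheerful']
```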


Emotion usually varies across a musical selection and over time, so a time-varying method for measuring emotion is preferable to describing a piece with a single emotion. To accommodate the varying emotional content of a musical selection, (D. Liu, 2003) used a Gaussian Mixture Model (GMM) to model feature sets for moods such as contentment, depression, exuberance and anxiety. In building each GMM, the Expectation Maximization (EM) algorithm is used to estimate the parameters of the Gaussian components and the mixture weights, with initialization performed by the k-means algorithm. The authors presented a mood detection methodology for classical music from acoustic data: Thayer's model of mood is adopted as the mood taxonomy, and three feature sets representing intensity, timbre and rhythm are extracted directly from the acoustic data. A hierarchical framework is used to detect the mood in a music clip, and a segmentation scheme is presented for mood tracking within a piece. The algorithm achieved adequate accuracy in their evaluations.
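The EM procedure mentioned above can be sketched for the simplest case, a two-component 1-D Gaussian mixture over some acoustic feature; real systems fit multivariate GMMs per mood cluster and initialize with k-means, whereas this sketch just initializes from the data's extremes.

```python
# Minimal EM for a two-component 1-D Gaussian mixture. Real mood models
# are multivariate and k-means-initialized; this is only the core loop.

import math, random

def em_gmm_1d(data, iters=50):
    mu = [min(data), max(data)]          # crude initialization
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in data:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = sum(p)
            resp.append([pk / s for pk in p])
        # M-step: re-estimate weights, means, variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(data)
            mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
            var[k] = sum(r[k] * (x - mu[k]) ** 2
                         for r, x in zip(resp, data)) / nk + 1e-6
    return mu, var, pi

random.seed(0)
data = ([random.gauss(0, 1) for _ in range(200)]
        + [random.gauss(6, 1) for _ in range(200)])
mu, var, pi = em_gmm_1d(data)
print(sorted(round(m, 1) for m in mu))  # component means near 0 and 6
```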


(E. Schubert, 1999) analyses emotion as a continuous function of time. Music is considered a language of emotions, and listening to it is an everyday activity, which motivates the need for reliable emotion-based classification algorithms. Asked to pick a favorite song, a person will usually select it according to mood, so mood strongly influences music selection, and retrieving musical information is accordingly an intriguing task. Most people connect with the words of a song more readily than with its musical elements. Musical elements undoubtedly play an important role in depicting the emotion of a song, but the verses, composed around a lyrical theme, often express more emotion than the musical elements. Researchers in music retrieval have focused either on genre classification using musical elements and song metadata, or on low-level feature analysis such as pitch, tempo or rhythm.


(Ansdell, 2014) considered what music can and cannot do, and how that relates to the kind of help it provides, depending on the mode of musical engagement and its outcomes. The authors frame their concern as one with flourishing, as opposed to a more orthodox "health versus illness" framing, which can help illuminate some otherwise invisible processes by which music benefits people. An emphasis on flourishing treats more everyday episodes as evidence of music's role in health and well-being. Some of these functions are stated in "DeNora and Ansdell: What can't music do?": "provide a pretext for social relating", "provide opportunities for demonstrating skill", "provide opportunities in which to receive praise", "provide metaphors and subject matter for personal and group-historical narrative", "provide means for shifting mood, individually and collectively", "provide opportunities for bodily movement and bodily display, including dance and quasi-dance", "provide opportunities for doing other things (eating and drinking, dressing up, making noise, getting out of the house or ward)", "develop skills that are transferrable to things other than musical activity", "provide a means for renegotiating one's identity and/or role within group culture or organization", "provide a set of events that can be recalled and thus contribute to a sense of accumulating identity", "provide opportunities for interaction with others", and so on.


A common claim is that listening to extreme music breeds anger and expressions of anger such as aggression and delinquency. (Sharman, 2015) conducted a study in which 39 extreme-music listeners aged 18–34 were subjected to an anger induction, followed by random assignment to 10 minutes of listening to extreme music from their own playlist or 10 minutes of silence (control). Emotion was measured with heart rate and subjective ratings on the Positive and Negative Affect Scale (PANAS). The results showed that PANAS ratings of hostility, irritability, and stress increased during the anger induction and decreased after the music or silence. Heart rate likewise increased during the anger induction, remained steady (not increased) in the music condition, and decreased in the silence condition. These findings indicate that extreme music does not make angry participants angrier; rather, it appears to match their physiological arousal and results in increased positive emotions.

Measuring the mechanisms of emotions induced by music

Emotion is treated here as a notional and operational annotation of an underlying phenomenon that constitutes the object of theory and research. On the componential approach, an emotion is an episode of coordinated changes in several components. Three major response components of emotion are physiological changes, motor expression and subjective feeling. Physiological changes such as temperature sensations, respiratory and cardiovascular acceleration and deceleration, trembling and muscle spasms are considered part of the description of an emotion (Stemmler, 2004). (Frijda, 1986) examines the motivational and neurophysiological prerequisites for emotions and the ways in which emotions are regulated by the individual. Considering the kinds of events that cause emotions, the author maintains that emotions arise because people evaluate events as satisfying or harmful to their own concerns. He also takes an information-processing perspective in which emotions are consequences of appraising the world in terms of one's own interests, which in turn modifies action readiness.


Research focused on emotion-differentiated autonomic activity has examined how that activity is generated. It was found that producing emotion-prototypic patterns of facial muscle action gives rise to autonomic changes of large magnitude. The experiments also showed that participants' understanding of the emotion labels implied by the facial movement instructions was partly responsible for the effect. The authors proposed that contraction of the facial muscles involved in universal emotion signals elicits emotion-specific autonomic activity, either through peripheral feedback from the facial muscle movements or through a direct connection between the motor cortex and the hypothalamus that translates emotion-prototypic expression in the face into emotion-specific patterning in the ANS (Ekman, 1983). The neurophysiological changes of an emotional episode are attributed to emotional arousal, which supports stable, smoothly coordinated behaviour and the development of adaptive responses that generate the actions and energy needed for phenomena such as fight or flight. The three main outputs of the central motor system during emotional episodes are facial and vocal expressions, together with gestures and postures (Ekman, 1984) (Ekman, 1994) (Izard, 1971). (Darwin, 1871) regarded expression as rooted in adaptive behaviour, such as clenching one's teeth as an aspect of a biting response. (Scherer K. R., 2001) suggested that emotional reactions are driven by a subjective appraisal of events according to their importance for the flourishing and goal attainment of individuals.


Emotions have an enduring effect on perception and cognitive processes such as attention, reasoning, consciousness, problem-solving, decision making and judgement (Dalgleish, 1999). (Scherer K. R., 2001) applied approaches suggested in (Scherer K. R., 2000), which distinguish the different emphases of four psychological models of emotion: "dimensional models on subjective feeling, discrete emotion model on motor expression or adaptive behavior patterns, meaning models on verbal descriptors of subjective feelings and componential model on the link between emotion antecedent evaluation and differentiated reaction patterns". Among these, the componential model helps predict the appraisal-response link in a categorical and meticulous way, covering typical intensity and duration, the degree of coordination or synchronization of different organismic systems during the state, the rapidity of change in the nature of the state, and the degree to which the state affects behaviour. Such models are needed to understand the non-cognitive effects of music, and serious work in this direction requires more consideration of aesthetic emotions.


The emotions usually investigated in emotion research, such as anger, fear, joy, disgust, sadness, shame and guilt, are pragmatic emotions: they play a critical role in individual adaptation, with important consequences for safety (fight or flight), adjustment and assimilation, and motivation enhancement. The functionality of pragmatic emotions rests on a prior analysis of the behavioural needs and goals of an individual. (Scherer K. R., 2001) explained that pragmatic emotions have the high urgency of emergency reactions, involving a combination of many organismic systems, including changes in the endocrine and hormonal systems and in the autonomic, somatic and central nervous systems.

Brain correlates of music-induced emotion

Work in the biology and neurochemistry of music supports its use in therapeutic, healing and related settings, where evidence indicates that music yields benefits beyond entertainment. After bypass surgery, patients often experience intermittent, irregular swings in blood pressure, which are usually treated with medication. Several studies show that patients in ICUs where background music was played required lower drug dosages than patients in units without music. (Panksepp, 1995) showed that people value music because it adds emotional richness to their lives: music can evoke strong emotional responses that are frequently felt as highly pleasurable and can produce chill sensations. Based on various brain imaging studies, (Altenmuller E. K., 2013) and (Salimpoor, 2011) suggested that this emotional arousal is linked to activation of central reward circuits and dopaminergic mechanisms, which directly influence cognitive abilities and memory formation.

The findings also indicate that intense, pervasive pleasure in response to music can trigger dopamine release in the striatal system. The researchers also observed that the frontal lobe is activated in both semantic and episodic musical memory tasks. The left hemisphere is activated, including the inferior frontal regions and the angular gyrus, together with bilateral medial frontal regions; the right middle frontal regions and the precuneus are activated predominantly in episodic and control tasks. During familiar episodic and control tasks, the right precuneus and frontal gyrus are activated; during unfamiliar episodic and control tasks, the superior and middle frontal gyri and the medial frontal cortex are activated. (Altenmuller E. a., 2013) explained that long-term musical training and the associated sensorimotor skill learning can be a strong driver of neuroplastic change in the developing and the adult brain, affecting both white and grey matter as well as cortical and subcortical structures. Making music, as in singing and dancing, leads to a strong coupling of perception and action mediated by sensorimotor and multimodal brain regions, and influences, in both top-down and bottom-up directions, important sound-relay stations in the brain stem and thalamus. Listening to and making music evokes movement and emotion, increases communication between subjects, and is mediated by neurohormones such as serotonin and dopamine, making it a joyous and rewarding activity through changes of activity in the amygdala, ventral striatum and other components of the limbic system. This makes rehabilitation more pleasurable and can remediate weakened neural connections by engaging brain regions with one another.

(Watanabe, 2008) showed that listening to unfamiliar musical phrases is associated with activity in the right hippocampus, the left inferior frontal gyrus, bilateral lateral temporal regions and the left precuneus. (Plailly, 2007) stated that the feeling of familiarity can be triggered by stimuli from all sensory modalities, suggesting a “multimodal nature of its neural bases”. The authors tested this hypothesis by analysing the neural bases of familiarity processing for odours and music, focusing on familiarity grounded in the participants’ life experiences. Items were categorized as familiar or unfamiliar based on participants’ individual responses, and the activation patterns evoked by familiar items were compared with those evoked by unfamiliar items. “For the feeling of familiarity, a bimodal activation pattern was perceived in the left hemisphere, unambiguously the superior and inferior frontal gyri, the precuneus, the angular gyrus, the parahippocampal gyrus, and the hippocampus”. “The feeling of unfamiliarity was related to a smaller bimodal activation pattern principally to be found in the right insula and likely related to the detection of novelty”. There is a substantial release of dopamine in the striatum at the peak of emotional arousal while listening to music, as found by (Salimpoor, 2011), who combined dopamine-release measurements with psychophysiological measures of autonomic activity. There is also a decrease in blood flow in the amygdala, hippocampus, precuneus and ventromedial prefrontal cortex (Blood, 2001).

Characterising and classifying the emotions induced by the sound of music

Neuroimaging has opened an interesting path to identifying an overlapping complex of brain regions whose activity is modulated by mood and cognition. Various studies show changes in perception, attention, memory and decision-making in depressed individuals, indicating that cognitive functions are directly linked with moods and emotions; thought processing is largely governed by one’s frame of mind. Consistent with this intuition, (Cabeza, 2000) showed that the functional neuroanatomy of mood and that of cognition comprise overlapping networks of cortical and subcortical brain regions. The authors reviewed 275 PET and fMRI studies of “attention, perception, imagery, language, working memory, semantic memory retrieval, episodic memory encoding, episodic memory retrieval, priming, and procedural memory”. To detect consistent activation patterns associated with these cognitive activities, data from 412 contrasts were summarized at the level of cortical Brodmann areas, the insula, thalamus, medial temporal lobe, basal ganglia and cerebellum. Activation patterns for perception and imagery included primary and secondary regions in the dorsal and ventral pathways; for attention and working memory they were usually found in prefrontal and parietal regions; for language and semantic memory retrieval, typical regions included left prefrontal and temporal areas; for episodic memory encoding, left prefrontal and medial temporal regions were consistently activated; for episodic memory retrieval, activation included prefrontal, medial temporal and posterior midline regions; for priming, deactivations in prefrontal (conceptual) or extrastriate (perceptual) regions were regularly seen; and for procedural memory, activations appeared in motor as well as non-motor brain areas.
When people pay attention to music, they tend toward self-forgetfulness and seem detached from their present situation. Current studies show that this detachment is often part of the emotional response to music and can lead to a distinct virtual phenomenon: “dreams”.

(Levinson, 1990) explained that listening to music generates feelings that lead us into an isolation limited in duration. He likened the feelings generated while listening to music to “wine tasting”: sampling the delights of various vintages. The wine-tasting metaphor applies equally to positive and negative emotions. (Sloboda J. &., 2001) explained that a person listening to music is often recollecting past events that synchronize with that particular piece; a striking related finding is that “nostalgia” sits within the spectrum of music-induced feelings. (Darwin, 1871) noted that one of the most influential musically induced emotions is “love”, which appears in two forms, affection and tenderness; these feelings are accompanied by musical qualities such as feeling enchanted, charmed, dazzled and amazed, and feelings of inspiration and admiration also fall in this category. (Clayton, 2004) explained that happiness elicited by music takes the form of enchantment, joy or a combination of both, a “uniform affordance of music” that can enhance motor entrainment through joyful activation. (Meyer, 1956) and (Huron, 2006) suggested that surprise, tension and relief are the principal emotions related to music, because harmonic, rhythmic and melodic progressions create expectations that are then fulfilled. It was also found that exposure to music a listener cannot understand gives rise to irritation and frustration. (Gowensmith, 1997) examined the arousal produced when a person is exposed to heavy metal music: listeners commonly appear more aggressive while listening, but the music does not actually create or sustain anger in those familiar with it; only new listeners show elevated levels of anger.

Signal transduction while listening to music

(Brown D., 1991) showed that music supports good health and well-being, largely through engaging neurotransmitter and neurochemical systems responsible for reward, motivation, pleasure, stress and arousal, immunity and social affiliation. Music can evoke a wide spectrum of states, from exhilaration to relaxation, joy to sadness, fear to comfort, and sometimes combinations of these. It is used in many fields: neurosurgeons use it to improve concentration, armies to coordinate movements and enhance cooperation, workers to improve attention and vigilance, and athletes to increase stamina and motivation. It also plays an important role in pain management, relaxation, psychotherapy and personal growth. (A.C North and Hargreaves, 1996) and (Sloboda J. &., 2001) found that music evokes a strong and wide variety of emotions, such as joy, sadness, fear, and peacefulness or tranquillity.

(Quintin, 2011) conducted an experiment in which participants with ASD were group-matched with typically developing (TD) participants, so that performance and full-scale IQ scores on the Wechsler Abbreviated Scale of Intelligence differed by less than one standard deviation between groups. The core task tested the amygdala theory of autism at the perceptual level in the domain of music, using ratings of emotional magnitude. The study yielded several striking results: high-functioning adolescents with ASD can identify basic emotions in music, and this was precisely the case for happy, sad and scary music. These findings suggest varying the types of stimuli used to test emotion recognition in ASD, and the outcomes can also be applied in the context of music therapy and other intervention programs targeting social-expressive and emotional skills. The authors support the idea that music perception is a relative strength for individuals with ASD within a profile characterized by strengths and weaknesses. (Panksepp, 1995) and (Sloboda J., 1991) explained that exposure to certain music can produce intense pleasure or euphoria in the listener, sometimes experienced as “thrills” or “chills” down the spine. (Goldstein, 1980) described a thrill as a slight tremor, chill or tingling sensation, confined to a small area at the back of the neck and transient; it is presumably a reported perception from a central neural focus. A high-intensity thrill lasts longer and propagates from the “point of origin, up over the scalp, downward along the spine and forward over the chest, abdomen, thighs and legs”.

It is mostly accompanied by visible “gooseflesh”, particularly on the arms. The perceived thrill reflects spreading electrical activity in somatotopically organized brain areas, with links to the limbic system and to central autonomic regulation. (Dube’, 2003) noted that music confers no direct survival benefit like food, drink or sex, nor does it act like a drug of abuse; yet listening to music is one of the most rewarding activities, and people spend considerable time on it over their lifetimes. (Blood, 2001), (Brown S. e., 2004) and (Menon, 2005) presented music pieces in alternating 24 s experimental and control epochs using a standard fMRI block design. Enhanced functional and effective connectivity among the brain regions responsible for reward, autonomic and cognitive processing helps explain the experience of listening to music. The authors performed statistical analysis with general linear models and the theory of Gaussian random fields as implemented in SPM99. Various musical stimuli were used to probe the integrity of the mesolimbic reward system and to examine whether deficits in these regions are associated with clinical indicators. Singing and speaking are associated with lateralized differences in cerebral activity: right-hemisphere regions are more active during singing and left-hemisphere regions during speaking.
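The 24 s block design and general linear model mentioned above can be sketched numerically. This is a toy stand-in for the SPM99 pipeline, assuming a 2 s repetition time and omitting haemodynamic-response convolution; the baseline, effect size and noise level are synthetic.

```python
import numpy as np

TR = 2.0                       # seconds per fMRI volume (assumed)
epoch_s, n_epochs = 24, 6      # alternating 24 s music / control epochs
vols = int(epoch_s / TR)       # volumes per epoch

# Boxcar regressor: 1 during music epochs, 0 during control epochs
boxcar = np.tile(np.r_[np.ones(vols), np.zeros(vols)], n_epochs // 2)

# Design matrix: [music regressor, constant baseline]
X = np.column_stack([boxcar, np.ones(boxcar.size)])

# Synthetic voxel time series: baseline 10, music effect 2.5, Gaussian noise
rng = np.random.default_rng(0)
y = 10.0 + 2.5 * boxcar + 0.3 * rng.standard_normal(boxcar.size)

# Ordinary least squares estimate of the regression weights
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated music effect: {beta[0]:.2f}")
```

A full analysis would convolve the boxcar with a haemodynamic response function and correct the resulting statistical maps for multiple comparisons, which is where the Gaussian random field theory enters.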

Comparable increases are seen in homologous portions of the left and right hemispheres in regions usually linked with sensorimotor function. This asymmetry suggests that “oral laryngeal motor activity” is more directly controlled by right-hemisphere regions during singing. Such patterns shed light on neural conditions such as stuttering and aphasia, in which singing can induce fluency. Speaking was linked with “greater activity in left perisylvian regions”, while singing was related to “augmented activity in right anterior temporal, prefrontal, and para-limbic cortices”, which are structurally interconnected; these areas support the “fluency-inducing effects of words” produced in melody. Advances in neuroimaging make it possible to examine functional activation, network connectivity and dopamine release during the perception of pleasurable music (Jeffries, 2003). (Koelsch, 2006) reviewed studies using positron emission tomography (PET) to analyse cerebral blood flow during pleasurable music. When a person hears self-selected music that induces chills down the spine, compared with emotionally neutral music, cerebral blood flow increases within a network of interconnected structures comprising the mesocorticolimbic system, critical to reward and reinforcement: the ventral striatum including the nucleus accumbens, the midbrain, as well as the thalamus, cerebellum, insula, anterior cingulate cortex and orbitofrontal cortex (Blood, 2001). The nucleus accumbens is also activated when a person hears unfamiliar pleasurable music, and during singing compared with speech (Brown S. e., 2004) (Jeffries, 2003). These studies indicate that musical reward involves the activation of the nucleus accumbens and of opioid-rich midbrain nuclei known to mediate morphine analgesia.

(Menon, 2005), (Koelsch, 2006) and (Salimpoor, 2011) used higher-resolution functional magnetic resonance imaging (fMRI) to investigate the neural correlates of musical pleasure. (Janata, 2009) reported that the medial prefrontal cortex (MPFC) is one of the key brain regions supporting self-referential processes, integrating “sensory information with self-knowledge and the retrieval of autobiographical information”. The author used fMRI and a novel procedure for evoking autobiographical memories with excerpts of popular music, to test the hypothesis that music and autobiographical memories are integrated in the medial prefrontal cortex. Dorsal regions of the MPFC (Brodmann area 8/9) responded parametrically to the number of autobiographical features experienced over the course of individual 30 s excerpts. The author concluded that the dorsal MPFC links music and memories during the experience of emotionally salient episodic memories triggered by familiar songs from our personal past. (Craig, 2002) found that listening to pleasurable music is associated with activation of the nucleus accumbens, and with ventral-tegmental-area-mediated interactions between the nucleus accumbens and other brain structures known to regulate autonomic, emotional and cognitive functions. (Swanson, 1982) and (Wise, 2004) found that dopaminergic neurons originating in the ventral tegmental area, with major projections to the nucleus accumbens and forebrain regions, are necessary for the efficacy of rewarding stimuli.
(Cepeda, 2006), (Dileo, 2007), (Nilsson, 2008), (Knight, 2001), (Pittman, 2011), (Tam, 2008) and (Spintage, 2012) explained that relaxing, soothing music, with slow tempo, low pitch and no lyrics, helps reduce stress and depression in healthy individuals.

Studies also show that music helps patients undergoing invasive procedures such as surgery, colonoscopy and dental procedures, and assists patients suffering from coronary heart disease. Researchers also report that music reduces the need for sedation as well as for pain relief and analgesics. (Bradt, 2009) reported physiological data indicating that patients experience less anxiety during medical procedures when they listen to music. Since physiological reactions are continuously monitored during procedures with coronary heart disease (CHD) patients, a music intervention can easily be stopped if the patient experiences no favourable effects. Listening to music is well established as an “anxiety management intervention” before and during procedures. The authors of this review provided evidence that listening to pre-recorded music offers health benefits for individuals with CHD; other music interventions, such as “music improvisation, singing, experiencing live music, songwriting”, were not properly examined. (McKinney, 1997) described a music therapy that combines relaxation techniques with listening to classical music, named “Guided Imagery and Music” (GIM), which reduces hypothalamic-pituitary-adrenal (HPA) axis activation in healthy subjects; two markers of HPA activation, cortisol and β-endorphin, were found to be reduced by GIM.

Results and discussion

From the theories above, it is evident that scholars and scientists are trying to bridge the gap between musical and emotional research, since doing so helps amplify the positive effects of music and attentiveness in humans; this can help build positive attitudes among discouraged and depressed individuals. They have also found that listening to music may influence emotions through brain stem reflexes, evaluative conditioning, emotional contagion, visual imagery, episodic memory and musical expectancy. This places the focus on genetic factors, on enhancing the functioning of key brain structures, and on the cultural influences that shape the structure of music. Music can also modulate emotions through mechanisms such as semantic associations and emotional contagion based on facial and vocal expressions. Most physiological and psychological responses, such as cardiovascular, respiratory and electrodermal responses, RSA and CIBI, are closely tied to music. Strong variations in heart rate, blood pressure, skin conductance and temperature occur during emotions such as sadness and happiness. These examples clearly indicate a broad parallel between music-induced happiness and responses such as skin conductance level. Emotional arousal produced in listeners by music is more reliable than that produced by other activities.

Aesthetic emotions aroused by aesthetic activities, such as fear, wonder and sympathy, are also diminished while listening to music. People with fear-related problems can be treated with music, which helps them modify their social behaviour. Research also establishes that chills elicited by fear, weakness and excitement can be induced through music. Musical genres are considered important because each genre induces different types of emotion. Most techniques for recognizing emotions lack consistency, owing to the complexity of emotion recognition algorithms that must mine signals from the brain. The algorithms require various musical features, such as the lyrics of the song and properties like tempo, articulation, timbre and rhythm, since these are important parameters for training the classifier. Ratings on the PANAS indicate that hostility, moodiness and anxiety are enhanced during anger induction and are reduced after music or silence.
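The point about training a classifier on musical features can be made concrete with a minimal nearest-centroid sketch. The feature set (tempo, staccato ratio, spectral brightness, rhythm regularity), the numeric values and the labels are all invented for illustration; a real system would extract such features from audio and lyrics.

```python
import math

# Training examples per emotion label (invented for illustration):
# (tempo_bpm, staccato_ratio, spectral_brightness, rhythm_regularity)
TRAIN = {
    "happy": [(140, 0.7, 0.8, 0.9), (132, 0.6, 0.7, 0.8)],
    "sad":   [( 62, 0.2, 0.3, 0.5), ( 70, 0.3, 0.2, 0.6)],
    "angry": [(155, 0.8, 0.9, 0.4), (148, 0.9, 0.8, 0.5)],
}

def centroid(points):
    """Component-wise mean of a list of feature tuples."""
    return tuple(sum(c) / len(points) for c in zip(*points))

CENTROIDS = {label: centroid(pts) for label, pts in TRAIN.items()}

def classify(features):
    """Assign the label whose feature centroid is nearest (Euclidean)."""
    return min(CENTROIDS, key=lambda lbl: math.dist(CENTROIDS[lbl], features))

print(classify((66, 0.25, 0.25, 0.55)))   # slow, legato, dark -> "sad"
```

Because tempo is on a much larger scale than the other features, it dominates the raw Euclidean distance here; a practical classifier would standardize each feature before computing distances.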

Emotions have an enduring effect on perception and cognitive processes such as attention, reasoning, consciousness, problem-solving, decision making and judgement. Most emotion research examines pragmatic emotions, which play a vital role in the adaptation of individuals and have serious consequences for safety (“fight/flight, adjustment and assimilation, motivation enhancement”). A recent study shows that post-surgical patients experience greater relaxation and less pain when exposed to music; it is advisable to play calming, soothing music in postoperative care to help patients recover sooner. It was also reported that long-term musical learning and training produce considerable neuroplastic changes in the developing and adult brain, which further affect both white and grey matter as well as cortical and subcortical structures. Listening to and composing music evokes emotion, intensifies communication between subjects, and is mediated by neurohormones such as serotonin and dopamine, which underlie joyous and rewarding activity through changes in the amygdala, ventral striatum and other components of the limbic system. This makes rehabilitation a pleasurable process that can restore weakened neural connections by engaging brain regions with one another. Love is one of the most important and influential musically induced emotions; it appears in different forms, such as affection and tenderness, accompanied by musical qualities like feeling enchanted, charmed, dazzled and amazed. It was found that high-functioning adolescents with autism spectrum disorder (ASD) can categorize basic emotions in music, a finding that suggests varying the types of stimuli used to test emotion recognition in ASD.

These outcomes can be applied in the context of music therapy and other intervention programs targeting social-expressive and emotional skills. Singing and speaking are associated with lateralized differences in cerebral activity: right- and left-hemisphere regions are more active during singing and speaking, respectively. Music provides insight into neural conditions such as stuttering and aphasia, in which singing can improve fluency. Music-induced chills enhance cerebral blood flow within a network of interconnected structures comprising the mesocorticolimbic system, critical to reward and reinforcement. When a person hears unfamiliar pleasurable music, the nucleus accumbens is activated, together with ventral-tegmental-area-mediated interactions between the nucleus accumbens and other brain structures known to regulate autonomic, emotional and cognitive functions. Music helps relieve stress and pain in patients undergoing invasive procedures such as surgery, colonoscopy and dental procedures, and in those with coronary heart disease; it also reduces the need for sedation as well as for pain relief and analgesics.

Concluding remarks

Although emotions can be conveyed through music, recognizing them through music in practical settings remains difficult. The main reason is that interpreting the signals produced inside the brain is a tedious task: the brain comprises many millions of active neurons, and when they act together, biochemical reactions fire electrical impulses that can be measured with EEG devices. Most of the functional brain is spread over its outer surface layer, which is heavily folded, and this folding is a significant obstacle to understanding the signals. Every cortex is folded differently, much like fingerprints: the physical location of signals differs even between monozygotic twins, so there is no uniformity in surface signals. The daunting task is to craft an algorithm that unfolds the information in the cortex, maps each signal back to its source, and makes this work across the bulk of the population. Research on emotions and music has grown considerably over the past two decades, with contributions from many areas including “psychology, neuroscience, endocrinology, medicine, history, sociology and even computer science”. The many theories attempting to explain the origin, neurobiology, experience and purpose of emotions have only spurred more penetrating research on the subject. Current research topics include the development of stimuli that arouse and elicit emotions; in addition, PET and fMRI scans support the study of emotional processes in the brain.
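The EEG measurement step described above can be illustrated with a toy band-power computation: a synthetic one-channel signal containing a 10 Hz alpha rhythm is analysed with a Fourier transform. The sampling rate, duration and band limits are assumed for illustration; real emotion-recognition pipelines compute such band powers per electrode before classification.

```python
import numpy as np

fs = 256                         # sampling rate in Hz (assumed)
t = np.arange(0, 4, 1 / fs)      # 4 s of signal
rng = np.random.default_rng(1)

# Synthetic EEG: a 10 Hz alpha rhythm buried in broadband noise
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.standard_normal(t.size)

power = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, d=1 / fs)

def band_power(lo, hi):
    """Total spectral power between lo and hi Hz."""
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

alpha = band_power(8, 12)        # alpha band
beta = band_power(13, 30)        # beta band
print(alpha > beta)              # the 10 Hz component dominates -> True
```

The folding problem discussed above is precisely why such scalp-level features are coarse: the same cortical source can project to different electrodes in different individuals.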


  • A.C North and Hargreaves, D. J. (1996). Responses to music in aerobic exercise and yogic relaxation classes. Br. J. Psychol., 87, 535-547.
  • Altenmuller, E. a. (n.d.). Neurology Music Therapy: the beneficial effects of music making on neuro-rehabilitation. Acoust. Sci. Tech., 341. doi:10.1250/ast.34.5
  • Altenmuller, E. K. (2013). “A contribution to the evolutionary basis of music: lessons from the chill response”, in Evolution of Emotional Communication: From Sounds in Non-human Mammals to Speech and Music in Man. In S. S. E. Altenmuller, Series in Affective Sciences (pp. 313-335). Oxford University Press. doi:10.1093/9780199583560.001.0001
  • Ansdell, D. a. (2014). What can’t music do? Psychology of Well-Being: Theory, Research and Practice, 1, 23.
  • Bezdek, M. &. (2008). Musical emotions in the context of the narrative film. Behavioural and Brain Sciences(31), 578.
  • Bharucha, J. C. (2006). Varieties of musical experience. Cognition, 100(1), 131-172.
  • Blood, A. a. (2001). Intensely pleasurable responses to music correlate with activity in brain regions implicated in reward and emotion. Proceedings of the National Academy of Sciences USA, 98, 11818-11823.
  • Bradt, J. a. (2009). Music for stress and anxiety reduction in coronary heart disease patients. Cochrane Database Systems, 2.
  • Brown, D. (1991). Human Universals. McGraw-Hill North.
  • Brown, S. e. (2004). Passive music listening spontaneously engages limbic and paralimbic systems. NeuroReport, 15, 2033-2037.
  • Cabeza, R. &. (2000). Imaging cognition II: An empirical review of 275 PET and fMRI studies. Journal of Cognitive Neuroscience, 12, 1-47.
  • Cepeda, M. e. (2006). Music for pain relief. Cochrane Database System. Rev, 2.
  • Clayton, M. s. (2004). The concept of entrainment and its significance for ethnomusicology. EESM Counterpoint, 1, 1-82.
  • Craig, A. (2002). How do you feel? Interoception: the sense of the physiological condition of the body. Nat. Rev. Neurosci., 3, 655-666.
  • D.Liu, L. a. (2003). Automatic mood detection from acoustic music data. Proceedings of 5th Int. Symp. Music Information Retrieval, 81-87.
  • Dalgleish, T. &. (1999). Handbook of Cognition and Emotion. Chichester: Wiley.
  • Darwin, C. (1871). The descent of man, and selection in relation to sex (2 volumes). London: Murray.
  • DeNora, T. (1999). Music as a technology of the self. Poetics(27), 31-36.
  • Dileo, C. a. (2007). Music therapy: application to stress management . In P. e. Lehrer, Principles and Practices of Stress Management (pp. 519-544). Guilford Press.
  • Dube’, L. a. (2003). The categorical structure of pleasure. Cogn. Emot., 17, 263-297.
  • Schubert, E. (1999). Measurement and time series analysis of emotion in music. PhD dissertation, School of Music and Music Education, University of New South Wales, Sydney.
  • Ekman, P. L. (1983). Autonomic nervous system activity distinguishes among emotions. Science, 221, 1208-1210.
  • Ekman, P. (1984). Expressions and the nature of emotions. In &. P. K.R. Scherer, Approaches to Emotion (pp. 319-344). Hillsdale: Erlbaum.
  • Ekman, P. (1994). Strong evidence for universals in facial expressions: A reply to Russel’s mistaken critique. Psychological Bulletin, 115, 268-287.
  • Feng, Y. Z. (2003). Popular music retrieval by detecting mood. Proc. 26th Annual International ACM SIGIR Conf. Research and Development Information Retrieval (SIGIR), (pp. 375-376). Toronto.
  • Frijda, N. (1986). The Emotions. Cambridge and New York: Cambridge University Press.
  • Goldstein, A. (1980). Thrills in response to music and other stimuli. Physiol. Psychol, 8, 126-129.
  • Gowensmith, W. (1997). The effects of heavy metal music on arousal and anger. Journal of Music Therapy, 1, 33-45.
  • Grewe, O. N. (2007a). Emotions over time: Synchronicity and development of subjective, physiological, and facial affective reactions to music. Emotions, 7(4), 774-788.
  • Grewe, O. N. (2007b). Listening to music as a re-creative process: Physiological, psychological, and psychoacoustical correlates of chills and strong emotions. Music Perception, 24, 297-314.
  • Huron, D. (2006). Sweet anticipation. In Music and the Psychology of Expectation. Cambridge: MIT Press.
  • Izard, C. (1971). The Face of Emotion. New York: Appleton-Century- Crofts.
  • Janata, P. (2009). The neural architecture of music-evoked autobiographical memories. Cereb. Cortex, 19, 2579-2594.
  • Jeffries, K. a. (2003). Words in melody: an h(2)15o PET study of brain activation during singing and speaking. NeuroReport, 14, 749-754.
  • Juslin, P. &. (2008). Emotional responses to music: The need to consider underlying mechanisms. Behavioural and Brain Sciences, 31(5), 559-575 (discussions 575-621).
  • Khalfa, S. P. (2002). Event-related skin conductance responses to musical emotions in humans. Neuroscience Letters, 328(2), 145-149.
  • Knight, W. &. (2001). Relaxing music prevents stress-induced increases in subjective anxiety, systolic blood pressure and heart rate in healthy male and females. Journal of Music Therapy, 38, 254-272.
  • Koelsch, S. e. (2006). Investigating emotion with music: a fMRI study. Human Brain Mapping, 27, 239-250.
  • Krumhansl, C. (1997). An exploratory study of music emotions and psychophysiology. Canadian Journal of Experimental Psychology,, 51(4), 336-353.
  • Levinson, J. (1990). Music and the negative emotions. In J. Levinson, Music, art & metaphysics. Essays in philosophical aesthetics (pp. 306-335). Ithaca- NY: Cornell University Press.
  • Li, T. &. (2003). Detecting emotions in music. Proc. 5th Int. Symp. Music Information Retrieval, (pp. 239-240). Baltimore MD.
  • Lundqvist, L. C. (2009). Emotional responses to music: Experience, expression and physiology. Psychology of Music, 37, 61-90.
  • McKinney, C. e. (1997). Effects of guided imagery and music (GIM) therapy on mood and cortisol in healthy adults. Health Psychology, 16, 390-400.
  • McKinney, C. e. (1997). The effect of selected classical music and spontaneous imagery on plasma beta-endorphin. Journal of Behav. Med., 20, 85-99.
  • Menon, V. a. (2005). The rewards of music listening: response and physiological connectivity of the mesolimbic system. NeuroImage, 28, 175-184.
  • Meyer, L. (1956). Emotions and meaning in music. Chicago: Chicago University Press.
  • Nilsson, U. (2008). The anxiety and pain reducing effects of music interventions: a systematic review. AORN Journal, 87, 780-807.
  • Nyklicek, I. T. (1997). Cardiorespiratory differentiation of musically induced emotions. Journal of Psychophysiology, 11, 304-321.
  • Panksepp, J. (1995). The emotional sources of chills induced by music. Music Perception, 13, 171-207.  doi :10.2307/40285693
  • Pittman, S. a. (2011). Music intervention and preoperational anxiety: an integrative review. Int. Nurs. Rev., 58, 157-163.
  • Plailly, J. T. (2007). The feeling of familiarity  of music and odours: the same neural signature? Cereb. Cortex, 17, 2650-2658. doi :10.1093/cercor/bhl173
  • Quintin, E. e. (2011). Emotion perception in music in high functioning adolescents with autism spectrum disorders. Journal of Autism Dev. Disord, 41, 1240-1255.
  • Rickard, N. (2004). Intense emotional responses to music: A test of the physiological arousal hypothesis. Psychology of Music, 32, 371-388.
  • Salimpoor, V. B. (2011). Anatomically distinct dopamine release during anticipation and experience of peak emotion to music. Nature Neuroscience, 14, 257-262.  doi :10.1038/nn.2726
  • Sandstrom, G. M. (2013). Absorption in music: A scale to identify individuals with strong emotional responses to music. Psychology of Music, 41, 216-228.
  • Scherer, K. (2001). Appraisal considered as a process of multilevel sequential checking. In K. S. Johnstone, Appraisal Processes in Emotion: Theory, Methods, Research (pp. 92-120). Oxford, NewYork: Oxford University Press.
  • Scherer, K. R. (2000). Psychological models of emotion. In J. Borod, The Neuropsychology of Emotion (pp. 137-162). Oxford/ New York: Oxford University Press.
  • Scherer, K. R. (2001). Emotional effects of music: Production rules. In P. J. Sloboda, Music and emotion: Theory and research (pp. 361-392). Oxford; New York: Oxford University Press.
  • Sharman, L. a. (2015). Extreme metal music and anger processing. Frontiers in Human Neuroscience, 9, 272.  doi :10.3389/fnhum.2015.00272
  • Sloboda, J. &. (2001). Emotions in everyday listening to music. In P. J. Sloboda, Music and emotion: Theory and research (pp. 415-429). Oxford, England: Oxford University Press.
  • Sloboda, J. (1991). Music structure and emotional response: Some empirical findings. Psychology of Music, 19, 110-120.
  • sloboda, J. O. (2001). Functions of emotions in everyday life: An exploratory study using the experience sampling method. Musicae Scientiae, 9-32.
  • Spintage, R. (2012). Clinical use of music in operating theatres. In R. e. MacDonald, Music, Health and Wellbeing. Oxford University Press.
  • Stefan Gebhart, M. K. (2014). The use of music for emotion modulation in mental disorders: the role of personality dimensions. Journal of Integrative Psychology and Therapeutics.
  • Stemmler, G. (2004). Physiological processes during emotions. In &. P. R.S. Feldman, The regulation of emotion. Mahwah, NJ: Erlbaum.
  • Swanson, L. (1982). The projections of the ventral tegmental area and adjacent regions: a combined fluorescent retrograde tracer and immune fluorescence study in the rat. Brain Res. Bull, 9, 321-353.
  • Tam, W. e. (2008). Effects of Music on procedure time and sedation during colonoscopy: A meta-analysis. World J. Gastroenterol, 14, 5336-5343.
  • Watanabe, T. Y. (2008, Jan 1). The memory of music: roles of right hippocampus and left inferior frontal gyrus. Neuroimage, 39(1), 483-91.
  • Wise, R. (2004). Dopamine, learning and motivation. Nature Rev. Neuroscience, 5, 483-494.
  • Zentner, M.,. (2008). Emotions evoked by the sound of music: Characterization, classification and measurement. Emotions, 8(4), 494-521.

Basim, A. (2017). Deciphering Human Emotions through Music: A Heuristic Avenues in Research. International Journal of Indian Psychology, 4(3). DIP: 18.01.100/20170403