Have you ever heard of amusia? It’s the inability to properly process music, and recent research suggests that people with amusia also have difficulty identifying the emotional tones in spoken phrases.
As reported in a recent LA Times article, Charles Darwin suggested that before humans had either language or music, they had a “musical protolanguage” useful in courtship, in fighting over territory, and in expressing emotion (which later researchers considered particularly crucial for parent-infant bonding).
Previous work has shown that people with amusia don’t have much trouble decoding the meaning carried by elements beyond the words themselves, like rhythm, stress and intonation. But an international trio of scientists decided to focus on the more subtle changes in pitch that reveal the emotions behind a person’s words.
The evidence appears to back this theory up:
People with amusia are less likely to say they like or love music, and are less likely to report emotional changes from listening to music.
For the paper, published in this week’s Proceedings of the National Academy of Sciences, the researchers tested two dozen volunteers, half with and half without amusia, to see how well they could identify six different emotions in the tones of 96 spoken phrases: happy, tender, afraid, irritated, sad and no emotion.
Depending on the emotion, the participants with amusia fared up to 20% worse than their peers. The gap was wider for some emotions (happy, tender, sad) and negligible for others (afraid, no emotion).
The dozen listeners with amusia were also more likely to have trouble distinguishing between pairs of very different emotions – happy versus irritated, for example, or sad versus tender. As it turns out, the emotions in these pairs sound similar in intensity and duration. (Happy or irritated phrases were spoken more quickly and with higher intensity, while sad or tender phrases were spoken more slowly and with lower intensity.)
What’s more, “amusic” participants were more likely to report having trouble figuring out people’s emotional states when speaking with them on the phone, and more likely to say they relied on facial cues and gestures to figure out what a person was feeling.
Very interesting!
Thank you for sharing this information!
Eric Goulard
When I talked with David B. Givens today, he sent me this:
“There’s also ‘aprosodia’ (from The Nonverbal Dictionary’s entry for ‘Tone of Voice’): ‘Aprosodia. Like aphasia (the dominant, left-brain hemisphere’s inability to articulate or comprehend speech), aprosodia is an inability to articulate or comprehend emotional voice tones. Aprosodia is due to damage to the right-brain’s temporal-lobe language areas. Patients with aprosodia miss the affective (or “feeling”) content of speech. Persons with damage to the right frontal lobe speak in flat or monotone voices devoid of normal inflection.’ Here’s more on amusia, and music: http://center-for-nonverbal-studies.org/music1.htm”
Eric –
It is always great to have feedback from you. We appreciate you sharing your insights with the Humintell Community.