
Music and Language: dissociation between rule-crunching and memory-retrieval systems

I have previously written about how concepts are stored in the brain: they involve rule-based systems (A is a bachelor if A is single AND A is male) and memory-based systems (prototypes and exemplars). I have also looked at how language involves both rules (the syntax of the language) and memory (semantics, or word meanings), and how normal language comprehension and production engage both types of systems.
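The contrast between the two systems can be sketched in code. Below is a minimal, hypothetical Python illustration (the features, exemplars, and labels are invented for this example, not taken from any study): a rule system checks necessary-and-sufficient conditions, while a memory system classifies a new item by its similarity to stored exemplars.

```python
# Minimal sketch contrasting rule-based and exemplar-based categorization.
# All features, exemplars, and labels here are invented for illustration.

def is_bachelor_rule(person):
    """Rule-based: necessary-and-sufficient conditions (single AND male)."""
    return person["single"] and person["male"]

def classify_by_exemplars(item, exemplars):
    """Memory-based: label an item by its most similar stored exemplar."""
    def similarity(a, b):
        # count matching feature values (ignoring the stored label)
        return sum(1 for k in a if k != "label" and a.get(k) == b.get(k))
    best = max(exemplars, key=lambda ex: similarity(item, ex))
    return best["label"]

# Rule system: membership follows from the definition alone
print(is_bachelor_rule({"single": True, "male": True}))   # True

# Exemplar system: stored memories of past instances drive the decision
exemplars = [
    {"flies": True,  "sings": True,  "label": "bird"},
    {"flies": False, "sings": False, "label": "mammal"},
]
print(classify_by_exemplars({"flies": True, "sings": True}, exemplars))  # bird
```

The point of the sketch is that the first function would classify correctly even for an item it has never seen, because the rule is explicit, while the second depends entirely on what happens to be stored in memory.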

It is a popular paradigm in cognitive linguistics research to present unexpected words in sentences (such as "I'll have my coffee with milk and concrete") while monitoring brain activity using ERP; the presentation of an unexpected word leads to an N400 peak over temporal lobe areas. This violation of semantics is differentiated from a violation of the sentence's syntax, which instead produces changed activity in the frontal lobes.

"Up until now, researchers had found that the processing of rules relies on an overlapping set of frontal lobe structures in music and language. However, in addition to rules, both language and music crucially require the memorization of arbitrary information such as words and melodies," says the study's principal investigator, Michael Ullman, Ph.D., professor of neuroscience, psychology, neurology and linguistics.

For the first time, similar results have been obtained for music. If one assumes that changing an in-key note in a familiar melody is akin to an unexpected word in a sentence, then the same N400 peak is observed. Also, if a violation of harmonic rules, like an out-of-key note in an unfamiliar melody, is akin to a violation of linguistic syntax, then here too similar changes in frontal lobe activity are observed.

The subjects listened to 180 snippets of melodies. Half of the melodies were segments from tunes that most participants would know, such as “Three Blind Mice” and “Twinkle, Twinkle Little Star.” The other half included novel tunes composed by Miranda. Three versions of each well-known and novel melody were created: melodies containing an in-key deviant note (which could only be detected if the melody was familiar, and therefore memorized); melodies that contained an out-of-key deviant note (which violated rules of harmony); and the original (control) melodies.

For listeners familiar with a melody, an in-key deviant note violated the listener's memory of the melody: the song sounded musically "correct" and didn't violate any rules of music, but it was different from what the listener had previously memorized. In contrast, in-key "deviant" notes in novel melodies did not violate memory (or rules) because the listeners did not know the tune.

Out-of-key deviant notes constituted violations of musical rules in both well-known and novel melodies. Additionally, out-of-key deviant notes violated memory in well-known melodies.

Miranda and Ullman examined the brain waves of the participants who listened to melodies in the different conditions, and found that violations of rules and memory in music corresponded to the two patterns of brain waves seen in previous studies of rule and memory violations in language. That is, in-key violations of familiar (but not novel) melodies led to a brain-wave pattern similar to one called an "N400" that has previously been found with violations of words (such as, "I'll have my coffee with milk and concrete"). Out-of-key violations of both familiar and novel melodies led to a brain-wave pattern over frontal lobe electrodes similar to patterns previously found for violations of rules in both language and music. Finally, out-of-key violations of familiar melodies also led to an N400-like pattern of brain activity, as expected because these are violations of memory as well as rules.

“This tells us that these two aspects of music, that is rules and memorized melodies, depend on two different brain systems – brain systems that also underlie rules and memorized information in language,” Ullman says. “The findings open up exciting new ways of thinking about and investigating the relationship between language and music, two fundamental human capacities.”

To me this seems exciting. My thesis has been that men are better at rule-based things (syntax and harmony), while women are better at memory-based things (semantics and melody), so I'd like to know whether the authors observed any gender effects. If so, this would be further evidence for the abstract-vs-concrete gender difference theory.

Music is mapped onto Space

We all know that, as per conceptual metaphor theory, music (especially melody and tones) is mapped onto space: we speak of high notes and low notes, using spatial terms to conceptualize the musical scale.

A new study by New Zealand researchers indicates that this mapping may not be just metaphorical and conceptual: there may be a neural basis and mechanisms indicating that the same mental abilities, and possibly brain areas, are involved in musical and spatial representations.

The study found that people who have amusia, or are tone-deaf, are also poor at mental spatial rotation tasks. The correlation between tone deafness and poor spatial ability is a strong indicator that the same mental abilities may underlie the representations of space and tones.

In a second experiment, they found that performing a mental rotation task and a tone-related task simultaneously caused poorer performance in normal controls than in amusics. This suggests that spatial and melodic representations are related and so interfere with each other more strongly in normal controls, whereas amusics, who have both capacities in diminished form to begin with, perform relatively better under the dual-task condition.

If it is true that the same brain areas, networks, and mechanisms are involved in spatial and melodic representations, then there seems to be a strong case for embodiment, and also for conceptual metaphor theory, which posits that abstract concepts like melodies and tones are mapped onto concrete entities like space.

Encephalon #18 is online now!

Check out the brand-new edition of Encephalon at Pharyngula. My favorites include the studies on musical harmony as grammar. Given my obsession with extending grammar to a Universal Moral Grammar or a Universal Spiritual Grammar, this new direction of a Universal Musical Grammar seems very attractive and feasible. Chris has already presented evidence that musical melody serves the function of semantics; so we are left with morphology and pragmatics as the two remaining broad domains of language that need to be mapped to musical concepts like rhythm and dynamics.