Music in the mind

How is it that music, at its core an abstract collection of sounds, triggers such strong emotions? And how does our brain process melodies and lyrics? It is questions like these that neuroscientist Robert Zatorre ponders daily in his Montreal laboratory. With his pioneering brain research, he even hopes to unravel what is at the heart of being human. In 2020, he received the C.L. de Carvalho-Heineken Prize for Cognitive Sciences for his work.
Robert Zatorre, professor of neuroscience at McGill University in Canada, made his first forays into music when he was about thirteen years old. Eager to learn the sensational organ parts from the rock music of the 1960s, he knocked on the door of an organ teacher. The teacher, however, told him he had better forget that rubbish and focus on the real thing. He took Zatorre to church and played Bach. Zatorre: ‘Afterwards I said, okay, you’re right. The rest is rubbish.’

Years later, when Zatorre combined his training as an organist with the study of psychology, a passion for science ignited in him as well. He wanted to discover how music can have such a powerful effect on the brain. Zatorre went on to become a pioneer in the cognitive neuroscience of music, the field that studies the effects of music on our brains. He made many groundbreaking discoveries, with important implications for neuroscience in general.
He never said farewell to the organ. In fact, in the middle of his lab stands a beautiful digital organ, the sounds of which are based on an organ in Zwolle. When we visited him in Montreal, he surprised us by performing a beautiful piece by the fifteenth-century Flemish composer Jacob Obrecht. Zatorre explains that the organ is the ideal addition to his lab, and not only because of his own background. ‘The organ is one of the few instruments for which you use two hands and two feet. This makes it ideally suited for music cognition experiments because of the coordination and flexibility required,’ he says. His hands fly over the keyboards and his feet over the pedals, producing beautiful sounds.

Anticipation of reward
One of the main questions Zatorre’s research seeks to answer is why music causes such strong emotions. ‘We tend to take this for granted,’ says Zatorre. ‘But how is it that we find a completely abstract pattern of sound beautiful, that it gives us pleasure, and sometimes even shivers or goosebumps? What happens in your brain?’
The mechanism behind this has only been discovered in recent years, and the research of Zatorre’s group played an important role in that discovery. It showed that music acts on the reward system in the brain in the same way that, for example, food and sex do: through the neurotransmitter dopamine. It is pretty obvious why food and sex give us a sense of reward: we need them to survive and reproduce. Music, on the other hand, has no such essential function at first glance. So, why do we react to it so strongly?
‘Several things play a role,’ Zatorre explains. ‘One of the most important ones has to do with music’s ability to generate expectations, and then at times deviate from them just a little. We have long known that our reward system is sensitive not only to rewards, but also to the anticipation of rewards. If you repeatedly give a thirsty rat water after a flash of light, at some point its reward system will also respond to the flash of light itself. But what is even more interesting: if you put some sugar in the water, the reward system reacts even more strongly, because the reward is better than expected. We believe that music plays into this system of prediction, in which you have a sequence of events, and then you deviate just a little from the expectation.’
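The anticipation effect Zatorre describes is commonly captured by the notion of a reward prediction error: the difference between the reward you receive and the reward you expected. The sketch below is a minimal, purely illustrative rendering of that idea, with made-up numbers rather than anything taken from Zatorre’s experiments.

```python
# Minimal sketch of a reward prediction error (illustrative only;
# the values are invented, not taken from Zatorre's experiments).

def prediction_error(expected_reward: float, received_reward: float) -> float:
    """Positive when the outcome is better than expected, negative when worse."""
    return received_reward - expected_reward

# The thirsty rat has learned that a flash of light predicts plain water.
expected = 1.0          # learned value of the predicted reward
plain_water = 1.0       # outcome exactly as expected
sweetened_water = 1.5   # outcome better than expected

print(prediction_error(expected, plain_water))      # 0.0 -> no extra response
print(prediction_error(expected, sweetened_water))  # 0.5 -> stronger response
```

In this picture, a musical phrase that resolves exactly as predicted is like the plain water, while a phrase that deviates slightly, and pleasantly, from the expected pattern is like the sweetened water.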

Statistical prediction
What those expectations are has everything to do with your previous exposure to music. Children do not yet have many expectations, which is why even very simple songs elicit a response in them. Adults usually find those songs boring; to excite their reward system, composers and musicians have to try a little harder. Through years of exposure to music, adults have built up a statistical prediction of how music works: standard chord progressions, for example, or melodies made up of notes from a specific scale. A good composer will occasionally slip a deliberate ‘violation’ of these statistical regularities into a piece. It is precisely these deviations that excite our reward system. ‘But the balance between predictability and surprise is very important,’ Zatorre stresses. ‘It should not be too surprising. That is why it is so hard to write beautiful music.’
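One way to make the idea of statistical prediction concrete is a toy model in which a listener’s expectations are transition probabilities between notes, and a ‘violation’ is simply a low-probability, high-surprisal continuation. The probabilities below are invented for illustration; this is not a model from Zatorre’s work.

```python
import math

# Toy transition probabilities a listener might have absorbed through exposure:
# after hearing a C, these are the assumed probabilities of the next note.
# (Invented numbers, for illustration only.)
transitions_after_c = {"G": 0.5, "E": 0.3, "D": 0.15, "F#": 0.05}

def surprisal(probability: float) -> float:
    """Shannon surprisal in bits: the rarer the event, the bigger the surprise."""
    return -math.log2(probability)

for note, p in transitions_after_c.items():
    print(f"C -> {note}: p = {p:.2f}, surprisal = {surprisal(p):.2f} bits")

# The expected G barely registers (1 bit), while the unusual F# (about 4.3 bits)
# is the kind of mild 'violation' a composer can use sparingly for effect.
```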
That previous exposure, and the statistical information you get from it, also explains why different people like different types of music. If you are used to listening to pop music, you have a different frame of reference than if you have been listening to jazz from a young age. Differences between cultures are even greater, as Zatorre once experienced first-hand. ‘Colleagues took me to a concert in India once. I had no idea what was happening, but out of politeness I said afterwards that I thought it was very nice. But my colleagues apologized for the many mistakes the musician had made. I had no idea, it all sounded strange to me.’

Text and melody
A lot of music contains lyrics in addition to chords and melodies, and that combination puts our brain to work even harder. When we listen to a song, it sounds like a single whole to our ears, yet our brain processes the melody and the lyrics in completely different ways, in different hemispheres. Zatorre’s team demonstrated why. ‘Sound contains two different types of cues,’ says Zatorre. ‘The first type, the temporal cues, relates to how quickly things change over time: the rhythm of the sound. In addition, you have the spectral cues: which frequencies, or pitches, are in the signal.’
Zatorre’s team filtered these cues out of a song one at a time. And what did they find? When they removed the spectral information, the lyrics were easy to follow, but the melody was completely gone. Conversely, without the temporal information, the melody was easy to perceive, but the speech was incomprehensible. Zatorre: ‘Once we had shown that, the next step was to look at brain activity in the MRI scanner.’ A so-called functional MRI scan shows which brain regions are active at any given moment: active regions receive more oxygen-rich blood. ‘When we had the subjects listen to the filtered songs in the MRI scanner, we saw that the left auditory cortex could easily follow the temporal cues (and therefore the speech), but not the spectral ones. The right auditory cortex, in turn, could easily follow the spectral information (and therefore the melody), but not the temporal information.’ So the brain processes these elements separately. How it then reassembles them into a single experience is something Zatorre hopes to find out in the coming years.
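To get a feel for what filtering out spectral or temporal cues can mean in practice, here is a rough, hypothetical sketch: blur a song’s spectrogram along the frequency axis to smear pitch information, or along the time axis to smear rhythm. It is a simplified stand-in for the degradation used in the study, not the published method, and the parameters and toy test signal are invented.

```python
# Hypothetical illustration of removing spectral vs. temporal detail
# from a sound, loosely inspired by the experiment described above
# (not the published method; all parameters are arbitrary).
import numpy as np
from scipy.signal import stft, istft
from scipy.ndimage import uniform_filter1d

def degrade(signal, fs, axis):
    """axis=0 blurs across frequency (melody smeared); axis=1 across time (speech smeared)."""
    _, _, Z = stft(signal, fs=fs, nperseg=1024)
    magnitude, phase = np.abs(Z), np.angle(Z)
    blurred = uniform_filter1d(magnitude, size=15, axis=axis)
    _, out = istft(blurred * np.exp(1j * phase), fs=fs, nperseg=1024)
    return out

# Toy "song": a 440 Hz tone whose loudness is modulated at 4 Hz.
fs = 16000
t = np.arange(0, 2.0, 1 / fs)
song = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))

no_spectral_detail = degrade(song, fs, axis=0)  # rhythm survives, pitch is blurred
no_temporal_detail = degrade(song, fs, axis=1)  # pitch survives, rhythm is blurred
```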

Voice recognition
Zatorre’s work is quite fundamental in nature, but all kinds of practical applications nevertheless arise from it. For example, he showed that musically trained people are better than people without musical training at distinguishing speech in an environment with a lot of background noise. He hopes to use this result to help (older) people with hearing problems. A colleague has already tried this out, Zatorre says. ‘He taught a group of elderly people with no previous musical training to play the piano for six months. Afterwards, they were better at distinguishing speech from background noise. The effect was small but encouraging.’ For now, however, this is still a bit of a shotgun approach. Zatorre: ‘We want to find out what it is in musical training that causes this improvement. That way, you can design a teaching method specifically targeted at this problem, rather than just teaching people to play an instrument.’
Zatorre’s group has already discovered that two different mechanisms are involved. First, as a musician you sharpen your perceptual skills. ‘The ability to track information about frequencies, and therefore pitches, is more accurate in musicians,’ Zatorre explains. ‘That is because throughout their lives they listen to particular frequencies embedded in other sounds. When playing in an orchestra or band, for example, it is sometimes important to pick out one instrument among all the other sounds. This mechanism ensures that individual sounds are registered more sharply in the brain, whereas in non-musicians they are more likely to blur into a wall of sound. The second mechanism allows musicians to better select which of those sounds to focus on. We see more activity in the frontal areas of musicians’ brains. We think that has to do with control signals that can suppress some sounds and amplify others.’
But to hear someone talk in a busy environment, you have to discern temporal information. And for that, as Zatorre’s group had previously demonstrated, you use a different hemisphere of the brain than you do to distinguish frequencies. So why does practice at picking out frequencies help with filtering speech? ‘We are not sure about that yet,’ Zatorre says. ‘Our hypothesis is that you initially use mainly the frequency information to separate the target speech from the background. After that, you mainly need the temporal information to decode the actual syllables.’

Fundamental questions
Zatorre’s other work has important applications too. The better we know how music acts on the reward system, for example, the better we can use those insights in music therapy: to help regulate an underactive reward system, as in depression or Parkinson’s, or, conversely, an overactive one, as in addiction. But Zatorre derives by far the greatest satisfaction from working with his team to explore and answer new fundamental questions. ‘When we do our experiments, we focus on very specific questions. But those specific questions are part of something much broader. Music and speech are things that distinguish us from other species. By studying how the brain enables us to produce, experience, and enjoy them, we gain insight into what it means to be human.’

CV
Robert Zatorre (Buenos Aires, 1955) studied psychology and music at Boston University and received his PhD from Brown University in Providence in 1981. He then continued his career as a researcher at the Montreal Neurological Institute at McGill University, where he has been a professor of neuroscience since 2001. In 2006, together with Isabelle Peretz, he founded the International Laboratory for Brain, Music, and Sound Research (BRAMS), of which he is still co-director. In addition to the C.L. de Carvalho-Heineken Prize, Zatorre has received, among other honours, the IPSEN Foundation Neuronal Plasticity Prize, the Hugh Knowles Prize, the Oliver Sacks Award, and the Grand Prix Scientifique from the Fondation pour l’Audition in Paris.

Research
Music and speech are important but complicated human forms of communication. Robert Zatorre and his team are researching how our brains process music and speech. As one of the pioneers in the field of music cognition, he uses music as a framework to explore complex cognitive functions, such as emotion, perception, movement, memory, attention, and aesthetics. For example, he demonstrated that music acts on our reward system in the same way that food and sex do. He also unravelled how and why different parts of our brain process music and speech, and he is researching how people learn music and how this process can be influenced.

Video
Robert Zatorre — Neuroscientist