After researching the neurologic components of language and music in order to better understand how music instruction can benefit adolescents with language-based learning differences, I would like to discuss the big picture. What have I learned? Where do I need to go with my work? The fact is, I never planned to be an English teacher. When I graduated high school, my plan was to play the drums. I just love to hit things; it was drums all along. My dad had different ideas, so to appease him I studied Literary Theory at a local college. Four years later, I was still looking for ways to play the drums. One opportunity did come up, but I never had the chance to take it, so I ended up teaching. When I started teaching, I didn’t expect to fall in love with it, but I did, and quickly. And yet even now, more than twenty years later, I still describe myself as “an English teacher who plays the drums.” So, when I discovered that drums could help my most at-risk students, you can imagine how excited and motivated I became to find ways to use them. Here was a way to join together the two things I loved most. As a result, I find myself asking: can drumming and improvisation really help students who learn differently? Of course, music provides a brain workout every time we engage with it, whether by listening or by playing. But what about those of us who learn differently? Are my ideas about drumming and improvisation really founded on any meaningful evidence? Can my approach really make the difference I argue it can?
To answer these questions, I want to consider an experience I had with a student in my English class this past week. Margie is a tenth grader who has been diagnosed with dyslexia and an auditory processing disorder. Dyslexia, as many of us know, includes a constellation of impairments in how people process and produce written and spoken language, while auditory processing disorders refer to impairments and delays in the systems through which the brain processes sound. What does that mean for a kid like Margie in the day-to-day life of school? Just last week, she came to class and told me that, although she did the reading that was assigned for homework, she understood none of it. Keep in mind that this text’s readability is rated three grade levels below her own. Later in the same class, she asked me to repeat a few ideas that I had said several times. Each time, she asked me to say it more slowly. And later, when asked to read out loud, Margie willingly agreed despite her difficulties. As she read, however, she ignored all punctuation. The first word of a sentence, for example, was read as if it were the last word of the previous sentence. She paused where there was no reason to pause, and her reading did not sound at all like spoken language. Not only that, but she incorrectly decoded several words, substituting words that were not on the page. All of this is not to point out how disabled she is, but to illustrate just how difficult it must be for Margie to read independently, to process what she hears in the classroom, and to produce written assignments, all fundamental skills for academic success. Of course she didn’t understand anything she read that night.
In fact, with all of the decoding errors, missed punctuation, and fluency difficulties, even though she read the same words as everyone else, it was as if she had read a completely different chapter from the one assigned. What’s the answer here? There are, fortunately, many ways we can help Margie, beginning with systematic, multi-sensory reading instruction. However, I am exploring how music, and specifically the rhythmic, improvised approach I take in my music classes, can help her, and why.
To do this, we have to look at two elements of music that are very closely related to language: beat synchronization and what I call melodic perception. Let’s first consider beat synchronization. The prevailing view among neuroscientists today is that beat synchronization is a crucial skill when acquiring and processing language (Woodruff-Carr et al. 76). Why is that? We don’t speak in time to drums. Language doesn’t run to a back beat (Patel 122). In fact, from a drummer’s perspective, language is like the most complex jazz fusion around. The time is constantly shifting as words vary their duration and emphasis depending on their placement within a sentence. The melody rises and falls at seemingly random intervals and follows no key signature or other hierarchical tonal structure like we find in music (Patel 203). So, what does it mean to say that beat synchronization is so essential to language?
First, picture all the neurons in your brain. When stimulated, they send out tiny electric impulses to each other and trigger potential responses from our body. Our brains are a gooey mess of tissue that is literally alive with electricity. Attach a few electrodes to the surface of your scalp, and we can measure readings like the Event-Related Potential (ERP) and the Frequency-Following Response (FFR), records of those little electrical impulses careening around our skulls. Provide the stimulus, and watch the response fire away. The frequency at which activity is detected tells us what part of the brain is being activated. High frequencies mean that the lower parts of the brain, those associated with the intake of information, are being activated: the basal ganglia, the brainstem, the cerebellum. Mid-range frequencies correspond to the mid-brain, including the auditory cortex and the inferior colliculus, while the lower frequencies relate to areas of the brain that handle higher-order thought, such as abstract thinking, language, memory, and executive function: the temporal lobe, Broca’s area, the frontal lobe, the occipital lobe ("Brain Rhythms: Functional Brain Networks Mediated by Oscillatory Neural Coupling").
Okay, let’s take it easy. I can hear my musician friends making fun of me right now. Just shut up and play the drums. But that’s exactly what I’m trying to do, because playing the drums with a kid like Margie can really help her. I swear to god. So, let’s get back to work for a minute: all of these neurons are firing away all the time. Not only that, but they are doing it simultaneously all around the brain. That’s a lot of energy. How is it conserved? One way is through rhythm. When our synapses are activated, they tend to fire in rhythmic synchrony; they all fire at the same time ("Brain Rhythms: Functional Brain Networks Mediated by Oscillatory Neural Coupling"). Let’s consider a classic experimental procedure. You’ve got some electrodes on your head, and through some earphones, you hear a syllable: /dah/ (“dah”) (Woodruff-Carr et al. 78). The sound enters your ear, travels into the cochlea, and vibrates the basilar membrane. In fractions of a second, the region of the membrane that vibrates encodes a frequency that is then transmitted into the brain. As the signal travels up the brainstem and into the auditory cortex for processing, it first passes through the inferior colliculus (IC), a tiny node that acts like a gatekeeper, determining where in the brain the stimulus will be sent for further processing and, ultimately, the triggering of more neurons (Riggs, "Special Senses 7- Auditory Pathways"). This is an incredibly simplified version of the Ascending Pathway of Auditory Perception. I told you it was the drummer’s view.
How does the IC know where to send information? While the Ascending Pathway is at work, a simultaneous Descending Pathway is also occurring. As information travels through the midbrain towards the IC, the other parts of the brain are also being stimulated by it (Riggs, "Special Senses 7- Auditory Pathways"). This is where memory and rhythm come into play. Previous experience with a sound like /dah/ triggers responses from the parts of the brain where memory and understanding of that stimulus are located; for a spoken syllable, largely the temporal lobe (Shaywitz 2003). So as the information ascends into the brain, the brain is simultaneously sending down another signal that will take the information in and process it in the proper part of the brain, and therefore trigger the proper potential reaction: laughter, anger, the swing of a stick towards a snare drum. What happens? You hear the sound, and you know what it says. You know that this is a syllable beginning with the letter D, and so on. This is basic phonological processing, right? Sure, unless the Ascending Pathway and the Descending Pathway are out of sync. As these areas of the brain send out signals that are read as ERPs on an EEG, we can see a series of pulses. All of these pulses should be happening together, rhythmically. No matter the area of the brain from which they originate, they should occur in regular rhythmic oscillations ("Brain Rhythms: Functional Brain Networks Mediated by Oscillatory Neural Coupling"). Boom. Boom. Boom. Boom. We call this Oscillatory Brain Function, or Phase Coherence (Tierney and Kraus 782). However, in a student like Margie, who struggles with language processing, the mid-brain releases signals that are out of sync. Boom. (Bam) Boom. Boom. (Bam) Boom.
(Bam) This dysfunction in phase coherence is linked to auditory processing impairment and dyslexia, and overwhelmingly, we are beginning to see that musical training, especially training that involves a heavy focus on rhythm and improvisation, can help improve oscillatory brain function (Kraus and Slater 210, 216). All of this, of course, leads us back to beat synchronization.
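For readers who want to see phase coherence as more than booms and bams, here is a small Python sketch of my own. It is purely illustrative and not drawn from any of the cited studies: it simulates the response phase of many repeated trials and computes inter-trial phase coherence, where a value near 1 means the responses line up trial after trial, and a value near 0 means they drift apart.

```python
import numpy as np

def phase_coherence(phases):
    """Inter-trial phase coherence: length of the mean phase vector.
    Near 1.0 when responses line up across trials; near 0.0 when they drift."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(0)

# A typical listener: the response to /dah/ lands at nearly the same
# phase on every trial (small jitter around one common phase).
synced = rng.normal(loc=0.0, scale=0.2, size=1000)

# Impaired phase coherence: the response phase wanders widely trial to trial.
jittered = rng.uniform(-np.pi, np.pi, size=1000)

print(phase_coherence(synced))    # near 1.0
print(phase_coherence(jittered))  # near 0.0
```

The “Boom. (Bam) Boom.” pattern above is, in effect, the jittered case.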
One compelling study in this area looked at young children playing a conga drum. Some could synchronize their beat with the beat that the tester was playing. Among those who could not, there was a higher incidence of language impairment (Kraus and Anderson 56). Another study found that participants who could more quickly and easily synchronize to a rhythm, and also adjust to changes in a rhythm, had stronger phase coherence (Tierney and Kraus 782). The auditory system is constantly taking in and responding to sound at the millisecond level (782). Some researchers call this Dynamic Attending Theory. As we grow and acquire language, we develop temporal expectancies based on the rhythms of all the signals that travel around our brain. When these oscillations fall out of sync, those expectancies begin to drift, and we process language and sound in a delayed manner (Kraus and Slater 210). Picture Margie asking me to repeat what I said. “Can you say it again more slowly, please? Ok. … I’m sorry, can you say that one more time?” Boom. (Bam) Boom. Boom. (Bam) Boom. (Bam) And think of this same student hearing that syllable /dah/. Delayed processing means that she does not always recognize that this is a syllable beginning with the D sound. Failure to recognize the syllable in sound also means delayed processing of it in written form, hence inaccurate decoding and poor automaticity.
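What the conga study measures can be expressed very simply: how far, on average, each tap lands from the nearest beat. Here is a hypothetical sketch of mine, with made-up tap times rather than real data, just to show the idea:

```python
import numpy as np

def mean_asynchrony(taps, beats):
    """Average absolute distance, in milliseconds, from each tap to its nearest beat."""
    nearest = beats[np.abs(taps[:, None] - beats[None, :]).argmin(axis=1)]
    return float(np.mean(np.abs(taps - nearest)) * 1000)

# A steady pulse at 120 BPM: one beat every half second for ten seconds.
beats = np.arange(0, 10, 0.5)

rng = np.random.default_rng(1)
# A strong synchronizer taps within roughly 20 ms of the beat;
# a struggling synchronizer drifts by roughly 150 ms.
steady = beats + rng.normal(0, 0.02, beats.size)
drifting = beats + rng.normal(0, 0.15, beats.size)

print(mean_asynchrony(steady, beats))    # tens of milliseconds
print(mean_asynchrony(drifting, beats))  # much larger
```

A lower number means tighter synchronization; the research described above links that tighter synchronization to stronger language outcomes.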
So, what’s the point? Am I arguing that teaching Margie to play the drums will help her language impairment? Well, yes and no. Kraus argues, “…musical experience can strengthen aspects of brain function which also support language related skills, and may thereby offer a framework for the remediation of language difficulties” (Kraus and Slater 212). In another study, Kraus states, “…children with dyslexia have a less accurate perception of musical rhythm than typically developing children do. Sensitivity to musical rhythm predicts phonological awareness and reading development” (Kraus and Anderson 55). I have no illusions that rhythmic musical training will completely “cure” Margie’s language impairments. However, many researchers argue that rhythmic musical training, because it exercises the temporal expectancies that are so essential to phase coherence, can actually help improve oscillatory functions in the brain. Among test subjects with stronger beat synchronization skills, we also see a high occurrence of faster auditory processing speeds and stronger language skills (Tierney and Kraus 790). Therefore, I propose that systematic, multi-sensory reading instruction can be enhanced when partnered with musical training, especially musical training that is based in rhythmic exercises. Beat synchronization is linked to the discrimination of sounds within spoken language. When we speak, the spaces between the words, the duration of syllables, and the fluctuations in amplitude and pitch all help us determine where words begin and end, creating the illusion of a single flow of ideas when in reality we are only hearing a series of sounds. This illusion happens, in part, through the synchronization of the Ascending and Descending Pathways, through the temporal expectancies that already exist, and through the speed at which we can synchronize the neural responses within our brain.
My simple argument is this: students like Margie who struggle with impaired language skills can only benefit from drumming.
There is still much work to do. I think I have a handle on how the brain processes rhythm, and why rhythm is central to language. I’m in a good position professionally because I can see my language-impaired students read, but I also get to hear them drum. We play music together and read together every day. Over and over again, I see the rhythmic aptitudes described in my research played out in real life. Students like Margie, who struggle to process what they hear in class, who read dysfluently, who cannot recreate the prosody of language that is so central to reading comprehension, come to the music room and struggle to find the rhythm of even the simplest songs. I spend hours a day hitting a floor tom, just to provide a pulse loud enough that the students can more easily synchronize. But just banging away is not going to be enough. I don’t feel right just knowing that drumming is good for their brains. The next step is to develop lesson plans in my music curriculum that are specifically related to the different studies I have read. If one study describes the relationship between rhythmic adjustment and language skills (Tierney and Kraus 2016), then I need to develop lessons that require my students to work towards better rhythmic adjustment. This will not only make them better musicians, but also target specific areas of phase coherence while simultaneously, if indirectly, exercising their language skills.
When I started teaching music, the focus was drums, but in order to create a more robust, competitive program, I expanded my horizons, studied music and improvisation, and built a program based in improvisation. I really believe that improvisation is essential for teaching music to students who learn differently, not only because this approach meets them where their greatest strengths are, but also because improvisation targets another area of language that is related to music: melodic perception.
Goswami calls it Temporal Sampling Theory. When we hear a syllable like /dah/, not only are we hearing the sounds of the letters, but we are also hearing the almost imperceptible rise in amplitude as the syllable is uttered. The rise in amplitude and the decay that follows tell us where the syllable begins and ends, and the ability to distinguish the beginnings and ends of the sounds within words and phrases enables us to determine exactly what is being said to us. It gives language meaning. This is called Rise Time Perception (Goswami 106). Different sounds have different rise times; therefore, the changes in rise time as sounds flow through our ears help us determine what these different sounds are. As we grow, our brain takes auditory snapshots of the different sounds and the rise times associated with them, and then, as new sounds enter, the rhythms of the pulses being emitted by the neurons taking in the information, the oscillations, become synchronized with the rise times of the different syllables (Goswami 109). So our brain actually synchronizes its endogenous rhythms with the external rhythms of the language it is attempting to process. It does this by drawing on the auditory temporal samples it has already stored. Hearing sounds again, we draw upon these old snapshots and can quickly synchronize. This has a lot to do, obviously, with beat synchronization. In fact, Goswami even states, “The rise time difficulties found in dyslexia suggest that an orchestra of people with dyslexia would be poor at keeping in time” (106). But Temporal Sampling Theory also deals with melodic perception. Why? Melody is the element of music that most relates to language because of prosody (Patel 225). Prosody refers to the fluctuations in pitch, duration, and amplitude that give meaning to spoken language.
Spoken language is not merely a series of sounds, but those sounds coupled with their associated variations in volume and duration. It seems to me that the most elemental part of prosody is rise time perception. Melody, founded in the rising and falling of pitch, following rhythm while also relying upon duration and amplitude, mirrors language within music, and improvised melodies, some researchers argue, are closer in sound and in processing to the flow of language (Kraus and Slater 210).
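Rise time is concrete enough to compute. Here is a toy Python sketch of my own, using a made-up amplitude envelope rather than a recorded syllable, that measures how long a sound takes to climb from 10% to 90% of its peak, a common way of defining rise time:

```python
import numpy as np

def rise_time_ms(envelope, sample_rate, lo=0.1, hi=0.9):
    """Milliseconds the envelope takes to climb from 10% to 90% of its peak."""
    peak = envelope.max()
    start = int(np.argmax(envelope >= lo * peak))  # first sample above 10%
    end = int(np.argmax(envelope >= hi * peak))    # first sample above 90%
    return (end - start) / sample_rate * 1000

# A made-up syllable envelope: a quick ~30 ms attack followed by a slow decay.
sample_rate = 1000                       # samples per second
t = np.arange(0, 0.3, 1 / sample_rate)   # 300 ms of "sound"
envelope = np.minimum(t / 0.03, 1.0) * np.exp(-t / 0.15)

print(rise_time_ms(envelope, sample_rate))  # on the order of tens of ms
```

A brain that tracks these attacks well can lock its oscillations onto the stream of syllables; a brain that does not, as Goswami describes, falls out of time.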
This is why I rely so heavily on improvisation as a technique that I believe to be perfectly suited for students with learning differences, especially language-based impairments. Many music teachers tell their students what to play or sing. They give them sheet music, or they sing along with them as a guideline. Improvisation removes these supports and requires that the student process the sounds they hear, synchronize with them, and respond to them. It is a full workout of the neurologic aptitudes for both beat synchronization and rise time perception, or melodic perception. Ultimately, “…music training can modify reading and phonological abilities even when these skills are severely impaired” (Flaugnacco et al. 1). And if the elements of music found in improvisation can be so closely linked to the elements of language, it makes sense to me that a curriculum based in improvisation effectively targets both musical and linguistic skills.
Lots of work still has to be done here, but it is with great excitement that I look for more and more ways to make music with my students, and to make drumming part of our everyday lives.
Brain Rhythms: Functional Brain Networks Mediated by Oscillatory Neural Coupling.
YouTube, http://spot.colorado.edu/~gilley, 19 June 2014. Web. 20 Nov. 2016.
Flaugnacco, Elena, Luisa Lopez, Chiara Terribili, Marcella Montico, Stefania Zoia, and Daniele
Schon. "Music Training Increases Phonological Awareness and Reading Skills in Developmental Dyslexia: A Randomized Control Trial." PLoS One 25 Sept. 2015: 1-17. Print.
Goswami, Usha. "Dyslexia - in Tune but out of Time." Psychologist Feb. 2013: 106-09. Print.
Kraus, Nina, and Jessica Slater. "Music and Language: Relations and Disconnections."
Handbook of Clinical Neurology 3rd ser. 129 (2015): n. pag. Web. 9 Aug. 2016.
Kraus, Nina, PhD, and Samira Anderson, AuD, PhD. "Beat-Keeping Ability Relates to Reading
Readiness." Hearing Journal (2015): 54-56. EBSCOhost. Web. 8 Aug. 2016.
Patel, Aniruddh D. Music, Language, and the Brain. 2nd ed. New York, NY: Oxford
University Press, Inc., 2010. Print.
Shaywitz, Sally E. Overcoming Dyslexia: A New and Complete Science-based Program for
Reading Problems at Any Level. New York: A.A. Knopf, 2003. Print.
Special Senses 7- Auditory Pathways. Dir. Wendy Riggs. YouTube, 16 Oct. 2014.
Web. 20 Nov. 2016.
Tierney, Adam, and Nina Kraus. "Evidence for Multiple Rhythmic Skills." PLoS One 16 Sept.
2015: 1-14. Print.
Tierney, Adam, and Nina Kraus. "Getting Back on the Beat: Links between Auditory-motor
Integration and Precise Auditory Processing at Fast Time Scales." European Journal of
Neuroscience 43 (2016): 782-91. Web.
Woodruff-Carr, Kali, Adam Tierney, Travis White-Schwoch, and Nina Kraus. "Intertrial
Auditory Neural Stability Supports Beat Synchronization in Preschoolers." Developmental Cognitive Neuroscience (2015): 76-82. EBSCOhost. Web. 20 Nov. 2016.