Times Educational Supplement magazine. 17/08/07
The language barrier.
Why do Italians with dyslexia have an inbuilt advantage compared with English children?
Usha Goswami explains
Does dyslexia really exist? Of course. All over the world, it is recognised as a specific learning difficulty intimately linked to the way we process language. Recent scientific research has found that dyslexia reflects atypical development in learning the sound structure of language – its "phonology".
Modern brain imaging helps to show where problems may lie. In skilled readers, brain activity in the left hemisphere's network of spoken language areas increases as they read. In children with developmental dyslexia, this network activity is reduced and there is more activity in right hemisphere networks.
Particularly crucial is an area in the left hemisphere that turns print into sound. It is called the posterior superior temporal cortex. All children with dyslexia find it difficult to count syllables in spoken words, to judge whether spoken words rhyme and to retain speech-based information in short-term memory.
The neural inefficiencies which result in dyslexia are shared across languages, with a similar prevalence of 5 to 7 per cent. Dyslexic readers of Chinese, French and Italian show similar characteristics. Nevertheless, how dyslexia manifests differs from language to language, because of differences in syllable structure and spelling systems.
Children with dyslexia learning to read languages such as Italian and Greek are best off developmentally. Syllable structure is simple: mostly consonant-vowel pairings, as in mama. There is a consistent, one-to-one correspondence between letters and sounds. In these languages, dyslexics show slow, effortful but accurate reading and poor spelling.
Children with dyslexia find it more difficult learning to read in languages such as English. The syllable structure is complex. Correspondence between letters and sounds is inconsistent (for instance, "a" makes a different sound in make, man, mark and mall). English dyslexic children show inaccurate reading in addition to the slow decoding and poor spelling characteristic of dyslexia in other languages.
Studies in psychology and neuroscience reveal important new information about how the brain builds a language system. Before they produce words, infants learn the basic sounds (called "phonemes"), the order they occur, and how to segment the stream of sound into separate words and syllables. Segmentation depends on speech rhythm and stress (called "prosody").
Babies between one and four days old can distinguish between languages such as Dutch and Japanese using rhythmic cues. This is a basic mammal skill: research has shown that rats and monkeys can also distinguish Dutch from Japanese. Early babbling also reflects rhythmic differences between languages. Adults who are played taped babble from French, Cantonese and Arabic infants can distinguish each "language".
Apes babble too, producing calls with tonal notes, repetition, rhythm and phrasing. Rhythmic structure is basic to how mammalian brains process and produce sounds (infants babble syllables, not phonemes).
In humans, the way carers speak to babies is important. This "motherese" (although fathers do it, too) uses higher pitch and increased syllable length for emphasis.
Cognitive neuroscience has shown there are populations of neurons in the brain that oscillate at the syllabic rate of speech. These neurons align their intrinsic rhythmic activity to the start of each spoken syllable. Children with developmental dyslexia find it hard to tell when syllables start. Syllables with abrupt onsets, such as "ba", are more difficult to distinguish from those with extended onsets, such as "wa". This is true in a variety of languages - including French, Hungarian and Finnish. Brain imaging studies also show this.
In languages with consistent spelling, children with dyslexia can use the written word to sharpen up their phonological system. In English, spelling is less helpful.
Structured teaching of how sound and spelling are linked is the best way to help. In English, this is challenging, because spelling-sound consistencies occur at two levels, rhyme and phoneme. One useful scheme that trains children to make the link at both levels is Sound Linkage.
Usha Goswami is Professor of Education and director of the Centre for Neuroscience in Education at the University of Cambridge.
Calling a Spade a Spade
Reading is Not a Biological Property of the Human Brain
In the TES Magazine of Friday, 17th August, Usha Goswami asked a rhetorical question: "Does dyslexia really exist?" and provided this answer:
“All over the world, it is recognized as a specific learning difficulty intimately linked to the way we process language.”
But what does this mean? "Dyslexia" – Greek for "poor reading" – means nothing more nor less than it says. If you are a poor reader, this could be due to a variety of problems, the most common of which is inadequate instruction.
This statement implies that reading is a biological imperative rather than a human invention that has to be taught. One is no more likely to find "reading neurons" or "reading modules" in the brain than brain cells devoted to deciphering electronic circuit diagrams, algebraic symbols, or musical notation. Inventions are objects or symbol-sets created by the human brain to help us do what we can't normally do. If we could memorise verbatim everything that was ever said to us, we might not need a writing system. All codes – like those listed above – make it possible to link a biologically based aptitude to abstract symbols that stand for units of that aptitude, such as spoken language or music. Codes have to be taught – people do not come equipped with them at birth.
Some codes are easy to learn: the Italian alphabet code is almost entirely transparent (one sound, one symbol) and thus completely reversible. This means that reading and spelling are instantly connected and reinforce one another. Some codes are hard to learn. The English alphabet code is one of the most opaque writing systems in the world, with multiple spellings for almost every sound and multiple ways to decode the same symbol. The fact that nearly every child in Italy can read, write, and spell after the first term in school, but 30 per cent or more of children in England (or any English-speaking country) can scarcely read or spell anything after four or five years of school, tells us a lot about our writing system and the way it is taught. It tells us nothing about the human brain – unless one wanted to argue that Italians have entirely different brains from English-speaking people.
For these reasons and others, it is puzzling that Goswami prefers to believe that "dyslexia" is real, a property of the brain, and constitutes a primary disorder, rather than accepting that language is a property of the brain and that language problems or other problems (visual tracking, visual and verbal memory) may make it difficult for a child to master the symbol-to-sound correspondences of a writing system. Language is a biological imperative. Reading is not. We know a lot about the anatomical locus of language-related brain systems. But you will never find a brain module (or central locus) for translating sets of symbols created by humans to assist learning and memory for specific tasks.
Goswami continues: “Particularly crucial is an area in the left hemisphere that turns print into sound. It is called the posterior superior temporal cortex.”
Having worked in a brain research lab for 10 years, and taught neuropsychology for 20, I know enough to state unequivocally that this statement makes no sense. First, no area of the brain is designated to turn print into sound! Print is a human invention, and linking print to segments of speech is a complex cognitive task that has to be taught. Areas in the brain process what they are biologically (evolutionarily) primed to do. In complex brains, these areas can "gang up" to produce quite sophisticated learning and behaviour. Symbolic thought, for example, appeared very late in evolution.
To decode a writing system, various regions of the brain combine to make this possible. The frontal eye-fields (part of the frontal lobes) are entrained to scan print from left to right, both eyes in focus, and other frontal areas are engaged to keep attention on the task. The visual system receives this input directly, processes it (or should) as individual letters or multi-letter units (digraphs), and transmits this elsewhere. The auditory system (superior temporal cortex), which has great facility in discriminating phonemes, links this input to its auditory representation. Cross-modal signal processing – largely a function of the parietal lobes (left and right hemisphere) – helps make this connection, and the left-hemisphere motor systems output subvocal or vocal speech, which they do spontaneously once a word is "decoded".
A developmental delay or difficulty in any one of these brain systems could create problems learning to read. Thus, children with general language delays, weak auditory or verbal short-term memory, or other perceptual and cognitive deficits could have problems learning to read and spell. But these are language and memory problems, not “reading disorder” problems. These children are few and far between, constituting less than 5% of the population, and this cannot account for the 30-40% poor or non-readers in English-speaking schools.
If we look at the last 20 years of research on speech and language development, we find that there is little correspondence between what speech and hearing scientists, developmental psychologists, linguists, psychophysicists, etc. have learned about language development, and what Goswami identifies in her report as causal links to reading problems. For example, she argues that one reason English children have more difficulty learning to read is because the English “syllable structure is complex,” and that “Children with developmental dyslexia find it hard to tell when syllables start.”
But dyslexia is not a "developmental" disorder (implying a biological basis for reading), nor do children anywhere in the world have difficulty segmenting syllables, a fact that has been repeatedly demonstrated in research over the last 20 years. If Goswami's thesis were correct, English children would take much longer to learn to talk and understand speech than children in other countries, and we know this isn't true from scores of cross-cultural studies on how infants hear and process speech. They do not, as she claims, have any more trouble hearing syllable onsets than children learning any other language. If a child couldn't segment words out of the speech stream, he would never learn to understand or produce speech!
There is a large scientific literature on infants' auditory processing skills, dating back to 1971 and Eimas' startling discovery that infants from one to four months old exhibit the same categorical perception for consonant-vowel contrasts ('ba'–'da') as adults. In 1998, Aslin and his colleagues revealed eight-month-old infants' astonishing skill in analysing the phoneme cues that help wrench words out of the speech stream. And they can do this even when all rhythmic cues are eliminated.
Scores of developmental studies show that phonemic processing is one of the most "buffered" language skills humans possess, and the least susceptible to disruption and malfunction. Chaney showed that by age three, children are highly sensitive to the phoneme level of speech. Nearly all of the 87 three-year-olds in her study could listen to isolated phonemes (/b/ – /a/ – /t/), blend them into a word, and point to a picture representing that word, with nearly 90 per cent scoring well above chance. Of the 22 tasks that she administered, this was the second easiest. And contrary to Goswami's assertion, the tasks requiring children to reproduce rhyming endings or alliteration were the most difficult, with the vast majority of the children failing them.
Despite results like these, Goswami persists in holding to her theory that “rhyme” is as important as phonemes in learning to master an alphabetic writing system. She even claims that rhyme is relevant to our spelling system: “spelling-sound consistencies occur at two levels, rhyme and phoneme.” The notion that the rhyme (word endings that sound alike) is relevant to learning an alphabetic writing system (which is entirely based on phonemes) has been largely discredited. When the National Reading Panel in the US published their landmark survey of reading research in 2000, results showed that rhyme-based teaching methods were singularly ineffective either alone or combined with something else. By contrast, the programmes which were highly successful all shared these features:
Teach the 40+ phonemes in English as the basis for the code (and no other units); teach children to decode and encode in sequence from left to right (segmenting and blending); introduce letters as soon as possible (don't teach phoneme awareness independently of print); include lots of copying and writing to link visual, auditory, and motor systems; avoid letter names; and never allow or encourage children to "guess" words on the basis of partial cues or pictures on the page.
Emeritus Professor in Psychology
University of South Florida
Support for the above comments and analysis of the scientific literature is set out in depth in Early Reading Instruction (2004) and Language Development and Learning to Read (2005), both published by MIT Press. These books review a vast literature on these topics dating from the mid-1960s to the 21st century.