After over eight years, I want to revive the original subject of this thread, because it has recently come up on DDOLL (the Australian mailing list concerned with dyslexia) to which several of us belong, and I have had some further thoughts about it.
One form of the dual-route model, as described on DDOLL, holds that the non-lexical route can identify only regular words, where regular words are defined as those with no irregular correspondences, and the regular correspondence for each grapheme is the one that occurs most often in the language. In this model, the non-lexical route continues to be used for regular words and the lexical route for all others. Debbie responded to a description along these lines with the following, which she has kindly given me permission to quote. Quotes of the post to which she was responding have been replaced by dots, since repeating them here would breach DDOLL rules.
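To make the definition concrete, here is a minimal sketch in Python of the classification just described. The correspondence table and the grapheme/phoneme notation are invented for illustration, not taken from any real phonics dataset:

```python
# Hypothetical sketch of the dual-route classification described above.
# For each grapheme, the "regular" correspondence is taken to be the one
# that occurs most often in the language; anything else is "irregular".
REGULAR = {"h": "/h/", "b": "/b/", "d": "/d/", "ea": "/ee/"}

def is_regular(graphemes, phonemes):
    """A word is regular iff every grapheme takes its most frequent
    (regular) correspondence."""
    return all(REGULAR.get(g) == p for g, p in zip(graphemes, phonemes))

def route(graphemes, phonemes):
    # In this model, the non-lexical route handles regular words,
    # and the lexical route handles all others.
    return "non-lexical" if is_regular(graphemes, phonemes) else "lexical"

# 'bead' uses the most common value of 'ea' (/ee/), so it counts as regular;
# 'head' uses 'ea' as /e/, so under this definition it counts as irregular.
print(route(["b", "ea", "d"], ["/b/", "/ee/", "/d/"]))  # non-lexical
print(route(["h", "ea", "d"], ["/h/", "/e/", "/d/"]))   # lexical
```

The point of the sketch is only that "regularity" here is a property of the correspondence table, not of the child's actual decoding ability, which is exactly what Debbie's response below takes issue with.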
> Please bear in mind the situation regarding the official definition of ‘irregular’ words in the literature – and the difference that is made when alphabetic code knowledge is taught comprehensively such that elements of the code are not taught as if they are ‘irregular’ but just ‘alternative spellings’ or ‘alternative pronunciations’. In this scenario, I cannot agree with the statement above in red.
> I would say that the official definition [according to what I’ve gleaned from ....] would lead to a huge number of ‘irregular’ words which are, in reality, readily decodable by applying alphabetic code knowledge.
> For example, let’s take the grapheme ‘ea’. If I’ve understood [....] the official definition of ‘irregular’ correctly, any words including ‘ea’ as code for the sound /e/ as in ‘head’, would be classified as ‘irregular’.
> There are many such words, however, which are very common in children’s spoken language. Children may well be able to decode, by applying alphabetic code knowledge (having been taught, or by sub-conscious deduction – or even self-taught deduction), words such as ‘head’, ‘bread’, ‘dead’, ‘feather’, ‘weather’, ‘steady’, ‘ready’, ‘instead’ and so on.
> [I apologise if I’ve misunderstood ....’s statement but this is what I’ve understood by it.]
> In effect, the notion of ‘disobeying the GPC rules’ is different from the way the alphabetic code can be taught via the ‘alternatives’ route – and plenty of decoding practice which makes children enormously able and flexible in their reading.
I agree with this view. I think fluent readers identify regular and irregular correspondences rapidly and in parallel, making no distinction between them, after which their brains link the sounds and access the lexicon. In my view, the lexicon is not purely orthographic, as the original dual-route model has it, but is the one developed to relate the sound, meaning and grammar of spoken language, elaborated in readers by the addition of written information, possibly just spellings. These may be needed to choose among the possible pronunciations that happen to be real words in the lexicon. Thus the fluent reader uses the lexical route for all familiar words, using sound as the key; the non-lexical route is invoked only for unfamiliar ones.
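The sound-keyed lexicon I have in mind can be sketched as follows. The entries and the phoneme notation are invented for illustration; the point is only that lookup proceeds by sound, with the stored spelling used to choose among the real words that share a pronunciation:

```python
# Illustrative sketch (invented data) of a lexicon keyed by sound, in which
# stored spellings disambiguate among homophones such as 'red' and 'read'.
LEXICON = {
    "/red/": [
        {"spelling": "red", "gloss": "colour", "pos": "adjective"},
        {"spelling": "read", "gloss": "past tense of 'read'", "pos": "verb"},
    ],
}

def lexical_route(written_word, pronunciation):
    """Look up candidates by sound, then use the spelling to pick among
    the pronunciations that happen to be real words."""
    candidates = LEXICON.get(pronunciation, [])
    matches = [entry for entry in candidates if entry["spelling"] == written_word]
    return matches[0] if matches else None  # None -> fall back to the non-lexical route

print(lexical_route("read", "/red/")["gloss"])  # past tense of 'read'
print(lexical_route("red", "/red/")["gloss"])   # colour
```

On this sketch, regular and irregular words travel the same path; only a word absent from the lexicon forces the non-lexical route.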
I therefore consider the categorisation of correspondences and words as "regular" or "irregular" unhelpful. I have heard many reports here of the satisfaction that a child just starting to learn by SP gets when s/he sounds out a written word and recognizes it in her/his spoken vocabulary. At that stage s/he has no idea whether the correspondences are regular or irregular, and is unlikely ever to learn of these categories. Disambiguation remains a complicated problem to be solved, because spelling, meaning and syntax are all possible contributors to it.
"... the innovator has as enemies all those who have done well under the old regime, and only lukewarm allies among those who may do well under the new." Niccolo Macchiavelli, "The Prince", Chapter 6