| Literature DB >> 24012145 |
Abstract
Reading is a complex process that draws on a remarkable number of diverse perceptual and cognitive processes. In this review, I provide an overview of computational models of reading, focussing on models of visual word recognition: how we recognise individual words. Early computational models had 'toy' lexicons, could simulate only a narrow range of phenomena, and frequently had fundamental limitations, such as being able to handle only four-letter words. The most recent models can use realistic lexicons, can simulate data from a range of tasks, and can process words of different lengths. These models are the driving force behind much of the empirical work on reading. I discuss how the data have guided model development and, importantly, I also provide guidelines to help interpret and evaluate the contribution the models make to our understanding of how we read.
Keywords: computational modelling; lexical decision; reading; word recognition
Year: 2013 PMID: 24012145 PMCID: PMC3843812 DOI: 10.1016/j.tics.2013.08.003
Source DB: PubMed Journal: Trends Cogn Sci ISSN: 1364-6613 Impact factor: 20.229
Figure 1. Different styles of model. The top panel illustrates a simplified interactive activation model. Lines with arrows denote excitatory connections from letters to words. The lines terminated with circles denote inhibitory connections. Similar words (lexical neighbours) compete via these inhibitory connections. In a Bayesian formulation, words also compete; if the probability or likelihood of one word increases, the probability of other words must decrease. The network and mathematical approaches are much more closely related than they might first appear. Note that the Bayesian formulation must necessarily take account of the prior probability of each word; that is, its frequency.
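The Bayesian competition described in the caption can be sketched in a few lines. In this hypothetical illustration, each candidate word's posterior probability is its likelihood given the percept, weighted by a frequency-based prior; the normalisation step is what makes words compete, since one posterior rising forces the others down. The toy lexicon, frequency counts, and likelihood values below are invented for illustration and are not from the review.

```python
def posterior(likelihoods, frequencies):
    """Combine per-word likelihoods with frequency-based priors.

    likelihoods: dict mapping word -> P(percept | word), assumed given.
    frequencies: dict mapping word -> corpus frequency count.
    Returns a dict of posterior probabilities that sums to 1.
    """
    total_freq = sum(frequencies.values())
    # Prior probability of each word is its relative frequency.
    priors = {w: f / total_freq for w, f in frequencies.items()}
    # Bayes' rule (unnormalised): likelihood x prior.
    unnormalised = {w: likelihoods[w] * priors[w] for w in likelihoods}
    # Normalisation: raising one word's probability lowers the others'.
    z = sum(unnormalised.values())
    return {w: p / z for w, p in unnormalised.items()}

# Toy three-word lexicon with invented frequencies and likelihoods.
frequencies = {"cat": 900, "cap": 80, "car": 1020}
likelihoods = {"cat": 0.6, "cap": 0.3, "car": 0.1}
probs = posterior(likelihoods, frequencies)
```

Even with a lower likelihood, a high-frequency word can overtake a low-frequency one, which is one way the frequency prior shows up behaviourally.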
Major computational models of reading organised in terms of their primary focus ^a,b
| Model | Style | Task | Phenomena | Large lexicon |
|---|---|---|---|---|
| IA | IA | PI | Word-superiority effect | |
| Multiple read-out | IA | PI, LD | Word-superiority effect | |
| SCM | IA | LD, MP | Letter order | |
| BR | Math/comp | LD, MP | Word frequency, letter order, RT distribution | √ |
| LTRS | Math/comp | MP, PI | Letter order | |
| Overlap | Math/comp | PI | Letter order | |
| Diffusion model | Math/comp | LD | RT distribution, word frequency | |
| SERIOL | Math/comp | LD, MP | Letter order | |
| CDP++ | Localist/symbolic | RA | Reading aloud | √ |
| DRC | IA | RA, LD | Reading aloud | |
| Triangle | Distributed connectionist | RA | Reading aloud | |
| Sequence encoder | Distributed connectionist | RA | Reading aloud | √ |
| Junction model | Distributed connectionist | RA | Reading aloud | √ |
| E-Z reader | Symbolic | R | Eye movements | |
| SWIFT | Symbolic | R | Eye movements | |
| Amorphous discriminative learning | Symbolic network | Self-paced reading, LD | Morphology | √ |
^a The table also indicates the modelling style or framework, the main task that the model simulates, the main phenomena that the model simulates (not exhaustive), and whether the model uses a realistically sized lexicon. Note that the review concentrates on ‘Models of visual word recognition’.
^b Abbreviations: Math/comp, mathematical or computational; LD, lexical decision; PI, perceptual identification; RA, reading aloud; MP, masked priming; R, natural reading.
Figure 2. Three different representations of letter order. The Spatial Coding Model (top) represents letter order as a gradient of activation over letter nodes that increases with letter position. The noisy channel and overlap models (middle) both assume that there is some uncertainty in the location of letters. That is, there is some probability that T might have come before S. Open-bigram models (bottom) code letter order as a set of bigrams.
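The open-bigram scheme in the bottom panel is simple enough to sketch directly. In this illustrative version (the maximum gap of two intervening letters is a common choice in the open-bigram literature, not a detail stated here), a word is coded as the set of ordered letter pairs whose members are at most `max_gap` letters apart. The function name and parameter are my own, not from any specific model.

```python
def open_bigrams(word, max_gap=2):
    """Code a word as a set of ordered letter pairs (open bigrams).

    A pair (word[i], word[j]) is included when j > i and at most
    `max_gap` letters intervene between the two positions.
    """
    pairs = set()
    for i in range(len(word)):
        # j ranges over positions with 0..max_gap intervening letters.
        for j in range(i + 1, min(i + max_gap + 2, len(word))):
            pairs.add(word[i] + word[j])
    return pairs
```

This coding makes transposed-letter similarity fall out naturally: `open_bigrams("form")` and `open_bigrams("from")` share five of their six bigrams, so the two strings look highly similar to the model, consistent with the letter-order effects listed in the table.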