| Literature DB >> 25528091 |
Amy M Lieberman, Arielle Borovsky, Marla Hatrak, Rachel I Mayberry.
Abstract
Sign language comprehension requires visual attention to the linguistic signal and visual attention to referents in the surrounding world, whereas these processes are divided between the auditory and visual modalities for spoken language comprehension. Additionally, the age-onset of first language acquisition and the quality and quantity of linguistic input for deaf individuals is highly heterogeneous, which is rarely the case for hearing learners of spoken languages. Little is known about how these modality and developmental factors affect real-time lexical processing. In this study, we ask how these factors impact real-time recognition of American Sign Language (ASL) signs using a novel adaptation of the visual world paradigm in deaf adults who learned sign from birth (Experiment 1), and in deaf adults who were late-learners of ASL (Experiment 2). Results revealed that although both groups of signers demonstrated rapid, incremental processing of ASL signs, only native signers demonstrated early and robust activation of sublexical features of signs during real-time recognition. Our findings suggest that the organization of the mental lexicon into units of both form and meaning is a product of infant language learning and not the sensory and motor modality through which the linguistic signal is sent and received. (c) 2015 APA, all rights reserved.Entities:
Year: 2014 PMID: 25528091 PMCID: PMC4476960 DOI: 10.1037/xlm0000088
Source DB: PubMed Journal: J Exp Psychol Learn Mem Cogn ISSN: 0278-7393 Impact factor: 3.051