C.R. Lyness, B. Woll, R. Campbell, V. Cardin.
Abstract
Cochlear implants (CI) are the most successful intervention for ameliorating hearing loss in severely or profoundly deaf children. Despite this, the educational performance of children with CI continues to lag behind that of their hearing peers. On the basis of animal models and human neuroimaging studies, it has been proposed that the integrative functions of auditory cortex are compromised by crossmodal plasticity. This has been argued to result partly from the use of a visual language. Here we argue that 'cochlear implant sensitive periods' comprise both auditory and language sensitive periods, and thus cannot be fully described with animal models. Despite prevailing assumptions, there is no evidence linking the use of a visual language to poorer CI outcome. Crossmodal reorganisation of auditory cortex occurs regardless of the compensatory strategies, such as sign language, used by the deaf person. In contrast, language deprivation during early sensitive periods has been repeatedly linked to poor language outcomes. Language sensitive periods have largely been ignored when considering variation in CI outcome, leading to ill-founded recommendations concerning visual language in CI habilitation.
Keywords: Cochlear implant; Crossmodal reorganisation; Deafness; Delayed/insecure language acquisition; Functional decoupling
Year: 2013 PMID: 23999083 PMCID: PMC3989033 DOI: 10.1016/j.neubiorev.2013.08.011
Source DB: PubMed Journal: Neurosci Biobehav Rev ISSN: 0149-7634 Impact factor: 8.989
What is visual language? For the purposes of this review article, we define visual language as language, or a language derivative, perceived in the visual modality.
| Visual language | Explanation | Notes |
|---|---|---|
| Speech Reading (Lip Reading) | Deducing the content of speech from viewing orofacial gestures. | Information about articulation is only partially visible: the tongue, the major articulator, is often hidden within the mouth. Despite this, some people achieve excellent speechreading. |
| Visual Phonics (Cued Speech) | Specific, consistent manual actions are used simultaneously with seen speech to provide disambiguating phonological information. | This system was designed to support spoken language between hearing caregivers and deaf children. |
| Sign Supported Speech (SSS) | Speechreading accompanied by manual signs. Unlike sign languages, the signs are not part of any formalised grammatical system. Unlike Cued Speech, the signs do not provide discrete phonological information. The signs follow the order of the spoken language, are typically used to indicate lexical items, and can be considered a means of providing additional semantic information to the perceiver. | SSS is used to communicate with people who may be deaf or language-impaired, or who have problems with speech articulation. Although developed from distinct theoretical bases, Simultaneous Communication (SC) and Total Communication (TC) can be considered forms of SSS, since both afford a means for hearing and deaf people to communicate using a mixture of speech and signs. TC in particular may be implicated in language rehabilitation with CI. |
| Sign Language | Sign languages are the natural languages of deaf communities. Actions of the hands, arms, upper torso, and face (including mouth actions) are all used in sign languages. Approximately 200 sign languages have been identified, reflecting spontaneous development within deaf communities. They have their own grammars, distinct from the spoken language of the surrounding community. | Sign languages, unlike the other forms of visual communication above, demonstrate key linguistic universals in the domains of phonology, semantics, and syntax. When acquired as a first language, sign languages and spoken languages are processed in similar brain regions. |