| Literature DB >> 26981027 |
Jennie E Pyers, Pamela Perniss, Karen Emmorey.
Abstract
Sign languages express viewpoint-dependent spatial relations (e.g., left, right) iconically but must conventionalize from whose viewpoint the spatial relation is being described, the signer's or the perceiver's. In Experiment 1, ASL signers and sign-naïve gesturers expressed viewpoint-dependent relations egocentrically, but only signers successfully interpreted the descriptions non-egocentrically, suggesting that viewpoint convergence in the visual modality emerges with language conventionalization. In Experiment 2, we observed that the cost of adopting a non-egocentric viewpoint was greater for producers than for perceivers, suggesting that sign languages have converged on the most cognitively efficient means of expressing left-right spatial relations. We suggest that non-linguistic cognitive factors such as visual perspective-taking and motor embodiment may constrain viewpoint convergence in the visual-spatial modality.
Keywords: Spatial language; gesture; sign language; viewpoint
Year: 2015 PMID: 26981027 PMCID: PMC4788639 DOI: 10.1080/13875868.2014.1003933
Source DB: PubMed Journal: Spat Cogn Comput ISSN: 1387-5868