| Literature DB >> 24706789 |
Jennifer Culbertson, David Adger.
Abstract
Although it is widely agreed that learning the syntax of natural languages involves acquiring structure-dependent rules, recent work on acquisition has nevertheless attempted to characterize the outcome of learning primarily in terms of statistical generalizations about surface distributional information. In this paper we investigate whether surface statistical knowledge or structural knowledge of English is used to infer properties of a novel language under conditions of impoverished input. We expose learners to artificial-language patterns that are equally consistent with two possible underlying grammars--one more similar to English in terms of the linear ordering of words, the other more similar on abstract structural grounds. We show that learners' grammatical inferences overwhelmingly favor structural similarity over preservation of superficial order. Importantly, the relevant shared structure can be characterized in terms of a universal preference for isomorphism in the mapping from meanings to utterances. Whereas previous empirical support for this universal has been based entirely on data from cross-linguistic language samples, our results suggest it may reflect a deep property of the human cognitive system--a property that, together with other structure-sensitive principles, constrains the acquisition of linguistic knowledge.
Keywords: artificial grammar learning; learning biases; semantic scope; transitional probabilities; typology
Year: 2014 PMID: 24706789 PMCID: PMC4000832 DOI: 10.1073/pnas.1320525111
Source DB: PubMed Journal: Proc Natl Acad Sci U S A ISSN: 0027-8424 Impact factor: 11.205