Martin Zettersten, Christine E. Potter, Jenny R. Saffran.
Abstract
Non-adjacent dependencies are ubiquitous in language, but difficult to learn in artificial language experiments in the lab. Previous research suggests that non-adjacent dependencies are more learnable given structural support in the input - for instance, in the presence of high variability between dependent items. However, not all non-adjacent dependencies occur in supportive contexts. How are such regularities learned? One possibility is that learning one set of non-adjacent dependencies can highlight similar structures in subsequent input, facilitating the acquisition of new non-adjacent dependencies that are otherwise difficult to learn. In three experiments, we show that prior exposure to learnable non-adjacent dependencies - i.e., dependencies presented in a learning context that has been shown to facilitate discovery - improves learning of novel non-adjacent regularities that are typically not detected. These findings demonstrate how the discovery of complex linguistic structures can build on past learning in supportive contexts.
Keywords: Artificial language learning; Grammar; Language learning; Non-adjacent dependencies
Year: 2020 PMID: 32623134 PMCID: PMC7376744 DOI: 10.1016/j.cognition.2020.104283
Source DB: PubMed Journal: Cognition ISSN: 0010-0277