Simultaneous segmentation and generalisation of non-adjacent dependencies from continuous speech.

Rebecca L. A. Frost, Padraic Monaghan.

Abstract

Language learning requires mastering multiple tasks, including segmenting speech to identify words, and learning the syntactic role of these words within sentences. A key question in language acquisition research is the extent to which these tasks are sequential or simultaneous, and consequently whether they may be driven by distinct or similar computations. We explored a classic artificial language learning paradigm, where the language structure is defined in terms of non-adjacent dependencies. We show that participants are able to use the same statistical information at the same time both to segment continuous speech to identify words and to generalise over the structure, when the generalisations were over novel speech that the participants had not previously experienced. We suggest that, in the absence of evidence to the contrary, the most economical explanation for these effects is that speech segmentation and grammatical generalisation depend on similar statistical processing mechanisms.
Copyright © 2015 Elsevier B.V. All rights reserved.

Keywords:  Artificial grammar learning; Grammatical processing; Language acquisition; Speech segmentation; Statistical learning

Year:  2015        PMID: 26638049     DOI: 10.1016/j.cognition.2015.11.010

Source DB:  PubMed          Journal:  Cognition        ISSN: 0010-0277


Related articles (19 in total)

1.  The role of consolidation in learning context-dependent phonotactic patterns in speech and digital sequence production.

Authors:  Nathaniel D Anderson; Gary S Dell
Journal:  Proc Natl Acad Sci U S A       Date:  2018-03-19       Impact factor: 11.205

2.  Tuning in to non-adjacencies: Exposure to learnable patterns supports discovering otherwise difficult structures.

Authors:  Martin Zettersten; Christine E Potter; Jenny R Saffran
Journal:  Cognition       Date:  2020-07-02

3.  Kindergarteners Use Cross-Situational Statistics to Infer the Meaning of Grammatical Elements.

Authors:  Sybren Spit; Sible Andringa; Judith Rispens; Enoch O Aboh
Journal:  J Psycholinguist Res       Date:  2022-07-06

4.  What Constitutes a Phrase in Sound-Based Music? A Mixed-Methods Investigation of Perception and Acoustics.

Authors:  Kirk N Olsen; Roger T Dean; Yvonne Leung
Journal:  PLoS One       Date:  2016-12-20       Impact factor: 3.240

5.  Domain-general mechanisms for speech segmentation: The role of duration information in language learning.

Authors:  Rebecca L A Frost; Padraic Monaghan; Tomoko Tatsumi
Journal:  J Exp Psychol Hum Percept Perform       Date:  2016-11-28       Impact factor: 3.332

6.  Sleep-Driven Computations in Speech Processing.

Authors:  Rebecca L A Frost; Padraic Monaghan
Journal:  PLoS One       Date:  2017-01-05       Impact factor: 3.240

7.  [Review] Neurophysiological Markers of Statistical Learning in Music and Language: Hierarchy, Entropy, and Uncertainty.

Authors:  Tatsuya Daikoku
Journal:  Brain Sci       Date:  2018-06-19

8.  Two for the price of one: Concurrent learning of words and phonotactic regularities from continuous speech.

Authors:  Viridiana L Benitez; Jenny R Saffran
Journal:  PLoS One       Date:  2021-06-11       Impact factor: 3.240

9.  Regularity detection under stress: Faster extraction of probability-based regularities.

Authors:  Eszter Tóth-Fáber; Karolina Janacsek; Ágnes Szőllősi; Szabolcs Kéri; Dezso Nemeth
Journal:  PLoS One       Date:  2021-06-15       Impact factor: 3.240

10.  Single, but not dual, attention facilitates statistical learning of two concurrent auditory sequences.

Authors:  Tatsuya Daikoku; Masato Yumoto
Journal:  Sci Rep       Date:  2017-08-31       Impact factor: 4.379
