
Regularization of languages by adults and children: A mathematical framework.

Jacquelyn L Rische, Natalia L Komarova.

Abstract

The fascinating ability of humans to modify linguistic input and "create" a language has been widely discussed. The work of Newport and colleagues has demonstrated that both children and adults can process inconsistent linguistic input and "improve" it by making it more consistent. Hudson Kam and Newport (2009) studied artificial miniature language acquisition from an inconsistent source and showed that (i) children are better at language regularization than adults, and (ii) adults can also regularize, depending on the structure of the input. In this paper we develop a reinforcement-learning algorithm that exhibits the patterns reported in Hudson Kam and Newport (2009) and suggests a way to explain them. It turns out that, to capture the differences between children's and adults' learning patterns, we need to introduce a certain asymmetry in the learning algorithm: we must assume that the learner's reaction differs depending on whether or not the source's input coincides with the learner's internal hypothesis. We interpret this result in terms of children's and adults' differing reactions to implicit, expectation-based evidence, positive or negative. We propose that one mechanism contributing to children's ability to regularize inconsistent input is their heightened sensitivity to positive evidence rather than to (implicit) negative evidence. In our model, regularization arises naturally as a consequence of children's stronger reaction to evidence supporting their preferred hypothesis. In adults, the ability to adequately process implicit negative evidence prevents them from regularizing the inconsistent input, resulting in a weaker degree of regularization.
Copyright © 2015 Elsevier Inc. All rights reserved.
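The asymmetric mechanism described in the abstract can be sketched as a simple linear reward-penalty learner. This is a hypothetical illustration, not the authors' published model: the exact update rule, the parameter values, and the function name `learn` are all assumptions introduced here.

```python
import random

def learn(q, a, b, steps=20000, seeds=range(10)):
    """Learn to produce the majority form from an inconsistent source.

    q : probability that the source produces the majority form
    a : update rate when the input matches the learner's sampled
        hypothesis (implicit positive, i.e. confirming, evidence)
    b : update rate on a mismatch (implicit negative evidence)
    Returns the learner's final production probability p, averaged
    over independent runs (one per seed).
    """
    finals = []
    for seed in seeds:
        rng = random.Random(seed)
        p = 0.5                     # initial production probability
        for _ in range(steps):
            x = rng.random() < p    # learner's hypothesis on this trial
            s = rng.random() < q    # form produced by the source
            rate = a if s == x else b   # asymmetry: confirmation vs. not
            # move p toward the observed form s at the chosen rate
            p += rate * (1 - p) if s else -rate * p
        finals.append(p)
    return sum(finals) / len(finals)

# "Child": strong reaction to confirming evidence, weak to implicit
# negative evidence -> the majority form gets boosted (regularization).
child = learn(q=0.7, a=0.02, b=0.002)
# "Adult": symmetric reactions -> expected drift a*(q - p) per trial,
# so p settles near q (frequency matching).
adult = learn(q=0.7, a=0.02, b=0.02)
print(f"child: {child:.2f}, adult: {adult:.2f} (source q = 0.7)")
```

With symmetric rates (a = b) the expected change in p per trial works out to a(q − p), whose fixed point is p = q, i.e. frequency matching. When b ≪ a, the dominant drift term is a·p(1 − p)(2q − 1), which pushes p toward 1 whenever q > 1/2, so the inconsistent majority form is boosted, mirroring the children's behavior in the abstract.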


Keywords:  Frequency boosting; Frequency matching; Mathematical modeling; Reinforcement algorithms


Year:  2015        PMID: 26580218     DOI: 10.1016/j.cogpsych.2015.10.001

Source DB:  PubMed          Journal:  Cogn Psychol        ISSN: 0010-0285            Impact factor:   3.468


  5 in total

Review 1.  Personalized learning: From neurogenetics of behaviors to designing optimal language training.

Authors:  Patrick C M Wong; Loan C Vuong; Kevin Liu
Journal:  Neuropsychologia       Date:  2016-10-05       Impact factor: 3.139

2.  Reconsidering retrieval effects on adult regularization of inconsistent variation in language.

Authors:  Carla L Hudson Kam
Journal:  Lang Learn Dev       Date:  2019-06-28

3.  Reinforcement Learning Explains Conditional Cooperation and Its Moody Cousin.

Authors:  Takahiro Ezaki; Yutaka Horita; Masanori Takezawa; Naoki Masuda
Journal:  PLoS Comput Biol       Date:  2016-07-20       Impact factor: 4.475

4.  Language learning, language use and the evolution of linguistic variation.

Authors:  Kenny Smith; Amy Perfors; Olga Fehér; Anna Samara; Kate Swoboda; Elizabeth Wonnacott
Journal:  Philos Trans R Soc Lond B Biol Sci       Date:  2017-01-05       Impact factor: 6.237

5.  Taking the chance!-Interindividual differences in rule-breaking.

Authors:  Leidy Cubillos-Pinilla; Franziska Emmerling
Journal:  PLoS One       Date:  2022-10-07       Impact factor: 3.752

