Pyeong Whan Cho, Emily Szkudlarek, Whitney Tabor.
Abstract
Learning is typically understood as a process in which the behavior of an organism is progressively shaped until it closely approximates a target form. It is easy to comprehend how a motor skill or a vocabulary can be progressively learned: in each case, one can conceptualize a series of intermediate steps which lead to the formation of a proficient behavior. With grammar, it is more difficult to think in these terms. For example, center-embedding recursive structures seem to involve a complex interplay between multiple symbolic rules which have to be in place simultaneously for the system to work at all, so it is not obvious how the mechanism could gradually come into being. Here, we offer empirical evidence from a new artificial language (or "artificial grammar") learning paradigm, Locus Prediction, that, despite the conceptual conundrum, recursion acquisition occurs gradually, at least for a simple formal language. In particular, we focus on a variant of the simplest recursive language, a^n b^n, and find evidence that (i) participants trained on two levels of structure (essentially ab and aabb) generalize to the next higher level (aaabbb) more readily than participants trained on one level of structure (ab) combined with a filler sentence; nevertheless, they do not generalize immediately; (ii) participants trained up to three levels (ab, aabb, aaabbb) generalize more readily to four levels than participants trained on two levels generalize to three; (iii) when we present the levels in succession, starting with the lower levels and including more and more of the higher levels, participants show evidence of transitioning between the levels gradually, exhibiting intermediate patterns of behavior on which they were not trained; (iv) the intermediate patterns of behavior are associated with perturbations of an attractor in the sense of dynamical systems theory. We argue that all of these behaviors indicate a theory of mental representation in which recursive systems lie on a continuum of grammar systems which are organized so that grammars that produce similar behaviors are near one another, and that people learning a recursive system are navigating progressively through the space of these grammars.
Keywords: artificial grammar learning; center embeddings; context-free grammar; counting recursion; graded state machine; sequence learning
Year: 2016 PMID: 27375543 PMCID: PMC4897795 DOI: 10.3389/fpsyg.2016.00867
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
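The target language a^n b^n discussed in the abstract is the canonical counting-recursion language: no finite-state device can recognize it for unbounded n, but a single counter suffices. As an illustrative sketch (not taken from the paper), a simple recognizer in Python:

```python
def is_anbn(s: str) -> bool:
    """Return True if s is in the language a^n b^n for some n >= 1.

    Recognition requires counting: the number of b's must match the
    number of a's, which is why a^n b^n is called a counting-recursion
    language and lies beyond finite-state power for unbounded n.
    """
    n = 0
    while n < len(s) and s[n] == "a":
        n += 1
    return n >= 1 and s[n:] == "b" * n


# The three levels of structure mentioned in the abstract:
assert is_anbn("ab") and is_anbn("aabb") and is_anbn("aaabbb")
assert not is_anbn("aab") and not is_anbn("abab")
```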
Figure 1. A marked-up initial display for the Locus Prediction task. In the actual display, the boxes were not numbered.
Finite languages used in Experiment 1 (L.
| S1 = 1 2 3 4 |
| S2 = 1 1 2 3 4 2 3 4 |
| S3 = 1 1 1 2 3 4 2 3 4 2 3 4 |
| S4 = 1 1 1 1 2 3 4 2 3 4 2 3 4 2 3 4 |
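As the table shows, a level-n sentence consists of n presses on box 1 followed by n repetitions of the block 2 3 4, i.e., an instance of a^n b^n with a = 1 and b = 2 3 4. A minimal sketch (my reconstruction, not the authors' materials) that generates S1 through S4:

```python
def sentence(n: int) -> list[int]:
    """Level-n sentence of the Locus Prediction languages:
    n presses on box 1 followed by n repetitions of the block 2 3 4."""
    return [1] * n + [2, 3, 4] * n


for n in range(1, 5):
    print(f"S{n} =", " ".join(map(str, sentence(n))))
# S1 = 1 2 3 4
# S2 = 1 1 2 3 4 2 3 4
# S3 = 1 1 1 2 3 4 2 3 4 2 3 4
# S4 = 1 1 1 1 2 3 4 2 3 4 2 3 4 2 3 4
```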
Figure 2. Trajectories of average sentence prediction accuracy (SentAcc), separated by sentence type (SentType), in Experiment 1 (top and middle panels) and Experiment 2 (bottom panel). The mean sentence prediction accuracy represents the proportion of participants who correctly processed all deterministic transitions of a level-n sentence at a particular position in a sequence of sentences.
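As a rough illustration of the caption's measure (an assumed computation, not taken from the paper's analysis code): a participant is scored as accurate on a sentence only if every deterministic transition in it was predicted correctly, and SentAcc is the proportion of such participants.

```python
import numpy as np


def sent_acc(correct: np.ndarray) -> float:
    """Sentence prediction accuracy for one sentence at one position
    in the training sequence.

    correct: boolean array of shape (participants, deterministic transitions);
    a participant counts as accurate only if all deterministic transitions
    of the sentence were predicted correctly.
    """
    return float(correct.all(axis=1).mean())


# Hypothetical data: 3 of 4 participants process the sentence perfectly.
trials = np.array([[1, 1, 1],
                   [1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]], dtype=bool)
print(sent_acc(trials))  # 0.75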
Model profiles of trial-level prediction accuracy motivated by symbolic grammars.
| Model | S1 | S2 | S3 | S4 |
| Finite1 | * 1 1 1 | ** 1 1 0 1 1 1 | *** 1 1 0 1 1 0 1 1 1 | **** 1 1 0 1 1 0 1 1 0 1 1 1 |
| Finite2 | | ** 1 1 1 1 1 1 | *** 1 1 1 1 1 0 1 1 1 | **** 1 1 1 1 1 0 1 1 0 1 1 1 |
| Finite3 | | | *** 1 1 1 1 1 1 1 1 1 | **** 1 1 1 1 1 1 1 1 0 1 1 1 |
| Finite4 | | | | **** 1 1 1 1 1 1 1 1 1 1 1 1 |
Note: A 0 indicates that the participant got the prediction wrong; a 1 indicates that the participant got it right. The asterisks at the word-1 positions indicate that predictions of nondeterministic transitions are ignored.
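The profiles in this table follow a simple pattern: within-block transitions (from 2 to 3 and from 3 to 4) are always correct, word-1 positions are asterisked, and a Finite-n learner errs only after the 4 that closes blocks n through m-1 of a level-m sentence, where it predicts the sentence will end although another 2 3 4 block follows. A sketch that regenerates the profiles under this reading (my reconstruction of the table's logic, not the authors' code):

```python
def finite_profile(model_level: int, sent_level: int) -> str:
    """Trial-level prediction profile of a Finite-n model on a level-m sentence.

    '*' marks a word-1 position (nondeterministic prediction, ignored);
    '1' is a correct prediction, '0' an incorrect one. For the cells shown
    in the table (m >= n), errors occur only after the 4 that closes blocks
    n through m-1, where the model predicts the sentence will end but
    another 2 3 4 block actually follows.
    """
    profile = ["*" * sent_level]
    for block in range(1, sent_level + 1):
        model_says_continue = block < model_level
        truth_is_continue = block < sent_level
        profile += ["1", "1", "1" if model_says_continue == truth_is_continue else "0"]
    return " ".join(profile)


print(finite_profile(1, 2))  # ** 1 1 0 1 1 1
print(finite_profile(2, 3))  # *** 1 1 1 1 1 0 1 1 1
print(finite_profile(4, 4))  # **** 1 1 1 1 1 1 1 1 1 1 1 1
```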
Figure 3. Sample trajectories of individual grammar changes. The three vertical lines indicate where the first instances of S2, S3, and S4 were introduced.
Figure 4. Plot of mean prediction accuracy across different words of S. G0(G2, G3) corresponds to the set of G0 observed between the first G2 and the first G3 in each individual; G0(G3, G4/GR) corresponds to the set of G0 observed between the first G3 and the first G4 or GR in each individual.