Abstract
A model is proposed to characterize the type of knowledge acquired in artificial grammar learning (AGL). In particular, Shannon entropy is employed to compute the complexity of different test items in an AGL task, relative to the training items. According to this model, the more predictable a test item is from the training items, the more likely it is that this item should be selected as compatible with the training items. The predictions of the entropy model are explored in relation to the results from several previous AGL datasets and compared to other AGL measures. This particular approach in AGL resonates well with similar models in categorization and reasoning which also postulate that cognitive processing is geared towards the reduction of entropy.
Keywords: artificial grammar learning; chunking models; entropy; information theory
Year: 2010 PMID: 21607072 PMCID: PMC3095384 DOI: 10.3389/fpsyg.2010.00016
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
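The summed and average bigram/trigram entropy measures referred to in the tables below can be illustrated as n-gram surprisal under probabilities estimated from the training items: a test item composed of n-grams that are frequent in training is more predictable and so has lower entropy. This is a minimal sketch of one plausible reading of such measures, not the paper's exact formulation — in particular, the floor probability assigned to unseen n-grams and the normalization are our assumptions, and all function names are ours:

```python
from collections import Counter
from math import log2

def ngrams(s, n):
    """All contiguous n-grams of a string."""
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def train_ngram_probs(training_items, n):
    """Estimate n-gram probabilities from the training strings."""
    counts = Counter(g for item in training_items for g in ngrams(item, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def summed_entropy(test_item, probs, n, floor=1e-6):
    """Summed n-gram surprisal (-log2 p) of a test item; unseen
    n-grams receive a small floor probability (an assumption here)."""
    return sum(-log2(probs.get(g, floor)) for g in ngrams(test_item, n))

def average_entropy(test_item, probs, n, floor=1e-6):
    """Summed surprisal divided by the number of n-grams in the item."""
    grams = ngrams(test_item, n)
    return summed_entropy(test_item, probs, n, floor) / len(grams)
```

For example, with training items `["ABAB", "ABBA"]`, the test item "ABAB" (built entirely from frequent training bigrams) receives a lower average bigram entropy than "BABB", matching the model's prediction that more predictable items are more likely to be endorsed.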
Figure 1. This is the grammar used in Knowlton and Squire (1996). From Pothos and Bailey (2000), published by APA. Reprinted with permission.
Figure 2. Examples of the types of stimuli used in Pothos et al. (2006).
Figure 3. Examples of the types of stimuli used in Pothos and Bailey (2000).
The correlation of the four entropy measures for AGL, and the other AGL performance measures considered in this work, with average grammaticality endorsements for the test items in the nine AGL conditions of Pothos et al. (2006) and Pothos and Bailey (2000). Note that there were 50 test items in the Pothos et al. (2006) conditions and 32 in the Pothos and Bailey (2000) ones.
| | Overall model | Summed bigram entropy | Summed trigram entropy | Average bigram entropy | Average trigram entropy | Grammaticality | Global Ch. Str. | Anchor Ch. Str. | Edit distance | Length |
|---|---|---|---|---|---|---|---|---|---|---|
| Letter strings | 0.15 | −0.46** | − | − | 0.48** | 0.57** | −0.32* | 0.07 | | |
| Embedded shapes | 0.09 | −0.26 | −0.25 | −0.47** | 0.36* | −0.36* | 0.26 | | | |
| Sequences of cities | 0.19 | −0.61** | − | −0.67** | 0.37** | 0.58** | −0.22 | −0.10 | | |
| Letter strings | −0.00 | −0.08 | −0.08 | −0.07 | 0.12 | −0.06 | 0.09 | | | |
| Embedded shapes | 0.23 | 0.01 | 0.04 | 0.04 | −0.06 | 0.05 | −0.03 | | | |
| Sequences of cities | 0.27 | − | −0.17 | − | 0.23 | −0.16 | 0.11 | 0.06 | −0.09 | |
| Embedded shapes | 0.42* | 0.17 | −0.05 | − | 0.31 | 0.27 | 0.30 | 0.07 | | |
| Lines | −0.18 | −0.43* | −0.28 | − | 0.28 | 0.43* | −0.29 | −0.10 | | |
| Sequences of shapes | 0.02 | −0.11 | −0.02 | −0.20 | 0.25 | −0.35 | 0.05 | | | |
Note: An ‘*’ flags a correlation significant at the 0.05 level and a ‘**’ flags one significant at the 0.01 level. Italic entries simply indicate the highest and next highest correlations with grammaticality endorsements in a particular condition (for the entropy measures, we highlighted only correlations which are in the expected direction). The ‘overall model’ column shows the F test for a regression model to predict grammaticality endorsements on the basis of all AGL performance measures entered concurrently.
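The chunk strength measures in the table above ("Global Ch. Str." and "Anchor Ch. Str.") are standard AGL benchmarks: global chunk strength is commonly computed as the average training-set frequency of all of a test item's bigram and trigram fragments, while anchor chunk strength averages only the initial and final fragments. A minimal sketch of these common formulations — the exact weighting used in the original studies may differ, and all function names here are ours:

```python
from collections import Counter

def chunks(s):
    """All bigram and trigram fragments of a string."""
    return ([s[i:i + 2] for i in range(len(s) - 1)] +
            [s[i:i + 3] for i in range(len(s) - 2)])

def _training_chunk_freq(training_items):
    """Frequency of every bigram/trigram chunk across the training items."""
    return Counter(c for item in training_items for c in chunks(item))

def global_chunk_strength(test_item, training_items):
    """Mean training frequency of all the test item's chunks."""
    freq = _training_chunk_freq(training_items)
    cs = chunks(test_item)
    return sum(freq[c] for c in cs) / len(cs)

def anchor_chunk_strength(test_item, training_items):
    """Mean training frequency of the initial and final chunks only."""
    freq = _training_chunk_freq(training_items)
    anchors = [test_item[:2], test_item[:3], test_item[-2:], test_item[-3:]]
    return sum(freq[a] for a in anchors) / len(anchors)
```

Unlike the entropy measures, which penalize unpredictable items, chunk strength rewards items built from frequently seen fragments — which is why the two families of measures tend to correlate with grammaticality endorsements in opposite directions in the table above.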
The correlations of the entropy measures and the other measures of AGL performance for the Reber and Allen (1978) and Knowlton and Squire (1996) grammars.
| | Summed bigram entropy | Summed trigram entropy | Average bigram entropy | Average trigram entropy |
|---|---|---|---|---|
| Grammaticality | 0.15/0 | −0.67**/−0.33 | −0.68**/−0.54** | −0.80**/−0.70** |
| Global Ch. Str. | 0.06/−0.32 | −0.38*/−0.53* | −0.39**/−0.07 | −0.67**/−0.43* |
| Anchor Ch. Str. | −0.04/0.25 | −0.54**/0.03 | −0.43**/0.06 | −0.65**/−0.23 |
| Edit distance | 0.19/0.51** | 0.37**/0.54** | 0.16/0.02 | 0.22/0.17 |
| Length | 0.12/0.96** | 0.52**/0.74** | 0.09/0.06 | −0.11/−0.01 |
Note: Each cell of the table shows the correlation between an entropy measure and a standard AGL measure for the Reber and Allen grammar (first number) and the corresponding correlation for the Knowlton and Squire grammar (second number). Correlations which are significant at the 0.05 level are flagged with an ‘*’ and correlations which are significant at the 0.01 level are flagged with an ‘**’.