Abstract
Much of cognitive psychology is premised on the distinction between automatic and intentional processes, but the distinction often remains vague in practice, and alternative explanations are often not followed through. For example, Hendricks, Conway, and Kellogg (Journal of Experimental Psychology: Learning, Memory, and Cognition, 39, 1491-1500, 2013) found that dual tasks at training versus at test dissociated performance in two different artificial grammar learning tasks. This was taken as evidence for underlying automatic and intentional processes. In this article, a different explanation is considered, based on test learning and similarity, in which participants are assumed to update their knowledge at test. Contrasting formal memory models of test learning are implemented, and it is concluded that the models account for the relevant dissociations without assuming a distinction between automatic and intentional processes.
Keywords: Dual task; Exemplar model; Memory; Test learning
Year: 2020 PMID: 32542480 PMCID: PMC7547038 DOI: 10.3758/s13423-020-01761-4
Source DB: PubMed Journal: Psychon Bull Rev ISSN: 1069-9384
Model parameters, values, and their meaning
| Parameter | Value | Meaning |
|---|---|---|
| Ltrain | Dual task: {.1, .2, .3}; Single task: {.8, .9, 1} | Probability of encoding a feature correctly at training |
| Ltest | Dual task: {.1, .2, .3}; Single task: {.8, .9, 1} | Probability of encoding a feature correctly at test |
| | {2, 3, 4, 5} | Number of sampled components of a sequence |
| | 2 (bigrams only) | Encoded |
| | Set to match endorsement rates | Decision criterion |
| | 100 | Dimensionality of letter vectors |
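The encoding parameters Ltrain and Ltest can be read as per-feature encoding probabilities: under a dual task, each sampled component of a sequence is stored correctly with low probability (.1-.3), and under a single task with high probability (.8-1). A minimal sketch of this idea, assuming bigram features and a fixed number of sampled components (all function and variable names here are illustrative, not taken from the article):

```python
import random

def encode_sequence(sequence, l_encode, n_components=3, rng=None):
    """Sketch: sample bigram components of a letter sequence and store each
    one correctly with probability l_encode (Ltrain or Ltest); a failed
    encoding is stored as a lost (None) feature. Illustrative only."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    bigrams = [sequence[i:i + 2] for i in range(len(sequence) - 1)]
    sampled = rng.sample(bigrams, min(n_components, len(bigrams)))
    trace = []
    for bg in sampled:
        if rng.random() < l_encode:
            trace.append(bg)    # feature encoded correctly
        else:
            trace.append(None)  # encoding failure
    return trace

# Dual-task training (low encoding probability) vs. single task (high)
dual_trace = encode_sequence("MTVRXM", l_encode=0.2)
single_trace = encode_sequence("MTVRXM", l_encode=0.9)
```

On this reading, the dual-task manipulation degrades the memory trace itself rather than switching between qualitatively different (automatic vs. intentional) processes.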
Bayes factors (BFs) and maximum likelihood ratios (MLRs) for standard and transfer artificial grammar learning (AGL)
| Model comparison | BF (standard) | MLR (standard) | BF (transfer) | MLR (transfer) |
|---|---|---|---|---|
| HEM-TC/HEM-T | 4.37 | 1.67 | - | - |
| HEM-TC/HEM-C | - | - | 1.30 | 1.24 |
| HEM-T/HEM-C | - | - | - | - |
Note: Empty cells involve model comparisons that include models with zero, or very close to zero, likelihood across the parameter space. These models are not considered further.
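One common way to obtain both statistics from likelihoods evaluated on a parameter grid is to take the Bayes factor as the ratio of mean likelihoods (a uniform prior over the grid) and the maximum likelihood ratio as the ratio of the best-fitting points. This is a sketch of that generic computation under those assumptions, not necessarily the exact procedure used in the article:

```python
def bayes_factor_and_mlr(lik_a, lik_b):
    """Sketch: given likelihood values for two models over the same
    parameter grid, return (Bayes factor, maximum likelihood ratio)
    for model A over model B. Assumes a uniform prior over the grid."""
    bf = (sum(lik_a) / len(lik_a)) / (sum(lik_b) / len(lik_b))
    mlr = max(lik_a) / max(lik_b)
    return bf, mlr

# Toy grids of likelihood values (illustrative numbers only)
bf, mlr = bayes_factor_and_mlr([0.2, 0.1, 0.0], [0.05, 0.05, 0.05])
# bf = 2.0, mlr = 4.0
```

Note how the two measures can disagree: averaging penalizes a model whose fit is good only in a small region of the parameter space, while the maximum ratio ignores that.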
Fig. 1 Simulated likelihood values for each model and experiment. Likelihood is here the probability of generating the empirical means of hits minus false alarms in the four experimental conditions within a root mean square error window of no more than 0.02, given a specific combination of model parameters. Squares show means ± 1 standard deviation. Circles show simulated values with size proportional to frequency. For the HEM-T model in the transfer condition all values were zero, and for the HEM-C model in the standard condition all values but one (.01) were zero
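The likelihood described in the Fig. 1 caption is a Monte Carlo estimate: the proportion of simulated experiments whose condition means fall within an RMSE window of 0.02 of the empirical means. A minimal sketch of that estimator, with a toy stand-in for the memory model (all names are illustrative):

```python
import math
import random

def simulated_likelihood(model, params, empirical_means, n_sims=1000,
                         rmse_window=0.02, rng=None):
    """Sketch: proportion of simulated data sets whose hits-minus-false-
    alarms means across the four conditions lie within an RMSE window
    of the empirical means. `model` is any callable that returns one
    simulated set of condition means. Illustrative only."""
    rng = rng or random.Random(0)
    inside = 0
    for _ in range(n_sims):
        sim_means = model(params, rng)  # one simulated experiment
        rmse = math.sqrt(sum((s - e) ** 2
                             for s, e in zip(sim_means, empirical_means))
                         / len(empirical_means))
        if rmse <= rmse_window:
            inside += 1
    return inside / n_sims  # Monte Carlo likelihood estimate

# Toy model: condition means jittered around the supplied values
def toy_model(params, rng):
    return [m + rng.gauss(0, 0.01) for m in params]

lik = simulated_likelihood(toy_model, [0.3, 0.25, 0.15, 0.1],
                           [0.3, 0.25, 0.15, 0.1])
```

Under this scheme a model whose simulations never land near the empirical means (as reported for HEM-T in the transfer condition) receives a likelihood of exactly zero.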
Fig. 2 Posterior predictive checks for standard and transfer artificial grammar learning (AGL). The empirical plots are the results from Hendricks et al. (2013). The other plots are model results for the different models (HEM-T, HEM-TC, or HEM-C). The empirical plots illustrate combinations of single and dual training and test tasks. The model panels illustrate combinations of low and high learning rates at training and test. The vertical axis is H - Fa = hits minus false alarms. E = root mean square error. The model results are based either on the maximum likelihood (ML) parameter estimates or on sampling from the posterior distributions (posterior predictive, PP) of model parameters. Each mean (circles) in the model plots is based on 100 iterations of 20 participants. Note that for transfer AGL the empirical data involve only three conditions, but the simulations include all four combinations. The scale is the same in all plots