Frank Schumann, Michael B. Steinborn, Hagen C. Flehmig, Jens Kürten, Robert Langner, Lynn Huestegge.
Abstract
Here we present a systematic approach to the experimental study of test-retest reliability in the multitasking domain, adopting the multitrait-multimethod (MTMM) approach to evaluate the psychometric properties of performance in Düker-type speeded multiple-act mental arithmetic. This form of task enables the experimental analysis of integrated multi-step processing by combining multiple mental operations in flexible ways in the service of the overarching goal of completing the task. A particular focus was on scoring methodology, especially measures of response speed variability. To this end, we present data from two experiments with regard to (a) test-retest reliability, (b) between-measures correlational structure, and (c) stability (test-retest practice effects). Finally, we compared participants with high versus low performance variability to assess ability-related differences in measurement precision (a comparison typically used as a proxy to "simulate" patient populations), which is especially relevant in the applied fields of clinical neuropsychology. The participants performed two classic integrated multi-act arithmetic tasks, combining addition and verification (Exp. 1) and addition and comparison (Exp. 2). The results revealed excellent test-retest reliability for the standard and the variability measures. The analysis of between-measures correlational structure revealed the typical pattern of convergent and discriminant relationships and also showed that absolute response speed variability was highly correlated with average speed (r > 0.85), indicating that these measures mainly deliver redundant information. In contrast, speed-adjusted (relativized) variability exhibited discriminant validity, being correlated to a much lesser degree with average speed, indicating that this measure delivers additional information not already provided by the speed measure.
Furthermore, speed-adjusted variability was virtually unaffected by test-retest practice, which makes this measure attractive in situations with repeated testing.
Keywords: cognitive control; concentration; reliability; sustained attention; vigilance
Year: 2022 PMID: 36059769 PMCID: PMC9433926 DOI: 10.3389/fpsyg.2022.946626
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
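The contrast the abstract draws between absolute variability (RTSD) and speed-adjusted variability (RTCV) can be made concrete in a few lines. A minimal sketch, assuming RTCV is the standard coefficient of variation (RTSD divided by RTM); the function name and sample values are my own:

```python
import statistics

def rt_measures(rts):
    """Summarize a series of response times (ms) with the basic measures:
    RTM (mean), RTSD (standard deviation), and RTCV (coefficient of
    variation, RTSD / RTM) as the speed-adjusted variability score."""
    rtm = statistics.mean(rts)
    rtsd = statistics.stdev(rts)   # sample SD: absolute variability
    rtcv = rtsd / rtm              # relativized (speed-adjusted) variability
    return {"RTM": rtm, "RTSD": rtsd, "RTCV": rtcv}

# A participant whose RTs are proportionally slower has a larger RTSD
# but an identical RTCV -- variability is adjusted for overall speed.
fast = [1200, 1400, 1000, 1600, 1300]
slow = [r * 2 for r in fast]       # twice as slow overall
print(rt_measures(fast))
print(rt_measures(slow))
```

This illustrates why RTSD tends to rise with mean RT (producing the r > 0.85 redundancy reported above), while RTCV removes that proportional component.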
A brief guide to understanding the logic underlying the Multitrait-multimethod (MTMM) approach to individual differences in multitasking.
| Step | Metaphor and symbolic assumptions |
| --- | --- |
| 1. Skew check | the … |
| 2. Reliability | the … |
| 3. Convergence | theoretically similar concepts → expected to be related empirically |
| 4. Discriminance | theoretically different concepts → expected not to be related empirically |
| 5. Stability | concerns the degree to which (absolute) scores remain constant from test to retest |
| 6. Reproducibility | concerns the between-session correlational structure |
| 7. Generalizability | concerns the replicability of the overall findings with conceptually similar tests |
The points 1–2 are basic preconditions that must be fulfilled to enable any expectation about relationships between variables. The points 3–4 are the classic evaluation dimensions of the MTMM, as they indicate how close (vs. distant) concepts are empirically. The points 5–7 are, in a strict sense, qualitative dimensions of credibility control, achieved through a serial process of replication with slight conceptual variation (cf. Stroebe and Strack, 2014, pp. 61–63; Miller and Ulrich, 2021, 2022).
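The quantitative steps in this guide reduce to two familiar statistics: a skewness estimate (step 1) and a Pearson correlation, which serves as test-retest reliability when computed on the same measure across sessions (step 2) and as a convergence/discriminance index when computed between different measures (steps 3–4). A minimal sketch in plain Python; function names are my own:

```python
import math

def pearson_r(x, y):
    """Pearson correlation. Same measure at test and retest -> test-retest
    reliability; two different measures within a session -> an index of
    convergent vs. discriminant relationships."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def skewness(x):
    """Moment-based skew: strongly skewed score distributions distort the
    correlations that all later MTMM steps rest on."""
    n = len(x)
    m = sum(x) / n
    sd = math.sqrt(sum((a - m) ** 2 for a in x) / n)
    return sum(((a - m) / sd) ** 3 for a in x) / n
```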
FIGURE 1 Example of a series of trials in the mental addition and verification task. Participants have to perform an addition operation (e.g., “3 + 4 =”) and then verify (or falsify) the correctness of their mental result against the presented addition term (e.g., “7 = 7”) by pressing the right (or left) response key. The task is self-paced such that each item is presented immediately after the response to the previous item.
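The trial logic in the caption can be sketched as a hypothetical item generator; the operand range, the 50% rate of matching probes, and all names are my assumptions, not details given in the source:

```python
import random

def make_trial(rng):
    """One hypothetical item of the addition-and-verification sequence:
    an addition problem, then a probe value that matches the true sum on
    about half of the trials; the true sum is kept for scoring."""
    a, b = rng.randint(1, 9), rng.randint(1, 9)
    total = a + b
    probe = total if rng.random() < 0.5 else total + rng.choice([-1, 1])
    return {"problem": f"{a} + {b} =", "sum": total, "probe": probe,
            "correct_key": "right" if probe == total else "left"}

# Self-paced presentation: the next item is produced immediately after
# each response, so the trial stream is simply repeated calls to make_trial.
rng = random.Random(42)
trials = [make_trial(rng) for _ in range(10)]
```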
Descriptive statistics for Experiment 1 and Experiment 2.
| Experiment 1 (serial mental addition and verification task) | | | | | | | | | |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| | | Session 1 | | | | Session 2 | | | |
| | Measures | M | SD | Skew | Range | M | SD | Skew | Range |
| 1 | RTM | 1402 | 333 | 0.23 | 848–2042 | 1270 | 297 | 0.50 | 774–2028 |
| 2 | RTMc | 1460 | 359 | 0.15 | 866–2113 | 1322 | 304 | 0.48 | 794–2112 |
| 3 | ER | 3.86 | 2.78 | 0.96 | 0.23–11.19 | 3.94 | 3.68 | 2.34 | 0.47–18.18 |
| 4 | RTSD | 679 | 318 | 0.25 | 189–1390 | 618 | 280 | 0.06 | 185–1141 |
| 5 | RTCV | 0.46 | 0.14 | 0.14 | 0.21–0.78 | 0.47 | 0.15 | –0.11 | 0.23–0.76 |
| Experiment 2 (serial mental addition and comparison task) | | | | | | | | | |
| 1 | RTM | 2063 | 471 | 0.78 | 1191–3035 | 1744 | 391 | 0.51 | 196–2744 |
| 2 | RTMc | 2128 | 481 | 0.78 | 1237–3229 | 1785 | 396 | 0.52 | 1133–2776 |
| 3 | ER | 3.05 | 2.59 | 3.35 | 0.33–16.97 | 2.34 | 1.97 | 1.34 | 0.16–9.14 |
| 4 | RTSD | 1074 | 494 | 1.25 | 383–2743 | 809 | 316 | 0.44 | 290–1708 |
| 5 | RTCV | 0.50 | 0.14 | 1.59 | 0.30–1.03 | 0.45 | 0.10 | 0.10 | 0.26–0.72 |
Population parameters for all performance measures. N = 29 (Exp. 1); N = 50 (Exp. 2). RTM, response time mean; RTMc, error-corrected RTM (inverse efficiency); ER, error percentage; RTSD, response time standard deviation; RTCV, response time coefficient of variation.
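The "error-corrected RTM (inverse efficiency)" in the footnote is consistent with the standard inverse-efficiency score, mean RT divided by proportion correct. A quick check against the Experiment 1, Session 1 row (RTM = 1402, ER = 3.86%) lands close to the tabled RTMc of 1460, which supports (though does not prove) that this is the formula used:

```python
def inverse_efficiency(rtm, error_pct):
    """Inverse-efficiency score: mean RT divided by the proportion of
    correct responses, so errors inflate the corrected mean."""
    return rtm / (1 - error_pct / 100)

# Exp. 1, Session 1: RTM = 1402 ms, ER = 3.86%
rtmc = inverse_efficiency(1402, 3.86)   # ~1458, vs. the tabled 1460
```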
Multitrait-multimethod matrix for Experiment 1 and Experiment 2.
| Experiment 1 (serial mental addition and verification task) | | | | | | |
| --- | --- | --- | --- | --- | --- | --- |
| Session 1 | | | | | | |
| Session 2 | | 1 | 2 | 3 | 4 | 5 |
| RTM | 1 | 0.93 | 0.99 | 0.05 | 0.89 | 0.69 |
| RTMc | 2 | 0.99 | 0.94 | 0.18 | 0.89 | 0.70 |
| ER | 3 | –0.17 | –0.01 | 0.84 | 0.13 | 0.22 |
| RTSD | 4 | 0.88 | 0.88 | –0.04 | 0.91 | 0.93 |
| RTCV | 5 | 0.59 | 0.62 | –0.07 | 0.90 | 0.91 |
| Experiment 2 (serial mental addition and comparison task) | | | | | | |
| RTM | 1 | 0.94 | 0.99 | –0.13 | 0.85 | 0.63 |
| RTMc | 2 | 0.99 | 0.94 | –0.01 | 0.84 | 0.62 |
| ER | 3 | –0.17 | –0.08 | 0.78 | –0.11 | –0.05 |
| RTSD | 4 | 0.88 | 0.89 | –0.08 | 0.83 | 0.93 |
| RTCV | 5 | 0.59 | 0.60 | 0.05 | 0.89 | 0.78 |
Test–retest reliability and intercorrelation structure (convergent vs. divergent) of all performance measures, separately for session 1 and session 2. N = 29 (Exp. 1); N = 50 (Exp. 2); RTM, response time mean; RTMc, error-corrected RTM (inverse efficiency); ER, error percentage; RTSD, response time standard deviation; RTCV, response time coefficient of variation. Test–retest reliability is shown in the main diagonal (denoted with gray); correlations for the first session are shown above, for the second session below the main diagonal. **p < 0.01.
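The layout described in this footnote (reliabilities on the diagonal, Session 1 intercorrelations above, Session 2 below) can be assembled mechanically from two participants-by-measures score matrices. A sketch of that packing step, assuming complete data and my own function name:

```python
import numpy as np

def mtmm_matrix(session1, session2):
    """Arrange correlations as in the table above: test-retest
    reliabilities on the main diagonal, Session 1 intercorrelations above
    it, Session 2 intercorrelations below it.
    Inputs: arrays of shape (n_participants, n_measures)."""
    r1 = np.corrcoef(session1, rowvar=False)   # Session 1 intercorrelations
    r2 = np.corrcoef(session2, rowvar=False)   # Session 2 intercorrelations
    m = np.triu(r1, 1) + np.tril(r2, -1)       # above / below the diagonal
    for i in range(session1.shape[1]):         # diagonal: test-retest r
        m[i, i] = np.corrcoef(session1[:, i], session2[:, i])[0, 1]
    return m
```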
FIGURE 2 Example of a series of trials in the mental addition and comparison task. The task is to solve the addition term (e.g., “2 + 3 = … ?”) and to decide whether the result is larger or smaller than the presented number value (e.g., “5 > 6 … ?”). The participants are to respond by “choosing” the larger of the number values. Each item in the task is presented immediately after the response to the previous item.
FIGURE 3 Results of Experiment 1 and Experiment 2. Scatterplots of the relationship between test and retest performance for both Experiment 1 (Panels A,B) and Experiment 2 (Panels C,D), separately for a group of fast individuals and a group of slow individuals.