Ronald Fischer, Johannes A. Karl.
Abstract
Psychology has become less WEIRD in recent years, marking progress toward a truly global psychology. However, this increase in cultural diversity is not matched by greater attention to cultural biases in research. A significant challenge in culture-comparative research in psychology is that any comparison is open to possible item bias and non-invariance. Unfortunately, many psychologists are not aware of these problems and their implications, and do not know how best to test for invariance in their data. We provide a general introduction to invariance testing and a tutorial on three major classes of techniques that can be easily implemented in the free software and statistical language R. Specifically, we describe (1) confirmatory and multi-group confirmatory factor analysis, with extensions to exploratory structural equation modeling and multi-group alignment; (2) iterative hybrid logistic regression; and (3) exploratory factor analysis and principal component analysis with Procrustes rotation. We pay specific attention to effect size measures of item bias and differential item functioning. Code in R is provided in the main text and online (see https://osf.io/agr5e/), and more extended code and a general introduction to R are available in the Supplementary Materials.
Keywords: DIF (differential item functioning); ESEM; R; alignment; confirmatory factor analysis – CFA; culture; invariance; procrustean analyses
Year: 2019 PMID: 31379641 PMCID: PMC6657455 DOI: 10.3389/fpsyg.2019.01507
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
FIGURE 1 Schematic display of item difficulty, item discrimination, and guessing parameters in a single group.
FIGURE 2 Examples of differential item functioning in two groups. The panels show item response curves for two groups (group 1 indicated by a solid line, group 2 by a broken line). Panel (A) shows two groups differing in item discrimination (slope differences); the item differentiates individuals less well in group 1. This is an example of non-uniform item bias. Panel (B) shows two groups with different item difficulty: the item is easier for group 1 (individuals with lower ability answer it correctly with 50% probability) and more difficult for group 2, whose members need higher ability to answer correctly with 50% probability. This is an example of uniform item bias. Panel (C) shows different guessing (intercept) parameters: group 1 has a higher chance of guessing the item correctly than group 2, so scores for group 1 on this item are consistently higher than for group 2, independent of the individual's underlying ability or trait level. This is also an example of uniform item bias. Panel (D) shows two groups differing in all three parameters: group 1 has a higher guessing parameter, the item is easier overall, but it also discriminates individuals better at moderate levels of ability compared to group 2. This is an example of both uniform and non-uniform item bias.
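The three parameters illustrated in Figures 1 and 2 correspond to the standard three-parameter logistic (3PL) IRT model, in which the probability of answering an item correctly at trait level $\theta$ is

$$P(X = 1 \mid \theta) = c + (1 - c)\,\frac{1}{1 + e^{-a(\theta - b)}}$$

where $a$ is the item discrimination (slope), $b$ the item difficulty, and $c$ the guessing parameter (lower asymptote). Group differences in $b$ or $c$ produce uniform item bias (Panels B and C), whereas group differences in $a$ produce non-uniform bias (Panel A).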
FIGURE 3 Example of confirmatory factor analysis model.
FIGURE 4 Visual representation of an EFA model.
FIGURE 5 Visualization of factor rotations.
An example where identical factor structures show different factor loadings.

| Item | Group 1, Factor 1 | Group 1, Factor 2 | Group 2, Factor 1 | Group 2, Factor 2 |
| --- | --- | --- | --- | --- |
| Item 1 | 0.65 | 0.30 | 0.67 | 0.19 |
| Item 2 | 0.66 | 0.30 | 0.69 | 0.15 |
| Item 3 | 0.69 | 0.21 | 0.80 | 0.25 |
| Item 4 | 0.82 | 0.24 | 0.80 | 0.25 |
| Item 5 | 0.79 | 0.33 | 0.67 | 0.32 |
| Item 6 | 0.79 | 0.28 | 0.71 | 0.31 |
| Item 7 | 0.70 | 0.34 | 0.39 | 0.59 |
| Item 8 | 0.44 | 0.67 | 0.22 | 0.79 |
| Item 9 | 0.35 | 0.80 | 0.19 | 0.81 |
| Item 10 | 0.26 | 0.81 | 0.23 | 0.76 |
| Item 11 | 0.30 | 0.78 | 0.43 | 0.59 |
| Item 12 | 0.30 | 0.83 | 0.23 | 0.73 |
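To illustrate technique (3) from the abstract, the two loading matrices in the table above can be aligned with an orthogonal Procrustes rotation and then compared factor by factor with Tucker's congruence coefficient. This is a minimal Python sketch, not the authors' code (the paper itself provides R), and it assumes the four columns hold the Factor 1/Factor 2 loadings for Group 1 and Group 2 respectively:

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes

# Factor loadings from the table above (rows = items, columns = factors).
group1 = np.array([
    [0.65, 0.30], [0.66, 0.30], [0.69, 0.21], [0.82, 0.24],
    [0.79, 0.33], [0.79, 0.28], [0.70, 0.34], [0.44, 0.67],
    [0.35, 0.80], [0.26, 0.81], [0.30, 0.78], [0.30, 0.83],
])
group2 = np.array([
    [0.67, 0.19], [0.69, 0.15], [0.80, 0.25], [0.80, 0.25],
    [0.67, 0.32], [0.71, 0.31], [0.39, 0.59], [0.22, 0.79],
    [0.19, 0.81], [0.23, 0.76], [0.43, 0.59], [0.23, 0.73],
])

# Orthogonal Procrustes: rotation R minimising ||group2 @ R - group1||_F.
R, _ = orthogonal_procrustes(group2, group1)
rotated = group2 @ R

# Tucker's congruence coefficient per factor; values above ~.95 are
# conventionally read as factor equivalence.
phi = np.sum(rotated * group1, axis=0) / np.sqrt(
    np.sum(rotated ** 2, axis=0) * np.sum(group1 ** 2, axis=0)
)
print(np.round(phi, 3))
```

Because the rotation only re-expresses Group 2's loadings in Group 1's factor space, any remaining loading differences (e.g., Item 7 shifting factors) show up as a lower congruence coefficient rather than being rotated away.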