Inferring exemplar discriminability in brain representations
Hamed Nili, Alexander Walther, Arjen Alink, Nikolaus Kriegeskorte
Abstract
Representational distinctions within categories are important in all perceptual modalities and also in cognitive and motor representations. Recent pattern-information studies of brain activity have used condition-rich designs to sample the stimulus space more densely. To test whether brain response patterns discriminate among a set of stimuli (e.g., exemplars within a category) with good sensitivity, we can pool statistical evidence over all pairwise comparisons. Here we describe a wide range of statistical tests of exemplar discriminability and assess the validity (specificity) and power (sensitivity) of each test. The tests include previously used and novel, parametric and nonparametric tests, which treat subject as a random or fixed effect, and are based on different dissimilarity measures, different test statistics, and different inference procedures. We use simulated and real data to determine which tests are valid and which are most sensitive. A popular test statistic reflecting exemplar information is the exemplar discriminability index (EDI), which is defined as the average of the pattern dissimilarity estimates between different exemplars minus the average of the pattern dissimilarity estimates between repetitions of identical exemplars. The popular across-subject t test of the EDI (typically using correlation distance as the pattern dissimilarity measure) requires the assumption that the EDI is zero-mean normal under H0. Although this assumption is not strictly true, our simulations suggest that the test controls the false-positive rate at the nominal level and is thus valid in practice. However, test statistics based on average Mahalanobis distances or average linear-discriminant t values (both accounting for the multivariate error covariance among responses) are substantially more powerful for both random- and fixed-effects inference.
Unlike average cross-validated distances, the EDI is sensitive to differences between the distributions associated with different exemplars (e.g., greater variability for some exemplars than for others), which complicates its interpretation. We suggest preferred procedures for safely and sensitively detecting subtle pattern differences between exemplars.
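The EDI defined in the abstract (average between-exemplar dissimilarity minus average within-exemplar dissimilarity, using correlation distance) can be sketched as follows. This is a minimal illustration, not the authors' toolbox code; the function name and the two-partition data layout are assumptions for the example.

```python
import numpy as np

def edi(patterns_a, patterns_b):
    """Exemplar discriminability index (EDI), a minimal sketch.

    patterns_a, patterns_b: arrays of shape (n_exemplars, n_channels),
    response-pattern estimates for the same exemplars from two
    independent data partitions (e.g. two scanner runs).
    Uses correlation distance (1 - Pearson r) as the dissimilarity.
    """
    n = patterns_a.shape[0]
    # center and normalize each pattern so that the dot product is Pearson r
    a = patterns_a - patterns_a.mean(axis=1, keepdims=True)
    b = patterns_b - patterns_b.mean(axis=1, keepdims=True)
    a /= np.linalg.norm(a, axis=1, keepdims=True)
    b /= np.linalg.norm(b, axis=1, keepdims=True)
    dist = 1.0 - a @ b.T  # (n, n): entry (i, j) = corr. dist. between a_i and b_j
    within = np.mean(np.diag(dist))                   # same exemplar, repeated
    between = np.mean(dist[~np.eye(n, dtype=bool)])   # different exemplars
    return between - within
```

A positive EDI indicates that repetitions of the same exemplar are more similar to each other than patterns of different exemplars are, i.e. the patterns carry exemplar information.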
Year: 2020 PMID: 32520962 PMCID: PMC7286518 DOI: 10.1371/journal.pone.0232551
Source DB: PubMed Journal: PLoS One ISSN: 1932-6203 Impact factor: 3.240
An overall view of different methods for testing exemplar discriminability.
| test statistic | multivar. noise model | sensitive to | H0 | inference procedure | inference scope | validity (specificity) | power (sensitivity) |
|---|---|---|---|---|---|---|---|
| EDI (correlation distance) | no | differences in pattern distributions (incl. mean and variance) | exemplar response patterns drawn from the same distribution | one-sided t test across subjects | subject population | usually acceptable (despite violations of assumptions) | bad |
| | | | | one-sided signed-rank test across subjects | subject population | good | bad |
| | | | | exemplar-label randomization | subject sample | good | good |
| EDI (Mahalanobis distance) | yes | differences in pattern distributions (incl. mean and variance) | exemplar response patterns drawn from the same distribution | one-sided t test across subjects | subject population | usually acceptable (despite violations of assumptions) | very good |
| | | | | one-sided signed-rank test across subjects | subject population | good | very good |
| | | | | condition-label randomization | subject sample | | excellent |
| cross-validated distance (crossnobis) or LD-t | yes | differences in pattern means | exemplar response patterns drawn from distributions centered on the same mean pattern | one-sided t test across subjects | subject population | | very good |
| | | | | one-sided signed-rank test across subjects | subject population | | very good |
| | | | | exemplar-label randomization (different exemplar labels between training and test sets) | subject sample | | excellent |
| | | | | t test, signed-rank, permutation | subject sample or population | | very good, excellent |
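The label-randomization procedures listed in the table (fixed-effects inference within the subject sample) can be sketched as a generic one-sided permutation test over an EDI-style statistic. This is a hedged illustration of the general technique, not the paper's implementation; the function names and the choice of statistic are assumptions.

```python
import numpy as np

def edi_randomization_test(patterns_a, patterns_b, edi_fn, n_perm=1000, seed=0):
    """One-sided exemplar-label randomization test for a single subject.

    Under H0 (exemplar labels exchangeable), relabeling the exemplars in
    one data partition leaves the distribution of the statistic unchanged.
    The p value is the fraction of relabelings whose statistic is at least
    the observed one.  `edi_fn` is any EDI-style statistic taking
    (patterns_a, patterns_b), each of shape (n_exemplars, n_channels).
    """
    rng = np.random.default_rng(seed)
    observed = edi_fn(patterns_a, patterns_b)
    n = patterns_a.shape[0]
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(n)              # shuffle exemplar labels
        null[i] = edi_fn(patterns_a, patterns_b[perm])
    # count the observed statistic among the relabelings:
    # keeps the test valid (slightly conservative) at finite n_perm
    return (1 + np.sum(null >= observed)) / (n_perm + 1)
```

Because the null distribution is built from the data themselves, the test is exact under exchangeability of exemplar labels, which is why the randomization rows in the table score well on validity.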