Abstract
Previous research has demonstrated that humans can match unfamiliar voices to corresponding faces and vice versa. It has been suggested that this matching ability rests on common underlying factors that shape both faces and voices in characteristic ways. Some researchers have additionally assumed that dynamic facial information is especially relevant for successfully matching faces to voices. In the present study, static and dynamic face-voice matching ability was compared in a simultaneous presentation paradigm. Additionally, a procedure (matching additionally supported by incidental association learning) was implemented that allowed participants who did not pay sufficient attention to the task to be reliably excluded. Performance did not differ substantially between static and dynamic face-voice matching, suggesting that dynamic (as opposed to merely static) facial information does not contribute meaningfully to face-voice matching performance. Importantly, this conclusion was not derived merely from the absence of a statistically significant group difference in matching performance (which could in principle be explained by low statistical power), but also from a Bayesian analysis and from an analysis of the 95% confidence interval (CI) of the observed effect size. The upper bound of this CI indicated a maximally plausible dynamic-face advantage of less than four percentage points, far too small to reflect any theoretically meaningful dynamic-face advantage. Implications for the underlying mechanisms of face-voice matching are discussed.
Keywords: face-voice integration; person identity processing; simultaneous presentation paradigm; static vs. dynamic faces; voice-face matching
Year: 2019 PMID: 31507500 PMCID: PMC6716535 DOI: 10.3389/fpsyg.2019.01957
Source DB: PubMed Journal: Front Psychol ISSN: 1664-1078
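The abstract's equivalence-style reasoning (ruling out a meaningful dynamic-face advantage because the CI's upper bound falls below a smallest effect of interest) can be sketched with a simple two-proportion confidence interval. The sample sizes and accuracies below are hypothetical illustrations, not the study's data, and `diff_ci` is an ad-hoc helper, not a function from the paper.

```python
import math

def diff_ci(p1, n1, p2, n2, z=1.96):
    """Wald 95% CI for the difference of two independent proportions (p1 - p2)."""
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff - z * se, diff + z * se

# Hypothetical trial-level accuracies pooled over ~2,000 trials per group:
# dynamic group 74% correct, static group 73% correct (illustrative only).
ci_low, ci_high = diff_ci(0.74, 2000, 0.73, 2000)

# If the CI's upper bound stays below the smallest effect of interest
# (here 4 percentage points, as in the abstract), a theoretically
# meaningful dynamic-face advantage can be ruled out.
advantage_ruled_out = ci_high < 0.04
```

Note that with typical lab sample sizes the CI around an accuracy difference is wide, which is why the abstract stresses that the conclusion rests on the CI bound and a Bayesian analysis rather than on a non-significant test alone.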
Overview of previous experimental procedures in face-voice matching studies.
FIGURE 1. Schematic trial sequence. The fixation cross was presented for 316 ms, followed by a briefly presented blank screen (16 ms). Stimuli were presented until participants terminated the trial with a left or right key press (stimuli were looped so that visual and auditory information remained present until a response was given).
FIGURE 2. Matching performance (upper panel, error rates in %) and RTs (lower panel, in ms) as a function of group (static vs. dynamic faces). Bars indicate arithmetic means; dots represent individual data points. Note, however, that instructions did not emphasize response speed.
Static and dynamic matching accuracy for individual stimulus models.

| Sex    | Model |    | Static | Dynamic |
|--------|-------|----|--------|---------|
| Male   | 1     | 29 | 29%    | 23%     |
| Male   | 2     | 46 | 20%    | 17%     |
| Male   | 3     | 34 | 13%    | 19%     |
| Male   | 4     | 46 | 9%     | 9%      |
| Male   | 5     | 41 | 23%    | 29%     |
| Male   | 6     | 53 | 9%     | 16%     |
| Male   | 7     | 35 | 26%    | 18%     |
| Male   | 8     | 36 | 32%    | 28%     |
| Female | 1     | 35 | 27%    | 17%     |
| Female | 2     | 37 | 36%    | 31%     |
| Female | 3     | 32 | 31%    | 31%     |
| Female | 4     | 28 | 19%    | 39%     |
| Female | 5     | 47 | 26%    | 31%     |
| Female | 6     | 34 | 38%    | 49%     |
FIGURE 3. Development of face-voice matching performance in the static and dynamic group over the course of the experiment.