| Literature DB >> 35310080 |
Fernando Lizcano-Cortés, Ireri Gómez-Varela, Cecilia Mares, Pascal Wallisch, Joan Orpella, David Poeppel, Pablo Ripollés, M Florencia Assaneo.
Abstract
The ability to synchronize a motor action to a rhythmic auditory stimulus is often considered an innate human skill. However, some individuals lack the ability to synchronize their speech to a perceived syllabic rate. Here, we describe a simple and fast protocol to classify a single native English speaker as being or not being a speech synchronizer. This protocol consists of four parts: the pretest instructions and volume adjustment, the training procedure, the execution of the main task, and data analysis. For complete details on the use and execution of this protocol, please refer to Assaneo et al. (2019a).
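The degree of synchrony reported by the test is a phase-locking value (PLV) between the stimulus and the participant's whispered output. As a rough illustration of how such a measure can be computed, the sketch below compares the instantaneous phases of two pre-computed amplitude envelopes; the band edges, filter order, and function name are illustrative assumptions, not the protocol's published analysis code.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def phase_locking_value(env_stim, env_speech, fs, band=(3.5, 5.5)):
    """Illustrative PLV between two amplitude envelopes (assumed band around the syllabic rate)."""
    # Band-pass both envelopes around the syllabic rate
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x = filtfilt(b, a, env_stim)
    y = filtfilt(b, a, env_speech)

    # Instantaneous phases via the Hilbert transform
    phi_x = np.angle(hilbert(x))
    phi_y = np.angle(hilbert(y))

    # PLV: length of the mean resultant vector of the phase differences
    return np.abs(np.mean(np.exp(1j * (phi_x - phi_y))))
```

A PLV near 1 indicates that the produced syllables consistently track the stimulus rhythm, while a value near 0 indicates no stable phase relationship.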
Keywords: Behavior; Clinical Protocol; Cognitive Neuroscience; Neuroscience
Year: 2022 PMID: 35310080 PMCID: PMC8931471 DOI: 10.1016/j.xpro.2022.101248
Source DB: PubMed Journal: STAR Protoc ISSN: 2666-1667
Figure 1. Bimodal distributions produced by the two versions of the Speech-to-Speech Synchronization test
Upper panels: Histograms of the synchronization measurements (Phase Locking Values) obtained with the two versions of the test: Implicit Fixed version on the left (N=255), Explicit Accelerated version on the right (N=190). The colored traces represent the two normal distributions obtained by fitting a two-component Gaussian mixture model to the data (Implicit Fixed: Component 1, High Synchronizers, mixing proportion 0.60, mean 0.58; Component 2, Low Synchronizers, mixing proportion 0.40, mean 0.23. Explicit Accelerated: Component 1, High Synchronizers, mixing proportion 0.67, mean 0.63; Component 2, Low Synchronizers, mixing proportion 0.33, mean 0.27). Lower panels: Probability of belonging to one of the two groups as a function of the participant’s degree of synchrony. Probability curves are derived from the distributions obtained from the Gaussian mixture models fitted to the datasets. In all panels, orange and blue represent the high and low synchronizers, respectively.
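The two-component mixture and the group-membership probabilities described above can be reproduced in outline with any standard Gaussian-mixture implementation. The snippet below is a hedged sketch using scikit-learn on synthetic PLVs; the synthetic values and the 0.5 posterior threshold are illustrative assumptions, not the authors' released MATLAB/Python analysis code.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic PLVs for illustration only (not the published data)
rng = np.random.default_rng(0)
plv = np.concatenate([rng.normal(0.25, 0.08, 80),    # low-synchrony-like values
                      rng.normal(0.60, 0.10, 120)])  # high-synchrony-like values
plv = np.clip(plv, 0.0, 1.0).reshape(-1, 1)

# Fit a two-component Gaussian mixture to the PLV distribution
gmm = GaussianMixture(n_components=2, random_state=0).fit(plv)

# The component with the larger mean corresponds to the high synchronizers
high = int(np.argmax(gmm.means_.ravel()))

# Posterior probability of belonging to the high-synchronizer component
p_high = gmm.predict_proba(plv)[:, high]
labels = np.where(p_high >= 0.5, "high", "low")
print(dict(zip(*np.unique(labels, return_counts=True))))
```

The posterior returned by `predict_proba` plays the role of the probability curves in the lower panels: each participant's PLV maps to a probability of belonging to the high- or low-synchronizer component.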
Figure 2. Examples of a bad and a good audio recording
The upper panels show two schematic outcomes, each composed of the acoustic signals represented in the middle and bottom panels. In both cases the participant produced the same train of “tahs” (two bottom rows); only the background noise differs between the left and right examples. In the example on the left, the audio was recorded with a stable and relatively low background noise, which did not alter the whisper’s envelope. In the example on the right, the background noise shows abrupt increments in amplitude (which could correspond to naturalistic sounds such as other voices, a dog barking, or a telephone ringing). In this case, as shown in the upper panel, the envelope of the recording does not recover that of the participant’s whispers. In all panels, the acoustic signal is depicted in gray while the corresponding envelope is highlighted in purple.
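The envelopes shown in the figure can be approximated by taking the magnitude of the analytic signal and low-passing it; inspecting this envelope is a quick way to detect recordings in which background noise, rather than the whispered “tahs”, dominates. The sketch below assumes a mono WAV file and an 8 Hz low-pass cutoff, both illustrative choices rather than the protocol's exact settings.

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt, hilbert

def amplitude_envelope(wav_path, cutoff_hz=8.0):
    """Illustrative envelope extraction for checking the quality of a recording."""
    fs, audio = wavfile.read(wav_path)   # hypothetical path to a participant's recording
    if audio.ndim > 1:                   # keep a single channel if the file is stereo
        audio = audio[:, 0]
    audio = audio.astype(float)
    peak = np.max(np.abs(audio))
    if peak > 0:
        audio = audio / peak

    # Magnitude of the analytic signal, then low-pass to obtain a smooth envelope
    env = np.abs(hilbert(audio))
    b, a = butter(2, cutoff_hz / (fs / 2), btype="low")
    return fs, filtfilt(b, a, env)
```

A recording like the right-hand example produces envelope peaks that are unrelated to the whispered syllables, which is precisely what makes it a bad recording for the analysis.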
| REAGENT or RESOURCE | SOURCE | IDENTIFIER |
|---|---|---|
| Auditory stimuli, wav files | This paper | |
| MATLAB code for running both versions of the test and analyzing its outcome | This paper | |
| Python code analyzing the outcome | This paper | |
| Gorilla open materials to run both versions of the test remotely | This paper | |
| 255 Human participants (112 males; mean age, 30 years; age range, 19–55 years; native English speakers) | | N/A |
| 190 Human participants (81 males; mean age, 25 years; age range, 19–45 years; native English speakers) | This paper | N/A |
| MATLAB; Version: 9.10.0.1739362 (R2021a) | MathWorks | |
| Psychtoolbox v3.0.17 | | |
| Gorilla Experiment Builder | | |
| Praat | | |