| Literature DB >> 29670516 |
Tiffani Kittilstved¹, Kevin J Reilly¹, Ashley W Harkrider¹, Devin Casenhiser¹, David Thornton¹, David E Jenson¹, Tricia Hedinger¹, Andrew L Bowers², Tim Saltuklaroglu¹.
Abstract
Objective: To determine whether changes in sensorimotor control resulting from speaking conditions that induce fluency in people who stutter (PWS) can be measured using electroencephalographic (EEG) mu rhythms in neurotypical speakers.
Keywords: EEG; fluency enhancing conditions; independent component analysis; mu rhythm; speech production
Year: 2018 PMID: 29670516 PMCID: PMC5893846 DOI: 10.3389/fnhum.2018.00126
Source DB: PubMed Journal: Front Hum Neurosci ISSN: 1662-5161 Impact factor: 3.169
Demographics and cluster contributions.
| Subject ID | Sex | Age (years) | Handedness | Cluster contribution (L = left mu, R = right mu) |
|---|---|---|---|---|
| C1 | M | 26–30 | R | L, R |
| C2 | M | 26–30 | R | L |
| C3 | M | 36–40 | R | R |
| C4 | F | 21–25 | R | L |
| C5 | M | 21–25 | R | L |
| C6 | M | 21–25 | R | |
| C7 | F | 21–25 | R | L |
| C8 | M | 31–35 | R | |
| C9 | F | 16–20 | R | L, R |
| C10 | M | 41–45 | R | L |
| C11 | M | 21–25 | R | L |
| C12 | F | 26–30 | L | |
| C13 | F | 21–25 | R | R |
| C14 | M | 36–40 | R | |
| C15 | F | 16–20 | R | |
| C16 | M | 26–30 | R | |
| C17 | F | 21–25 | R | L |
| C18 | M | 21–25 | R | L, R |
| C19 | M | 26–30 | R | L, R |
| C20 | M | 16–20 | R | R |
| C21 | F | 21–25 | R | L, R |
| C22 | M | 16–20 | R | R |
| C23 | M | 26–30 | R | L |
| C24 | M | 21–25 | L | |
| C25 | M | 16–20 | Ambi | L |
| C26 | M | 46–50 | R | L, R |
Figure 1. Timelines. Epoch timelines (7,000 ms) for single trials in (A) the choral condition and (B) all other experimental conditions.
Figure 2. Sources and spectra. Equivalent current dipole (ECD) sources and average spectra for each condition in (A) left and (B) right clusters of mu components.
Figure 3. Solo vs. choral contrast. Event-related spectral perturbation (ERSP) analyses showing time-frequency differences between the solo and choral conditions. The first two rows show left and right mu-rhythm scalp maps, followed by time-frequency patterns of event-related synchronization (ERS) and event-related desynchronization (ERD), followed by time-frequency voxels that differ significantly (pFDR < 0.05) between the two conditions. The last row shows the differences in EMG activity.
Figure 4. Solo vs. delayed auditory feedback (DAF) contrast. ERSP analyses showing time-frequency differences between the solo and DAF conditions. The first two rows show left and right mu-rhythm scalp maps, followed by time-frequency patterns of ERS and ERD, followed by time-frequency voxels that differ significantly (pFDR < 0.05) between the two conditions. The last row shows the differences in EMG activity.
Figure 5. Solo vs. prolonged speech contrast. ERSP analyses showing time-frequency differences between the solo and prolonged speech conditions. The first two rows show left and right mu-rhythm scalp maps, followed by time-frequency patterns of ERS and ERD, followed by time-frequency voxels that differ significantly (pFDR < 0.05) between the two conditions. The last row shows the differences in EMG activity.
Figure 6. Solo vs. pseudostuttering contrast. ERSP analyses showing time-frequency differences between the solo and pseudostuttering conditions. The first two rows show left and right mu-rhythm scalp maps, followed by time-frequency patterns of ERS and ERD, followed by time-frequency voxels that differ significantly (pFDR < 0.05) between the two conditions. The last row shows the differences in EMG activity.
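The pFDR < 0.05 threshold cited in Figures 3–6 refers to false discovery rate correction applied across time-frequency voxels. The record does not specify the authors' implementation (ERSP statistics of this kind are commonly computed in EEGLAB); the following is only a generic Benjamini–Hochberg sketch of how such a significance mask over a voxel-wise p-value map could be obtained:

```python
import numpy as np

def fdr_mask(p_values, alpha=0.05):
    """Benjamini-Hochberg FDR correction.

    Takes an array of uncorrected p-values (e.g., one per
    time-frequency voxel) and returns a boolean mask, same shape,
    marking voxels significant at the given FDR level.
    """
    p = np.asarray(p_values, dtype=float).ravel()
    m = p.size
    order = np.argsort(p)                       # ascending p-values
    thresh = alpha * np.arange(1, m + 1) / m    # BH step-up thresholds
    passed = p[order] <= thresh
    mask = np.zeros(m, dtype=bool)
    if passed.any():
        k = np.nonzero(passed)[0].max()         # largest rank meeting criterion
        mask[order[: k + 1]] = True             # all smaller p-values also pass
    return mask.reshape(np.shape(p_values))

# Hypothetical usage: a small p-value map for illustration.
pmap = np.array([[0.001, 0.020], [0.030, 0.500]])
print(fdr_mask(pmap, alpha=0.05))
```

In a real ERSP contrast the p-value map would come from a permutation or bootstrap test comparing the two conditions at each time-frequency point; the mask then selects the voxels displayed as significant differences.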