
Dataset of Speech Production in Intracranial Electroencephalography.

Maxime Verwoert, Maarten C Ottenhoff, Sophocles Goulis, Albert J Colon, Louis Wagner, Simon Tousseyn, Johannes P van Dijk, Pieter L Kubben, Christian Herff.

Abstract

Speech production is an intricate process involving a large number of muscles and cognitive processes. The neural processes underlying speech production are not completely understood. As speech is a uniquely human ability, it cannot be investigated in animal models. High-fidelity human data can only be obtained in clinical settings and are therefore not easily available to all researchers. Here, we provide a dataset of 10 participants reading out individual words while we measured intracranial EEG from a total of 1103 electrodes. The data, with their high temporal resolution and coverage of a large variety of cortical and sub-cortical brain regions, can help in better understanding the speech production process. At the same time, the data can be used to test speech decoding and synthesis approaches from neural data, to develop speech Brain-Computer Interfaces and speech neuroprostheses.
© 2022. The Author(s).

Year:  2022        PMID: 35869138      PMCID: PMC9307753          DOI: 10.1038/s41597-022-01542-9

Source DB:  PubMed          Journal:  Sci Data        ISSN: 2052-4463            Impact factor:   8.501


Background & Summary

Brain-Computer Interfaces (BCIs)[1] that directly decode speech from neural activity have recently attracted considerable attention, as they could provide an intuitive means of communication for patients who have lost the ability to speak[2-7]. The creation of a speech neuroprosthesis depends on a firm understanding of the speech production process in the brain, the precise timing of the brain regions involved and where best to decode them. Despite a number of existing models of the speech production process[8,9], the precise role of all areas involved is not yet fully understood. Recent advances highlight that deeper brain structures, such as the hippocampus[10-12] and thalamus[13,14], are also involved in language in general and speech production specifically. A dataset providing accessible data for a simple speech production task in cortical and deeper brain structures could help to further elucidate this intricate process. This understanding may additionally aid functional language mapping prior to resective surgery in patients suffering from pharmaco-resistant epilepsy[15]. Although a full understanding of speech production is currently lacking, great advances have recently been made in the field of speech neuroprostheses. A textual representation can be decoded from neural recordings during actual speech production via phonemes[16,17], phonetic[18] or articulatory[19] features, words[20], full sentences[21-24] or the spotting of speech keywords[25]. Results are becoming robust enough for first trials in speech-impaired patients[7]. To facilitate more natural communication, some studies have aimed at directly synthesizing an audio waveform of speech from neural data recorded during speech production[26-29]. Initial results indicate that decoding of speech processes from imagined speech production is possible both offline[30-32] and in real time[33,34].
Most of these recent advances employ electrocorticography (ECoG), an invasive recording modality of neural activity that provides high temporal and spatial resolution and a high signal-to-noise ratio[35]. Additionally, ECoG is less affected by movement artifacts than non-invasive measures of neural activity. Other studies have used intracortical microarrays to decode speech[36-39] or a neurotrophic electrode[40] to synthesize formant frequencies[41,42] from the motor cortex. An alternative measure of intracranial neural activity is stereotactic EEG (sEEG), in which electrode shafts are implanted into the brain through small burr holes[43]. sEEG is considered minimally invasive, as a large craniotomy is not necessary and the infection risk is therefore smaller[44]. Additionally, the implantation method is very similar to that used in Deep Brain Stimulation (DBS), which has been used in the treatment of Parkinson's Disease for several decades. In DBS, electrodes routinely remain implanted for many years, giving hope for the potential of sEEG for long-term BCIs. Similar to ECoG, sEEG is used to monitor epileptogenic zones in the treatment of refractory epilepsy. Between 5 and 15 electrode shafts are typically implanted, covering a large variety of cortical and sub-cortical brain areas. Here lies one of the main differences from ECoG: instead of high-density coverage of specific regions, sEEG provides sparse sampling of multiple regions. This sparse sampling holds great potential for various BCI applications, as most of them involve processes in deep (sub-cortical or within sulci) and spatially disparate, bilateral brain regions[45]. For example, besides the primary motor cortex, movement can also be decoded well from the basal ganglia[46] and the supramarginal gyrus[47], amongst others. With sEEG, these areas can be recorded simultaneously, leveraging multiple sources of potential information.
Invasive recordings of neural activity are usually obtained during a seizure localization procedure or glioma resection surgery and are therefore not available to many researchers working on traditional speech decoding or speech synthesis technologies. These researchers are part of an active research community investigating the potential of non-invasive brain measurement technologies for speech neuroprostheses. Techniques include scalp electroencephalography (EEG)[48-54], which provides high temporal resolution (in particular, the Kara One database[55] provides the foundation for many studies); magnetoencephalography[56,57], which provides more localized information than EEG due to a larger number of sensors; and functional near-infrared spectroscopy[58-61], which provides localized information on cortical hemoglobin levels. The advances made by this community could also benefit invasive speech neuroprostheses; an openly available dataset would allow their approaches to be evaluated and leveraged. To facilitate an increased understanding of the speech production process in the brain, including deeper brain structures, and to accelerate the development of speech neuroprostheses, we provide this dataset of 10 participants speaking prompted words aloud while audio and intracranial EEG data are recorded simultaneously (Fig. 1).
Fig. 1

Intracranial EEG and acoustic data are recorded simultaneously while participants read Dutch words shown on a laptop screen. Traces on the right of the figure represent 30 seconds of iEEG, audio and stimulus data. The colors in the iEEG traces represent different electrode shafts.


Methods

Participants

A total of 10 participants suffering from pharmaco-resistant epilepsy took part in our experiment (mean age 32 years, range 16–50 years; 5 male, 5 female). Participants were implanted with sEEG electrodes (Table 1) as part of the clinical therapy for their epilepsy. Electrode locations were determined solely by clinical necessity. All participants joined the study on a voluntary basis and gave written informed consent. Experiment design and data recording were approved by the Institutional Review Boards of both Maastricht University and Epilepsy Center Kempenhaeghe. Data recording was conducted under the supervision of experienced healthcare staff. All participants were native speakers of Dutch. Participants' voices were pitch-shifted with a randomized offset between 1 and 3 semitones up or down, constant over the entire recording, to ensure anonymity.
Table 1

Number of implanted and recorded electrodes of each participant.

           sub-01  sub-02  sub-03  sub-04  sub-05  sub-06  sub-07  sub-08  sub-09  sub-10  Total
Implanted     133     234     184     117      61     155     134      56     119     124   1317
Recorded      127     127     127     115      60     127     127      54     117     122   1103

Note that the recorded number does not include the reference electrode, whose signal is implicitly contained in all recorded channels.


Experimental design

In this study, participants were asked to read aloud words that were shown to them on a laptop screen (Fig. 1). One random word from the stimulus library (the Dutch IFA corpus[62] extended with the numbers one to ten in word form) was presented on the screen for a duration of 2 seconds during which the participant read the word aloud once. This relatively large window accounts for differences in word length and pronunciation speed. After the word, a fixation cross was displayed for 1 second. This was repeated for a total of 100 words, resulting in a total recording time of 300 seconds for each participant. The presented stimuli and timings were saved for later processing, hereafter referred to as stimulus data.
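The trial structure above can be sketched in a few lines; the word list below is only a placeholder for the actual stimulus library (the Dutch IFA corpus extended with the numbers one to ten in word form):

```python
import random

WORD_S, CROSS_S, N_TRIALS = 2.0, 1.0, 100         # timings from the protocol

def build_schedule(words, n_trials=N_TRIALS, seed=0):
    """Build a (time, event, label) list for one recording session."""
    rng = random.Random(seed)
    schedule, t = [], 0.0
    for _ in range(n_trials):
        schedule.append((t, "word", rng.choice(words)))
        t += WORD_S                                # word shown for 2 s
        schedule.append((t, "fixation", "+"))
        t += CROSS_S                               # fixation cross for 1 s
    return schedule, t

# placeholder vocabulary standing in for the real stimulus library
schedule, total = build_schedule(["een", "twee", "drie", "vier", "vijf"])
```

With 100 trials of 2 s word presentation plus 1 s fixation, `total` comes out to the 300 seconds of recording time per participant stated above.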

Data acquisition

Participants were implanted with platinum-iridium sEEG electrode shafts (Microdeep intracerebral electrodes; Dixi Medical, Besançon, France) with a diameter of 0.8 mm, a contact length of 2 mm and an inter-contact distance of 1.5 mm. Each electrode shaft contained between 5 and 18 electrode contacts. Neural data were recorded using two or more Micromed SD LTM amplifiers (Micromed S.p.A., Treviso, Italy) with 64 channels each. Electrode contacts were referenced to a common white-matter contact. Data were recorded at either 1024 Hz or 2048 Hz and subsequently downsampled to 1024 Hz. We used the onboard microphone of the recording notebook (HP Probook) to record audio at 48 kHz. The audio data were subsequently pitch-shifted using LibRosa[63] to ensure our participants' anonymity. We used LabStreamingLayer[64] to synchronize the neural, audio and stimulus data.
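The downsampling step from 2048 Hz to the common 1024 Hz rate might look as follows; synthetic data stands in for real sEEG, and SciPy's `decimate` applies an anti-aliasing filter before subsampling:

```python
import numpy as np
from scipy.signal import decimate

fs_in, fs_out = 2048, 1024
x = np.random.randn(64, fs_in * 10)              # 64 channels, 10 s, synthetic
# decimate low-pass filters first, then keeps every q-th sample
x_ds = decimate(x, q=fs_in // fs_out, axis=1, zero_phase=True)
```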

Anatomical labeling

Electrode locations (Fig. 2) were detected using the img_pipe Python package[65] for anatomical labeling of intracranial electrodes. Within the package, for each participant, a pre-implantation anatomical T1-weighted Magnetic Resonance Imaging (MRI) scan was parcellated using Freesurfer (http://surfer.nmr.mgh.harvard.edu/), a post-implantation Computed Tomography (CT) scan was co-registered to the MRI scan and electrode contacts were manually localized. The anatomical location label of each contact was automatically extracted from the Destrieux atlas[66]-based parcellation.
Fig. 2

Electrode locations of each participant in the surface reconstruction of their native anatomical MRI. Each red sphere represents an implanted electrode channel.

The majority of electrodes are located in white matter (40.3%) and unknown areas (12.6%). Unknown areas are contacts that could not be labelled through the Freesurfer parcellation, for example a contact located just outside of the cortex. Beyond these, electrodes are predominantly located in the superior temporal sulcus, the hippocampus and the inferior parietal gyrus. See Fig. 3 for a full breakdown of anatomical regions and the number of electrodes implanted in those areas.
Fig. 3

Number of electrode contacts in cortical and subcortical areas across all participants. Colors indicate participants. Lengths of the bars show the number of electrodes in the specified region. Note the different x-axis scale for the white matter and unknown regions.


Data Records

The SingleWordProductionDutch-iBIDS dataset[67] is available at https://doi.org/10.17605/OSF.IO/NRGX6. The raw data files (XDF format) were converted to the Neurodata Without Borders (NWB; https://www.nwb.org/) format and organised in the iBIDS[68] data structure using custom Python scripts. The NWB format allows for compact storage of multiple data streams within a single file. It is compatible with the iBIDS structure, a community-driven effort to improve the transparency, reusability and reproducibility of iEEG data. The data are structured following the BIDS version 1.7.0 specification (https://bids-specification.readthedocs.io/en/stable/). The root folder contains metadata on the participants (participants.tsv), subject-specific data folders (e.g., sub-01) and a derivatives folder. The subject-specific folders contain .tsv files with information about the implanted electrode coordinates (_electrodes.tsv), the recording montage (_channels.tsv) and event markers (_events.tsv). The _ieeg.nwb file contains three raw data streams as time series (iEEG, Audio and Stimulus), which are located in the acquisition container. Descriptions of recording aspects and of specific .tsv columns are provided in correspondingly named .json files (e.g., participants.json). The derivatives folder contains, per subject, the pial surface cortical meshes of the right (_rh_pial.mat) and left (_lh_pial.mat) hemispheres, the brain anatomy (_brain.mgz), the Destrieux atlas (_aparc.a2009s+aseg.mgz) and a white matter atlas (_wmparc.mgz), all derived from the Freesurfer pipeline. The description column in the _channels.tsv file refers to the anatomical labels derived from the Destrieux atlas[66]. The iBIDS dataset passed a validation check using the BIDS Validator (https://bids-standard.github.io/bids-validator/) and manual inspection of each data file.
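Assuming a standard iBIDS layout, per-subject file paths can be assembled with a small helper; the `task-speech` label below is hypothetical, only the suffixes come from the description above:

```python
from pathlib import Path

def subject_files(root, sub):
    """Assemble per-subject paths; 'task-speech' is a hypothetical label,
    only the suffixes are taken from the dataset description."""
    base = Path(root) / sub / "ieeg"
    stem = f"{sub}_task-speech"
    return {
        "ieeg":       base / f"{stem}_ieeg.nwb",
        "electrodes": base / f"{sub}_electrodes.tsv",
        "channels":   base / f"{stem}_channels.tsv",
        "events":     base / f"{stem}_events.tsv",
    }

files = subject_files("SingleWordProductionDutch-iBIDS", "sub-01")
```

The _ieeg.nwb file can then be opened with pynwb's NWBHDF5IO, and the iEEG, Audio and Stimulus time series read from the file's acquisition container.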

Technical Validation

We validate the recorded data by demonstrating that a spectral representation of speech can be reconstructed from the neural recordings using a simple linear regression model. This analysis is similar to a previous analysis in ECoG[69].

Checking for acoustic contamination

Acoustic contamination of neural recordings has been reported by Roussel et al.[70]. To check the presented dataset for acoustic contamination of the neural time series, we apply the method provided by the authors and correlate spectral energy between audio and neural data. We do not find any significant correlations (p > 0.01) on the diagonal of the contamination matrix for any of the participants. The risk of falsely rejecting the hypothesis of no contamination is therefore smaller than 1%.
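A minimal sketch of such a contamination check, on synthetic stand-in signals (both generated at the neural rate for simplicity; the real audio would first be brought onto a matching frame grid): spectrograms of the audio and of one neural channel are computed on the same time grid, and every pair of frequency bins is correlated. Contamination would appear as elevated correlations on the diagonal. This is a simplified re-implementation, not the authors' exact code:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 1024
rng = np.random.default_rng(0)
audio = rng.standard_normal(fs * 30)             # stand-in for the audio track
neural = rng.standard_normal(fs * 30)            # stand-in for one sEEG channel

f, _, S_audio = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
_, _, S_neural = spectrogram(neural, fs=fs, nperseg=256, noverlap=128)

# correlate log-power over time for every pair of frequency bins
A = np.log(S_audio + 1e-12)
N = np.log(S_neural + 1e-12)
n_bins = len(f)
contamination = np.corrcoef(A, N)[:n_bins, n_bins:]
diag = np.diag(contamination)                    # audio bin i vs neural bin i
```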

Feature extraction

We extract the Hilbert envelope of the broadband high-frequency activity (70–170 Hz) for each contact using an IIR bandpass filter (filter order 4). To attenuate the first two harmonics of the 50 Hz line noise, we use two IIR bandstop filters (filter order 4). All filters are applied forward and backward so that no phase shift is introduced. We average the envelope over 50 ms windows with a frameshift of 10 ms. To include temporal information in the decoding process, non-overlapping neighboring windows up to 200 ms into the past and future are stacked. Features are normalized to zero mean and unit variance using the mean and standard deviation of the training data; the same transform is then applied to the evaluation data. The audio data are first downsampled to 16 kHz. To extract audio features, we subsequently calculate the Short-Time Fourier Transform in windows of 50 ms with a frameshift of 10 ms. As the frameshift is the same for neural and audio data, there is a one-to-one correspondence between audio and neural feature vectors. The resulting spectrogram is then compressed into a log-mel representation[71] using 23 triangular filter banks.
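On synthetic data, the neural feature pipeline can be sketched as follows; the context-stacking step is one plausible reading of "non-overlapping neighboring windows up to 200 ms", and the authors' released code may differ in detail:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1024
rng = np.random.default_rng(0)
x = rng.standard_normal((fs * 10, 8))            # 10 s of synthetic data, 8 channels

# 4th-order band-pass for broadband high-frequency activity (70-170 Hz)
b, a = butter(4, [70, 170], btype="bandpass", fs=fs)
y = filtfilt(b, a, x, axis=0)                    # forward-backward: zero phase

# two 4th-order band-stop filters at the first two 50 Hz harmonics
for f0 in (100, 150):
    b, a = butter(4, [f0 - 2, f0 + 2], btype="bandstop", fs=fs)
    y = filtfilt(b, a, y, axis=0)

env = np.abs(hilbert(y, axis=0))                 # Hilbert envelope

# average the envelope over 50 ms windows with a 10 ms frameshift
win, shift = int(0.05 * fs), int(0.01 * fs)
n_frames = (len(env) - win) // shift + 1
frames = np.stack([env[i * shift:i * shift + win].mean(axis=0)
                   for i in range(n_frames)])

# stack non-overlapping 50 ms neighbours up to 200 ms into past and future
ctx, step = 4, win // shift                      # 4 x 50 ms = 200 ms of context
feats = np.array([np.hstack([frames[i + k * step] for k in range(-ctx, ctx + 1)])
                  for i in range(ctx * step, n_frames - ctx * step)])
```

Each feature vector then contains 9 context windows per channel (4 past, the current one, 4 future).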

Decoding model

To reduce the dimensionality of our decoding problem, we compress the feature space to the first 50 principal components. Principal components are estimated for each fold individually on the training data. The first 50 principal components explain between 29% and 76% of the variance, depending on the participant. We reconstruct the log-mel spectrogram from the high-frequency features using linear regression models, in which the high-frequency feature vector is multiplied with a weight matrix to reconstruct the log-mel spectrogram. The weights are determined using a least-squares approach. As a baseline, we chose 1000 random split points, each at least 10% away from the beginning and end of the data, and swapped the audio spectrogram at this split point. This procedure, also called a random circular shift, maintains the temporal structure and auto-regressive properties of speech. We then correlated these shifted spectrograms with the original spectrogram to estimate a distribution of chance correlation coefficients.
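A minimal sketch of the decoding and baseline procedure on synthetic data; scikit-learn stands in for the authors' implementation, the synthetic features are given a decaying spectrum so that 50 components capture most of the variance, and `np.roll` plays the role of the random circular shift:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# synthetic stand-ins: "neural features" with decaying per-column scale,
# and a linearly related "log-mel spectrogram" target
X = rng.standard_normal((1000, 72)) * np.linspace(1, 0.01, 72)
Y = X @ rng.standard_normal((72, 23)) + 0.1 * rng.standard_normal((1000, 23))

tr, te = np.arange(800), np.arange(800, 1000)    # one train/eval split

pca = PCA(n_components=50).fit(X[tr])            # fit on training data only
model = LinearRegression().fit(pca.transform(X[tr]), Y[tr])
Y_hat = model.predict(pca.transform(X[te]))

# mean Pearson correlation over the 23 spectral bins
r = np.mean([np.corrcoef(Y[te][:, j], Y_hat[:, j])[0, 1] for j in range(23)])

# chance level: circularly shift the target at random split points that lie
# at least 10% away from either end, then correlate again
lo, hi = int(0.1 * len(te)), int(0.9 * len(te))
chance = [np.mean([np.corrcoef(np.roll(Y[te], s, axis=0)[:, j], Y_hat[:, j])[0, 1]
                   for j in range(23)])
          for s in rng.integers(lo, hi, size=100)]
```

On this synthetic linear problem the true correlation clearly exceeds the chance distribution, mirroring the comparison made in the paper.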

Waveform reconstruction

The log-mel spectrogram no longer contains phase information, so an audio waveform cannot be reconstructed from it directly. We utilize the method by Griffin and Lim[72] for waveform reconstruction, in which the phase is initialized with noise and then iteratively refined. For a good algorithmic description of the method, see[73].
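A bare-bones version of the Griffin-Lim iteration, using SciPy's STFT pair on a toy magnitude spectrogram; the magnitude is kept fixed while the phase is alternately re-estimated until it becomes self-consistent. (Inverting the log-mel compression back to a linear-frequency magnitude, which the full pipeline requires, is omitted here.)

```python
import numpy as np
from scipy.signal import stft, istft

def griffin_lim(mag, n_iter=32, nperseg=256, seed=0):
    """Iteratively estimate a phase consistent with the magnitude `mag`."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(mag.shape))   # random initial phase
    for _ in range(n_iter):
        _, x = istft(mag * phase, nperseg=nperseg)       # waveform with current phase
        _, _, Z = stft(x, nperseg=nperseg)               # re-analyse that waveform
        phase = np.exp(1j * np.angle(Z[:, :mag.shape[1]]))
    _, x = istft(mag * phase, nperseg=nperseg)
    return x

# toy magnitude spectrogram of a 1 s, 440 Hz tone at 16 kHz
fs = 16000
t = np.arange(fs) / fs
_, _, Z = stft(np.sin(2 * np.pi * 440 * t), nperseg=256)
wave = griffin_lim(np.abs(Z))
```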

Results

All results are obtained in a non-shuffled 10-fold cross validation in which 9 folds are used for training and the remaining fold is used for evaluation. This process is repeated until each fold has been used for evaluation exactly once.
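In code, this scheme corresponds to scikit-learn's KFold with shuffling disabled, so that each fold is a contiguous block of frames:

```python
import numpy as np
from sklearn.model_selection import KFold

n_frames = 1000                                   # stand-in for the feature frames
folds = list(KFold(n_splits=10, shuffle=False).split(np.arange(n_frames)))
# each contiguous block of 100 frames serves as the evaluation set exactly once
```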

Spectrograms can be reconstructed

We evaluate the spectral reconstructions in terms of the Pearson correlation coefficient between the spectral coefficients of the original speech spectrogram and the reconstructed spectrogram. For all 10 participants, speech spectrograms can be reconstructed from the neural data using linear regression (Fig. 4a) with higher correlations than all 1000 randomizations. Reconstruction results were consistent across all 23 mel-scaled spectral coefficients (Fig. 4b) and consistently above all randomizations in all frequency ranges. Inspecting the spectrograms further (Fig. 5a), it can be seen that the results are mostly driven by the accurate reconstruction of speech versus silence; the spectral variations within speech are not captured by the linear regression approach. The Pearson correlation is not a perfect evaluation metric, as this lack of detail during speech does not have a large impact on the score. We utilize the Pearson correlation here because a better metric has yet to be established. By providing this open dataset, we hope that researchers developing more advanced metrics, such as the Spectro-Temporal Glimpsing Index (STGI)[74] or the extended Short-Time Objective Intelligibility (eSTOI)[75], will have the means to address this problem. Similarly, we hope that the dataset will be useful for developing and evaluating models that improve the quality of the reconstructed speech, such as models that are more informed about speech processes (e.g. unit selection[29]) or neural network approaches with enough trainable parameters to produce high-quality speech[26,27,76-78].
Fig. 4

Results for the spectral reconstruction. (a) Mean correlation coefficients for each participant across all spectral bins and folds. Reconstruction of the spectrogram is possible for all 10 participants. Whiskers indicate standard deviations. Results of individual folds are illustrated by points. (b) Mean correlation coefficients for each spectral bin. Correlations are stable across all spectral bins. Shaded areas show standard errors.

Fig. 5

Spectrograms (a) and waveforms (b) of the original (top) and reconstructed (bottom) audio. The example contains five individual words from sub-06. While the linear regression approach captures speech and silent intervals very accurately, the finer spectral dynamics within speech are lost.


Waveforms of speech can be reconstructed

Using the method by Griffin and Lim[72], we can recreate an audio waveform from the reconstructed spectrograms (Fig. 5b). The timing between original and reconstructed waveforms is very similar, but listening to the audio reveals that a substantial part of the audio quality is lost due to the synthesis approach. This is particularly clear when listening to the audio recreated from the original spectrograms (_orig_synthesized.wav). State-of-the-art synthesis approaches, such as WaveGlow[79] or WaveNet[80], may be applied to the current dataset to evaluate an improvement in reconstruction quality.

Usage Notes

iBIDS data

Scripts to handle the data can be obtained from our Github repository: https://github.com/neuralinterfacinglab/SingleWordProductionDutch.

Data loading and feature extraction

Neural traces, synchronized audio and experimental markers can be loaded using the provided extract_features.py script. High-frequency features are subsequently extracted and aligned to logarithmic mel-scaled spectrograms. Electrode channel names are also loaded.

Spectrogram reconstruction & waveform synthesis

Reconstruction of the spectrogram as well as the resynthesis to an audio waveform is performed in the reconstruction_minimal.py script.

Anatomical data

Electrode locations can be found in the participant folder (_electrodes.tsv) and can then be visualized using the cortical meshes (_lh_pial.mat and _rh_pial.mat) within the derivatives folder. These mesh files contain vertex coordinates and triangles, the latter described by indices corresponding to vertex numbers. The T1-weighted brain anatomy, the Destrieux parcellation and a white matter parcellation can also be found in the derivatives folder.
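A sketch of reading such a mesh file with SciPy; the key names used below are an assumption (inspect the keys of the real _lh_pial.mat / _rh_pial.mat files before relying on them), so a synthetic file is written and read back here:

```python
import numpy as np
from scipy.io import loadmat, savemat

# 'vertices'/'triangles' are assumed key names, not confirmed by the dataset
verts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
tris = np.array([[0, 1, 2]])                      # indices into the vertex list
savemat("demo_pial.mat", {"vertices": verts, "triangles": tris})

mesh = loadmat("demo_pial.mat")
v, t = mesh["vertices"], mesh["triangles"]        # (n_vertices, 3), (n_tris, 3)
```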
Measurement(s): Brain activity
Technology Type(s): Stereotactic electroencephalography
Sample Characteristic - Organism: Homo sapiens
Sample Characteristic - Environment: Epilepsy monitoring center
Sample Characteristic - Location: The Netherlands
References (57 in total; first 10 shown below)

Review 1.  The Potential for a Speech Brain-Computer Interface Using Chronic Electrocorticography.

Authors:  Qinwan Rabbani; Griffin Milsap; Nathan E Crone
Journal:  Neurotherapeutics       Date:  2019-01       Impact factor: 7.620

2.  Speech synthesis from ECoG using densely connected 3D convolutional neural networks.

Authors:  Miguel Angrick; Christian Herff; Emily Mugler; Matthew C Tate; Marc W Slutzky; Dean J Krusienski; Tanja Schultz
Journal:  J Neural Eng       Date:  2019-03-04       Impact factor: 5.379

3.  Decoding spoken words using local field potentials recorded from the cortical surface.

Authors:  Spencer Kellis; Kai Miller; Kyle Thomson; Richard Brown; Paul House; Bradley Greger
Journal:  J Neural Eng       Date:  2010-09-01       Impact factor: 5.379

4.  Automatic parcellation of human cortical gyri and sulci using standard anatomical nomenclature.

Authors:  Christophe Destrieux; Bruce Fischl; Anders Dale; Eric Halgren
Journal:  Neuroimage       Date:  2010-06-12       Impact factor: 6.556

5.  Online EEG Classification of Covert Speech for Brain-Computer Interfacing.

Authors:  Alborz Rezazadeh Sereshkeh; Robert Trott; Aurélien Bricout; Tom Chau
Journal:  Int J Neural Syst       Date:  2017-06-13       Impact factor: 5.866

6.  Real-time classification of auditory sentences using evoked cortical activity in humans.

Authors:  David A Moses; Matthew K Leonard; Edward F Chang
Journal:  J Neural Eng       Date:  2018-01-30       Impact factor: 5.379

7.  Speech synthesis from neural decoding of spoken sentences.

Authors:  Gopala K Anumanchipalli; Josh Chartier; Edward F Chang
Journal:  Nature       Date:  2019-04-24       Impact factor: 69.504

Review 8.  Array programming with NumPy.

Authors:  Charles R Harris; K Jarrod Millman; Stéfan J van der Walt; Ralf Gommers; Pauli Virtanen; David Cournapeau; Eric Wieser; Julian Taylor; Sebastian Berg; Nathaniel J Smith; Robert Kern; Matti Picus; Stephan Hoyer; Marten H van Kerkwijk; Matthew Brett; Allan Haldane; Jaime Fernández Del Río; Mark Wiebe; Pearu Peterson; Pierre Gérard-Marchant; Kevin Sheppard; Tyler Reddy; Warren Weckesser; Hameer Abbasi; Christoph Gohlke; Travis E Oliphant
Journal:  Nature       Date:  2020-09-16       Impact factor: 49.962

9.  Imagined speech can be decoded from low- and cross-frequency intracranial EEG features.

Authors:  Timothée Proix; Jaime Delgado Saa; Andy Christen; Stephanie Martin; Brian N Pasley; Robert T Knight; Xing Tian; David Poeppel; Werner K Doyle; Orrin Devinsky; Luc H Arnal; Pierre Mégevand; Anne-Lise Giraud
Journal:  Nat Commun       Date:  2022-01-10       Impact factor: 17.694

10.  Real-time synthesis of imagined speech processes from minimally invasive recordings of neural activity.

Authors:  Miguel Angrick; Maarten C Ottenhoff; Lorenz Diener; Darius Ivucic; Gabriel Ivucic; Sophocles Goulis; Jeremy Saal; Albert J Colon; Louis Wagner; Dean J Krusienski; Pieter L Kubben; Tanja Schultz; Christian Herff
Journal:  Commun Biol       Date:  2021-09-23
