Po-Chih Kuo, Yi-Li Tseng, Karl Zilles, Summit Suen, Simon B. Eickhoff, Juin-Der Lee, Philip E. Cheng, Michelle Liou.
Abstract
There is growing interest in functional magnetic resonance imaging (fMRI) studies of connectivity networks in the brain when subjects are exposed to natural sensory stimulation. Because of the complicated coupling between spontaneous and evoked brain activity under real-world stimulation, there is no straightforward mapping between the experimental inputs and the corresponding brain responses. The dataset contains auditory fMRI scans and T1-weighted anatomical scans acquired under eyes-closed and eyes-open conditions. Within each scanning condition, subjects were presented with 12 different sound clips, consisting of human voices followed by animal vocalizations. The dataset is meant to be used to assess brain dynamics and connectivity networks under natural sound stimulation; it also allows for empirical investigation of changes in fMRI responses between the eyes-closed and eyes-open conditions, between animal vocalizations and human voices, and between the 12 different sound clips during auditory stimulation. The dataset is a supplement to the research findings in the paper "Brain dynamics and connectivity networks under natural auditory stimulation" published in NeuroImage.
Keywords: Auditory; Connectivity networks; Real-world; fMRI
Year: 2019 PMID: 31646154 PMCID: PMC6804394 DOI: 10.1016/j.dib.2019.104411
Source DB: PubMed Journal: Data Brief ISSN: 2352-3409
Fig. 1 Stimuli and experimental paradigm: (A) The 8-min task began with an eyes-closed condition (4 min duration) followed by an eyes-open condition (4 min duration). The sounds played to the subjects under each condition comprised human voices and animal vocalizations. (B) The sound clips comprised six types of human voices, including a baby prattling, a woman crying, a man guffawing, a woman laughing, a man sneezing, and a crowd chattering. These were followed by six types of animal vocalizations, including a bird chirping, a cow mooing, a dog barking, a sheep bleating, various farm sounds, and a rooster crowing. The duration of each sound and its order of appearance were randomly determined for each subject. (C) No significant differences were observed between human and animal sounds in terms of acoustic features, for example harmonics-to-noise ratio (HNR), intensity, and pitch.
Specifications Table
| Subject | Neuroscience |
| Specific subject area | Neuroimaging |
| Type of data | Image |
| How data were acquired | Data were acquired using a 3T MAGNETOM Skyra scanner (Siemens Healthcare, Erlangen, Germany) and a standard 20-channel head-neck coil. |
| Data format | Raw |
| Parameters for data collection | EPI images: TR = 2000 ms; TE = 30 ms; flip angle = 84°; 35 slices with slice thickness = 3.4 mm; FOV = 192 mm; voxel size = 3 × 3 × 3.74 mm. |
| Description of data collection | Data were collected from 40 subjects. The subjects were instructed to listen passively to the sound stimuli under the eyes-closed condition and then under the eyes-open condition. Each condition comprised human voices followed by animal vocalizations. |
| Data source location | Institution: Institute of Statistical Science, Academia Sinica; City: Taipei; Country: Taiwan |
| Data accessibility | Repository name: Mendeley Data |
| Related research article | Authors' names: Po-Chih Kuo, Yi-Li Tseng, Karl Zilles, Summit Suen, Simon B. Eickhoff, Juin-Der Lee, Philip E. Cheng, Michelle Liou. Title: Brain dynamics and connectivity networks under natural auditory stimulation. Journal: NeuroImage. |
Value of the Data
- The dataset can be used for assessing reproducible fMRI time courses under real-life situations.
- The dataset can be used for investigating connectivity networks under natural sound stimulation.
- The dataset can be used for decoding mental states while subjects hear animal vocalizations and human voices.
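As a quick orientation for users of the data, the acquisition parameters reported in the specifications table imply the expected volume counts per run. The sketch below (plain Python; variable names are illustrative, and it assumes each 4-min condition was scanned continuously at the stated TR of 2000 ms) derives those quantities:

```python
# Derive basic quantities from the reported EPI parameters.
# All numeric values come from the specifications table above;
# the variable names themselves are illustrative only.

TR_S = 2.0            # repetition time: TR = 2000 ms
RUN_MINUTES = 4       # minutes per condition (eyes-closed / eyes-open)
N_CONDITIONS = 2      # eyes-closed followed by eyes-open (8-min task)

# Number of EPI volumes acquired per condition, assuming continuous scanning.
volumes_per_condition = int(RUN_MINUTES * 60 / TR_S)

# Total volumes across the full 8-min task.
total_volumes = volumes_per_condition * N_CONDITIONS

# Nominal voxel volume from the stated voxel size (3 x 3 x 3.74 mm).
voxel_mm3 = 3 * 3 * 3.74

print(volumes_per_condition, total_volumes, round(voxel_mm3, 2))
```

Under these assumptions each condition yields 120 volumes, for 240 volumes over the whole task; actual file lengths should be checked against the downloaded NIfTI headers.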