
A cross-linguistic study of speech modulation spectra.

Léo Varnet, Maria Clemencia Ortiz-Barajas, Ramón Guevara Erra, Judit Gervain, Christian Lorenzi.

Abstract

Languages show systematic variation in their sound patterns and grammars. Accordingly, they have been classified into typological categories such as stress-timed vs. syllable-timed, or Head-Complement (HC) vs. Complement-Head (CH). To date, however, it has remained unclear how these linguistic properties are reflected in the acoustic characteristics of speech across languages. In the present study, the amplitude-modulation (AM) and frequency-modulation (FM) spectra of 1797 utterances in ten languages were analyzed. Overall, the spectra were similar in shape across languages, but significant effects of linguistic factors were observed on the AM spectra. These differences were magnified in a perceptually plausible representation based on the modulation index (a measure of the signal-to-noise ratio at the output of a logarithmic modulation filterbank): the maximum value of this index distinguished HC from CH languages, with the exception of Turkish, while the exact frequency at which the maximum occurred differed between stress-timed and syllable-timed languages. An additional study conducted on a semi-spontaneous speech corpus showed that these differences persist for a larger number of speakers but disappear for less constrained, semi-spontaneous speech. These findings reveal that broad linguistic categories are reflected in the temporal modulation features of different languages, although this may depend on speaking style.
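For readers who want to experiment with this kind of analysis, the sketch below illustrates the general idea of a modulation index computed at the output of a logarithmic modulation filterbank. It is not the authors' pipeline: the function name am_modulation_index, the Hilbert-envelope extraction, the ~400 Hz envelope rate, the 0.5-64 Hz log-spaced band edges, and the second-order Butterworth filters are all illustrative assumptions.

import numpy as np
from scipy.signal import hilbert, butter, sosfiltfilt, resample_poly

def am_modulation_index(x, fs, fmin=0.5, fmax=64.0, n_bands=20):
    """Modulation index of waveform `x` in log-spaced AM bands
    (a sketch, not the paper's exact method)."""
    # Temporal envelope: magnitude of the analytic signal.
    env = np.abs(hilbert(x))
    # Downsample the envelope to ~400 Hz so the very low modulation
    # bands stay numerically well-conditioned (rate is an assumption).
    down = max(1, int(round(fs / 400.0)))
    env = resample_poly(env, 1, down)
    efs = fs / down
    dc = env.mean()  # DC component of the envelope

    # Logarithmically spaced band edges and geometric centre frequencies.
    edges = np.geomspace(fmin, fmax, n_bands + 1)
    centres = np.sqrt(edges[:-1] * edges[1:])

    mi = np.empty(n_bands)
    for i in range(n_bands):
        # Second-order Butterworth band-pass on the DC-removed envelope.
        sos = butter(2, [edges[i], edges[i + 1]], btype="bandpass",
                     fs=efs, output="sos")
        band = sosfiltfilt(sos, env - dc)
        # Band RMS relative to the envelope DC: a signal-to-noise-like
        # quantity at the output of each modulation filter.
        mi[i] = np.sqrt(np.mean(band ** 2)) / dc
    return centres, mi

# Quick check on a synthetic signal: noise with an imposed 4-Hz AM
# should yield a modulation-index peak near 4 Hz.
fs = 16000
t = np.arange(0, 2.0, 1.0 / fs)
x = (1.0 + 0.8 * np.sin(2 * np.pi * 4.0 * t)) * np.random.randn(t.size)
f, mi = am_modulation_index(x, fs)
print(f[np.argmax(mi)])  # approximately 4 Hz

In an analysis closer to the paper's, the waveform would typically be split into audio-frequency sub-bands (e.g., with a gammatone filterbank) before envelope extraction, and the FM spectrum would additionally require tracking instantaneous frequency.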

Year:  2017        PMID: 29092595     DOI: 10.1121/1.5006179

Source DB:  PubMed          Journal:  J Acoust Soc Am        ISSN: 0001-4966            Impact factor:   1.840


Related articles (22 in total)

1.  What you hear first, is what you get: Initial metrical cue presentation modulates syllable detection in sentence processing.

Authors:  Anna Fiveash; Simone Falk; Barbara Tillmann
Journal:  Atten Percept Psychophys       Date:  2021-03-11       Impact factor: 2.199

2.  Amplitude modulation transfer functions reveal opposing populations within both the inferior colliculus and medial geniculate body.

Authors:  Duck O Kim; Laurel Carney; Shigeyuki Kuwada
Journal:  J Neurophysiol       Date:  2020-09-09       Impact factor: 2.714

3.  Dual Coding of Frequency Modulation in the Ventral Cochlear Nucleus.

Authors:  Nihaad Paraouty; Arkadiusz Stasiak; Christian Lorenzi; Léo Varnet; Ian M Winter
Journal:  J Neurosci       Date:  2018-03-29       Impact factor: 6.167

4.  Speech rhythms and their neural foundations. (Review)

Authors:  David Poeppel; M Florencia Assaneo
Journal:  Nat Rev Neurosci       Date:  2020-05-06       Impact factor: 34.870

5.  Noise-Sensitive But More Precise Subcortical Representations Coexist with Robust Cortical Encoding of Natural Vocalizations.

Authors:  Samira Souffi; Christian Lorenzi; Léo Varnet; Chloé Huetz; Jean-Marc Edeline
Journal:  J Neurosci       Date:  2020-05-22       Impact factor: 6.167

6.  Neural dynamics differentially encode phrases and sentences during spoken language comprehension.

Authors:  Fan Bai; Antje S Meyer; Andrea E Martin
Journal:  PLoS Biol       Date:  2022-07-14       Impact factor: 9.593

7.  Modulation transfer functions for audiovisual speech.

Authors:  Nicolai F Pedersen; Torsten Dau; Lars Kai Hansen; Jens Hjortkjær
Journal:  PLoS Comput Biol       Date:  2022-07-19       Impact factor: 4.779

8.  Differential activation of a frontoparietal network explains population-level differences in statistical learning from speech.

Authors:  Joan Orpella; M Florencia Assaneo; Pablo Ripollés; Laura Noejovich; Diana López-Barroso; Ruth de Diego-Balaguer; David Poeppel
Journal:  PLoS Biol       Date:  2022-07-06       Impact factor: 9.593

9.  The Intelligibility of Time-Compressed Speech Is Correlated with the Ability to Listen in Modulated Noise.

Authors:  Robin Gransier; Astrid van Wieringen; Jan Wouters
Journal:  J Assoc Res Otolaryngol       Date:  2022-03-07

10.  Eye movements during text reading align with the rate of speech production.

Authors:  Benjamin Gagl; Klara Gregorova; Julius Golch; Stefan Hawelka; Jona Sassenhagen; Alessandro Tavano; David Poeppel; Christian J Fiebach
Journal:  Nat Hum Behav       Date:  2021-12-06
