
An acoustic key to eight languages/dialects: Factor analyses of critical-band-filtered speech.

Kazuo Ueda, Yoshitaka Nakajima.

Abstract

The peripheral auditory system functions like a frequency analyser, often modelled as a bank of non-overlapping band-pass filters called critical bands; 20 bands are necessary to simulate the frequency resolution of the ear within the ordinary frequency range of speech (up to 7,000 Hz). A far smaller number of filters seemed sufficient, however, to re-synthesise intelligible speech sentences from the power fluctuations of the speech signals passing through them; nevertheless, the number and frequency ranges of the frequency bands needed for efficient speech communication were yet unknown. We derived four common frequency bands (approximately 50–540, 540–1,700, 1,700–3,300, and above 3,300 Hz) from factor analyses of spectral fluctuations in eight different spoken languages/dialects. The analyses robustly led to three factors common to all languages investigated: the low & mid-high factor, related to the two separate frequency ranges of 50–540 and 1,700–3,300 Hz; the mid-low factor, related to the range of 540–1,700 Hz; and the high factor, related to the range above 3,300 Hz. This consistency across different languages/dialects suggests a language universal.


Year:  2017        PMID: 28198405      PMCID: PMC5309770          DOI: 10.1038/srep42468

Source DB:  PubMed          Journal:  Sci Rep        ISSN: 2045-2322            Impact factor:   4.379


Plomp and colleagues[1-4] found that two acoustic principal components were enough to represent Dutch steady vowels. They extracted principal components from level profiles obtained from a bank of bandpass filters with bandwidths similar to those of critical bands, representing the frequency-analysis properties of the auditory periphery, i.e., the basilar membrane[3,5-9]. They found a clear correspondence between the principal components and conventional formant analyses[2]. In the present investigation, the principal-component-analysis (PCA) technique pioneered by Plomp and his colleagues, and further pursued by Zahorian and Rothenberg[10], was extended in two respects: first, it was applied to a database[11] of complete spoken sentences (58–200, depending on the language) rather than steady vowels, and second, the sentences were spoken in eight different languages/dialects, i.e., American English, British English, Cantonese, French, German, Japanese, Mandarin, and Spanish, by 10–20 speakers per language (Table 1; Supplementary Fig. S1 shows a block diagram of the analyses).
Table 1

Analysed speech samples.

Language/dialect  | Sentences | Speakers (F/M) | Overall duration of utterances (s) | Mean duration per utterance (s)
American English  | 86        | 10/10          | 4,123.2                            | 2.4
British English   | 200       | 5/5            | 4,038.5                            | 2.0
Cantonese         | 58        | 5/5            | 1,131.7                            | 2.0
French            | 200       | 5/5            | 3,533.2                            | 1.8
German            | 200       | 5/5            | 3,707.0                            | 1.9
Japanese          | 200       | 5/5            | 5,041.3                            | 2.5
Mandarin          | 78        | 5/5            | 1,834.9                            | 2.4
Spanish           | 136       | 5/5            | 2,918.1                            | 2.1
Total             | 1,158     | 45/45          | 26,327.8                           | 2.1

Speech samples were extracted from the database[11].
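As a quick consistency check on Table 1 (assuming, as stated in Methods, that every speaker uttered every sentence), the mean duration per utterance should equal the overall duration divided by sentences × speakers:

```python
# Consistency check for three rows of Table 1: mean duration per utterance
# = overall duration / (number of sentences x number of speakers).
rows = {
    "American English": (86, 20, 4123.2, 2.4),
    "British English":  (200, 10, 4038.5, 2.0),
    "Japanese":         (200, 10, 5041.3, 2.5),
}
for name, (sentences, speakers, total_s, mean_s) in rows.items():
    assert round(total_s / (sentences * speakers), 1) == mean_s
```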

Results

Four blocks of critical bands, i.e., four frequency bands, consistently appeared in both the three-factor (Fig. 1a) and the four-factor (Fig. 1b) results: one of the factors obtained in the three-factor analysis was bimodal, so both analyses yielded four frequency bands. Two-, five-, and six-factor analyses gave obscure results that were inconsistent among the languages/dialects (Supplementary Fig. S2). The boundary frequencies dividing the whole frequency range into the four frequency bands are represented by the vertical orange lines in Fig. 1. Shifting the cut-off frequencies of the filter bank upwards by half a critical band (see Methods: Signal processing and analyses) had negligible effects on the three-factor results (Fig. 1a; compare the broken and continuous curves of the same colours). The three-factor results (Fig. 1a) exhibited greater agreement across the different languages/dialects than the four-factor results (Fig. 1b). The cumulative contributions, representing the proportions of variance explained by the specified combinations of factors, were about 7% higher in the four-factor analysis (Fig. 1b), but the locations of the factor peaks were very similar in the two analyses. The discrepancies between languages/dialects observed in the lowest frequency band of the four-factor analysis are likely to have been caused by the inclusion of samples spoken by speakers with relatively high fundamental frequencies, which could make the frequency components too sparse in the spectra. Including more than four factors resulted in cumulative contributions larger than 50%; however, the added factors were mainly consumed in capturing resolved harmonics in the low-frequency region (Supplementary Fig. S2d,e), which was covered by the lower-frequency peak of the bimodal factor (the low & mid-high factor) in the three-factor results.
Thus, the three-factor results seem optimal for our present purpose: to find the number and frequency ranges of the frequency bands needed for efficient speech communication.
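The cumulative contribution compared above is standard PCA bookkeeping: for a p × p correlation matrix the total variance equals p, so the proportion of variance explained by the top k components (and by any orthogonal rotation of them, such as varimax) is the sum of the k largest eigenvalues divided by p. A minimal sketch:

```python
# Cumulative contribution of the top-k principal components of a
# correlation matrix (an illustrative sketch of standard PCA bookkeeping,
# not the authors' code). The trace of a p x p correlation matrix is p,
# so the explained proportion is sum of the k largest eigenvalues / p.
import numpy as np

def cumulative_contribution(corr, k):
    eigval = np.linalg.eigvalsh(corr)[::-1]  # eigenvalues, descending
    return eigval[:k].sum() / corr.shape[0]
```

Because varimax rotation is orthogonal, it redistributes variance among the k retained factors without changing this total, which is why three- and four-factor solutions can be compared on the same footing.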
Figure 1

Factor loadings plotted against the centre frequency of critical bands.

(a) Three-factor analysis. (b) Four-factor analysis. The thick lines represent factor loadings derived from the merged data across the eight languages/dialects; the colours of the thick lines distinguish factors. The thin lines show the results for individual languages/dialects without distinguishing factors: American English (pink), British English (dark green), Cantonese (purple), French (sky blue), German (black), Japanese (blue), Mandarin (yellow), and Spanish (olive green). The broken lines are the counterparts of the solid lines of the same colours, obtained with a filter bank shifted up by half a critical bandwidth (Supplementary Table S1). The cumulative contributions ranged from 33 to 41% (a) and from 40 to 47% (b), depending on the analysed data set and the filters used. One division of the horizontal axis corresponds to 0.5 critical bandwidth, with the two sets of centre frequencies alternating. Orange vertical lines represent schematic frequency boundaries estimated from the crossover frequencies of the curves.

Discussion

It is worth noting that spoken sentences can be recognised even when they are conveyed only by the power fluctuations of four frequency bands, without any temporal fine structure, i.e., through noise-vocoded speech[12-18]. The number and location of these frequency bands (Fig. 1) are suggested both by the present physical analysis and by perceptual studies showing high intelligibility of noise-vocoded speech filtered into nearly the same[18] or very similar[12-14] frequency bands (Supplementary Audios S1 and S2, and Fig. S3). The four-band division must have some value in speech processing if it can be applied to several languages/dialects from different language families. Our own observations showed that the frequency boundaries or factors derived with the present statistical technique were suitable for synthesising noise-vocoded speech in Japanese[18,19] and German[18]. There also seems to be a connection between the present frequency boundaries and past speech-filtering investigations. The second boundary frequency, 1,700 Hz, lies near the centre of the range of crossover frequencies (typically 1,550–1,900 Hz)[20-23] derived as the balancing point of intelligibility between highpass and lowpass filtering of speech. It is also to be noted that the frequency response of the telephone system is standardised to cover the range from 300 to 3,400 Hz. This range covers at least a part of each frequency band in Fig. 1, presumably enabling the analogue telephone line to convey speech sounds all over the world with minimum cost and reasonable intelligibility. We designated the factors obtained in the three-factor analysis as the low & mid-high factor, which appeared in two frequency ranges, around 300 and around 2,200 Hz; the mid-low factor, which appeared around 1,100 Hz; and the high factor, which encompasses the range above 3,300 Hz.
These factors appeared with surprising resemblance across the eight languages/dialects from three different language families, and thus they are strong candidates for universal components of spoken languages/dialects, i.e., an acoustic language universal. An initial extension of the present analysis to infant utterances has been explored by a research team including the present authors[24]. One way to learn how the factors relate to speech perception is to examine the correspondence between factor scores and phonemic categories. This line of investigation, on speech sounds in British English, has been started, as described in a separate paper.
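The noise-vocoding discussed above replaces the temporal fine structure within each band with noise while preserving the band's slow power fluctuations. A minimal four-band vocoder sketch, assuming the paper's boundaries (with the top band capped at 6,400 Hz to stay within a 16-kHz sampling rate) and Butterworth filters rather than the stimulus-generation filters of the cited studies:

```python
# Minimal four-band noise vocoder (an illustrative sketch, not the code
# used in the cited perceptual studies). Each band's envelope modulates
# band-limited noise; temporal fine structure is discarded.
import numpy as np
from scipy.signal import butter, sosfilt
from scipy.ndimage import gaussian_filter1d

FS = 16_000  # sampling rate of the database used in the paper

# The four bands derived in the paper; 6,400 Hz upper cap is an assumption.
BANDS = [(50, 540), (540, 1700), (1700, 3300), (3300, 6400)]

def noise_vocode(x, fs=FS, smooth_ms=5.0, rng=None):
    """Return a noise-vocoded version of signal x."""
    rng = np.random.default_rng(rng)
    noise = rng.standard_normal(len(x))
    out = np.zeros(len(x))
    for lo, hi in BANDS:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        # Band envelope: square, smooth (sigma in samples), square-root.
        env = np.sqrt(gaussian_filter1d(sosfilt(sos, x) ** 2,
                                        smooth_ms / 1000 * fs))
        out += env * sosfilt(sos, noise)  # envelope-modulated band noise
    return out
```

Despite discarding all fine structure, stimuli of this kind remain highly intelligible when the band boundaries approximate those in Fig. 1, which is the perceptual evidence the Discussion appeals to.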

Methods

The following facts justify the use of the PCA-based technique in the present investigation. To recognise speech in quiet, it is not always necessary to fully utilise the frequency-resolution properties of the basilar membrane: it is possible to accurately recognise speech consisting of power fluctuations in only four frequency bands (noise-vocoded speech[12]). Although this finding has been replicated in a number of studies[13-17], the frequency cut-offs used to create such frequency bands have not been derived from systematic research. One of the goals of the present study was to determine the characteristics of the frequency channels that best represent the speech signal.

Speech samples

Speech samples were extracted from a speech database[11] (16-kHz sampling and 16-bit linear quantisation), on the condition that the same set of sentences was spoken by all the speakers within each language/dialect. The samples were edited to eliminate irrelevant silent periods and noises. The details of the samples are shown in Table 1.

Signal processing and analyses

Two banks (A and B) of 20 critical-band filters were constructed (Supplementary Table S1). Their centre frequencies ranged from 75 to 5,800 Hz (bank A) and from 100 to 6,400 Hz (bank B), and their overall passbands were 50–6,400 and 50–7,000 Hz, respectively. These two filter banks were made in order to check whether the choice of cut-off frequencies introduced any artefact into the analyses. The cut-off frequencies of each filter in bank A were determined according to Zwicker and Terhardt[6], except for the lowest cut-off frequency (50 Hz). The cut-off frequencies in bank B were shifted by half a critical band from those in bank A, again except for the lowest cut-off frequency. All subsequent analyses were performed separately for the two filter banks. Each filter was constructed as the convolution of an upward frequency glide and its temporal reversal. Transition regions were 100 Hz wide, with out-of-band attenuations of 50–60 dB. Each filter output was squared, smoothed with a Gaussian window of σ = 5 ms (equivalent to lowpass filtering with a 45-Hz cut-off), and sampled every millisecond. Because our analyses primarily focused on relatively slow movements of the vocal tract (amplitude envelopes) rather than fast movements of the vocal folds (temporal fine structure), power fluctuations were calculated by squaring and smoothing the filter outputs, instead of using the outputs (amplitudes) themselves. Determining correlation coefficients for every possible pair of power fluctuations yielded a correlation matrix for each data set, which was fed into the PCA. That is, a correlation-based (normalised) analysis was selected, rather than a covariance-based one, to prevent unbalanced weighting between frequency bands of unequal power levels. After the PCA, the first 2–6 principal components were rotated with varimax rotation to yield the factors shown in Supplementary Fig. S2 (the terminology follows convention).
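The processing chain above can be sketched as follows. This is an illustrative reconstruction under stated assumptions, not the authors' code: Butterworth bandpass filters stand in for the glide-based critical-band filters, the band edges are approximate Zwicker-Terhardt values, and the correlation-PCA-varimax bookkeeping follows the description in the text.

```python
# Sketch of the analysis pipeline: critical-band power fluctuations ->
# correlation matrix -> PCA -> varimax rotation of the loading matrix.
import numpy as np
from scipy.signal import butter, sosfilt
from scipy.ndimage import gaussian_filter1d

FS = 16_000  # 16-kHz sampling, as in the database used by the paper

# Approximate critical-band edges (Hz) after Zwicker & Terhardt, with the
# lowest edge set to 50 Hz as in bank A; 20 bands in total (assumption).
EDGES = [50, 100, 200, 300, 400, 510, 630, 770, 920, 1080, 1270, 1480,
         1720, 2000, 2320, 2700, 3150, 3700, 4400, 5300, 6400]

def band_power_fluctuations(x, fs=FS):
    """Squared, Gaussian-smoothed (sigma = 5 ms) output of each band,
    sampled every millisecond. Returns shape (20 bands, n_ms)."""
    step = fs // 1000      # one sample per millisecond
    sigma = 0.005 * fs     # 5-ms smoothing, in samples
    rows = []
    for lo, hi in zip(EDGES[:-1], EDGES[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        power = gaussian_filter1d(sosfilt(sos, x) ** 2, sigma)
        rows.append(power[::step])
    return np.array(rows)

def varimax(L, max_iter=100, tol=1e-6):
    """Standard varimax rotation of a loading matrix L (p x k)."""
    p, k = L.shape
    R = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        Lr = L @ R
        u, s, vt = np.linalg.svd(
            L.T @ (Lr ** 3 - Lr @ np.diag((Lr ** 2).sum(0)) / p))
        R = u @ vt
        if s.sum() < d * (1 + tol):
            break
        d = s.sum()
    return L @ R

def factor_loadings(power, n_factors=3):
    """Correlation-based PCA of the power fluctuations, followed by
    varimax rotation of the first n_factors components."""
    corr = np.corrcoef(power)              # 20 x 20 correlation matrix
    eigval, eigvec = np.linalg.eigh(corr)  # ascending eigenvalue order
    idx = np.argsort(eigval)[::-1][:n_factors]
    L = eigvec[:, idx] * np.sqrt(eigval[idx])  # unrotated loadings
    return varimax(L)
```

Using the correlation matrix (rather than the covariance matrix) normalises each band to unit variance, matching the paper's choice of a normalised analysis so that bands of unequal power are weighted equally.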

Additional Information

How to cite this article: Ueda, K. and Nakajima, Y. An acoustic key to eight languages/dialects: Factor analyses of critical-band-filtered speech. Sci. Rep. 7, 42468; doi: 10.1038/srep42468 (2017). Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
References (13 in total; first 10 shown):

1.  Chimaeric sounds reveal dichotomies in auditory perception.

Authors:  Zachary M Smith; Bertrand Delgutte; Andrew J Oxenham
Journal:  Nature       Date:  2002-03-07       Impact factor: 49.962

2.  The intelligibility of noise-vocoded speech: spectral information available from across-channel comparison of amplitude envelopes.

Authors:  Brian Roberts; Robert J Summers; Peter J Bailey
Journal:  Proc Biol Sci       Date:  2010-11-10       Impact factor: 5.349

3.  Speech intelligibility as a function of the number of channels of stimulation for signal processors using sine-wave and noise-band outputs.

Authors:  M F Dorman; P C Loizou; D Rainey
Journal:  J Acoust Soc Am       Date:  1997-10       Impact factor: 1.840

4.  Frequency analysis of Dutch vowels from 50 male speakers.

Authors:  L C Pols; H R Tromp; R Plomp
Journal:  J Acoust Soc Am       Date:  1973-04       Impact factor: 1.840

5.  A frequency importance function for continuous discourse.

Authors:  G A Studebaker; C V Pavlovic; R L Sherbecoe
Journal:  J Acoust Soc Am       Date:  1987-04       Impact factor: 1.840

6.  Speech recognition with primarily temporal cues.

Authors:  R V Shannon; F G Zeng; V Kamath; J Wygonski; M Ekelid
Journal:  Science       Date:  1995-10-13       Impact factor: 47.728

7.  Size of critical band in infants, children, and adults.

Authors:  B A Schneider; B A Morrongiello; S E Trehub
Journal:  J Exp Psychol Hum Percept Perform       Date:  1990-08       Impact factor: 3.332

8.  Comparison of the roex and gammachirp filters as representations of the auditory filter.

Authors:  Masashi Unoki; Toshio Irino; Brian Glasberg; Brian C J Moore; Roy D Patterson
Journal:  J Acoust Soc Am       Date:  2006-09       Impact factor: 1.840

9.  Acoustic analyses of speech sounds and rhythms in Japanese- and English-learning infants.

Authors:  Yuko Yamashita; Yoshitaka Nakajima; Kazuo Ueda; Yohko Shimada; David Hirsh; Takeharu Seno; Benjamin Alexander Smith
Journal:  Front Psychol       Date:  2013-02-28

10.  Three Factors Are Critical in Order to Synthesize Intelligible Noise-Vocoded Japanese Speech.

Authors:  Takuya Kishida; Yoshitaka Nakajima; Kazuo Ueda; Gerard B Remijn
Journal:  Front Psychol       Date:  2016-04-26
Cited by (6 in total):

1.  Arrays of rectangular subcritical speech bands: Intelligibility improved by noise-vocoding and expanding to critical bandwidths.

Authors:  Richard M Warren; James A Bashford; Peter W Lenz
Journal:  J Acoust Soc Am       Date:  2018-04       Impact factor: 1.840

2.  A Digital Filter-Based Method for Diagnosing Speech Comprehension Deficits.

Authors:  Gisele V H Koury; Francisca C R da S Araújo; Kauê M Costa; Manoel da Silva Filho
Journal:  Mayo Clin Proc Innov Qual Outcomes       Date:  2021-01-13

3.  Intelligibility of locally time-reversed speech: A multilingual comparison.

Authors:  Kazuo Ueda; Yoshitaka Nakajima; Wolfgang Ellermeier; Florian Kattner
Journal:  Sci Rep       Date:  2017-05-11       Impact factor: 4.379

4.  English phonology and an acoustic language universal.

Authors:  Yoshitaka Nakajima; Kazuo Ueda; Shota Fujimaru; Hirotoshi Motomura; Yuki Ohsaka
Journal:  Sci Rep       Date:  2017-04-11       Impact factor: 4.379

5.  The common limitations in auditory temporal processing for Mandarin Chinese and Japanese.

Authors:  Hikaru Eguchi; Kazuo Ueda; Gerard B Remijn; Yoshitaka Nakajima; Hiroshige Takeichi
Journal:  Sci Rep       Date:  2022-02-22       Impact factor: 4.379

6.  Auditory grouping is necessary to understand interrupted mosaic speech stimuli.

Authors:  Kazuo Ueda; Hiroshige Takeichi; Kohei Wakamiya
Journal:  J Acoust Soc Am       Date:  2022-08       Impact factor: 2.482

