Irene de la Cruz-Pavía, Janet F. Werker, Eric Vatikiotis-Bateson, Judit Gervain.
Abstract
The audiovisual speech signal contains multimodal cues to phrase boundaries. In three artificial language learning studies with 12 groups of adult participants, we investigated whether English monolinguals and bilingual speakers of English and a language with the opposite basic word order (i.e., in which objects precede verbs) can use word frequency, phrasal prosody and co-speech (facial) visual information, namely head nods, to parse unknown languages into phrase-like units. We showed that monolinguals and bilinguals used the auditory and visual sources of information to chunk "phrases" from the input. These results suggest that speech segmentation is a bimodal process, though the influence of co-speech facial gestures is rather limited and linked to the presence of auditory prosody. Importantly, a pragmatic factor, namely the language of the context, seems to determine the bilinguals' segmentation, overriding the auditory and visual cues and revealing a factor that begs further exploration.
Keywords: artificial grammar learning; bilingualism; co-speech visual information; frequency-based information; phrase segmentation; prosody
Year: 2019 PMID: 31002280 PMCID: PMC7254630 DOI: 10.1177/0023830919842353
Source DB: PubMed Journal: Lang Speech ISSN: 0023-8309 Impact factor: 1.500
Figure 1. Shared structure of the artificial languages. The table represents the shared basic structure of the ambiguous artificial languages: (a) the lexical categories and tokens of the languages; (b) the two possible structures of the ambiguous stream; (c) three examples of the 36 test pairs. On the right, a picture of the animated line drawing used in the languages containing visual information.
Figure 2. Graphical depiction of the artificial languages in Experiments 1, 2 and 3. The brackets depict the duration of the head nods, whereas the arrows signal the location of their peak.
Figure 3. Word order preferences of the participants in Experiment 1. Bar graphs (top) and boxplots (bottom) with standard error depicting the number and distribution of frequent-initial responses out of the 36 test trials by the monolingual (dark gray columns) and bilingual (light gray columns) participants.
Number (out of the 36 test trials) and percentage of frequent-initial responses, with standard errors (SE), obtained in each of the groups examined in Experiments 1, 2 and 3.
| Artificial language | Monolinguals | Bilinguals |
|---|---|---|
| **Exp. 1** | | |
| Frequency-based information | 19.75/36, 54.86%, ±1.77 SE | 21.92/36, 60.89%, ±1.48 SE |
| **Exp. 2** | | |
| Frequency and OV prosody | 16.38/36, 45.50%, ±1.71 SE | 20.92/36, 58.11%, ±1.93 SE |
| Frequency, OV prosody and aligned nods | 16.38/36, 45.50%, ±1.74 SE | 21.88/36, 60.78%, ±1.28 SE |
| **Exp. 3** | | |
| Frequency and aligned nods | 20.96/36, 58.22%, ±1.61 SE | 21.83/36, 60.64%, ±1.51 SE |
Figure 4. Word order preferences of the participants in Experiment 2. Bar graphs (top) and boxplots (bottom) with standard error depicting the number and distribution of frequent-initial responses out of the 36 test trials by the monolingual (dark gray columns) and bilingual (light gray columns) participants. Note that the patterned columns in the top figure depict Exp. 1's groups exposed to frequency-only information, that is, the baseline groups. Experiment 1 and 2's artificial languages share the same tokens and test items.
Figure 5. Word order preferences of the participants in Experiment 3. Bar graphs (top) and boxplots (bottom) with standard error depicting the number and distribution of frequent-initial responses out of the 36 test trials by the monolingual (dark gray columns) and bilingual (light gray columns) participants. The patterned columns in the top figure depict Exp. 1's groups exposed to frequency-only information, that is, the baseline groups. Experiment 1, 2 and 3's artificial languages share the same tokens and test items.
Age, gender and linguistic background information of the six groups of English-OV bilinguals examined.
| ENGLISH – OV BILINGUAL PARTICIPANTS | Frequency | Frequency & VO prosody | Frequency, VO prosody & aligned nods | Frequency & OV prosody | Frequency, OV prosody & aligned nods | Frequency & aligned nods |
|---|---|---|---|---|---|---|
| Mean age (years) | 21.79 | 21.00 | 21.29 | 20.88 | 22.04 | 20.67 |
| Gender | 17F-7M | 16F-8M | 14F-10M | 18F-6M | 18F-6M | 15F-9M |
| | 14 | 17 | 10 | 14 | 15 | 11 |
| | 4.42 | 3.98 | 4.25 | 4.67 | 5.42 | 5.83 |
| | 7 | 8 | 4 | 3 | 3 | 4 |
| | 17 | 16 | 20 | 21 | 21 | 20 |
| | 6.71 | 6.71 | 6.67 | 6.79 | 6.79 | 6.67 |
| | 6.54 | 6.42 | 6.63 | 6.38 | 6.54 | 6.75 |
| | 6.67 | 6.63 | 6.46 | 6.79 | 6.58 | 6.63 |
| | 6.38 | 6.04 | 6.54 | 6.13 | 6.38 | 6.58 |
List of OV languages spoken by the English-OV participants, and distribution in each of the six groups examined.
| OV LANGUAGES | Frequency | Frequency & VO prosody | Frequency, VO prosody & aligned nods | Frequency & OV prosody | Frequency, OV prosody & aligned nods | Frequency & aligned nods | Total |
|---|---|---|---|---|---|---|---|
| | 1 | 1 | 2 | 1 | 0 | 0 | 5 |
| | 1 | 0 | 0 | 0 | 0 | 0 | 1 |
| | 6 | 2 | 2 | 1 | 7 | 5 | 23 |
| | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
| | 3 | 2 | 6 | 2 | 1 | 3 | 17 |
| | 4 | 4 | 2 | 3 | 1 | 3 | 17 |
| | 5 | 7 | 8 | 11 | 9 | 6 | 46 |
| | 0 | 0 | 0 | 1 | 0 | 0 | 1 |
| | 0 | 0 | 1 | 0 | 0 | 0 | 1 |
| | 4 | 7 | 3 | 3 | 5 | 5 | 27 |
| | 0 | 0 | 0 | 0 | 1 | 0 | 1 |
| | 0 | 1 | 0 | 1 | 0 | 2 | 4 |
Age and gender of the six groups of English monolinguals examined.
| ENGLISH MONOLINGUAL PARTICIPANTS | Frequency | Frequency & VO prosody | Frequency, VO prosody & aligned nods | Frequency & OV prosody | Frequency, OV prosody & aligned nods | Frequency & aligned nods |
|---|---|---|---|---|---|---|
| Mean age (years) | 23.54 | 22.92 | 21.04 | 21.13 | 20.54 | 21.38 (18-34) |
| Gender | 18F-6M | 16F-8M | 16F-8M | 18F-6M | 17F-7M | 19F-5M |