Helena Daffern, Tamar Keren-Portnoy, Rory A. DePaolis, Kenneth I. Brown.
Abstract
This project set out to develop an app for infants under one year of age that responds in real time to language-like infant utterances with attractive images on an iPad screen. Language-like vocalisations were defined as voiced utterances that were neither high-pitched squeals nor shouts. The app, BabblePlay, was intended for use in psycholinguistic research to investigate the possible causal relationship between early canonical babble and early onset of word production. It is also designed for clinical settings: (1) to illustrate the importance of feedback as a way to encourage infant vocalisations, and (2) to provide consonant-production practice for infant populations that do not vocalise enough or that vocalise in an atypical way, specifically autistic infants (once they have begun to produce consonants). This paper describes the development and testing of BabblePlay, which responds to an infant's vocalisations with colourful moving shapes on the screen that are analogous to some features of the vocalisation, including loudness and duration. Validation testing showed high correlations between the app and two human judges in identifying vocalisations in 200 min of BabblePlay recordings, and a feasibility study conducted with 60 infants indicates that they can learn the contingency between their vocalisations and the appearance of shapes on the screen within a single five-minute BabblePlay session. BabblePlay meets the specification of being a simple, easy-to-use app. It has been shown to be a promising tool for research on infant language development that could lead to its use in home and professional environments to demonstrate the importance of immediate reward for vocal utterances in increasing infant vocalisations.
Keywords: Acoustic analysis; App development; Babble; Real time feedback
Year: 2020 PMID: 32362663 PMCID: PMC7043348 DOI: 10.1016/j.apacoust.2019.107183
Source DB: PubMed Journal: Appl Acoust ISSN: 0003-682X Impact factor: 2.639
Fig. 1 BabbleApp screenshots, demonstrating the variety of shape, size, colour and patterning, as well as patterns of movement across the screen.
Fig. 2 A schematic of the audio processing for BabblePlay.
Fig. 3 A schematic of BabblePlay.
Table: Number of vocalisations counted by human judges compared with those counted by BabblePlay (App).
| | Solo Play Trials | App Trials |
|---|---|---|
| Correlation Human 1 – Human 2 | 0.93 | 0.93 |
| Correlation Human 1 – App | 0.92 | 0.92 |
| Correlation Human 2 – App | 0.95 | 0.87 |
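The agreement figures in the table are Pearson correlation coefficients over per-trial vocalisation counts. A minimal sketch of that computation, using hypothetical counts rather than the study's data:

```python
import numpy as np

def pearson(x, y):
    """Pearson correlation coefficient between two count vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical per-trial vocalisation counts (illustrative only)
human1 = [12, 8, 15, 5, 20]
human2 = [11, 9, 14, 6, 19]
r = pearson(human1, human2)  # close agreement gives r near 1
```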
Fig. 4 Relations between human and app counts of vocalisations. Top panels: values for solo-play trials; bottom panels: values for the app trials. As can be seen, the fit between the two humans is very similar to that between each human and the app. The numbers on the axes indicate the number of vocalisations identified.