
Brain-Computer Interface: Applications to Speech Decoding and Synthesis to Augment Communication.

Shiyu Luo, Qinwan Rabbani, Nathan E Crone.

Abstract

Damage or degeneration of motor pathways necessary for speech and other movements, as in brainstem strokes or amyotrophic lateral sclerosis (ALS), can interfere with efficient communication without affecting brain structures responsible for language or cognition. In the worst-case scenario, this can result in locked-in syndrome (LIS), a condition in which individuals cannot initiate communication and can only express themselves by answering yes/no questions with eye blinks or other rudimentary movements. Existing augmentative and alternative communication (AAC) devices that rely on eye tracking can improve the quality of life for people with this condition, but brain-computer interfaces (BCIs) are also increasingly being investigated as AAC devices, particularly when eye tracking is too slow or unreliable. Moreover, with recent and ongoing advances in machine learning and neural recording technologies, BCIs may offer the only means to go beyond cursor control and text generation on a computer, to allow real-time synthesis of speech, which would arguably offer the most efficient and expressive channel for communication. The potential for BCI speech synthesis has only recently been realized because of seminal studies of the neuroanatomical and neurophysiological underpinnings of speech production using intracranial electrocorticographic (ECoG) recordings in patients undergoing epilepsy surgery. These studies have shown that cortical areas responsible for vocalization and articulation are distributed over a large area of ventral sensorimotor cortex, and that it is possible to decode speech and reconstruct its acoustics from ECoG if these areas are recorded with sufficiently dense and comprehensive electrode arrays. In this article, we review these advances, including the latest neural decoding strategies that range from deep learning models to the direct concatenation of speech units.
We also discuss state-of-the-art vocoders that are integral in constructing natural-sounding audio waveforms for speech BCIs. Finally, this review outlines some of the challenges ahead in directly synthesizing speech for patients with LIS.
© 2022. The American Society for Experimental NeuroTherapeutics, Inc.
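The two-stage architecture the abstract describes (decoding acoustics from neural activity, then a vocoder rendering an audio waveform) can be composed as a minimal sketch. Everything below is illustrative, not the review's method: the ECoG high-gamma features are simulated random data, the ridge-regression decoder stands in for the deep learning models the review surveys, and the "vocoder" is a naive sinusoidal bank rather than a learned neural vocoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Toy data: T time frames of high-gamma power from C ECoG channels ---
T, C, F = 200, 64, 32              # frames, channels, spectrogram bins
true_W = rng.normal(size=(C, F))   # hidden linear mapping (toy ground truth)
ecog = rng.normal(size=(T, C))     # simulated neural features
spec_target = ecog @ true_W + 0.1 * rng.normal(size=(T, F))

# --- Stage 1: decode a magnitude spectrogram from neural features ---
# Ridge regression: W = (X'X + lam*I)^-1 X'Y
lam = 1.0
W = np.linalg.solve(ecog.T @ ecog + lam * np.eye(C), ecog.T @ spec_target)
spec_pred = ecog @ W

# --- Stage 2: "vocoder": sinusoidal overlap-add synthesis from magnitudes ---
hop, sr = 80, 8000                 # samples per frame, sample rate (Hz)
freqs = np.linspace(100, 3900, F)  # one sinusoid per spectrogram bin
t = np.arange(T * hop) / sr
audio = np.zeros(T * hop)
mag = np.maximum(spec_pred, 0.0)   # magnitudes must be non-negative
for f in range(F):
    env = np.repeat(mag[:, f], hop)          # frame magnitudes -> sample envelope
    audio += env * np.sin(2 * np.pi * freqs[f] * t)
audio /= np.max(np.abs(audio)) + 1e-9        # normalize to [-1, 1]

corr = np.corrcoef(spec_pred.ravel(), spec_target.ravel())[0, 1]
```

In a real speech BCI the decoder would be trained on paired neural/audio recordings and the second stage would be a learned vocoder; the sketch only shows how the two stages compose.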

Keywords:  Brain-computer interface; ECoG; Electrocorticography; Locked-in syndrome; Speech synthesis

Year:  2022        PMID: 35099768      PMCID: PMC9130409          DOI: 10.1007/s13311-022-01190-2

Source DB:  PubMed          Journal:  Neurotherapeutics        ISSN: 1878-7479            Impact factor:   6.088


References (77 in total, first 10 shown)

1.  Decoding of articulatory gestures during word production using speech motor and premotor cortical activity.

Authors:  Emily M Mugler; Matthew Goldrick; Joshua M Rosenow; Matthew C Tate; Marc W Slutzky
Journal:  Conf Proc IEEE Eng Med Biol Soc       Date:  2015

Review 2.  Altered auditory feedback and the treatment of stuttering: a review.

Authors:  Michelle Lincoln; Ann Packman; Mark Onslow
Journal:  J Fluency Disord       Date:  2006-06-05       Impact factor: 2.538

3.  Effects of postlingual deafness on speech production: implications for the role of auditory feedback.

Authors:  R S Waldstein
Journal:  J Acoust Soc Am       Date:  1990-11       Impact factor: 1.840

4.  Plug-and-play control of a brain-computer interface through neural map stabilization.

Authors:  Daniel B Silversmith; Reza Abiri; Nicholas F Hardy; Nikhilesh Natraj; Adelyn Tu-Chan; Edward F Chang; Karunesh Ganguly
Journal:  Nat Biotechnol       Date:  2020-09-07       Impact factor: 54.908

5.  Toward a Speech Neuroprosthesis.

Authors:  Edward F Chang; Gopala K Anumanchipalli
Journal:  JAMA       Date:  2020-02-04       Impact factor: 56.272

6.  A study of speech deterioration in post-lingually deafened adults.

Authors:  R Cowie; E Douglas-Cowie; A G Kerr
Journal:  J Laryngol Otol       Date:  1982-02       Impact factor: 1.469

7.  Real-time classification of auditory sentences using evoked cortical activity in humans.

Authors:  David A Moses; Matthew K Leonard; Edward F Chang
Journal:  J Neural Eng       Date:  2018-01-30       Impact factor: 5.379

8.  An exoskeleton controlled by an epidural wireless brain-machine interface in a tetraplegic patient: a proof-of-concept demonstration.

Authors:  Alim Louis Benabid; Thomas Costecalde; Andrey Eliseyev; Guillaume Charvet; Alexandre Verney; Serpil Karakas; Michael Foerster; Aurélien Lambert; Boris Morinière; Neil Abroug; Marie-Caroline Schaeffer; Alexandre Moly; Fabien Sauter-Starace; David Ratel; Cecile Moro; Napoleon Torres-Martinez; Lilia Langar; Manuela Oddoux; Mircea Polosan; Stephane Pezzani; Vincent Auboiroux; Tetiana Aksenova; Corinne Mestais; Stephan Chabardes
Journal:  Lancet Neurol       Date:  2019-10-03       Impact factor: 44.182

9.  Electrocorticographic representations of segmental features in continuous speech.

Authors:  Fabien Lotte; Jonathan S Brumberg; Peter Brunner; Aysegul Gunduz; Anthony L Ritaccio; Cuntai Guan; Gerwin Schalk
Journal:  Front Hum Neurosci       Date:  2015-02-24       Impact factor: 3.169

10.  The 'when' and 'where' of semantic coding in the anterior temporal lobe: Temporal representational similarity analysis of electrocorticogram data.

Authors:  Y Chen; A Shimotake; R Matsumoto; T Kunieda; T Kikuchi; S Miyamoto; H Fukuyama; R Takahashi; A Ikeda; M A Lambon Ralph
Journal:  Cortex       Date:  2016-03-16       Impact factor: 4.027

Cited by (1 in total)

Review 1.  Clinical neuroscience and neurotechnology: An amazing symbiosis.

Authors:  Andrea Cometa; Antonio Falasconi; Marco Biasizzo; Jacopo Carpaneto; Andreas Horn; Alberto Mazzoni; Silvestro Micera
Journal:  iScience       Date:  2022-09-16

Beijing Coyote Bioscience Co., Ltd. © 2022-2023.