Goal: We hypothesized that COVID-19 subjects, including asymptomatic ones, could be accurately discriminated using only a forced-cough cell-phone recording and Artificial Intelligence. To train our MIT Open Voice model, we built a data-collection pipeline for COVID-19 cough recordings through our website (opensigma.mit.edu) between April and May 2020, creating the largest balanced audio COVID-19 cough dataset reported to date, with 5,320 subjects.

Methods: We developed an AI speech-processing framework that leverages acoustic biomarker feature extractors to pre-screen for COVID-19 from cough recordings and provides a personalized patient saliency map to monitor patients longitudinally in real time, non-invasively, and at essentially zero variable cost. Cough recordings are transformed into Mel Frequency Cepstral Coefficients (MFCCs) and fed into a Convolutional Neural Network (CNN) based architecture composed of one Poisson biomarker layer and three pre-trained ResNet50s in parallel, outputting a binary pre-screening diagnostic. Our CNN-based models were trained on 4,256 subjects and tested on the remaining 1,064 subjects of our dataset. Transfer learning was used to learn biomarker features on larger datasets, an approach previously tested successfully in our lab on Alzheimer's, and it significantly improves the COVID-19 discrimination accuracy of our architecture.

Results: When validated with subjects diagnosed using an official test, the model achieves a COVID-19 sensitivity of 98.5% with a specificity of 94.2% (AUC: 0.97). For asymptomatic subjects, it achieves a sensitivity of 100% with a specificity of 83.2%.

Conclusions: AI techniques can produce a free, non-invasive, real-time, any-time, instantly distributable, large-scale COVID-19 asymptomatic screening tool to augment current approaches to containing the spread of COVID-19. Practical use cases include daily screening of students, workers, and the public as schools, workplaces, and transport reopen, or pool testing to quickly alert of outbreaks in groups. General speech biomarkers may exist that cover several disease categories, as we demonstrated by using the same ones for COVID-19 and Alzheimer's.

This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/.
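As a rough illustration of the pipeline described in the Methods, the sketch below extracts MFCCs from a cough recording and feeds them to three pre-trained ResNet50 branches in parallel, reduced to a single binary pre-screening output. This is a minimal sketch under stated assumptions, not the authors' released implementation: the Poisson biomarker layer is omitted, ImageNet weights stand in for the paper's biomarker pre-training, and the file name, feature dimensions, and classifier head are illustrative.

```python
# Hypothetical sketch: MFCC features from a cough recording fed to three
# ResNet50 branches in parallel, pooled into a binary COVID-19 pre-screening
# probability. Simplified relative to the paper (no Poisson biomarker layer).
import librosa
import torch
import torch.nn as nn
from torchvision.models import resnet50

def mfcc_image(wav_path, sr=16000, n_mfcc=40):
    """Load a cough recording and return its MFCCs as a 3-channel 'image'."""
    audio, _ = librosa.load(wav_path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)   # (n_mfcc, frames)
    mfcc = (mfcc - mfcc.mean()) / (mfcc.std() + 1e-8)            # normalize
    x = torch.tensor(mfcc, dtype=torch.float32).unsqueeze(0)     # (1, n_mfcc, frames)
    return x.repeat(3, 1, 1).unsqueeze(0)                        # (1, 3, n_mfcc, frames)

class ParallelResNetScreen(nn.Module):
    """Three pre-trained ResNet50 branches in parallel; their pooled features
    are concatenated and reduced to one binary pre-screening probability."""
    def __init__(self):
        super().__init__()
        self.branches = nn.ModuleList(
            [resnet50(weights="IMAGENET1K_V1") for _ in range(3)]
        )
        for b in self.branches:
            b.fc = nn.Identity()                  # keep the 2048-d pooled features
        self.classifier = nn.Linear(3 * 2048, 1)  # binary output (logit)

    def forward(self, x):
        feats = torch.cat([b(x) for b in self.branches], dim=1)
        return torch.sigmoid(self.classifier(feats))

# Usage (path is a placeholder):
# model = ParallelResNetScreen().eval()
# with torch.no_grad():
#     prob = model(mfcc_image("cough.wav")).item()  # probability of COVID-19 positive
```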
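The reported evaluation figures (sensitivity, specificity, AUC) can be computed from held-out test predictions as sketched below with scikit-learn; the labels, scores, and 0.5 decision threshold are placeholder assumptions for illustration, not the study's data.

```python
# Minimal sketch of the evaluation metrics reported in the Results section.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0])               # 1 = COVID-19 positive (placeholder)
y_score = np.array([0.9, 0.8, 0.3, 0.1, 0.7, 0.6])  # model output probabilities (placeholder)

y_pred = (y_score >= 0.5).astype(int)                # binary pre-screening decision
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)                         # true-positive rate
specificity = tn / (tn + fp)                         # true-negative rate
auc = roc_auc_score(y_true, y_score)

print(f"sensitivity={sensitivity:.3f} specificity={specificity:.3f} AUC={auc:.3f}")
```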
Keywords: AI diagnostics; COVID-19 screening; convolutional neural networks; deep learning; speech recognition