
COVID-19 detection in cough, breath and speech using deep transfer learning and bottleneck features.

Madhurananda Pahar, Marisa Klopper, Robin Warren, Thomas Niesler.

Abstract

We present an experimental investigation into the effectiveness of transfer learning and bottleneck feature extraction in detecting COVID-19 from audio recordings of cough, breath and speech. This type of screening is non-contact, does not require specialist medical expertise or laboratory facilities and can be deployed on inexpensive consumer hardware such as a smartphone. We use datasets that contain cough, sneeze, speech and other noises, but do not contain COVID-19 labels, to pre-train three deep neural networks: a CNN, an LSTM and a Resnet50. These pre-trained networks are subsequently either fine-tuned using smaller datasets of coughing with COVID-19 labels in the process of transfer learning, or are used as bottleneck feature extractors. Results show that a Resnet50 classifier trained by this transfer learning process delivers optimal or near-optimal performance across all datasets, achieving areas under the receiver operating characteristic curve (ROC AUC) of 0.98, 0.94 and 0.92 respectively for the three sound classes: coughs, breaths and speech. This indicates that coughs carry the strongest COVID-19 signature, followed by breath and speech. Our results also show that applying transfer learning and extracting bottleneck features using the larger datasets without COVID-19 labels led not only to improved performance, but also to a marked reduction in the standard deviation of the classifier AUCs measured over the outer folds during nested cross-validation, indicating better generalisation. We conclude that deep transfer learning and bottleneck feature extraction can improve COVID-19 cough, breath and speech audio classification, yielding automatic COVID-19 detection with a better and more consistent overall performance.
Copyright © 2021 Elsevier Ltd. All rights reserved.

Keywords:  Bottleneck features; Breath; COVID-19; Cough; Speech; Transfer learning

Year:  2021        PMID: 34954610      PMCID: PMC8679499          DOI: 10.1016/j.compbiomed.2021.105153

Source DB:  PubMed          Journal:  Comput Biol Med        ISSN: 0010-4825            Impact factor:   6.698


Introduction

COVID-19 (COrona VIrus Disease of 2019) was declared a global pandemic on March 11, 2020 by the World Health Organisation (WHO). Caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), this disease affects the respiratory system and includes symptoms like fatigue, dry cough, shortness of breath, joint pain, muscle pain, gastrointestinal symptoms and loss of smell or taste [1,2]. Due to its effect on the vascular endothelium, the acute respiratory distress syndrome can originate from either the gas or vascular side of the alveolus, which becomes visible in a chest x-ray or computed tomography (CT) scan for COVID-19 patients [3,4]. Among the patients infected with SARS-CoV-2, between 5% and 20% are admitted to an intensive care unit (ICU) and their mortality rate varies between 26% and 62% [5]. Medical lab tests are available to diagnose COVID-19 by analysing exhaled breaths [6]. This technique was reported to achieve an accuracy of 93% when considering a group of 28 COVID-19 positive and 12 COVID-19 negative patients [7]. Related work using a group of 25 COVID-19 positive and 65 negative patients achieved an area under the ROC curve (AUC) of 0.87 [8]. Previously, machine learning algorithms have been applied to detect COVID-19 using image analysis. For example, COVID-19 was detected from CT images using a Resnet50 architecture with 96.23% accuracy in Ref. [9]. The same architecture also detected pneumonia due to COVID-19 with an accuracy of 96.7% [10] and COVID-19 from x-ray images with an accuracy of 96.30% [11]. The automatic analysis of cough audio for COVID-19 detection has also received recent attention. Coughing is a predominant symptom of many lung ailments and its effect on the respiratory system varies [12,13]. Lung disease can cause the glottis to behave differently and the airway to be either restricted or obstructed, and this can influence the acoustics of vocal audio such as cough, breath and speech [14,15].
This raises the prospect of identifying the coughing audio associated with a particular respiratory disease such as COVID-19 [16,17]. Researchers have found that a simple binary machine learning classifier can distinguish between healthy and COVID-19 respiratory audio, such as coughs gathered from crowdsourced data, with an AUC above 0.8 [18]. Improved performance was achieved using a convolutional neural network (CNN) for cough and breath audio, achieving an AUC of 0.846 [19]. In our previous work, we have also found that automatic COVID-19 detection is possible on the basis of the acoustic cough signal [20]. Here we extend this work firstly by considering whether breath and speech audio can also be used effectively for COVID-19 detection. Secondly, since the COVID-19 datasets at our disposal are comparatively small, we apply transfer learning and extract bottleneck features to take advantage of other datasets that do not include COVID-19 labels. To do this, we use publicly available as well as our own datasets that do not include COVID-19 labels to pre-train three deep neural network (DNN) architectures: a CNN, a long short-term memory (LSTM) and a 50-layer residual-based architecture (Resnet50), which uses convolutional layers with skip connections. For subsequent COVID-19 classifier evaluation, we used the Coswara dataset [21], the Interspeech Computational Paralinguistics ChallengE (ComParE) dataset [22] and the Sarcos dataset [20], all of which do contain COVID-19 labels. We report further evidence of accurate discrimination using all three audio classes and conclude that vocal audio including coughing, breathing and speech are all affected by the condition of the lungs to an extent that they carry acoustic information that can be used by existing machine learning classifiers to detect signatures of COVID-19. 
We are also able to show that the variability in performance of the classifiers, as measured over the independent outer folds of nested cross-validation, is strongly reduced by the pre-training, despite the absence of COVID-19 labels in the pre-training data. We can therefore conclude that the application of transfer learning enables the COVID-19 classifiers to perform both more accurately and with greater consistency. This is key to the viability of the practical implementation of cough audio screening, where test data can be expected to be variable, depending for example on the location and method of data capture. Sections 2 and 3 summarise the datasets used for experimentation and the primary feature extraction process. Section 4 describes the transfer learning process and Section 5 explains the bottleneck feature extraction process. Section 6 presents the experimental setup, including the cross-validated hyperparameter optimisation and classifier evaluation process. Experimental results are presented in Section 7 and discussed in Section 8. Finally, Section 9 summarises and concludes this study.

Data

Datasets without COVID-19 labels for pre-training

Audio data with COVID-19 labels remain scarce, which limits classifier training. We have therefore made use of five datasets without COVID-19 labels for pre-training. These datasets contain recordings of coughing, sneezing, speech and non-vocal audio. The first three datasets (TASK, Brooklyn and Wallacedene) were compiled by ourselves as part of research projects concerning cough monitoring and cough classification. The last two (Google Audio Set & Freesound and LibriSpeech) were compiled from publicly available data. Since all five datasets were compiled before the start of the COVID-19 pandemic, they are unlikely to contain data from COVID-19 positive subjects. All datasets used for pre-training include manual annotations but exclude COVID-19 labels.

TASK dataset

This corpus consists of spontaneous coughing audio collected at a small tuberculosis (TB) clinic near Cape Town, South Africa [23]. The dataset contains 6000 recorded coughs by patients undergoing TB treatment and 11 393 non-cough sounds such as laughter, doors opening and objects moving. This data was intended for the development of cough detection algorithms and the recordings were made in a multi-bed ward environment using a smartphone with an attached external microphone. The annotations consist of the time locations and labels of sounds, including coughs.

Brooklyn dataset

This dataset contains recordings of 746 voluntary coughs by 38 subjects compiled for the development of TB cough audio classification systems [24]. Audio recording took place in a controlled indoor booth, using a RØDE M3 microphone and an audio field recorder. The annotations include the start and end times of each cough.

Wallacedene dataset

This dataset consists of recordings of 1358 voluntary coughs by 51 patients, also compiled for the development of TB cough audio classification [25]. In this case, audio recording took place in an outdoor booth located at a busy primary healthcare clinic. Recording was performed using a RØDE M1 microphone and an audio field recorder. This data has more environmental noise and therefore a poorer signal-to-noise ratio than the Brooklyn dataset. As for the Brooklyn dataset, annotations include the start and end times of each cough.

Google Audio Set & Freesound

The Google Audio Set dataset contains excerpts from 1.8 million YouTube videos that have been manually labelled according to an ontology of 632 audio event categories [26]. The Freesound audio database is a collection of tagged sounds uploaded by contributors from around the world [27]. In both datasets, the audio recordings were contributed by many different individuals under widely varying recording conditions and noise levels. From these two datasets, we have compiled a collection of recordings that include 3098 coughing sounds, 1013 sneezing sounds, 2326 speech excerpts and 1027 other non-vocal sounds such as engine noise, running water and restaurant chatter. Previously, this dataset was used for the development of cough detection algorithms [28]. Annotations consist of the time locations and labels of the particular sounds.

LibriSpeech

As a source of speech audio data, we have selected utterances by 28 male and 28 female speakers from the freely available LibriSpeech corpus [29]. These recordings contain very little noise. The large size of the corpus allowed easy gender balancing.

Summary of data used for pre-training

In total, the data described above includes 11 202 cough sounds (2.45 h of audio), 2.91 h of speech from both male and female participants, 1013 sneezing sounds (13.34 min of audio) and 2.98 h of other non-vocal audio. Hence sneezing is under-represented as a class in the pre-training data. Since such an imbalance can detrimentally affect the performance of neural networks [30,31], we have applied the synthetic minority over-sampling technique (SMOTE) [32]. SMOTE oversamples the minority class by creating additional synthetic samples, rather than by replicating existing samples as random oversampling does. We have in the past successfully applied SMOTE to address training set class imbalances in cough detection [23] and cough classification [20] based on audio recordings. In total, therefore, a dataset containing 10.29 h of audio recordings annotated with four class labels (cough, speech, sneeze, noise) was available to pre-train the neural architectures. The composition of this dataset is summarised in Table 1. All recordings used for pre-training were downsampled to 16 kHz.
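The SMOTE interpolation step can be sketched in a few lines of numpy. This is an illustrative toy implementation on random stand-in feature vectors (the `smote_oversample` helper name is ours), not the library routine used in the study: each synthetic sample lies on the line segment between a minority-class sample and one of its k nearest minority-class neighbours.

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=None):
    """Create n_new synthetic minority-class samples (SMOTE-style).

    Each synthetic sample interpolates between a randomly chosen
    minority sample and one of its k nearest minority neighbours.
    """
    rng = np.random.default_rng(seed)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        # distances to all other minority samples; skip the sample itself
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        nn = np.argsort(d)[1:k + 1]
        j = rng.choice(nn)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Toy stand-in for the under-represented sneeze class
X_sneeze = np.random.default_rng(0).normal(size=(20, 3))
X_new = smote_oversample(X_sneeze, n_new=40, seed=1)
print(X_new.shape)
```

In the study the synthetic sneeze samples were generated in feature space before training, bringing the sneeze class up to roughly the size of the cough class (Table 1).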
Table 1

Summary of the Datasets used in Pre-training. Classifiers are pre-trained on 10.29 h audio recordings annotated with four class labels: cough, sneeze, speech and noise. The datasets do not include any COVID-19 labels.

Type | Dataset | Sampling Rate | No of Events | Total audio | Average length | Standard deviation
Cough | TASK dataset | 44.1 kHz | 6000 | 91 min | 0.91 s | 0.25 s
Cough | Brooklyn dataset | 44.1 kHz | 746 | 6.29 min | 0.51 s | 0.21 s
Cough | Wallacedene dataset | 44.1 kHz | 1358 | 17.42 min | 0.77 s | 0.31 s
Cough | Google Audio Set & Freesound | 16 kHz | 3098 | 32.01 min | 0.62 s | 0.23 s
Cough | Total (Cough) | – | 11 202 | 2.45 h | 0.79 s | 0.23 s
Sneeze | Google Audio Set & Freesound | 16 kHz | 1013 | 13.34 min | 0.79 s | 0.21 s
Sneeze | Google Audio Set & Freesound + SMOTE | 16 kHz | 9750 | 2.14 h | 0.79 s | 0.23 s
Sneeze | Total (Sneeze) | – | 10 763 | 2.14 h | 0.79 s | 0.23 s
Speech | Google Audio Set & Freesound | 16 kHz | 2326 | 22.48 min | 0.58 s | 0.14 s
Speech | LibriSpeech | 16 kHz | 56 | 2.54 h | 2.72 min | 0.91 min
Speech | Total (Speech) | – | 2382 | 2.91 h | 4.39 s | 0.42 s
Noise | TASK dataset | 44.1 kHz | 12 714 | 2.79 h | 0.79 s | 0.23 s
Noise | Google Audio Set & Freesound | 16 kHz | 1027 | 11.13 min | 0.65 s | 0.26 s
Noise | Total (Noise) | – | 13 741 | 2.79 h | 0.79 s | 0.23 s

Datasets with COVID-19 labels for classification

Three datasets of coughing audio with COVID-19 labels were available for experimentation.

Coswara dataset

This dataset was specifically developed with the testing of classification algorithms for COVID-19 detection in mind. Data collection was web-based: participants contributed by using their smartphones to record their coughing, breathing and speech. Audio recordings were collected of both shallow and deep breaths, as well as speech uttered at a normal and a fast pace. However, since the deep breaths consistently outperformed the shallow breaths in our initial experiments, only the former are presented here. At the time of writing, the data included contributions from participants located on five different continents [20,21,33]. Fig. 1 and Fig. 2 show examples of Coswara breaths and speech respectively, collected from both COVID-19 positive and COVID-19 negative subjects. It is evident that breaths have more high-frequency content than speech, and it is interesting to note that COVID-19 breaths are, on average, 30% shorter than non-COVID-19 breaths (Table 2). All audio recordings were pre-processed to remove periods of silence to within a margin of 50 ms using a simple energy detector.
Fig. 1

Pre-processed breath signals from both COVID-19 positive and COVID-19 negative subjects in the Coswara dataset. Breaths corresponding to inhalation are marked by arrows, and are followed by an exhalation.

Fig. 2

Pre-processed speech (counting from 1 to 20 at a normal pace) from both COVID-19 positive and COVID-19 negative subjects in the Coswara dataset. In contrast to breath (Fig. 1), the spectral energy in this speech is concentrated below 1 kHz.

Table 2

Summary of the datasets used for COVID-19 classification. Cough, breath and speech signals were extracted from the Coswara, ComParE and Sarcos datasets. COVID-19 positive subjects are under-represented in all three.

Type | Dataset | Sampling Rate | Label | Subjects | Total audio | Average per subject | Standard deviation
Cough | Coswara | 44.1 kHz | COVID-19 Positive | 92 | 4.24 min | 2.77 s | 1.62 s
Cough | Coswara | 44.1 kHz | Healthy | 1079 | 0.98 h | 3.26 s | 1.66 s
Cough | Coswara | 44.1 kHz | Total | 1171 | 1.05 h | 3.22 s | 1.67 s
Cough | ComParE | 16 kHz | COVID-19 Positive | 119 | 13.43 min | 6.77 s | 2.11 s
Cough | ComParE | 16 kHz | Healthy | 398 | 40.89 min | 6.16 s | 2.26 s
Cough | ComParE | 16 kHz | Total | 517 | 54.32 min | 6.31 s | 2.24 s
Cough | Sarcos | 44.1 kHz | COVID-19 Positive | 18 | 0.87 min | 2.91 s | 2.23 s
Cough | Sarcos | 44.1 kHz | COVID-19 Negative | 26 | 1.57 min | 3.63 s | 2.75 s
Cough | Sarcos | 44.1 kHz | Total | 44 | 2.45 min | 3.34 s | 2.53 s
Breath | Coswara | 44.1 kHz | COVID-19 Positive | 88 | 8.58 min | 5.85 s | 5.05 s
Breath | Coswara | 44.1 kHz | Healthy | 1062 | 2.77 h | 9.39 s | 5.23 s
Breath | Coswara | 44.1 kHz | Total | 1150 | 2.92 h | 9.126 s | 5.29 s
Speech | Coswara (normal) | 44.1 kHz | COVID-19 Positive | 88 | 12.42 min | 8.47 s | 4.27 s
Speech | Coswara (normal) | 44.1 kHz | Healthy | 1077 | 2.99 h | 9.99 s | 3.09 s
Speech | Coswara (normal) | 44.1 kHz | Total | 1165 | 3.19 h | 9.88 s | 3.22 s
Speech | Coswara (fast) | 44.1 kHz | COVID-19 Positive | 85 | 7.62 min | 5.38 s | 2.76 s
Speech | Coswara (fast) | 44.1 kHz | Healthy | 1074 | 1.91 h | 6.39 s | 1.77 s
Speech | Coswara (fast) | 44.1 kHz | Total | 1159 | 2.03 h | 6.31 s | 1.88 s
Speech | ComParE | 16 kHz | COVID-19 Positive | 214 | 44.02 min | 12.34 s | 5.35 s
Speech | ComParE | 16 kHz | Healthy | 396 | 1.46 h | 13.25 s | 4.67 s
Speech | ComParE | 16 kHz | Total | 610 | 2.19 h | 12.93 s | 4.93 s

ComParE dataset

This dataset was provided as a part of the 2021 Interspeech Computational Paralinguistics ChallengE (ComParE) [22]. The ComParE dataset contains recordings of both coughs and speech, where the latter is the utterance ‘I hope my data can help to manage the virus pandemic’ in the speaker's language of choice.

Sarcos dataset

This dataset was collected in South Africa as part of this research and currently contains recordings of coughing by 18 COVID-19 positive and 26 COVID-19 negative subjects. Audio was pre-processed in the same way as the Coswara data. Since this dataset is very small, we have used it in our previous work exclusively for independent validation [20]. In this study, however, it has also been used to fine-tune and evaluate pre-trained DNN classifiers by means of transfer learning and the extraction of bottleneck features.

Summary of data used for classification

Table 2 shows that the COVID-19 positive class is under-represented in all datasets available for classification. To address this, we again apply SMOTE during training. We also note that the Coswara dataset contains the largest number of subjects, followed by ComParE and finally Sarcos. As for pre-training, all recordings were downsampled to 16 kHz.

Primary feature extraction

From the time-domain audio signals, we have extracted mel-frequency cepstral coefficients (MFCCs) and linearly-spaced log filterbank energies, along with their respective velocity and acceleration coefficients. We have also extracted the signal zero-crossing rate (ZCR) [34] and kurtosis [34], which are indicative respectively of time-domain signal variability and tailedness, i.e. the prevalence of higher amplitudes. MFCCs have been very effective in speech processing [35], but also in discriminating dry and wet coughs [36], and recently in characterising COVID-19 audio [37]. Linearly-spaced log filterbank energies have proved useful in several biomedical applications, including cough audio classification [24,25,38]. Features are extracted from overlapping frames, where the frame overlap δ is computed to ensure that the audio signal is always divided into exactly S frames, as illustrated in Fig. 3. This ensures that the entire audio event is always represented by a fixed number of frames, which allows a fixed input dimension to be maintained for classification while preserving the general overall temporal structure of the sound. Such fixed two-dimensional feature dimensions are particularly useful for the training of DNN classifiers, and have performed well in our previous experiments [20].
Fig. 3

Feature extraction process for a breath audio signal. The frame overlap δ is calculated to ensure that the entire recording is divided into exactly S segments. For MFCCs, for example, this results in a feature matrix with dimensions ((3M + 2) × S).

The frame length (F), number of frames (S), number of lower order MFCCs (M) and number of linearly spaced filters (B) are regarded as feature extraction hyperparameters, listed in Table 3. The table shows that in our experiments each audio signal is divided into between 70 and 200 frames, each consisting of between 512 and 4096 samples, corresponding to between 32 ms and 256 ms of audio. The number of extracted MFCCs (M) lies between 13 and 65, and the number of linearly-spaced filterbanks (B) between 40 and 200. This allows the spectral information included in each feature to be varied.
Table 3

Primary feature (PF) extraction hyperparameters. We have used between 13 and 65 MFCCs and between 40 and 200 linearly spaced filters to extract log energies.

Hyperparameter | Description | Range
MFCCs (M) | lower order MFCCs to keep | 13 × k, where k = 1, 2, 3, 4, 5
Linearly spaced filters (B) | used to extract log energies | 40 to 200 in steps of 20
Frame length (F) | into which audio is segmented | 2^k, where k = 9, 10, 11, 12
Segments (S) | number of frames extracted from audio | 10 × k, where k = 7, 10, 12, 15, 20
The input feature matrix to the classifiers has dimension ((3M + 2) × S) for MFCCs along with their velocity and acceleration coefficients, as shown in Fig. 3. Similarly, for linearly spaced filters, the dimension of the feature matrix is ((3B + 2) × S). We will refer to the features described in this section as primary features (PF) to distinguish them from the bottleneck features (BNF) described in Section 5.
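A minimal numpy sketch of this fixed-frame scheme, using the F = 1024 and S = 150 values from Table 4. The MFCC and filterbank computations themselves are omitted (they would need a signal-processing library); only the framing step and the ZCR and kurtosis measures are shown, and the helper names are ours rather than the authors':

```python
import numpy as np

def frame_signal(x, F=1024, S=150):
    """Split signal x into exactly S frames of length F.

    The hop between frame starts is derived from the signal length, so
    any event is covered by a fixed (S, F) matrix regardless of duration.
    """
    hop = (len(x) - F) / (S - 1)  # implies the frame overlap delta
    starts = (np.arange(S) * hop).astype(int)
    return np.stack([x[s:s + F] for s in starts])

def zcr(frames):
    """Per-frame zero-crossing rate (time-domain variability)."""
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def kurt(frames):
    """Per-frame kurtosis (tailedness of the amplitude distribution)."""
    mu = frames.mean(axis=1, keepdims=True)
    s2 = frames.var(axis=1)
    return ((frames - mu) ** 4).mean(axis=1) / (s2 ** 2)

x = np.random.default_rng(0).normal(size=16000)  # ~1 s of audio at 16 kHz
frames = frame_signal(x)
print(frames.shape, zcr(frames).shape, kurt(frames).shape)
```

Stacking 3M MFCC-derived rows (static, velocity, acceleration) with the ZCR and kurtosis rows would then yield the ((3M + 2) × S) input matrix described above.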

Transfer learning architecture

Since the audio datasets with COVID-19 labels described in Section 2.2 are small, they may lead to overfitting when training deep architectures. Nevertheless, in previous work we have found that deep architectures perform better than shallow classifiers when using these as training sets [20]. In this work, we consider whether the classification performance of such DNNs can be improved by applying transfer learning. To achieve this, we use the datasets described in Section 2.1, which contain 10.29 h of audio labelled with four classes (cough, sneeze, speech and noise) but do not include COVID-19 labels (Table 1 in Section 2.1). This data is used to pre-train three deep neural architectures: a CNN, an LSTM and a Resnet50. The feature extraction hyperparameters M = 39, F = 1024 and S = 150 delivered good performance in our previous work [20] and have therefore also been used here (Table 4).
Table 4

Hyperparameters of the pre-trained networks: Feature extraction hyperparameters were adopted from the optimal values in previous related work [20], while classifier hyperparameters were optimised on the pre-training data using cross-validation.

FEATURE EXTRACTION HYPERPARAMETERS
Hyperparameter | Value
MFCCs (M) | 39
Frame length (F) | 2^10 = 1024
Segments (S) | 150

CLASSIFIER HYPERPARAMETERS
Hyperparameter | Classifier | Value
Convolutional filters | CNN | 256 & 128 & 64
Kernel size | CNN | 2
Dropout rate | CNN, LSTM | 0.2
Dense layers (for pre-training) | CNN, LSTM, Resnet50 | 512 & 64 & 4
Dense layers (for fine-tuning) | CNN, LSTM, Resnet50 | 32 & 2
LSTM units | LSTM | 512 & 256 & 128
Learning rate | LSTM | 10^−3 = 0.001
Batch size | CNN, LSTM, Resnet50 | 2^7 = 128
Epochs | CNN, LSTM, Resnet50 | 70
The CNN consists of three convolutional layers, with 256, 128 and 64 (2 × 2) kernels respectively, each followed by (2, 2) max-pooling. The LSTM consists of three layers with 512, 256 and 128 LSTM units respectively, each including dropout with a rate of 0.2. A standard Resnet50, as described in Table 1 of [39], has been implemented with 512-dimensional dense layers. During pre-training, all three networks (CNN, LSTM and Resnet50) are terminated by three dense layers with dimensionalities 512, 64 and finally 4, corresponding to the four classes in Table 1. Relu activation functions were used throughout, except in the four-dimensional output layer, which was softmax. All the above architectural hyperparameters were chosen by optimising the four-class classifiers during cross-validation (Table 4).

After pre-training on the datasets described in Section 2.1, the 64 and 4-dimensional dense layers terminating the network were discarded from the CNN, the LSTM and the Resnet50. This left three trained deep neural networks, each accepting the same input dimensions and each with a 512-dimensional relu output layer. The parameters of these three pre-trained networks were then fixed for the remaining experiments.

In order to obtain COVID-19 classifiers by transfer learning, two dense layers are added after the 512-dimensional output layer of each of the three pre-trained deep networks. The final layer is a two-dimensional softmax, indicating the COVID-19 positive and negative classes respectively. The dimensionality of the penultimate layer was also considered a hyperparameter and was optimised during nested k-fold cross-validation; its optimal value was found to be 32 for all three architectures. The transfer learning process for a CNN architecture is illustrated in Fig. 4.
Fig. 4

CNN Transfer Learning Architecture. Cross-validation on the pre-training data determined the optimal CNN architecture to have three convolutional layers with 256, 128 and 64 (2 × 2) kernels respectively, each followed by (2,2) max-pooling. The convolutional layers were followed by two dense layers with 512 and 64 relu units each, and the network was terminated by a 4-dimensional softmax. To apply transfer learning, the final two layers were removed and replaced with a new dense layer and a terminating 2-dimensional softmax to account for COVID-19 positive and negative classes. Only this newly added portion of the network was trained for classification on the data with COVID-19 labels. In addition, the outputs of the third-last layer (512-dimensional dense relu) from the pre-trained network were used as bottleneck features.


Bottleneck features

The 512-dimensional output of the three pre-trained networks described in the previous section has a much lower dimensionality than the (3M + 2) × S = (3 × 39 + 2) × 150 = 17 850-dimensional input matrix of primary features (Table 4). The output of this layer can therefore be viewed as a bottleneck feature vector [[40], [41], [42]]. In addition to fine-tuning, where we add terminating dense layers to the three pre-trained networks and optimise these for the binary COVID-19 detection task as shown in Fig. 4, we have trained logistic regression (LR), support vector machine (SVM), k-nearest neighbour (KNN) and multilayer perceptron (MLP) classifiers using these bottleneck features as inputs. The choice between bottleneck features computed by the CNN, the LSTM and the Resnet50 was based on which architecture performed best in the corresponding transfer learning experiments. Since the Resnet50 achieved higher development set AUCs than the CNN and the LSTM during transfer learning, it was used to extract the bottleneck features on which the LR, SVM, KNN and MLP classifiers were trained.
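To illustrate how a shallow classifier consumes bottleneck features, the following toy sketch pairs a stand-in 512-dimensional extractor with a simple numpy k-nearest-neighbour vote. This is not the study's implementation of the LR, SVM, KNN and MLP classifiers; the extractor, data and helper names are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in bottleneck extractor: a fixed projection to 512 dimensions,
# playing the role of the pre-trained Resnet50's 512-d dense output.
W = rng.normal(size=(8, 512)) / 2.0
def bottleneck(X):
    return np.maximum(X @ W, 0.0)

def knn_predict(B_train, y_train, B_test, k=5):
    """k-nearest-neighbour majority vote in the bottleneck feature space."""
    d = np.linalg.norm(B_test[:, None, :] - B_train[None, :, :], axis=2)
    nn = np.argsort(d, axis=1)[:, :k]
    return (y_train[nn].mean(axis=1) >= 0.5).astype(int)

# Toy binary labels standing in for COVID-19 positive/negative
X = rng.normal(size=(200, 8))
y = (X[:, :2].sum(axis=1) > 0).astype(int)
B = bottleneck(X)

pred = knn_predict(B[:160], y[:160], B[160:], k=5)
acc = (pred == y[160:]).mean()
print("test accuracy:", acc)
```

The point of the bottleneck representation is exactly this: a 512-dimensional vector is small enough for distance-based and linear classifiers that could not operate directly on the 17 850-dimensional primary feature matrix.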

Experimental method

We have evaluated the effectiveness of transfer learning (Section 4) and bottleneck feature extraction (Section 5) using CNN, LSTM and Resnet50 architectures in improving the performance of COVID-19 classification based on cough, breath and speech audio signals. In order to place these results in context, we provide two baselines. As a first baseline, we train the three deep architectures (CNN, LSTM and Resnet50) directly on the primary features extracted from the data containing COVID-19 labels (described in Section 2.2), hence skipping the pre-training. Some of these baseline results were reported in our previous work [20]. As a second baseline, we train shallow classifiers (LR, SVM, KNN and MLP) on the primary input features (described in Section 3), also extracted from the data containing COVID-19 labels. The performance of these baseline systems will be compared against two pre-trained alternatives: (1) deep architectures (CNN, LSTM and Resnet50) trained by the transfer learning process, where each network is pre-trained (Section 4) and its final two layers are subsequently fine-tuned on the data containing COVID-19 labels (Section 2.2); and (2) shallow architectures (LR, SVM, KNN and MLP) trained on the bottleneck features extracted by the pre-trained networks.

Hyperparameter optimisation

Hyperparameters of the three pre-trained networks have already been described in Section 4 and are listed in Table 4. The remaining hyperparameters are those of the baseline deep classifiers (CNN, LSTM and Resnet50 without pre-training), the four shallow classifiers (LR, SVM, KNN and MLP), and the dimensionality of the penultimate layer of the deep architectures during transfer learning. With the exception of the Resnet50, the optimisation of all these hyperparameters and the associated performance evaluation were performed within the inner loops of a nested k-fold cross-validation scheme [43]. Due to the excessive computational requirements of optimising the Resnet50 hyperparameters within the same cross-validation framework, we have used the standard 50-layer architecture with skip connections in all experiments [39]. Classifier hyperparameters and the values considered during optimisation are listed in Table 5. A five-fold split, similar to that employed in Ref. [20], was used for the nested cross-validation.
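The nested scheme can be sketched as follows. The `fit_score` classifier here is a deliberately trivial stand-in (thresholding one feature, with the feature index as its only "hyperparameter"), and the fold counts are illustrative rather than the exact splits used in the study; the structure of the two loops is the point.

```python
import numpy as np

def nested_cv(X, y, hyperparams, fit_score, n_outer=5, n_inner=4, seed=0):
    """Nested k-fold CV: inner folds choose a hyperparameter, outer folds
    give an unbiased performance estimate (mean and std over folds)."""
    rng = np.random.default_rng(seed)
    outer = np.array_split(rng.permutation(len(y)), n_outer)
    outer_scores = []
    for i in range(n_outer):
        test = outer[i]
        dev = np.concatenate([outer[j] for j in range(n_outer) if j != i])
        # Inner loop: score each hyperparameter on the inner validation folds
        inner = np.array_split(dev, n_inner)
        best_h, best = None, -np.inf
        for h in hyperparams:
            scores = []
            for k in range(n_inner):
                val = inner[k]
                tr = np.concatenate([inner[j] for j in range(n_inner) if j != k])
                scores.append(fit_score(h, X[tr], y[tr], X[val], y[val]))
            if np.mean(scores) > best:
                best, best_h = np.mean(scores), h
        # Refit with the winning hyperparameter, score on the held-out fold
        outer_scores.append(fit_score(best_h, X[dev], y[dev], X[test], y[test]))
    return np.mean(outer_scores), np.std(outer_scores)

# Toy stand-in classifier: threshold feature h at its training median
def fit_score(h, Xtr, ytr, Xte, yte):
    thr = np.median(Xtr[:, h])
    return ((Xte[:, h] > thr).astype(int) == yte).mean()

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 4))
y = (X[:, 2] > 0).astype(int)
mean_score, std_score = nested_cv(X, y, hyperparams=[0, 1, 2, 3], fit_score=fit_score)
print(round(mean_score, 2), round(std_score, 3))
```

The standard deviation over the outer folds returned here is the σ_AUC quantity reported alongside the AUCs in Tables 6 to 8, which the paper uses as its measure of classifier consistency.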
Table 5

Classifier hyperparameters, optimised using leave-p-out nested cross-validation.

Hyperparameter | Classifier | Range
Regularisation strength (α1) | LR, SVM | 10^i, where i = −7, −6, …, 6, 7
l1 penalty (α2) | LR | 0 to 1 in steps of 0.05
l2 penalty (α3) | LR, MLP | 0 to 1 in steps of 0.05
Kernel coefficient (α4) | SVM | 10^i, where i = −7, −6, …, 6, 7
No. of neighbours (α5) | KNN | 10 to 100 in steps of 10
Leaf size (α6) | KNN | 5 to 30 in steps of 5
No. of neurons (α7) | MLP | 10 to 100 in steps of 10
No. of convolutional filters (β1) | CNN | 3 × 2^k, where k = 3, 4, 5
Kernel size (β2) | CNN | 2 and 3
Dropout rate (β3) | CNN, LSTM | 0.1 to 0.5 in steps of 0.2
Dense layer size (β4) | CNN, LSTM | 2^k, where k = 4, 5
LSTM units (β5) | LSTM | 2^k, where k = 6, 7, 8
Learning rate (β6) | LSTM, MLP | 10^k, where k = −2, −3, −4
Batch size (β7) | CNN, LSTM | 2^k, where k = 6, 7, 8
Epochs (β8) | CNN, LSTM | 10 to 250 in steps of 20

Classifier evaluation

Receiver operating characteristic (ROC) curves were calculated within both the inner and outer loops of the nested cross-validation scheme described in the previous section. The inner-loop ROC values were used for the hyperparameter optimisation, while the average of the outer-loop ROC values indicates final classifier performance on the independent held-out test sets. The AUC score indicates how well the classifier performs over a range of decision thresholds [44]. The threshold achieving an equal error rate (γ_EE) was computed from these curves. We denote the mean per-frame probability that an event such as a cough is from a COVID-19 positive subject by p̂:

p̂ = (1/N) Σ_{i=1}^{N} P(Y = 1 | X_i, θ)   (1)

where N indicates the number of frames in the event and P(Y = 1 | X_i, θ) is the output of the classifier for feature vector X_i and parameters θ for the ith frame. Now we define the indicator variable C as:

C = 1 if p̂ ≥ γ_EE, and C = 0 otherwise   (2)

We then define two COVID-19 index scores, CI_1 and CI_2, in Equations (3) and (4) respectively, with N_1 the number of events from the subject in the recording and N_2 the total number of frames in those events:

CI_1 = (1/N_1) Σ_{k=1}^{N_1} C_k   (3)

CI_2 = (1/N_2) Σ_{i=1}^{N_2} P(Y = 1 | X_i, θ)   (4)

Hence Equation (3) computes a per-event average while Equation (4) computes a per-frame average probability. The choice between Equations (3) and (4) was considered an additional hyperparameter during cross-validation, and it was found that taking the maximum value of the index scores consistently led to the best performance. The average specificity, sensitivity and accuracy, as well as the AUC together with its standard deviation (σ_AUC), are shown in Table 6, Table 7, Table 8 for cough, breath and speech events respectively. These values have all been calculated over the outer folds during nested cross-validation. Hyperparameters producing the highest AUC over the inner loops are reported as the ‘best classifier hyperparameters’.
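Under the definitions above, the two index scores can be computed directly from per-frame classifier outputs. A small sketch with hypothetical frame probabilities for three events from one subject (the helper name and the numbers are illustrative):

```python
import numpy as np

def covid_index_scores(frame_probs, gamma_ee):
    """Compute the two COVID-19 index scores from per-frame outputs.

    frame_probs: list of arrays, one per event; frame_probs[k][i] is
    P(Y = 1 | X_i, theta) for frame i of event k.
    """
    p_hat = np.array([p.mean() for p in frame_probs])  # Eq. (1), per event
    C = (p_hat >= gamma_ee).astype(int)                # Eq. (2)
    CI1 = C.mean()                                     # Eq. (3): per-event average
    CI2 = np.concatenate(frame_probs).mean()           # Eq. (4): per-frame average
    return CI1, CI2

# Three events with hypothetical per-frame probabilities
events = [np.array([0.9, 0.8, 0.95]),
          np.array([0.4, 0.3]),
          np.array([0.7, 0.9])]
print(covid_index_scores(events, gamma_ee=0.5))
```

Here two of the three events exceed the γ_EE threshold, so CI_1 = 2/3, while CI_2 averages all seven frame probabilities; the subject-level decision then uses the larger of the two scores, as described above.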
Table 6

COVID-19 cough classification performance. For the Coswara, Sarcos and ComParE datasets the highest AUCs of 0.982, 0.961 and 0.944 respectively were achieved by a Resnet50 trained by transfer learning in the first two cases and a KNN classifier using 12 primary features determined by sequential forward selection (SFS) in the third. When Sarcos is used exclusively as a validation set for a classifier trained on the Coswara data, an AUC of 0.954 is achieved.

Dataset | ID | Classifier | Best feature hyperparameters | Best classifier hyperparameters (optimised inside nested cross-validation) | Spec | Sens | Acc | AUC | σ(AUC)
Coswara | C1 | Resnet50 + TL | Table 4 | Default Resnet50 (Table 1 in Ref. [39]) | 97% | 98% | 97% | 0.982 | 2 × 10⁻³
 | C2 | CNN + TL | Table 4 | | 92% | 98% | 95% | 0.972 | 3 × 10⁻³
 | C3 | LSTM + TL | | | 93% | 95% | 94% | 0.964 | 3 × 10⁻³
 | C4 | MLP + BNF | | α3 = 0.35, α7 = 50 | 92% | 96% | 94% | 0.963 | 4 × 10⁻³
 | C5 | SVM + BNF | | α1 = 10⁴, α4 = 10¹ | 89% | 93% | 91% | 0.942 | 3 × 10⁻³
 | C6 | KNN + BNF | | α5 = 20, α6 = 15 | 88% | 90% | 89% | 0.917 | 7 × 10⁻³
 | C7 | LR + BNF | | α1 = 10⁻¹, α2 = 0.5, α3 = 0.5 | 84% | 86% | 85% | 0.898 | 8 × 10⁻³
 | C8 | Resnet50 + PF [20] | Table 4 in [20] | Default Resnet50 (Table 1 in Ref. [39]) | 98% | 93% | 95% | 0.976 | 18 × 10⁻³
 | C9 | CNN + PF [20] | Table 4 in [20] | | 99% | 90% | 95% | 0.953 | 39 × 10⁻³
 | C10 | LSTM + PF [20] | | | 97% | 91% | 94% | 0.942 | 43 × 10⁻³
Sarcos | C11 | Resnet50 + TL | Table 4 | Default Resnet50 (Table 1 in Ref. [39]) | 92% | 96% | 94% | 0.961 | 3 × 10⁻³
 | C12 | LSTM + TL | Table 4 | | 92% | 92% | 92% | 0.943 | 3 × 10⁻³
 | C13 | CNN + TL | | | 89% | 91% | 90% | 0.917 | 4 × 10⁻³
 | C14 | MLP + BNF | | α3 = 0.75, α7 = 70 | 88% | 90% | 89% | 0.913 | 7 × 10⁻³
 | C15 | SVM + BNF | | α1 = 10⁻², α4 = 10⁴ | 88% | 89% | 89% | 0.904 | 6 × 10⁻³
 | C16 | KNN + BNF | | α5 = 40, α6 = 20 | 85% | 87% | 86% | 0.883 | 8 × 10⁻³
 | C17 | LR + BNF | | α1 = 10⁻³, α2 = 0.4, α3 = 0.6 | 83% | 86% | 85% | 0.867 | 9 × 10⁻³
Sarcos (val only) | C18 | Resnet50 + TL | | Default Resnet50 (Table 1 in Ref. [39]) | 92% | 96% | 94% | 0.954 |
 | C19 | LSTM + PF [20] | Table 5 in [20] | Table 5 in [20] | 73% | 75% | 74% | 0.779 |
 | C20 | LSTM + PF + SFS [20] | | | 96% | 91% | 93% | 0.938 |
ComParE | C21 | Resnet50 + TL | Table 4 | Default Resnet50 (Table 1 in Ref. [39]) | 89% | 93% | 91% | 0.934 | 4 × 10⁻³
 | C22 | LSTM + TL | Table 4 | | 88% | 92% | 90% | 0.916 | 4 × 10⁻³
 | C23 | CNN + TL | | | 86% | 90% | 88% | 0.898 | 4 × 10⁻³
 | C24 | MLP + BNF | | α3 = 0.25, α7 = 20 | 85% | 90% | 88% | 0.912 | 5 × 10⁻³
 | C25 | SVM + BNF | | α1 = 10⁻³, α4 = 10² | 85% | 90% | 88% | 0.903 | 6 × 10⁻³
 | C26 | KNN + BNF | | α5 = 70, α6 = 20 | 85% | 86% | 86% | 0.882 | 8 × 10⁻³
 | C27 | LR + BNF | | α1 = 10⁴, α2 = 0.3, α3 = 0.7 | 84% | 86% | 85% | 0.863 | 8 × 10⁻³
 | C28 | KNN + PF + SFS | B = 60, F = 2¹¹, S = 70 | α5 = 60, α6 = 25 | 84% | 90% | 92% | 0.944 | 9 × 10⁻³
 | C29 | KNN + PF | B = 60, F = 2¹¹, S = 70 | α5 = 60, α6 = 25 | 78% | 80% | 80% | 0.855 | 13 × 10⁻³
 | C30 | MLP + PF | M = 13, F = 2¹⁰, S = 100 | α3 = 0.65, α7 = 40 | 76% | 80% | 78% | 0.839 | 14 × 10⁻³
 | C31 | SVM + PF | B = 80, F = 2⁹, S = 70 | α1 = 10⁻⁴, α4 = 10⁻¹ | 75% | 78% | 77% | 0.814 | 12 × 10⁻³
 | C32 | LR + PF | B = 140, F = 2¹¹, S = 70 | α1 = 10⁻², α2 = 0.6, α3 = 0.4 | 69% | 73% | 71% | 0.789 | 13 × 10⁻³
Table 7

COVID-19 breath classifier performance: For breaths, the best performance was achieved by an SVM using bottleneck features (AUC = 0.942). The Resnet50 classifier trained by transfer learning achieves a similar AUC of 0.934.

Dataset | ID | Classifier | Best feature hyperparameters | Best classifier hyperparameters (optimised inside nested cross-validation) | Spec | Sens | Acc | AUC | σ(AUC)
Coswara | B1 | Resnet50 + TL | Table 4 | Default Resnet50 (Table 1 in Ref. [39]) | 87% | 93% | 90% | 0.934 | 3 × 10⁻³
 | B2 | LSTM + TL | Table 4 | | 86% | 90% | 88% | 0.927 | 3 × 10⁻³
 | B3 | CNN + TL | | | 85% | 89% | 87% | 0.914 | 3 × 10⁻³
 | B4 | SVM + BNF | | α1 = 10², α4 = 10⁻² | 88% | 94% | 91% | 0.942 | 4 × 10⁻³
 | B5 | MLP + BNF | | α3 = 0.45, α7 = 50 | 87% | 93% | 90% | 0.923 | 6 × 10⁻³
 | B6 | KNN + BNF | | α5 = 70, α6 = 10 | 87% | 93% | 90% | 0.922 | 9 × 10⁻³
 | B7 | LR + BNF | | α1 = 10⁻⁴, α2 = 0.8, α3 = 0.2 | 86% | 90% | 88% | 0.891 | 8 × 10⁻³
 | B8 | Resnet50 + PF | M = 39, F = 2¹⁰, S = 150 | Default Resnet50 (Table 1 in Ref. [39]) | 92% | 90% | 91% | 0.923 | 34 × 10⁻³
 | B9 | LSTM + PF | M = 26, F = 2¹¹, S = 120 | β3 = 0.1, β4 = 32, β5 = 128, β6 = 0.001, β7 = 256, β8 = 170 | 90% | 86% | 88% | 0.917 | 41 × 10⁻³
 | B10 | CNN + PF | M = 52, F = 2¹⁰, S = 100 | β1 = 48, β2 = 2, β3 = 0.3, β4 = 32, β7 = 256, β8 = 210 | 87% | 85% | 86% | 0.898 | 42 × 10⁻³
Table 8

COVID-19 speech classifier performance: For the Coswara (normal and fast speech) and the ComParE speech data, the highest AUCs were 0.893, 0.861 and 0.923 respectively, achieved by a Resnet50 trained by transfer learning in the first two cases and an SVM using bottleneck features in the third.

Dataset | ID | Classifier | Best feature hyperparameters | Best classifier hyperparameters (optimised inside nested cross-validation) | Spec | Sens | Acc | AUC | σ(AUC)
Coswara normal speech | S1 | Resnet50 + TL | Table 4 | Default Resnet50 (Table 1 in Ref. [39]) | 90% | 85% | 87% | 0.893 | 3 × 10⁻³
 | S2 | LSTM + TL | Table 4 | | 88% | 82% | 85% | 0.877 | 4 × 10⁻³
 | S3 | CNN + TL | | | 88% | 81% | 85% | 0.875 | 4 × 10⁻³
 | S4 | MLP + BNF | | α3 = 0.25, α7 = 60 | 83% | 85% | 84% | 0.871 | 8 × 10⁻³
 | S5 | SVM + BNF | | α1 = 10⁻⁶, α4 = 10⁵ | 83% | 85% | 84% | 0.867 | 7 × 10⁻³
 | S6 | KNN + BNF | | α5 = 50, α6 = 10 | 80% | 85% | 83% | 0.868 | 6 × 10⁻³
 | S7 | LR + BNF | | α1 = 10², α2 = 0.6, α3 = 0.4 | 79% | 83% | 81% | 0.852 | 7 × 10⁻³
 | S8 | Resnet50 + PF | M = 26, F = 2¹⁰, S = 120 | Default Resnet50 (Table 1 in Ref. [39]) | 84% | 80% | 82% | 0.864 | 51 × 10⁻³
 | S9 | LSTM + PF | M = 26, F = 2¹¹, S = 150 | β3 = 0.1, β4 = 32, β5 = 128, β6 = 0.001, β7 = 256, β8 = 170 | 84% | 78% | 81% | 0.844 | 51 × 10⁻³
 | S10 | CNN + PF | M = 39, F = 2¹⁰, S = 120 | β1 = 48, β2 = 2, β3 = 0.3, β4 = 32, β7 = 256, β8 = 210 | 82% | 78% | 80% | 0.832 | 52 × 10⁻³
Coswara fast speech | S11 | Resnet50 + TL | Table 4 | Default Resnet50 (Table 1 in Ref. [39]) | 84% | 78% | 81% | 0.861 | 2 × 10⁻³
 | S12 | LSTM + TL | Table 4 | | 83% | 78% | 81% | 0.860 | 3 × 10⁻³
 | S13 | CNN + TL | | | 82% | 76% | 79% | 0.851 | 3 × 10⁻³
 | S14 | MLP + BNF | | α3 = 0.55, α7 = 70 | 78% | 83% | 81% | 0.858 | 7 × 10⁻³
 | S15 | SVM + BNF | | α1 = 10⁴, α4 = 10⁻² | 78% | 83% | 81% | 0.856 | 8 × 10⁻³
 | S16 | KNN + BNF | | α5 = 60, α6 = 15 | 77% | 83% | 81% | 0.854 | 8 × 10⁻³
 | S17 | LR + BNF | | α1 = 10⁻³, α2 = 0.4, α3 = 0.6 | 77% | 82% | 80% | 0.841 | 11 × 10⁻³
 | S18 | LSTM + PF | M = 26, F = 2¹¹, S = 120 | β3 = 0.1, β4 = 32, β5 = 128, β6 = 0.001, β7 = 256, β8 = 170 | 84% | 80% | 82% | 0.856 | 47 × 10⁻³
 | S19 | Resnet50 + PF | M = 39, F = 2¹⁰, S = 150 | Default Resnet50 (Table 1 in Ref. [39]) | 82% | 78% | 80% | 0.822 | 45 × 10⁻³
 | S20 | CNN + PF | M = 52, F = 2¹⁰, S = 100 | β1 = 48, β2 = 2, β3 = 0.3, β4 = 32, β7 = 256, β8 = 210 | 79% | 77% | 78% | 0.810 | 41 × 10⁻³
ComParE | S21 | Resnet50 + TL | Table 4 | Default Resnet50 (Table 1 in Ref. [39]) | 84% | 90% | 87% | 0.914 | 4 × 10⁻³
 | S22 | LSTM + TL | Table 4 | | 82% | 88% | 85% | 0.897 | 5 × 10⁻³
 | S23 | CNN + TL | | | 80% | 88% | 84% | 0.892 | 5 × 10⁻³
 | S24 | SVM + BNF | | α1 = 10⁻¹, α4 = 10³ | 84% | 88% | 86% | 0.923 | 4 × 10⁻³
 | S25 | MLP + BNF | | α3 = 0.3, α7 = 60 | 80% | 88% | 84% | 0.905 | 6 × 10⁻³
 | S26 | KNN + BNF | | α5 = 20, α6 = 15 | 80% | 86% | 83% | 0.891 | 7 × 10⁻³
 | S27 | LR + BNF | | α1 = 10², α2 = 0.45, α3 = 0.7 | 81% | 85% | 83% | 0.890 | 7 × 10⁻³
 | S28 | MLP + PF + SFS | M = 26, F = 2¹¹, S = 150 | α3 = 0.35, α7 = 70 | 82% | 88% | 85% | 0.912 | 11 × 10⁻³
 | S29 | MLP + PF | M = 26, F = 2¹¹, S = 150 | α3 = 0.35, α7 = 70 | 81% | 85% | 83% | 0.893 | 14 × 10⁻³
 | S30 | KNN + PF | B = 100, F = 2¹⁰, S = 120 | α5 = 70, α6 = 15 | 80% | 84% | 82% | 0.847 | 16 × 10⁻³
 | S31 | SVM + PF | B = 80, F = 2¹¹, S = 120 | α1 = 10⁻², α4 = 10⁻³ | 79% | 81% | 80% | 0.836 | 15 × 10⁻³
 | S32 | LR + PF | B = 60, F = 2¹⁰, S = 100 | α1 = 10⁴, α2 = 0.35, α3 = 0.65 | 69% | 72% | 71% | 0.776 | 18 × 10⁻³

Experimental results

COVID-19 classification performance based on cough, breath and speech is presented in Table 6, Table 7 and Table 8 respectively. These tables include the performance of baseline deep classifiers without pre-training, deep classifiers trained by transfer learning (TL), shallow classifiers using bottleneck features (BNF) and baseline shallow classifiers trained directly on the primary features (PF). The best-performing classifiers appear first for each dataset and the baseline results are shown towards the end. Each system is identified by an 'ID'.
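The BNF systems above rest on one idea: strip the classification head from a network pre-trained on the unlabelled 4-class task (cough / sneeze / speech / noise) and use its penultimate activations as features for a shallow classifier. A minimal numpy sketch, with a stand-in two-layer network in place of the paper's CNN/LSTM/Resnet50 (all names and dimensions here are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a network pre-trained on the 4-class audio task:
# two hidden layers followed by a 4-class output head.
W1, b1 = rng.standard_normal((64, 32)), np.zeros(32)
W2, b2 = rng.standard_normal((32, 16)), np.zeros(16)  # bottleneck layer
W3, b3 = rng.standard_normal((16, 4)), np.zeros(4)    # 4-class head

def pretrained_forward(x):
    h1 = np.tanh(x @ W1 + b1)
    h2 = np.tanh(h1 @ W2 + b2)   # bottleneck activations
    logits = h2 @ W3 + b3        # used only during pre-training
    return h2, logits

def bottleneck_features(x):
    # Discard the final classification layer and keep the
    # penultimate activations as a compact feature vector.
    h2, _ = pretrained_forward(x)
    return h2

x = rng.standard_normal(64)      # a primary feature vector (e.g. MFCCs)
feats = bottleneck_features(x)
# A shallow classifier (MLP, SVM, KNN or LR) is then trained on `feats`
# using the smaller COVID-19-labelled dataset.
```

The transfer-learning (TL) systems differ only in what happens next: instead of freezing the network and exporting `feats`, the head is replaced and the whole network is fine-tuned on the COVID-19-labelled data.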

Coughs

We found in our previous work [20] that, when training a Resnet50 on only the Coswara dataset, an AUC of 0.976 (σ = 0.018) can be achieved for the binary classification problem of distinguishing COVID-19 coughs from healthy coughs. These results are reproduced as baseline systems C8, C9 and C10 in Table 6. The improved results achieved by transfer learning are indicated by systems C1 to C7 in the same table. Specifically, system C1 shows that, by applying transfer learning as described in Section 4, the same Resnet50 architecture can achieve an AUC of 0.982 (σ = 0.002). The entries for systems C2 and C3 show that pre-training also improves the AUCs achieved by the deep CNN and LSTM classifiers, from 0.953 (system C9) to 0.972 (system C2) and from 0.942 (system C10) to 0.964 (system C3) respectively. Of particular note in all these cases is the substantial decrease in the standard deviation of the AUC (σ) observed during cross-validation when implementing transfer learning. This indicates that pre-training leads to classifiers with more consistent performance on the unseen test data.
The Sarcos dataset is much smaller than the Coswara dataset and too small to train a deep classifier directly. For this reason, it was used only as an independent validation dataset for classifiers trained on the Coswara data in our previous work [20]. It can however be used to fine-tune pre-trained classifiers during transfer learning, and the resulting performance is reflected by systems C11 to C17 in Table 6. Previously, an AUC of 0.938 (system C20) was achieved when using Sarcos as an independent validation set and applying sequential forward selection (SFS) [20]. Here, we find that transfer learning applied to the Resnet50 model results in an AUC of 0.961 (system C11) and a lower standard deviation (σ = 0.003).
As an additional experiment, we apply the Resnet50 classifier trained by transfer learning using the Coswara data to the Sarcos data, thus again using the latter exclusively as an independent validation set. The resulting performance is indicated by system C18, while the previous baselines are repeated as systems C19 and C20 [20]. System C18 achieves an AUC of 0.954, which is only slightly below the 0.961 achieved by system C11, where the pre-trained model used the Sarcos data for fine-tuning, and slightly higher than the AUC of 0.938 achieved by system C20, the baseline LSTM trained on Coswara without transfer learning but employing SFS [45]. This supports our earlier observation that transfer learning appears to lead to more robust classifiers that can generalise to other datasets. Due to the extreme computational load, we have not yet been able to evaluate SFS within the transfer learning framework.
For the ComParE dataset, we have included shallow classifiers trained directly on the primary input features (KNN + PF, MLP + PF, SVM + PF and LR + PF). These are the baseline systems C29 to C32 in Table 6. The best-performing shallow classifier is C29, where a KNN used 60 linearly spaced filterbank log energies as features. System C28 is the result of applying SFS to system C29. In this case, SFS identifies the top 12 features based on the development sets used during nested cross-validation, and results in the best-performing shallow system with an AUC of 0.944. This represents a substantial improvement over the AUC of 0.855 achieved by the same system without SFS (system C29). Systems C21 to C27 in Table 6 are obtained by transfer learning using the ComParE dataset. These show improved performance over the shallow classifiers without SFS. In particular, after transfer learning, the Resnet50 achieves almost the same AUC as the best ComParE system (system C28) with a lower σ.
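The greedy SFS procedure used for system C28 can be sketched as follows. This is a hedged illustration: the function name is hypothetical, and the toy scoring function stands in for the cross-validated AUC the paper would actually evaluate at each step.

```python
def sequential_forward_selection(candidates, score, k):
    """Greedy SFS: grow the feature set one feature at a time, each
    step adding the candidate that maximises the score of the
    resulting subset (in the paper, a cross-validated AUC on the
    development sets)."""
    selected = []
    remaining = list(candidates)
    while remaining and len(selected) < k:
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy score: subsets containing features 2, 5 and 7 do well.
useful = {2: 0.4, 5: 0.3, 7: 0.1}
toy_score = lambda subset: sum(useful.get(f, 0.01) for f in subset)
picked = sequential_forward_selection(range(10), toy_score, k=3)
```

With this toy score the greedy search selects features 2, 5 and 7 in order of marginal benefit; in the paper the analogous search over filterbank features stopped at the top 12.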
Considering the performance of the shallow classifiers trained on the bottleneck features across all three datasets in Table 6, we see a consistent improvement over the use of primary features with the same classifiers. The ROC curves for the best-performing COVID-19 cough classifiers are shown in Fig. 5.
Fig. 5

COVID-19 cough classification: A Resnet50 classifier with transfer learning achieved the highest AUC in classifying COVID-19 coughs for the Coswara and Sarcos datasets (0.982 and 0.961 respectively). For the ComParE dataset, AUCs of 0.944 and 0.934 were achieved by a KNN classifier using 12 features identified by SFS and by a Resnet50 classifier trained by transfer learning respectively.

Breath

Table 7 demonstrates that COVID-19 classification is also possible on the basis of breath signals. The baseline systems B8, B9 and B10 are trained directly on the primary features, without pre-training. By comparing these baselines with B1, B2 and B3, we see that transfer learning leads to a small improvement in AUC for all three deep architectures. Furthermore, systems B4 to B7 show that comparable performance can be achieved by shallow classifiers using the bottleneck features. The best overall performance (AUC = 0.942) was achieved by an SVM classifier trained on the bottleneck features (system B4). However, the Resnet50 trained by transfer learning (system B1) performed almost equally well (AUC = 0.934). The ROC curves for the best-performing COVID-19 breath classifiers are shown in Fig. 6. As was observed for coughs, the standard deviation of the AUC (σ) is consistently lower for the pre-trained networks.
Fig. 6

COVID-19 breath classification: An SVM classifier using bottleneck features (BNF) achieved the highest AUC of 0.942 when classifying COVID-19 breath. The Resnet50 with and without transfer learning achieved AUCs of 0.934 and 0.923 respectively, with a higher σ for the latter (Table 7).

Speech

Although not as informative as cough or breath audio, COVID-19 classification can also be achieved on the basis of speech audio recordings. For Coswara, the best classification performance (AUC = 0.893) was achieved by a Resnet50 after applying transfer learning (system S1). For the ComParE data, the top performer (AUC = 0.923) was an SVM trained on the bottleneck features (system S24). However, the Resnet50 trained by transfer learning performed almost equally well, with an AUC of 0.914 (system S21). Furthermore, while good performance was also achieved when using the deep architectures without applying the transfer learning process (systems S8–S10, S18–S20 and S28–S32), this again came at the cost of a substantially higher standard deviation σ. Finally, for the Coswara data, performance was generally better when speech was uttered at a normal pace rather than a fast pace. The ROC curves for the best-performing COVID-19 speech classifiers are shown in Fig. 7.
Fig. 7

COVID-19 speech classification: An SVM classifier using bottleneck features (BNF) achieved the highest AUC of 0.923 when classifying COVID-19 speech in the ComParE dataset. A Resnet50 trained by transfer learning achieves a slightly lower AUC of 0.914. Speech (normal and fast) in the Coswara dataset can be used to classify COVID-19 with AUCs of 0.893 and 0.861 respectively using a Resnet50 trained by transfer learning.

Discussion

Previous studies have shown that it is possible to distinguish between the coughing sounds made by COVID-19 positive and COVID-19 negative subjects by means of automatic classification and machine learning. However, the fairly small size of datasets with COVID-19 labels limits the effectiveness of these techniques. The results of the experiments presented in this study show that larger datasets of other vocal and respiratory audio that do not include COVID-19 labels can be leveraged to improve classification performance by applying transfer learning [46]. Specifically, we have shown that the accuracy of COVID-19 classification based on coughs can be improved by transfer learning for two datasets (Coswara and Sarcos), while almost optimal performance is achieved on a third dataset (ComParE). A similar trend is seen when performing COVID-19 classification based on breath and speech audio. However, these two types of audio appear to contain less distinguishing information, since the achieved classification performance is a little lower than it is for cough. Our best cough classification system has an area under the ROC curve (AUC) of 0.982, despite being trained on what remains a fairly small COVID-19 dataset of 1171 participants (92 COVID-19 positive and 1079 negative). Other research reports a similar AUC but using a much larger dataset of 8380 participants (2339 positive and 6041 negative) [47]. While our experiments also show that shallow classifiers, when used in conjunction with feature selection, can in some cases match or surpass the performance of the deeper architectures, a pre-trained Resnet50 architecture provides consistently optimal or near-optimal performance across all three types of audio signals and datasets. Due to the very high computational cost involved, we have not yet applied such feature selection to the deep architectures themselves, and this remains part of our ongoing work.
Another important observation that holds for all three types of audio signals is that transfer learning strongly reduces the variance in the AUC (σ) exhibited by the deep classifiers during cross-validation (Table 6, Table 7, Table 8). This suggests that transfer learning leads to more consistent classifiers that are less prone to over-fitting and better able to generalise to unseen test data. This is important because robustness to variable testing conditions is essential when implementing COVID-19 classification as a method of screening. An informal listening assessment of the Coswara and the ComParE data indicates that the former has greater variance and more noise than the latter. Our experimental results in Table 6, Table 7 and Table 8 show that, for speech classification on noisy data, fine-tuning a pre-trained network performs better, while for cleaner data, extracting bottleneck features and then applying a shallow classifier performs better. It is interesting to note that MFCCs are always the features of choice for the noisier dataset, while the log energies of linearly spaced filters are often preferred for the less noisy data. Although all other classifiers performed best when using these log-filterbank energy features, MLP classifiers performed best when using MFCCs and were best at classifying COVID-19 speech. A similar conclusion was drawn in Ref. [24], where coughs were recorded in a controlled environment with little environmental noise. A larger number of frames in the feature matrix also generally leads to better performance, as it allows the classifier to find more detailed temporal patterns in the audio signal. Finally, we note that, for the shallow classifiers, hyperparameter optimisation selected a higher number of MFCCs and a more densely populated filterbank than what is required to match the resolution of the human auditory system.
This agrees with an observation made in our previous work: the information used by the classifiers to detect the COVID-19 signature is, at least to some extent, not perceivable by the human ear.
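The linearly spaced filterbank log energies discussed above (the hyperparameter B counts the filters) can be sketched in numpy. This is an illustration under stated assumptions, not the authors' feature-extraction code: triangular filters are spaced linearly up to the Nyquist frequency, rather than on the mel scale used for MFCCs.

```python
import numpy as np

def linear_fbank_log_energies(frame, sample_rate, n_filters):
    """Log energies from n_filters triangular filters spaced *linearly*
    in frequency, applied to the power spectrum of one frame."""
    n_fft = len(frame)
    power = np.abs(np.fft.rfft(frame)) ** 2
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / sample_rate)
    # Linearly spaced filter edges from 0 Hz to the Nyquist frequency.
    edges = np.linspace(0, sample_rate / 2, n_filters + 2)
    energies = np.empty(n_filters)
    for i in range(n_filters):
        lo, mid, hi = edges[i], edges[i + 1], edges[i + 2]
        rising = (freqs - lo) / (mid - lo)    # ramp up to the centre
        falling = (hi - freqs) / (hi - mid)   # ramp down after it
        weights = np.maximum(np.minimum(rising, falling), 0.0)
        energies[i] = np.log(weights @ power + 1e-10)  # floor avoids log(0)
    return energies

# A pure 1 kHz tone concentrates its energy in one filter.
frame = np.sin(2 * np.pi * 1000 * np.arange(1024) / 16000)
logE = linear_fbank_log_energies(frame, sample_rate=16000, n_filters=40)
```

With 40 filters over an 8 kHz band, each triangle is about 195 Hz wide, so the 1 kHz tone lands almost entirely in a single filter's log energy.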

Conclusions

In this study, we have demonstrated that transfer learning can be used to improve the performance and robustness of deep neural network (DNN) classifiers for COVID-19 detection in vocal audio such as cough, breath and speech. We used a 10.29-h audio corpus without COVID-19 labels to pre-train a CNN, an LSTM and a Resnet50. This corpus contains four classes: cough, sneeze, speech and noise. In addition, we used the same architectures to extract bottleneck features by removing the final layers from the pre-trained models. Three smaller datasets containing cough, breath and speech audio with COVID-19 labels were then used to fine-tune the pre-trained models into COVID-19 audio classifiers using nested k-fold cross-validation. Our results show that a pre-trained Resnet50 classifier that is either fine-tuned or used as a bottleneck extractor delivers optimal or near-optimal performance across all datasets and all three audio classes. The results show that transfer learning using the larger dataset without COVID-19 labels led not only to improved performance, but also to a much smaller standard deviation of the classifier AUC, indicating better generalisation to unseen test data. The use of bottleneck features, which are extracted by the pre-trained deep models and are therefore also a way of incorporating out-of-domain data, likewise reduced this standard deviation and provided near-optimal performance. Furthermore, we see that cough audio carries the strongest COVID-19 signature, followed by breath and speech. The best-performing COVID-19 classifier achieved an area under the ROC curve (AUC) of 0.982 for cough, followed by an AUC of 0.942 for breath and 0.923 for speech. We conclude that successful classification is possible for all three classes of audio considered, and that deep transfer learning improves COVID-19 detection on the basis of cough, breath and speech signals, yielding automatic classifiers with higher accuracy and greater robustness.
This is significant since such COVID-19 screening is inexpensive, easily deployable and non-contact, and does not require medical expertise or laboratory facilities. It therefore has the potential to reduce the load on health care systems. As part of ongoing work, we are considering the application of feature selection in the deep architectures, the fusion of classifiers using the various audio classes such as cough, breath and speech, as well as the optimisation and adaptation necessary to allow deployment on a smartphone or similar mobile platform.

Declaration of competing interest

We confirm that there is no conflict of interest to declare.