| Literature DB >> 30991690 |
Cristina Jácome, Johan Ravn, Einar Holsbø, Juan Carlos Aviles-Solis, Hasse Melbye, Lars Ailo Bongo.
Abstract
We applied deep learning to create an algorithm for breathing phase detection in lung sound recordings, and we compared the breathing phases detected by the algorithm with those manually annotated by two experienced lung sound researchers. Our algorithm uses a convolutional neural network with spectrograms as features, removing the need to specify features explicitly. We trained and evaluated the algorithm on three subsets that are larger than those previously reported in the literature. We evaluated performance in two ways. First, a discrete count of agreed breathing phases (using a 50% overlap criterion between a pair of boxes) shows a mean agreement with the lung sound experts of 97% for inspiration and 87% for expiration. Second, the fraction of time (in seconds) in agreement gives higher pseudo-kappa values for inspiration (0.73-0.88) than for expiration (0.63-0.84), with an average sensitivity of 97% and an average specificity of 84%. With both evaluation methods, the agreement between the annotators and the algorithm shows human-level performance for the algorithm. The developed algorithm is valid for detecting breathing phases in lung sound recordings.
Keywords: automated classification; breath detection; breath onset; deep learning; respiratory phases; spectrograms
Year: 2019 PMID: 30991690 PMCID: PMC6515330 DOI: 10.3390/s19081798
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.576
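The abstract describes feeding spectrogram images of the recordings to a convolutional neural network instead of hand-crafted features. Below is a minimal sketch of how such a spectrogram could be computed in Python; the file name, mono mix-down, sampling-rate handling, and STFT parameters are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch: spectrogram image of a lung sound recording, in the spirit
# of the paper's Figure 1. File name and STFT parameters are assumptions.
import numpy as np
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("lung_sound_recording.wav")  # e.g. a 15 s recording
if audio.ndim > 1:                                    # mix down to mono if needed
    audio = audio.mean(axis=1)

# Short-time Fourier transform; window/overlap chosen for illustration only.
freqs, times, sxx = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)

# Log-scale the power so breathing phases stand out, as in typical spectrogram images.
log_sxx = 10 * np.log10(sxx + 1e-10)
print(log_sxx.shape)  # (frequency bins, time frames) -> image-like input for a CNN
```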
Subsets from the Tromsø 7 lung sound dataset used in this experiment.
| Subset | Annotation | No. of Files | File Duration | No. of Inspirations Identified | No. of Expirations Identified |
|---|---|---|---|---|---|
| Subset 1 | Annotator 1 | 1022 | 10 s | 3212 | 2842 |
| Subset 2 (training) | Algorithm (inspected by Annotator 2) | 112 | 15 s | 447 | 418 |
| Subset 3 | Annotator 1 | 120 | 15 s | 479 | 436 |
| Subset 3 | Annotator 3 | 120 | 15 s | 499 | 459 |
Figure 1. Spectrogram image representation of a lung sound recording.
Figure 2. Spectrogram image representation of a lung sound recording, with a red box marking the inspiration phase and a yellow box the expiration phase: (a) without pruning; (b) without removing overlaps; (c) final result.
Figure 3. Agreement and disagreement counted as the fraction of time two annotations overlap. The black region illustrates a "positive" annotation, i.e., the presence of a breathing phase; the grey region illustrates a "negative," i.e., the absence of a breathing phase; the red regions indicate errors: a breathing phase annotated when none is present, or a breathing phase not annotated when one is present.
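As a complement to the Figure 3 caption, here is a hedged sketch of how such a fraction-of-time comparison could be computed: box annotations are rasterised onto a fine time grid and compared sample by sample, yielding the sensitivity and specificity reported later. The interval values, grid step, and function names are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the fraction-of-time comparison illustrated in Figure 3:
# (start, end) box annotations in seconds are rasterised onto a time grid and
# compared per sample. Interval values and grid resolution are made up.
import numpy as np

def to_time_grid(boxes, duration, step=0.01):
    """Return a boolean grid where True marks time inside any annotated phase."""
    grid = np.zeros(int(round(duration / step)), dtype=bool)
    for start, end in boxes:
        grid[int(round(start / step)):int(round(end / step))] = True
    return grid

def time_agreement(reference, prediction):
    """Sensitivity and specificity of `prediction` against `reference`, per time sample."""
    tp = np.sum(reference & prediction)
    tn = np.sum(~reference & ~prediction)
    fp = np.sum(~reference & prediction)
    fn = np.sum(reference & ~prediction)
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative inspiration annotations for one 15 s recording (seconds).
annotator = to_time_grid([(0.2, 1.4), (4.1, 5.3), (8.0, 9.1)], duration=15.0)
algorithm = to_time_grid([(0.3, 1.5), (4.0, 5.2), (8.2, 9.3)], duration=15.0)
sens, spec = time_agreement(annotator, algorithm)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```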
Percentage agreements between each annotator and the automatic method using boxes.
| Agreement Using Boxes | Inspiration | Expiration | Both Phases |
|---|---|---|---|
| Annotator 1 vs. Algorithm | 98% | 95% | 96% |
| Annotator 3 vs. Algorithm | 95% | 79% | 87% |
| Annotator 1 vs. Annotator 3 | 95% | 84% | 90% |
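The percentages above rest on a box-matching rule: per the abstract, a breathing phase counts as agreed when a pair of boxes overlaps by at least 50%. A minimal sketch of such a count is given below; the exact overlap definition (here taken relative to the shorter box) and the toy intervals are assumptions for illustration, not the paper's code.

```python
# Hypothetical sketch of the box-based agreement count reported above: an algorithm
# box is credited as agreeing with an annotator box when the two overlap by at
# least 50%. Overlap is measured relative to the shorter box (an assumption).
def overlap_at_least_half(box_a, box_b):
    """True if the temporal overlap covers >= 50% of the shorter box."""
    start = max(box_a[0], box_b[0])
    end = min(box_a[1], box_b[1])
    overlap = max(0.0, end - start)
    shorter = min(box_a[1] - box_a[0], box_b[1] - box_b[0])
    return overlap >= 0.5 * shorter

def count_agreed(annotator_boxes, algorithm_boxes):
    """Number of annotator boxes matched by at least one algorithm box."""
    return sum(
        any(overlap_at_least_half(a, b) for b in algorithm_boxes)
        for a in annotator_boxes
    )

annotator = [(0.2, 1.4), (4.1, 5.3), (8.0, 9.1)]   # inspiration boxes (seconds)
algorithm = [(0.3, 1.5), (4.6, 5.9), (11.0, 12.0)]
agreed = count_agreed(annotator, algorithm)
print(f"{agreed}/{len(annotator)} inspirations agreed")  # 2/3 in this toy example
```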
Figure 4. Pseudo-kappa between each annotator and the algorithm, and between the annotators. Confidence intervals are bootstrap percentile intervals.
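The Figure 4 caption mentions bootstrap percentile confidence intervals. A generic sketch of that procedure follows; the per-recording scores are simulated stand-ins and the statistic is a plain mean rather than the paper's pseudo-kappa.

```python
# Hypothetical sketch of a bootstrap percentile confidence interval, as mentioned
# in the Figure 4 caption. Data, resample count, and statistic are invented.
import numpy as np

rng = np.random.default_rng(0)
per_recording_agreement = rng.uniform(0.6, 0.95, size=120)  # stand-in scores

boot_means = np.array([
    rng.choice(per_recording_agreement, size=per_recording_agreement.size, replace=True).mean()
    for _ in range(2000)
])
low, high = np.percentile(boot_means, [2.5, 97.5])  # 95% percentile interval
print(f"95% CI: [{low:.3f}, {high:.3f}]")
```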
Sensitivity and specificity for both breathing phases and both annotators.
| | Sensitivity (Inspiration) | Sensitivity (Expiration) | Sensitivity (Both Phases) | Specificity (Inspiration) | Specificity (Expiration) | Specificity (Both Phases) |
|---|---|---|---|---|---|---|
| Algorithm vs. Annotator 1 | 97% | 94% | 96% | 86% | 87% | 87% |
| Algorithm vs. Annotator 3 | 98% | 97% | 98% | 84% | 78% | 81% |