| Literature DB >> 30927097 |
Tomasz Grzywalski, Mateusz Piecuch, Marcin Szajek, Anna Bręborowicz, Honorata Hafke-Dys, Jędrzej Kociński, Anna Pastusiak, Riccardo Belluzzo.
Abstract
Lung auscultation is an important part of the physical examination, but its major drawback is subjectivity: the results depend on the examiner's experience and ability to perceive and distinguish pathologies in the sounds heard through a stethoscope. This paper investigates a new method of automatic sound analysis based on neural networks (NNs), implemented in a system that uses an electronic stethoscope to capture respiratory sounds. It allows the detection of auscultatory sounds in four classes: wheezes, rhonchi, and fine and coarse crackles. In a blind test, 522 auscultatory sounds from 50 pediatric patients were presented, and the results provided by a group of doctors were compared with those of an artificial intelligence (AI) algorithm developed by the authors. The gathered data show that machine learning (ML)-based analysis is more efficient in detecting all four types of phenomena, which is reflected in high values of recall (also called sensitivity) and F1-score.

Conclusions: The obtained results suggest that implementing automatic sound analysis based on NNs can significantly improve the efficiency of this form of examination, minimizing the number of errors made in the interpretation of auscultation sounds.

What is Known: • The auscultation performance of the average physician is low. AI solutions presented in the scientific literature are based on small databases of isolated pathological sounds (which are far from real recordings) and mainly on the leave-one-out validation method, so they are not reliable.

What is New: • The AI learning process was based on thousands of signals from real patients, and a reliable description of the recordings was obtained through multiple validation by physicians and an acoustician, resulting in practical and statistical proof of the AI's high performance.
Keywords: Artificial intelligence; Auscultation; Machine learning; Respiratory system; Stethoscope
Year: 2019 PMID: 30927097 PMCID: PMC6511356 DOI: 10.1007/s00431-019-03363-2
Source DB: PubMed Journal: Eur J Pediatr ISSN: 0340-6199 Impact factor: 3.183
Fig. 1 The specific localization of auscultation points on the front (left panel) and back (right panel) of the chest
Fig. 2 Scheme of the GS data acquisition procedure
Number of recordings containing each specific pathological phenomenon
| Phenomenon | Number of recordings |
|---|---|
| Wheezes | 124 |
| Rhonchi | 113 |
| Coarse crackles | 66 |
| Fine crackles | 112 |
Fig. 3 Exemplary probability raster for fine crackles (a) and rhonchi (b): the signal (first line) is transformed into a spectrogram (second line) and analyzed by the NN. The output of the NN is presented as a two-dimensional matrix, called a probability raster (third line). The rows of the matrix represent time, framed in windows of 10 ms each; the columns show the probability of positive detection of each phenomenon. The raster is eventually post-processed to obtain Boolean values indicating the presence or absence of each phenomenon in each frame (fourth line)
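The post-processing step described in the caption can be sketched as a simple per-frame thresholding of the probability raster. This is a minimal illustration, not the authors' implementation: the cut-off value of 0.5 and the example raster values are assumptions, and the paper does not state how its actual post-processing works.

```python
def threshold_raster(raster, cutoff=0.5):
    """Convert a probability raster (frames x classes) into Boolean
    detections. Each row is one 10-ms frame; each column is one of the
    four phenomena (wheezes, rhonchi, fine crackles, coarse crackles).
    The 0.5 cut-off is an assumed, illustrative value."""
    return [[p >= cutoff for p in frame] for frame in raster]

# Hypothetical three-frame raster for illustration only.
raster = [
    [0.05, 0.10, 0.80, 0.02],
    [0.10, 0.20, 0.95, 0.01],
    [0.60, 0.55, 0.30, 0.02],
]
detections = threshold_raster(raster)
# Frames 1-2 flag fine crackles; frame 3 flags wheezes and rhonchi.
```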
Comparison of recall (sensitivity), precision, specificity, and F1-score for doctors (pediatricians) and the NN
| | Recall (sensitivity), % | | Precision, % | | Specificity, % | | F1-score, % | |
|---|---|---|---|---|---|---|---|---|
| | Doctors | NN | Doctors | NN | Doctors | NN | Doctors | NN |
| Coarse crackles | 56.1 | 56.1 | 34.6 | 40.7 | 84.6 | 88.2 | 42.8 | 47.1 |
| Fine crackles | 72.3 | 83.9 | 39.5 | 52.5 | 69.8 | 79.3 | 51.1 | 64.6 |
| Wheezes | 58.1 | 78.2 | 66.1 | 57.7 | 90.7 | 82.2 | 61.8 | 66.4 |
| Rhonchi | 67.3 | 87.6 | 55.9 | 61.1 | 85.3 | 84.6 | 61.0 | 72.0 |
| Mean | 63.5 | 76.5 | 49.0 | 53.0 | 82.6 | 83.6 | 54.2 | 62.5 |
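The four metrics in the table are the standard confusion-matrix quantities, and the F1-scores can be reproduced as the harmonic mean of the corresponding recall and precision columns. A minimal sketch (the definitions are standard; any confusion counts passed to `metrics` are illustrative, not data from the study):

```python
def metrics(tp, fp, fn, tn):
    """Standard confusion-matrix metrics, returned as percentages."""
    recall = 100 * tp / (tp + fn)        # a.k.a. sensitivity
    precision = 100 * tp / (tp + fp)
    specificity = 100 * tn / (tn + fp)
    f1 = 2 * precision * recall / (precision + recall)
    return recall, precision, specificity, f1

def f1_from(recall, precision):
    """F1 is the harmonic mean of recall and precision."""
    return 2 * recall * precision / (recall + precision)

# Cross-check against the table (rhonchi, NN column):
assert round(f1_from(87.6, 61.1), 1) == 72.0
```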