| Literature DB >> 35768501 |
Sergi Gomez-Quintana, Alison O'Shea, Andreea Factor, Emanuel Popovici, Andriy Temko.
Abstract
The study proposes a novel method to empower healthcare professionals to interact with and leverage AI decision support in an intuitive manner using the auditory sense. The method's suitability is assessed through acoustic detection of the presence of neonatal seizures in electroencephalography (EEG). Neurophysiologists identify seizures visually in EEG recordings. However, neurophysiological expertise is expensive and not available 24/7, even in tertiary hospitals, and other neonatal and pediatric medical professionals (nurses, doctors, etc.) can misinterpret highly complex EEG signals. While artificial intelligence (AI) has been widely used to provide objective decision support for EEG analysis, AI decisions are not always explainable. This work developed a solution that combines AI algorithms with a human-centric, intuitive EEG interpretation method. Specifically, EEG is converted to sound using an AI-driven attention mechanism: the perceptual characteristics of seizure events can be heard, and an hour of EEG can be analysed in five seconds. A survey conducted among targeted end-users on a publicly available dataset demonstrated that the method not only drastically reduces the burden of reviewing EEG data, but also achieves accuracy on par with experienced neurophysiologists trained to interpret neonatal EEG. It is also shown that the proposed communion of a medical professional and AI outperforms AI alone, by empowering a human with little or no experience to leverage the AI attention mechanism, which enhances the perceptual characteristics of seizure events.
Year: 2022 PMID: 35768501 PMCID: PMC9243143 DOI: 10.1038/s41598-022-14894-4
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1: AI-driven sonification algorithm block diagram.
Figure 2: AI-driven sonification algorithm demo (EEG31 in the Helsinki dataset). EEG is converted into audio and compressed non-uniformly in time as a function of the seizure probability given by an AI algorithm (best seen in color).
Figure 3: Snapshot of the web survey used to assess the AI-driven sonification algorithm.
Figure 4: Fleiss' kappa when exchanging one annotator for the AI sonification survey results, with p-values.
Figure 5: Area under the curve (AUC) for the AI probabilistic output, and sensitivity/specificity of the majority vote of the survey participants.
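The non-uniform time compression described in Figure 2 can be illustrated with a minimal sketch. This is a hypothetical proportional-allocation scheme, not the paper's actual attention-driven implementation; the function `allocate_playback_time` and its parameters are assumptions made for illustration. The idea: per-epoch seizure probabilities from the AI model determine how much of the sonified output each epoch occupies, so high-probability segments dominate the few seconds of audio while background EEG is compressed away.

```python
import numpy as np

def allocate_playback_time(probs, total_audio_s=5.0, floor=0.05):
    """Allocate playback seconds to each EEG epoch in proportion to its
    AI seizure probability (illustrative scheme only).

    probs         : per-epoch seizure probabilities in [0, 1]
    total_audio_s : target length of the sonified output (e.g. ~5 s
                    for an hour of EEG, as quoted in the abstract)
    floor         : minimum weight so low-probability epochs are not
                    silenced entirely
    """
    probs = np.asarray(probs, dtype=float)
    weights = np.maximum(probs, floor)
    return total_audio_s * weights / weights.sum()

# One hour of EEG split into 60 one-minute epochs, mostly low probability
probs = np.full(60, 0.05)
probs[20:25] = 0.9  # a suspected seizure burst
alloc = allocate_playback_time(probs)
```

Under this scheme the five suspected-seizure epochs receive roughly 18x more playback time each than a background epoch, while the total output length stays fixed.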
Confusion matrix for the AI sonification majority vote.
| Predicted \ Actual | Positive | Negative |
|---|---|---|
| Positive | 42 | 7 |
| Negative | 5 | 25 |
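The operating point of the majority vote follows directly from the matrix above; the following is a small sanity-check script using the standard definitions of sensitivity, specificity, and accuracy (the definitions are textbook, not paper-specific).

```python
# Cell counts read from the confusion matrix (rows = predicted, columns = actual)
tp, fp = 42, 7   # predicted positive: true positives, false alarms
fn, tn = 5, 25   # predicted negative: missed seizures, true negatives

sensitivity = tp / (tp + fn)              # 42/47 ≈ 0.894
specificity = tn / (tn + fp)              # 25/32 ≈ 0.781
accuracy = (tp + tn) / (tp + fp + fn + tn)  # 67/79 ≈ 0.848

print(f"sensitivity={sensitivity:.3f}, "
      f"specificity={specificity:.3f}, accuracy={accuracy:.3f}")
```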
Average ± CI95 for duration, amplitude RMS, number of agreeing annotators, and AI seizure probability for correctly detected and missed seizures.
| Seizure patients (N = 47) | Duration (s) | Amplitude RMS (μV) | # Annotators agreeing | Seizure probability (AI) |
|---|---|---|---|---|
| Detected (N = 42) | 496 ± 157 | 67.8 ± 28.2 | 2.90 ± 0.0925 | 0.790 ± 0.0746 |
| Missed (N = 5) | 279 ± 207 | 18.6 ± 8.18 | 2.400 ± 0.629 | 0.228 ± 0.1058 |
| AUC | 0.595 | 0.861 | 0.752 | 0.981 |
Average ± CI95 for number of disagreeing annotators and AI seizure probability for correctly identified non-seizure patients and false alarms.
| Non-seizure patients (N = 32) | # Annotators disagreeing | Seizure probability (AI) |
|---|---|---|
| Correctly identified (N = 25) | 0.280 ± 0.188 | 0.287 ± 0.075 |
| False alarms (N = 7) | 0.428 ± 0.477 | 0.526 ± 0.125 |
| AUC | 0.574 | 0.851 |
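The per-column AUC values in both tables measure how well each quantity separates the two outcome groups (e.g. detected vs missed seizures). With per-patient values available, this is the normalized Mann-Whitney U statistic, i.e. the probability that a randomly chosen value from the positive group exceeds one from the negative group. Below is a minimal sketch of that computation; the input numbers are fabricated for illustration and are not the study's data.

```python
def auc_separation(pos, neg):
    """Probability that a random value from `pos` exceeds one from `neg`,
    with ties counted as 0.5 (equivalent to ROC AUC / Mann-Whitney U)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Illustrative (made-up) AI probabilities for detected vs missed seizures
detected = [0.9, 0.8, 0.85, 0.7]
missed = [0.3, 0.2]
print(auc_separation(detected, missed))  # 1.0: the groups are fully separated
```

An AUC near 1 (as for the AI seizure probability, 0.981) means the feature almost perfectly ranks detected cases above missed ones, whereas values near 0.5 (as for duration, 0.595) indicate little separating power.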