| Literature DB >> 36015865 |
Jia Xie, Zhu Wang, Zhiwen Yu, Bin Guo.
Abstract
Modern healthcare practice, especially in intensive care units, produces a vast amount of multivariate time series of health-related data, e.g., multi-lead electrocardiogram (ECG), pulse waveform, and blood pressure waveform. As a result, timely and accurate prediction of medical interventions (e.g., intravenous injection) becomes possible by mining such semantically rich time series. Existing work has mainly focused on onset prediction at the granularity of hours, which is too coarse for medication intervention in emergency medicine. This research proposes a Multi-Variable Hybrid Attentive Model (MVHA) to predict the impending need for medical intervention by jointly mining multiple time series. Specifically, a two-level attention mechanism is designed to capture the fluctuation and trend patterns of different time series. This work applied MVHA to predicting the impending intravenous injection needs of critical patients in intensive care units. Experiments on the MIMIC Waveform Database demonstrate that the proposed model achieves a prediction accuracy of 0.8475 and an ROC-AUC of 0.8318, significantly outperforming baseline models.
Keywords: attention mechanism; hybrid attentive model; medical intervention; multivariate time series
Year: 2022 PMID: 36015865 PMCID: PMC9414519 DOI: 10.3390/s22166104
Source DB: PubMed Journal: Sensors (Basel) ISSN: 1424-8220 Impact factor: 3.847
Figure 1. Conceptual illustration of abnormal fluctuations.
Notations for MVHA.
| Notation | Description |
|---|---|
| S, s | multivariate physiological signals (G and L); one of the signals in S |
| G, g_cg, g_k | high-frequency waveforms; the cg-th channel in G; the k-th segment in G |
| L, l_cl, l_k | numerical waveforms; the cl-th channel in L; the k-th segment in L |
| P, p_j | the convolutional features; the j-th column in P |
| O | output of the CNN layer, the sum of … |
| α, α_k | weights of the fluctuant-level attention; the k-th value in α |
| H, h_k | output of the Bi-LSTM layer; the k-th column in H |
| Z | combination of H, the sum of … |
| X, x_k | output of the fully connected layer; the k-th column of X |
| W_f, W_t | feature weights of the fluctuant-level attention; feature weights of the trend-level attention |
| d, … | output of the trend-level attention; difference between … |
| ŷ_i | prediction result of the i-th segment |
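From the notation above (H is the Bi-LSTM output with one column per segment, the fluctuant-level attention assigns a weight to each segment, and Z is their weighted combination), a minimal numpy sketch of this segment-level attention pooling; the scoring vector `w` and the softmax scoring itself are hypothetical stand-ins for the paper's learned parametrization:

```python
import numpy as np

def fluctuant_attention(H, w):
    """Segment-level attention pooling.

    H : (hidden_dim, K) -- output of the Bi-LSTM layer, one column per segment
    w : (hidden_dim,)   -- learned feature weights (hypothetical parametrization)
    Returns Z (the weighted combination of H's columns) and the weights alpha.
    """
    scores = w @ H                     # one scalar score per segment, shape (K,)
    scores = scores - scores.max()     # shift for numerical stability
    alpha = np.exp(scores) / np.exp(scores).sum()  # softmax over segments
    Z = H @ alpha                      # Z = sum_k alpha_k * h_k
    return Z, alpha

# toy example: 4 hidden units, 3 segments
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 3))
w = rng.standard_normal(4)
Z, alpha = fluctuant_attention(H, w)
```

The softmax guarantees the segment weights are non-negative and sum to one, so highly weighted segments (as visualized in Figure 8) can be read directly as the model's attention.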
Figure 2. An overview of the MVHA model.
Figure 3. The structure of the fluctuant attention mechanism.
Figure 4. The structure of the trend attention mechanism.
Figure 5. One subject's multi-channel time series, including a group of intravenous injections.
Figure 6. The detailed architecture of the hybrid model. In each layer, the values after the symbol '@' indicate the size of the convolution filter, the number of neurons, and the stride of the filter, or the size and the stride of the pooling layer, respectively.
Performance comparison of different models.
| Model | ACC | ROC-AUC | F1 |
|---|---|---|---|
| CNN (ECG) | 0.8129 | 0.7917 | 0.7630 |
| CNN-LSTM | 0.8090 | 0.7845 | 0.7417 |
| CNN-FAttn | 0.8257 | 0.8119 | 0.7672 |
| CLSTM-FAttn | 0.8314 | 0.8181 | 0.7581 |
| CLSTM-TAttn | 0.8137 | 0.7931 | 0.7617 |
| MVHA | 0.8475 | 0.8318 | 0.7831 |
Figure 7. Boxplot of accuracy.
Figure 8. The risk level for an intravenous injection predicted by MVHA. The learned attention cells are highlighted in orange (above 0.15) and yellow (between 0.1 and 0.15).
Figure 9. The trend-level attention of different channels.