Yu-Jie Li1, Li-Ge Zhang2,3, Hong-Yu Zhi1, Kun-Hua Zhong4, Wen-Quan He1, Yang Chen1, Zhi-Yong Yang1, Lin Chen1, Xue-Hong Bai1, Xiao-Lin Qin2, Dan-Feng Li1, Dan-Dan Wang1, Jian-Teng Gu1, Jiao-Lin Ning1, Kai-Zhi Lu1, Ju Zhang4, Zheng-Yuan Xia5, Yu-Wen Chen4, Bin Yi1.
Abstract
BACKGROUND: Dynamic and precise estimation of blood loss (EBL) is quite important for perioperative management. To date, the Triton System, based on feature extraction technology (FET), has been applied to estimate intra-operative haemoglobin (Hb) loss but is unable to directly assess the amount of blood loss. We aimed to develop a method for the dynamic and precise EBL and estimate Hb loss (EHL) based on artificial intelligence (AI).Entities:
Keywords: Intra-operative blood loss; densely connected convolutional networks; feature extraction technology; intra-operative haemoglobin loss
Year: 2020 PMID: 33178751 PMCID: PMC7607084 DOI: 10.21037/atm-20-1806
Source DB: PubMed Journal: Ann Transl Med ISSN: 2305-5839
Figure S1. Typical images of blood-soaked sponges with a set gradient of volume.
Figure 1. Flow diagram for the estimating models based on artificial intelligence, namely, dense network and feature engineering. EBL, estimation of blood loss; EHL, estimation of haemoglobin loss.
Figure S2. Blood area images and blood-soaked sponge image.
The features of the blood-soaked gauze image

| Features of blood area | Description |
|---|---|
| Mean of pixels in H channel | |
| Mean of pixels in S channel | |
| Mean of pixels in V channel | |
| Variance of pixels in H channel | |
| Variance of pixels in S channel | |
| Variance of pixels in V channel | |
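The six blood-area features above (mean and variance of the pixels in each HSV channel) can be sketched as follows; `hsv_features` is a hypothetical helper, and the segmentation of the blood area itself is assumed to have already been done:

```python
import numpy as np

def hsv_features(hsv_pixels: np.ndarray) -> dict:
    """Compute the six blood-area features listed in the table above.

    hsv_pixels: array of shape (N, 3) holding the H, S, V values of the
    pixels segmented as blood (segmentation not shown here).
    """
    feats = {}
    for i, ch in enumerate("HSV"):
        feats[f"mean_{ch}"] = float(hsv_pixels[:, i].mean())
        feats[f"var_{ch}"] = float(hsv_pixels[:, i].var())
    return feats

# Toy example: four "blood" pixels in HSV space
pixels = np.array([[0.00, 0.90, 0.50],
                   [0.02, 0.80, 0.40],
                   [0.01, 0.85, 0.45],
                   [0.03, 0.95, 0.55]])
features = hsv_features(pixels)
```

These six values form the input vector for the feature-engineering models (LR, RF, Xgboost) described below.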
Hyperparameters of Xgboost
| Hyperparameters | EBL | EHL |
|---|---|---|
| Learning rate | 0.1 | 0.05 |
| Number of estimators | 600 | 300 |
| Max depth | 3 | 4 |
| Min child weight | 4 | 6 |
| Subsample | 0.6 | 0.6 |
| Gamma | 0.1 | 0.1 |
| Reg_alpha | 0.1 | 0.1 |
| Reg_lambda | 0.1 | 0.1 |
| Eval_metric | RMSE | RMSE |
RMSE, root mean squared error; EBL, estimation of blood loss; EHL, estimation of haemoglobin loss.
Hyperparameters of random forest
| Hyperparameters | Values |
|---|---|
| Number of estimators | 100 |
| Min_samples_split | 2 |
| Criterion | MSE |
| Min_samples_leaf | 1 |
| Min_impurity_decrease | 1e-07 |
MSE, mean square error.
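The two hyperparameter tables above can be captured as plain keyword dictionaries; this is a dependency-free sketch (the `xgboost` and scikit-learn constructors are not called here), and the `criterion` spelling is an assumption since scikit-learn renamed `"mse"` to `"squared_error"` in later versions:

```python
# Hyperparameters from the Xgboost and random forest tables above,
# expressed as kwargs that could be passed to xgboost.XGBRegressor
# and sklearn.ensemble.RandomForestRegressor respectively.
xgb_params = {
    "EBL": dict(learning_rate=0.1, n_estimators=600, max_depth=3,
                min_child_weight=4, subsample=0.6, gamma=0.1,
                reg_alpha=0.1, reg_lambda=0.1, eval_metric="rmse"),
    "EHL": dict(learning_rate=0.05, n_estimators=300, max_depth=4,
                min_child_weight=6, subsample=0.6, gamma=0.1,
                reg_alpha=0.1, reg_lambda=0.1, eval_metric="rmse"),
}

rf_params = dict(n_estimators=100, min_samples_split=2,
                 criterion="squared_error",  # "mse" in older sklearn
                 min_samples_leaf=1, min_impurity_decrease=1e-07)
```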
Figure S3. Part of the supercomputing platform.
The detailed architecture of DenseNet

| Layers | Composition | Output size |
|---|---|---|
| Convolution | | 8×256×256 |
| Dense block | | 16×256×256 |
| Transition layers | | 8×256×256 |
| | | 8×64×64 |
| Dense block | | 16×64×64 |
| Transition layers | | 8×64×64 |
| | | 8×16×16 |
| Dense block | | 16×16×16 |
| Transition layers | | 8×16×16 |
| | | 8×4×4 |
| Dense block | | 16×4×4 |
| Regression layer | | 16×1 |
| | Linear | 1×2 |

ReLU, rectified linear unit.
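The output sizes in the table follow a repeated dense-block/transition pattern: each dense block doubles the channels from 8 to 16, each transition stage restores 8 channels and shrinks the spatial size by a factor of 4, and the final linear head emits two values (EBL and EHL). The layer internals are not specified in the table, so this hypothetical sketch only reproduces the shape bookkeeping as (channels, height, width):

```python
def densenet_shapes(in_hw: int = 256):
    """Trace the output sizes listed in the DenseNet architecture table."""
    shapes = [("convolution", (8, in_hw, in_hw))]
    c, h = 8, in_hw
    for _ in range(3):
        shapes.append(("dense block", (2 * c, h, h)))   # 8 -> 16 channels
        shapes.append(("transition conv", (c, h, h)))   # back to 8 channels
        h //= 4                                         # 4x spatial reduction
        shapes.append(("transition pool", (c, h, h)))
    shapes.append(("dense block", (2 * c, h, h)))       # 16x4x4
    shapes.append(("regression pool", (16, 1)))         # global pooling
    shapes.append(("linear", (1, 2)))                   # EBL and EHL heads
    return shapes

shapes = densenet_shapes()
```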
Model performances of EBL and EHL based on feature engineering and dense network
| Algorithms | R2 | MAE | MSE |
|---|---|---|---|
| EBL | |||
| LR | 0.906 (0.896, 0.916) | 0.355 (0.332, 0.378) | 0.265 (0.242, 0.288) |
| RF | 0.938 (0.925, 0.950) | 0.178 (0.152, 0.204) | 0.176 (0.148, 0.204) |
| Xgboost | 0.946 (0.937, 0.956) | 0.215 (0.202, 0.228) | 0.150 (0.130, 0.170) |
| DenseNet | 0.966 (0.962, 0.971) | 0.186 (0.167, 0.207) | 0.096 (0.084, 0.109) |
| EHL | |||
| LR | 0.861 (0.844, 0.877) | 0.545 (0.501, 0.589) | 0.642 (0.570, 0.714) |
| RF | 0.907 (0.894, 0.920) | 0.419 (0.369, 0.470) | 0.430 (0.365, 0.494) |
| Xgboost | 0.915 (0.904, 0.925) | 0.409 (0.362, 0.456) | 0.396 (0.345, 0.446) |
| DenseNet | 0.941 (0.934, 0.948) | 0.325 (0.293, 0.355) | 0.284 (0.251, 0.317) |
Data were presented with 95% CIs. EBL, estimation of blood loss; EHL, estimation of haemoglobin loss; LR, linear regression; RF, random forest; Xgboost, eXtreme Gradient Boosting; DenseNet, Dense Network; MAE, mean absolute error; MSE, mean square error; CI, confidence interval.
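The three metrics reported in the table (R², MAE, MSE) have standard definitions and can be computed directly; a minimal NumPy sketch, with `regression_metrics` as a hypothetical helper name:

```python
import numpy as np

def regression_metrics(y_true, y_pred):
    """R^2, mean absolute error and mean square error."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    err = y_true - y_pred
    mae = float(np.mean(np.abs(err)))
    mse = float(np.mean(err ** 2))
    ss_res = float(np.sum(err ** 2))
    ss_tot = float(np.sum((y_true - y_true.mean()) ** 2))
    r2 = 1.0 - ss_res / ss_tot
    return r2, mae, mse

# Toy example with four predictions
r2, mae, mse = regression_metrics([1.0, 2.0, 3.0, 4.0],
                                  [1.1, 1.9, 3.2, 3.8])
```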
Concordance between the AI-based estimates of blood loss and haemoglobin loss and the actual data
| Parameter | LR | RF | Xgboost | DenseNet |
|---|---|---|---|---|
| EBL | ||||
| SD | 0.45 | 0.33 | 0.34 | 0.25 |
| LLOA (mL) | −0.77 (−0.98, −0.57) | −0.59 (−0.74, −0.44) | −0.71 (−0.87, −0.55) | −0.47 (−0.50, −0.44) |
| ULOA (mL) | 0.99 (0.79, 1.20) | 0.69 (0.54, 0.83) | 0.64 (0.48, 0.80) | 0.52 (0.48, 0.55) |
| Bias | 0.11 (−0.01, 0.23) | 0.05 (−0.04, 0.13) | −0.04 (−0.13, 0.06) | 0.02 (0.00, 0.04) |
| EHL | ||||
| SD | 0.67 | 0.57 | 0.55 | 0.47 |
| LLOA (g) | −1.17 (−1.48, −0.87) | −1.04 (−1.30, −0.78) | −1.13 (−1.38, −0.88) | −0.87 (−0.93, −0.81) |
| ULOA (g) | 1.465 (1.16, 1.77) | 1.19 (0.93, 1.45) | 1.03 (0.78, 1.28) | 0.97 (0.91, 1.03) |
| Bias | 0.15 (−0.03, 0.33) | 0.078 (−0.07, 0.23) | −0.05 (−0.20, 0.10) | 0.05 (0.02, 0.09) |
Data were presented with 95% CIs. EBL, estimation of blood loss; EHL, estimation of haemoglobin loss; LR, linear regression; RF, random forest; Xgboost, eXtreme Gradient Boosting; DenseNet, Dense Network; SD, standard deviation; LLOA, lower limit of agreement; ULOA, upper limit of agreement; CI, confidence interval.
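The concordance statistics in the table (bias, SD, and the lower/upper limits of agreement) are the usual Bland-Altman quantities, with the limits taken as bias ± 1.96·SD of the paired differences; a minimal sketch, assuming the sample SD is used:

```python
import numpy as np

def bland_altman(estimated, actual):
    """Bias, SD of differences, and 95% limits of agreement (LLOA, ULOA)."""
    diff = np.asarray(estimated, dtype=float) - np.asarray(actual, dtype=float)
    bias = float(diff.mean())
    sd = float(diff.std(ddof=1))      # sample SD of the differences
    lloa = bias - 1.96 * sd
    uloa = bias + 1.96 * sd
    return bias, sd, lloa, uloa

# Toy example: estimated vs. actual blood loss (mL)
bias, sd, lloa, uloa = bland_altman([1.1, 2.0, 2.9, 4.2],
                                    [1.0, 2.0, 3.0, 4.0])
```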
Figure 2. Results of concordance between the actual data and blood loss estimated by (A) linear regression; (B) random forest; (C) extreme gradient boosting; (D) dense network. EBL, estimation of blood loss; LLOA, lower limit of agreement; ULOA, upper limit of agreement; CI, confidence interval.
Figure 3. Results of concordance between the actual data and haemoglobin loss estimated by (A) linear regression; (B) random forest; (C) extreme gradient boosting; (D) dense network. EHL, estimation of haemoglobin loss; LLOA, lower limit of agreement; ULOA, upper limit of agreement; CI, confidence interval.