| Literature DB >> 33708869 |
Fabao Xu1, Yifan Xiang1, Cheng Wan2, Qijing You2, Lijun Zhou1, Cong Li1, Songjian Gong3, Yajun Gong1, Longhui Li1, Zhongwen Li1, Li Zhang4, Xiayin Zhang1, Chong Guo1, Kunbei Lai1, Chuangxin Huang1, Hongkun Zhao1, Chenjin Jin1, Haotian Lin1,5.
Abstract
BACKGROUND: Machine learning was used to predict subretinal fluid absorption (SFA) at 1, 3 and 6 months after laser treatment in patients with central serous chorioretinopathy (CSC).
Keywords: Machine learning; central serous chorioretinopathy (CSC); laser treatment; optical coherence tomography (OCT); subretinal fluid absorption (SFA)
Year: 2021 PMID: 33708869 PMCID: PMC7940879 DOI: 10.21037/atm-20-1519
Source DB: PubMed Journal: Ann Transl Med ISSN: 2305-5839
Figure 1. Overall study workflow. Workflow diagram showing the training overview for the SFA prediction model.
Patient demographics
| Variable | 1 M prediction, ZOC | 1 M prediction, XEC | 3 M prediction, ZOC | 3 M prediction, XEC | 6 M prediction, ZOC | 6 M prediction, XEC |
|---|---|---|---|---|---|---|
| Patients | 401 (63 females) | 60 (11 females) | 308 (46 females) | 30 (5 females) | 244 (37 females) | 19 (2 females) |
| Eyes | 416 | 64 | 322 | 33 | 258 | 20 |
| Age (years) | 43.19±6.44 | 43.86±7.06 | 42.87±6.44 | 43.21±7.51 | 42.96±6.48 | 41.70±6.73 |
| VA (logMAR) | 0.28±0.21 | 0.29±0.16 | 0.28±0.21 | 0.27±0.16 | 0.28±0.22 | 0.28±0.17 |
Visual acuity (VA) values are presented as the means ± standard deviations at baseline in different groups (in logarithm of minimum angle of resolution [logMAR] units). ZOC, Zhongshan Ophthalmic Center; XEC, Xiamen Eye Center.
Accuracy of the subretinal fluid absorption predictions during internal and external validation tests
| Variable | 1 M ACC (input: baseline) | 3 M ACC (input: baseline + 1 M) | 6 M ACC (input: baseline + 1 M + 3 M) |
|---|---|---|---|
| **Full model, internal validation** | | | |
| Decision tree | 0.563±0.054 | 0.712±0.050 | 0.767±0.095 |
| AdaBoost | 0.603±0.066 | 0.749±0.089 | 0.748±0.057 |
| Gradient boosting | 0.623±0.054 | 0.755±0.052* | 0.791±0.072 |
| XGBoost | 0.628±0.045 | 0.752±0.056 | 0.810±0.059 |
| Random forest | 0.651±0.068* | 0.753±0.065 | 0.818±0.058* |
| Extra-trees | 0.645±0.044 | 0.740±0.059 | 0.795±0.079 |
| Blending algorithm | 0.647±0.067 | 0.749±0.058 | 0.810±0.066 |
| **Full model, external validation** | | | |
| Decision tree | 0.563 | 0.515 | 0.800 |
| AdaBoost | 0.719 | 0.576 | 0.750 |
| Gradient boosting | 0.703 | 0.697 | 0.850 |
| XGBoost | 0.734* | 0.727* | 0.900* |
| Random forest | 0.703 | 0.636 | 0.900* |
| Extra-trees | 0.734* | 0.636 | 0.900* |
| Blending algorithm | 0.703 | 0.697 | 0.900* |
| **Simplified model, internal validation** | | | |
| Decision tree | 0.536±0.053 | 0.687±0.048 | 0.764±0.069 |
| AdaBoost | 0.613±0.069 | 0.725±0.072 | 0.779±0.057 |
| Gradient boosting | 0.630±0.057 | 0.780±0.043* | 0.818±0.074* |
| XGBoost | 0.625±0.049 | 0.768±0.066 | 0.811±0.074 |
| Random forest | 0.634±0.048 | 0.762±0.067 | 0.811±0.063 |
| Extra-trees | 0.635±0.038* | 0.737±0.057 | 0.814±0.081 |
| Blending algorithm | 0.632±0.056 | 0.759±0.047 | 0.811±0.070 |
| **Simplified model, external validation** | | | |
| Decision tree | 0.563 | 0.515 | 0.850 |
| AdaBoost | 0.563 | 0.636 | 0.800 |
| Gradient boosting | 0.609 | 0.727 | 0.900* |
| XGBoost | 0.578 | 0.697 | 0.900* |
| Random forest | 0.672* | 0.667 | 0.900* |
| Extra-trees | 0.641 | 0.667 | 0.900* |
| Blending algorithm | 0.656 | 0.758* | 0.900* |
*, best learner(s) for each prediction task. ACC, accuracy of the SFA prediction at 1, 3 and 6 months after laser treatment compared with the ground truth. Results are stratified by follow-up period and by the time points input into the algorithms.
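The table above compares six tree-based base learners against a blending algorithm that combines their outputs. As a minimal sketch of how such a blending ensemble can be assembled (here with scikit-learn only, so XGBoost is omitted; the synthetic data, learner settings, and meta-learner choice are illustrative assumptions, not the study's actual pipeline):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import (AdaBoostClassifier, ExtraTreesClassifier,
                              GradientBoostingClassifier, RandomForestClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for baseline clinical/OCT features and a binary
# "SFA vs. no SFA" label; the real feature set is not reproduced here.
X, y = make_classification(n_samples=480, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

base_learners = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "adaboost": AdaBoostClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
    "random_forest": RandomForestClassifier(random_state=0),
    "extra_trees": ExtraTreesClassifier(random_state=0),
}

# Hold out part of the training set for the meta-learner, so the
# blender is never fit on predictions for its own training rows.
X_fit, X_blend, y_fit, y_blend = train_test_split(
    X_train, y_train, test_size=0.25, random_state=0)

for clf in base_learners.values():
    clf.fit(X_fit, y_fit)

def stacked_probs(features):
    # One column of positive-class probability per base learner.
    return np.column_stack(
        [clf.predict_proba(features)[:, 1] for clf in base_learners.values()])

# The blending step: a simple meta-learner over the base probabilities.
blender = LogisticRegression().fit(stacked_probs(X_blend), y_blend)
acc = accuracy_score(y_test, blender.predict(stacked_probs(X_test)))
print(f"blended accuracy: {acc:.3f}")
```

The holdout split before fitting the blender is the key design choice: it keeps the meta-learner's training inputs out-of-sample for the base learners, which is what distinguishes blending from naively stacking in-sample predictions.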
Figure 2. Prediction performance in the internal and external validation tests on the full model. Panels A, B, and C, CM of the classification in the internal validation test. Panels D, E, and F, ROC of the internal validation test. Panels G, H, and I, CM of the classification in the external validation test. Panels J, K, and L, ROC of the external validation test. CM, confusion matrix; ROC, receiver operating characteristic curve.
Figure 3. Prediction performance in the internal and external validation tests on the simplified model. Panels A, B, and C, CM of the classification in the internal validation test. Panels D, E, and F, ROC of the internal validation test. Panels G, H, and I, CM of the classification in the external validation test. Panels J, K, and L, ROC of the external validation test. CM, confusion matrix; ROC, receiver operating characteristic curve.