Lei Shao1, Xiaomei Zhang2, Teng Hu3, Yang Chen2, Chuan Zhang1, Li Dong1, Saiguang Ling4, Zhou Dong4, Wen Da Zhou1, Rui Heng Zhang1, Lei Qin2, Wen Bin Wei1.
Abstract
Purpose: To predict fundus tessellation (FT) severity with machine learning methods.
Keywords: fundus tessellated density; fundus tessellation; fundus tessellation severity; machine learning; the Beijing eye study
Year: 2022 PMID: 35360710 PMCID: PMC8960643 DOI: 10.3389/fmed.2022.817114
Source DB: PubMed Journal: Front Med (Lausanne) ISSN: 2296-858X
Descriptive statistics of the original sample.
| FT severity | N (%) | FTD, mean (SD) | Age (years), mean (SD) | Men, n (%) | Women, n (%) |
|---|---|---|---|---|---|
| No FT | 282 (8.25) | 0.07 (0.04) | 50.45 (8.30) | 85 (5.88) | 197 (9.98) |
| Light FT | 2,312 (67.62) | 0.15 (0.07) | 51.43 (8.70) | 919 (63.60) | 1,393 (70.57) |
| Moderate FT | 684 (20.01) | 0.28 (0.07) | 59.66 (8.55) | 366 (25.33) | 318 (16.11) |
| Severe FT | 141 (4.12) | 0.33 (0.06) | 65.04 (7.21) | 75 (5.19) | 66 (3.34) |
| Total | 3,419 | 0.18 (0.09) | 53.55 (9.51) | 1,445 | 1,974 |
The results of different algorithms.
| Variable | Method 1, coef. (SE) | Method 2, coef. (SE) | Method 3, coef. (SE) | Method 4 | Method 5 |
|---|---|---|---|---|---|
| Age | 0.0575*** (0.0051) | 0.0330*** (0.0027) | 0.0317*** (0.0028) | 0.0216 | – |
| Gender | −0.2026* (0.0889) | −0.0964* (0.0481) | −0.0835* (0.0488) | 0.0001 | – |
| FTD | 35.6489*** (2.4007) | 17.2249*** (1.1994) | 17.1730*** (1.1980) | 0.2327 | – |
| FTD² | −24.4600*** (5.0442) | −9.5187*** (2.5911) | −8.0721** (2.6208) | 0.2090 | – |
| Threshold 1 (no FT) | ≤ 3.7389 | ≤ 1.9527 | ≤ 2.0456 | [0, 0.3078] | – |
| Threshold 2 (light FT) | [3.7389, 10.5053] | [1.9527, 5.5435] | [2.0456, 5.6847] | [0.3078, 0.3347] | – |
| Threshold 3 (moderate FT) | [10.5053, 13.9323] | [5.5435, 7.4299] | [5.6847, 7.6598] | [0.3347, 0.4048] | – |
| Threshold 4 (severe FT) | >13.9323 | >7.4299 | >7.6598 | (0.4048, 1) | – |
***P ≤ 0.001; **P ≤ 0.01; *P ≤ 0.05.
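The coefficient-and-threshold layout above is characteristic of ordered (cut-point) models: a latent index is formed from age, gender, FTD, and FTD², then binned into a severity grade by the estimated thresholds. A minimal sketch using the estimates from the first coefficient column (Age 0.0575, Gender −0.2026, FTD 35.6489, FTD² −24.4600; thresholds 3.7389 / 10.5053 / 13.9323). The 0/1 coding of gender and the variable units are assumptions for illustration, not taken from the table:

```python
def latent_index(age, gender, ftd):
    """Linear predictor from the first coefficient column.

    gender is assumed to be a 0/1 indicator (coding not given in the table);
    ftd is the fundus tessellated density, which also enters squared.
    """
    return 0.0575 * age - 0.2026 * gender + 35.6489 * ftd - 24.4600 * ftd ** 2

def ft_grade(index):
    """Bin the latent index into an FT severity grade via the cut points."""
    if index <= 3.7389:
        return "No FT"
    if index <= 10.5053:
        return "Light FT"
    if index <= 13.9323:
        return "Moderate FT"
    return "Severe FT"
```

For example, a 50-year-old with FTD 0.07 (the "No FT" group's mean density) lands near the no/light boundary, consistent with the overlap visible in the descriptive statistics.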
In-sample correct classification rate for each machine learning method.
| Metric | FT severity | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 |
|---|---|---|---|---|---|---|
| Precision | No FT | 0.1950 | 0.1312 | 0.1596 | 0.3475 | 0.1986 |
| | Light FT | 0.9208 | 0.9299 | 0.9330 | 0.9373 | 0.9252 |
| | Moderate FT | 0.6535 | 0.6477 | 0.6360 | 0.7003 | 0.6974 |
| | Severe FT | 0.0851 | 0.0922 | 0.0709 | 0.2482 | 0.3262 |
| | Total | 0.7730 | 0.7730 | 0.7742 | 0.8128 | 0.7950 |
| Recall | No FT | 0.5556 | 0.5692 | 0.5921 | 0.7101 | 0.5657 |
| | Light FT | 0.8217 | 0.8162 | 0.8158 | 0.8458 | 0.8281 |
| | Moderate FT | 0.6340 | 0.6392 | 0.6416 | 0.7065 | 0.6964 |
| | Severe FT | 0.5000 | 0.4815 | 0.4762 | 0.8537 | 0.8846 |
| F1-score | | 0.5334 | 0.5240 | 0.5254 | 0.6505 | 0.6236 |
| Weighted-average F1-score | | 0.7458 | 0.7406 | 0.7422 | 0.7964 | 0.7743 |
Out-of-sample correct classification rate for each machine learning method.
| Metric | FT severity | Method 1 | Method 2 | Method 3 | Method 4 | Method 5 |
|---|---|---|---|---|---|---|
| Precision | No FT | 0.1957 | 0.1667 | 0.1739 | 0.2174 | 0.2681 |
| | Light FT | 0.9268 | 0.9312 | 0.9312 | 0.9083 | 0.9065 |
| | Moderate FT | 0.6474 | 0.6419 | 0.6281 | 0.6364 | 0.5604 |
| | Severe FT | 0.0676 | 0.0676 | 0.0676 | 0.1081 | 0.1892 |
| | Total | 0.7712 | 0.7706 | 0.7683 | 0.7601 | 0.7503 |
| Recall | No FT | 0.4821 | 0.4694 | 0.4706 | 0.3846 | 0.4111 |
| | Light FT | 0.8128 | 0.8098 | 0.8080 | 0.8085 | 0.8114 |
| | Moderate FT | 0.6657 | 0.6676 | 0.6647 | 0.6696 | 0.6559 |
| | Severe FT | 0.7143 | 0.7143 | 0.6250 | 0.6667 | 0.3333 |
| F1-score | | 0.5446 | 0.5382 | 0.5293 | 0.5376 | 0.5145 |
| Weighted-average F1-score | | 0.7419 | 0.7390 | 0.7371 | 0.7367 | 0.7332 |
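The precision, recall, F1, and weighted-average F1 entries in the two tables follow the standard one-vs-rest definitions computed per severity class. A self-contained sketch of those formulas (the toy labels below are illustrative, not the study's data):

```python
from collections import Counter

def per_class_metrics(y_true, y_pred, label):
    """One-vs-rest precision, recall, and F1 for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == label and p == label)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != label and p == label)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == label and p != label)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

def weighted_f1(y_true, y_pred):
    """F1 averaged over classes, weighted by each class's support."""
    counts = Counter(y_true)
    n = len(y_true)
    return sum(counts[c] / n * per_class_metrics(y_true, y_pred, c)[2]
               for c in counts)
```

Weighting by support explains why the weighted-average F1 stays close to the dominant light-FT class's score even when the rare no-FT and severe-FT classes score poorly.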
Figure 1. ROC curves and AUC values of the five machine learning methods. (A) In-sample ROC curves and AUC values. (B) Out-of-sample ROC curves and AUC values.
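AUC values like those in Figure 1 are typically computed per class in one-vs-rest fashion. The Mann-Whitney rank formulation below is a minimal sketch of the binary building block (toy scores for illustration, not the study's model outputs):

```python
def auc_binary(labels, scores):
    """AUC as the probability that a random positive outscores a random
    negative (ties count half) -- the Mann-Whitney U formulation."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```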