Dongmei Zhu, Junyu Li, Yan Li, Ji Wu, Lin Zhu, Jian Li, Zimo Wang, Jinfeng Xu, Fajin Dong, Jun Cheng.
Abstract
Objective: We aimed to establish a deep learning model, the multimodal ultrasound fusion network (MUF-Net), based on gray-scale and contrast-enhanced ultrasound (CEUS) images, to automatically classify benign and malignant solid renal tumors, and to compare the model's performance with assessments by radiologists with different levels of experience.
Keywords: artificial intelligence; classification; contrast-enhanced ultrasound; deep learning; renal tumor
Year: 2022 PMID: 36148014 PMCID: PMC9488515 DOI: 10.3389/fmolb.2022.982703
Source DB: PubMed Journal: Front Mol Biosci ISSN: 2296-889X
FIGURE 1. Flow diagram of patient enrollment.
Patient characteristics.
| | Malignant (n = 100) | Benign (n = 81) | p-value |
|---|---|---|---|
| Gender: n (%) | | | <0.001* |
| Male | 74 (74.0%) | 23 (28.4%) | |
| Female | 26 (26.0%) | 58 (71.6%) | |
| Age: mean ± STD | 58.36 ± 14.06 | 53.31 ± 14.00 | 0.017* |
| BMI: median (IQR) | 23.0 (22.0–25.0) | 23.0 (21.0–24.0) | 0.275 |
| Tumor size: median (IQR) | 4.0 (3.0–6.0) | 4.0 (3.0–5.0) | 0.918 |
| Clinical sign: n (%) | | | 0.588 |
| Waist discomfort/Fatigue | 46 (46.0%) | 34 (42.0%) | |
| No symptoms | 54 (54.0%) | 47 (58.0%) | |
| Surgery: n (%) | | | 0.475 |
| Partial nephrectomy | 41 (41.0%) | 29 (35.8%) | |
| Radical nephrectomy | 59 (59.0%) | 52 (64.2%) | |
BMI, body mass index; IQR, interquartile range; STD, standard deviation.
*Statistically significant.
FIGURE 2. Data annotation and preprocessing.
Number distribution of patients and images among histologic types.
| | Benign (Total) | Atypical | Typical | Malignant (Total) | ccRCC | pRCC | chRCC |
|---|---|---|---|---|---|---|---|
| Patients | 81 | 36 | 45 | 100 | 62 | 25 | 13 |
| Images | 3659 | 1531 | 2128 | 6135 | 2964 | 2114 | 1057 |
ccRCC, clear cell renal cell carcinoma; chRCC, chromophobe renal cell carcinoma; pRCC, papillary renal cell carcinoma.
FIGURE 3. Overall architecture of the proposed MUF-Net framework.
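MUF-Net fuses features from a B-mode (gray-scale) branch and a CEUS branch before classification; the actual architecture is given in Figure 3 of the paper. As a rough illustration only, feature-level fusion of two modality branches can be sketched in NumPy. The stand-in "encoders", feature sizes, and classifier weights below are placeholders, not the authors' design:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(image, w):
    """Stand-in for a CNN branch: flatten the image and project to a feature vector."""
    return np.tanh(image.reshape(-1) @ w)

# Placeholder weights for the two modality branches and the fused classifier
w_bmode = rng.standard_normal((64 * 64, 32)) * 0.01
w_ceus = rng.standard_normal((64 * 64, 32)) * 0.01
w_cls = rng.standard_normal((64, 2)) * 0.1  # 2 classes: benign, malignant

def muf_forward(bmode_img, ceus_img):
    """Feature-level fusion: concatenate both modality features, then classify."""
    fused = np.concatenate([encode(bmode_img, w_bmode), encode(ceus_img, w_ceus)])
    logits = fused @ w_cls
    probs = np.exp(logits - logits.max())  # numerically stable softmax
    return probs / probs.sum()

p = muf_forward(rng.random((64, 64)), rng.random((64, 64)))
print(p)  # probabilities over (benign, malignant), summing to 1
```

The key idea this sketch conveys is that each modality contributes its own feature vector, and the classifier sees the concatenation, so complementary B-mode and CEUS information can both influence the prediction.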
Classification performance of deep learning models and radiologists.
| | AUC (95% CI) | Accuracy (%) | Sensitivity (%) | Specificity (%) | PPV (%) | NPV (%) |
|---|---|---|---|---|---|---|
| Junior radiologists | 0.740 (0.70–0.75) | 70.6 | 89.3 | 58.7 | 58.0 | 89.5 |
| Senior radiologists | 0.794 (0.72–0.83) | 75.7 | 95.9 | 62.9 | 62.3 | 95.9 |
| B-mode-Net | 0.820 (0.70–0.83) | 74.5 | 75.0 | 77.0 | 73.4 | 62.3 |
| CEUS-mode-Net | 0.815 (0.75–0.89) | 73.9 | 73.8 | 73.2 | 72.5 | 62.2 |
| MUF-Net | 0.877 (0.83–0.93) | 80.0 | 80.4 | 79.1 | 86.9 | 70.0 |
CI, confidence interval; CEUS-mode, contrast-enhanced ultrasound mode; MUF-Net, multimodal ultrasound fusion network; AUC, area under the receiver operating characteristic curve; PPV, positive predictive value; NPV, negative predictive value.
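All of the table's metrics except AUC derive directly from the binary confusion matrix, with malignant as the positive class. A minimal sketch of how they are computed (not taken from the paper's code):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Compute accuracy, sensitivity, specificity, PPV, and NPV from
    binary labels (1 = malignant/positive, 0 = benign/negative)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),  # recall for the malignant class
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # precision
        "npv": tn / (tn + fn),
    }

# Toy example: 4 malignant and 4 benign cases
m = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                   [1, 1, 1, 0, 0, 0, 1, 0])
print(m)  # sensitivity 0.75, specificity 0.75, accuracy 0.75
```

This makes the trade-off in the table concrete: the radiologists' high sensitivity with lower specificity means few missed malignancies but more benign tumors flagged as malignant, while MUF-Net balances the two.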
FIGURE 4. Receiver operating characteristic curves of the MUF-Net, the single-mode models, and the radiologists' assessments in the test cohort.
FIGURE 5. Feature heatmaps of a benign tumor and a malignant tumor, showing that B-mode and CEUS-mode images contain complementary information for diagnosis. Red represents higher weights (i.e., regions the network pays more attention to).
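The paper does not state here how the Figure 5 heatmaps were generated. One common approach for such visualizations, shown purely as an illustrative assumption, is a class-activation-style map: average a convolutional layer's feature maps with per-channel importance weights, keep the positive part, and normalize to [0, 1]:

```python
import numpy as np

def activation_heatmap(feature_maps, channel_weights):
    """feature_maps: (C, H, W) activations; channel_weights: (C,) importance.
    Returns an H x W map scaled to [0, 1] (high values = high attention)."""
    cam = np.tensordot(channel_weights, feature_maps, axes=1)  # weighted sum over channels
    cam = np.maximum(cam, 0)                                   # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()                                  # normalize for display
    return cam

fm = np.random.default_rng(1).random((8, 16, 16))  # dummy activations
hm = activation_heatmap(fm, np.ones(8))
print(hm.shape)  # (16, 16), values in [0, 1]
```

Overlaying such a map on the original B-mode or CEUS image (red for values near 1) yields the kind of attention visualization described in the caption.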