Literature DB >> 33880797

Artificial intelligence assists identifying malignant versus benign liver lesions using contrast-enhanced ultrasound.

Hang-Tong Hu1,2, Wei Wang1, Li-Da Chen1, Si-Min Ruan1, Shu-Ling Chen1, Xin Li3, Ming-De Lu1,2, Xiao-Yan Xie1, Ming Kuang1,2.   

Abstract

BACKGROUND AND AIM: This study aims to construct an artificial intelligence (AI) assistance strategy to help radiologists identify malignant versus benign focal liver lesions (FLLs) using contrast-enhanced ultrasound (CEUS).
METHODS: A training set (patients = 363) and a testing set (patients = 211) were collected from our institute. On four-phase CEUS images in the training set, a composite deep learning architecture was trained and tuned for differentiating malignant and benign FLLs. In the test dataset, AI performance was evaluated by comparison with radiologists with varied levels of experience. Based on the comparison, an AI assistance strategy was constructed, and its usefulness in reducing CEUS interobserver heterogeneity was further tested.
RESULTS: In the test set, for the identification of malignant versus benign FLLs, AI achieved an area under the curve of 0.934 (95% CI 0.890-0.978) with an accuracy of 91.0%. Compared with radiologists reviewing videos along with complementary patient information, AI outperformed residents (82.9-84.4%, P = 0.038) and matched the performance of experts (87.2-88.2%, P = 0.438). Because of AI's higher positive predictive value (PPV) (95.6% vs 88.6-89.7% for residents, P = 0.056), an AI assistance strategy was defined to improve the diagnosis of malignancy. With the assistance of AI, radiologists' sensitivity improved to 97.0-99.4% (P < 0.05) and their accuracy to 91.0-92.9% (P = 0.008-0.189), which was comparable with that of the experts (P = 0.904).
CONCLUSIONS: The CEUS-based AI strategy improved the performance of residents and reduced CEUS's interobserver heterogeneity in the differentiation of benign and malignant FLLs.
© 2021 The Authors. Journal of Gastroenterology and Hepatology published by Journal of Gastroenterology and Hepatology Foundation and John Wiley & Sons Australia, Ltd.

Keywords:  artificial intelligence; computer-assisted; diagnosis; liver neoplasms; ultrasonography


Year:  2021        PMID: 33880797      PMCID: PMC8518504          DOI: 10.1111/jgh.15522

Source DB:  PubMed          Journal:  J Gastroenterol Hepatol        ISSN: 0815-9319            Impact factor:   4.029


Introduction

The worldwide incidence of focal liver lesions (FLLs) is increasing, accompanied by a rise in the prevalence of hepatocellular carcinoma, intrahepatic cholangiocarcinoma, and metastasis from colorectal cancer. The noninvasive differentiation of malignant from benign lesions is a key diagnostic step before treatment and routinely relies on computed tomography (CT) and magnetic resonance (MR) imaging. Unfortunately, the reported error rate of FLL characterization varies from 11% to 33%. Compared with CT and MR, contrast-enhanced ultrasound (CEUS) has the advantages of allowing real-time scanning and providing dynamic perfusion information with fewer application limitations. CEUS has been widely used in Europe and Asia, especially in China, where members of the population tend to have lower body mass index values. A meta-analysis demonstrated that, compared with CT and MR, CEUS has an equivalent diagnostic sensitivity (Se) (87% vs 86% and 75%, respectively) and specificity (Sp) (91% vs 88% and 82%, respectively).

The main controversy regarding CEUS is its poor generalizability in the reading of real-time videos between different readers. In differentiating hepatocellular carcinoma from other target FLLs, the Se of CEUS varied from 84% to 95% and the Sp from 25% to 77% among different radiologists at a single center; across different centers, the Se varied from 52% to 98% and the Sp from 71% to 100%. To date, no solution to this critical issue has been proposed.

Artificial intelligence (AI) in medicine has been widely explored. Specifically, deep learning has been reported to achieve excellent performance on images of breast cancer, pulmonary diseases, diabetic retinopathy, and dermatoma, even outperforming human experts. Deep learning models can learn the most predictive features directly from raw image pixels and avoid the subjective feature engineering required in conventional machine learning, making them independent of prior human knowledge and capable of a high degree of fault tolerance. Hwang et al. reported a deep-learning-based algorithm that could detect major thoracic diseases from chest radiographs, achieving good validation results across five external test datasets with an area under the curve (AUC) of 0.973-1.000. These findings indicate that deep learning offers inherently good generalizability across different radiologists at different centers; thus, this methodology has the potential to overcome the poor generalizability of CEUS.

Previous studies applied machine learning algorithms to CT/MRI/CEUS images to characterize FLLs, but none of these algorithms were targeted at reducing imaging interobserver heterogeneity or reported how they could interact with radiologists and improve diagnostic accuracy (ACC). In this study, we aimed to construct a deep learning model based on CEUS video analysis for the differentiation of benign and malignant FLLs. The performance of the AI was compared with that of radiologists with varying levels of experience. The influence of the AI-radiologist interaction on performance improvement was assessed, focusing on AI's potential to reduce interobserver heterogeneity.

Methods

Study design and participants

This retrospective study was approved by the ICE for Clinical Research and Animal Trials of the First Affiliated Hospital of Sun Yat-sen University (No. [2015]106). Informed consent from patients was waived given the retrospective nature of the study. Patients who underwent CEUS examination for FLL characterization were eligible for inclusion. Cases were excluded if they met any of the following criteria: (i) patients who received pre-imaging treatment with surgery, trans-arterial chemoembolization, ablation, systemic chemotherapy, or catheterization; (ii) cases with simple cystic lesions that were not indicated for CEUS examination; (iii) images with greater than 1/3 of the target lesion covered by an acoustic shadow; (iv) cases with missing images of any needed phase; and (v) cases that could not be given a definite diagnosis based on the reference standard. As shown in Table 1, two datasets were collected from the hospital: a development set of 363 patients obtained from January 2014 to May 2015 and a test set of 211 patients obtained from June 2015 to December 2015.
Table 1

Baseline characteristics of the included datasets

Characteristic                      Development set        Testing set         P
Reference standard
  Malignant, No.                    281                    164                 0.984
  Benign, No.                       82                     47
Gender
  Male, No.                         273                    152                 0.457
  Female, No.                       90                     59
Age, mean ± SD, years               52.64 ± 13.77          54.30 ± 12.58       0.151
Lesion size, mean ± SD, cm          5.10 ± 3.27            4.74 ± 4.05         0.245
No. of images                       614,728 (augmented)    616
Ultrasound devices, types, No.      5                      6
CEUS examiners, No.                 10                     11

CEUS, contrast‐enhanced ultrasound; No., number; SD, standard deviation.

The reference diagnoses for malignant lesions, such as hepatocellular carcinoma and liver metastasis, were obtained by pathology. For benign lesions, such as hemangiomas and focal nodular hyperplasia, typical CEUS characteristics plus at least 12 months of follow-up without progression served as the standard criteria. For abscesses, the diagnosis was established by successful aspiration of pus or lesion shrinkage after anti-infection treatment. For other benign and malignant lesion categories, pathology was required for diagnosis confirmation.

Contrast‐enhanced ultrasonography examination

The contrast-enhanced ultrasonography systems used are listed in Appendix A. First, the target lesions were detected and assessed by unenhanced sonography. Second, patients received an intravenous bolus injection of 2.4 mL (up to 3 mL) of SonoVue (Bracco) via the antecubital vein, followed by 5 mL of 0.9% normal saline solution. Third, CEUS of the largest tumor cross-section within 6 min was recorded as the arterial, portal venous, and delayed phases at 0-30 s, 31-120 s, and 121-360 s after injection, in separate clips of varying duration. The images and video clips were stored in the Digital Imaging and Communications in Medicine (DICOM) format.
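The phase timing above can be expressed as a small helper function; this is an illustrative sketch (the function name and boundary handling are ours, not from the study's code):

```python
def ceus_phase(seconds_after_injection: float) -> str:
    """Map a time point (seconds after SonoVue injection) to its CEUS phase,
    following the protocol windows: arterial 0-30 s, portal venous 31-120 s,
    delayed 121-360 s."""
    t = seconds_after_injection
    if 0 <= t <= 30:
        return "arterial"
    if t <= 120:
        return "portal venous"
    if t <= 360:
        return "delayed"
    raise ValueError("outside the 6-minute recording window")
```

Such a helper would let frames extracted from a clip be routed to the correct phase bucket by their timestamps.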

Data preparation

All patients' CEUS examinations, pathological results, and clinical information, which included age, gender, alpha-fetoprotein, hepatitis, liver cirrhosis, and history of malignancy, were collected from the automatic storage and retrieval system in the hospital. Cases were deidentified before further processing. The results of the CEUS examinations were stored as plain scans and video clips of enhanced phases in DICOM format. Videos were converted into consecutive frames using the native function of MicroDicom DICOM viewer 2.8.3. Based on the 2012 version of the Guidelines and Good Clinical Practice Recommendations for CEUS in the Liver, plain scans and enhanced frames were extracted from specified time durations of CEUS video clips. In total, 32 (1 unenhanced, 15 arterial, 15 portal, and 1 delayed phase images) or 46 (1 unenhanced, 15 arterial, 15 portal, and 15 delayed phase images) representative frames were manually selected from each case (Appendix B). For the test datasets, four representative frames per case (one from each phase) were randomly selected. The frames were preprocessed into a square image containing the lesion and a perilesional area that was 1-2 cm in diameter. The preprocessed images were saved in an 8-bit JPEG format. Finally, 14,296 original frames for AI development and 844 frames for testing were included in this study. Gold standard labels for the images of each case were assigned based on the reference diagnosis.
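The square-cropping step can be sketched with Pillow as below; the function name, margin handling, and edge clamping are our assumptions for illustration, not the authors' released code:

```python
from PIL import Image

def crop_square_roi(frame, lesion_box, margin_px):
    """Crop a square region covering the lesion plus a perilesional margin.

    frame: PIL.Image of one CEUS frame.
    lesion_box: (left, top, right, bottom) lesion bounding box in pixels.
    margin_px: extra margin around the lesion (stand-in for the 1-2 cm
    perilesional area; pixel-to-cm conversion is scanner-dependent).
    The square is centered on the lesion and clamped to the frame; the
    result could then be saved with .save(path, "JPEG") as 8-bit JPEG.
    """
    l, t, r, b = lesion_box
    cx, cy = (l + r) / 2, (t + b) / 2
    half = max(r - l, b - t) / 2 + margin_px       # half-width of the square
    left = max(0, int(cx - half))
    top = max(0, int(cy - half))
    right = min(frame.width, int(cx + half))
    bottom = min(frame.height, int(cy + half))
    return frame.crop((left, top, right, bottom))
```
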

Artificial intelligence development: deep learning model

Network architectures

Network architectures and the flowchart of AI development are presented in Figure 1. Microsoft's residual neural network architecture (ResNet), regarded as a fourth-generation convolutional neural network, was used for deep learning model training (Appendix C). Four 152-layer ResNet branches, one per CEUS phase, were trained independently and fused by a max-pooling layer and a fully connected layer to obtain the final output. Given the limited data available for training, we applied transfer learning: most of the network (the 152-layer ResNet), already trained on a large dataset (ImageNet), was preserved, and the weights of the fully connected layer were retrained from random initialization on the target dataset (our training set).
Figure 1

Flowchart of data preparation and AI development. Data preparation consisted of data collection, decomposition of video clips into frames, frame selection, and image cropping into square four‐phase AI inputs. AI development consisted of input, network architectures, and output.


Input and output

The input images were resized to a resolution of 224 × 224 pixels. To improve the model's generalizability, we applied an augmentation procedure to enrich data diversity; the augmentation was based on brightness changes, contrast adjustment, rotation, parallel shifting, and simple combinations thereof to mimic the data diversity observed in clinical practice (Appendix D). Through augmentation, 43 images (including the original image) were generated from each single image, yielding 614,728 images for AI training. The four-phase images were input to the corresponding four branches of the 152-layer ResNet. The output for each case was the risk probability of malignancy, with values ranging from 0 to 1, and the resulting diagnosis of benign or malignant.
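A minimal sketch of such an augmentation scheme with Pillow is shown below; the factor grids and shift offsets are invented for illustration, and this sketch produces only 17 variants per frame (the paper's exact combinations, which yield 43, are given in its Appendix D):

```python
from PIL import Image, ImageEnhance

def augment(img):
    """Generate simple variants of one frame: brightness changes, contrast
    adjustment, rotation, and parallel shifting, plus the original.
    Factor values are illustrative assumptions, not the study's settings."""
    out = [img]                                        # keep the original
    for f in (0.8, 0.9, 1.1, 1.2):                     # brightness changes
        out.append(ImageEnhance.Brightness(img).enhance(f))
    for f in (0.8, 0.9, 1.1, 1.2):                     # contrast adjustment
        out.append(ImageEnhance.Contrast(img).enhance(f))
    for deg in (-10, -5, 5, 10):                       # rotation (degrees)
        out.append(img.rotate(deg))
    for dx, dy in ((-8, 0), (8, 0), (0, -8), (0, 8)):  # parallel shifting
        out.append(img.transform(img.size, Image.AFFINE, (1, 0, dx, 0, 1, dy)))
    return out
```

Combining these operations (e.g. brightness plus rotation) would extend the 17 variants toward the 43 reported.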

Training protocol

Training was performed on a workstation with a GeForce GTX 1080 Ti graphics processing unit (NVIDIA), a Core i7-6700K (Intel) central processing unit, and 64 GB of random-access memory. Python 3.5 (https://www.python.org) and the Torch (http://torch.ch) framework for neural networks were used for this purpose. Augmentation was performed using the Python imaging library Pillow 3.3.1 (https://pypi.python.org/pypi/Pillow/3.3.1). During training, the dataset was randomly divided into a training set (80%) and a tuning set (20%). The detailed training configuration can be found in Appendix E.
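The random 80/20 training/tuning split can be sketched as follows; the seeding and case-level (rather than frame-level) splitting are our assumptions, the latter chosen so that all frames of one patient stay on the same side of the split:

```python
import random

def split_cases(case_ids, tune_frac=0.2, seed=42):
    """Randomly split case identifiers into training and tuning subsets.
    Splitting at the case level avoids leaking frames of one patient
    into both subsets. Seed and fraction handling are illustrative."""
    ids = list(case_ids)
    rng = random.Random(seed)
    rng.shuffle(ids)
    n_tune = round(len(ids) * tune_frac)
    return ids[n_tune:], ids[:n_tune]    # (training, tuning)
```
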

Artificial intelligence performance and comparison with radiologists

Performance of the artificial intelligence model versus radiologists

By applying the AI to the test set, each case was evaluated with the same input method as used in the training process, and the output was presented as a risk probability of malignancy for each case and the corresponding diagnosis of benign or malignant. For radiologists reading CEUS, the diagnosis followed the Guidelines and Good Clinical Practice Recommendations for CEUS in the Liver (Update 2012). For lesions in the noncirrhotic liver, those with arterial hyper-enhancement and late hypo-enhancement tend to be malignant; otherwise, lesions tend to be benign. For lesions in the cirrhotic liver, sustained hyper- or iso-enhancement in the arterial and late phases indicates benignity; otherwise, lesions are considered malignant. Clinical information, such as medical history and blood test results, aided diagnosis. Four radiologists (two residents and two experts with 2, 3, 6, and 8 years of experience with hepatic CEUS, respectively) who were blinded to the final diagnoses and did not participate in the data preparation reviewed the cases in random order. The radiologists independently reviewed the CEUS videos along with the patients' clinical information. Performance was evaluated in terms of ACC, and the diagnostic tests were assessed based on Se, Sp, positive predictive value (PPV), and negative predictive value (NPV).
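The guideline-based reading rule described above can be encoded as a toy decision function; this is a simplified paraphrase for illustration only (the arterial enhancement pattern in the cirrhotic branch is folded into the washout flag), not a clinical tool:

```python
def ceus_guideline_call(cirrhotic, arterial_hyper, late_washout):
    """Simplified encoding of the 2012 CEUS guideline reading rule.

    cirrhotic: lesion sits in a cirrhotic liver.
    arterial_hyper: arterial-phase hyper-enhancement observed.
    late_washout: hypo-enhancement in the portal venous/delayed phases.
    """
    if not cirrhotic:
        # Noncirrhotic liver: arterial hyper + late washout suggests malignancy.
        return "malignant" if (arterial_hyper and late_washout) else "benign"
    # Cirrhotic liver: sustained hyper-/iso-enhancement (no washout) suggests
    # benignity; otherwise the lesion is considered malignant.
    return "benign" if not late_washout else "malignant"
```
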

Performance of radiologists alone versus radiologists with artificial intelligence assistance

By comparing the performance of the AI with that of the radiologists, an AI assistance strategy was developed based on AI's advantage in diagnostic PPV or NPV, whichever suggested a more reliable diagnosis of malignancy or benignity. After an additional 1-month interval, the radiologists reviewed the CEUS cases again with AI assistance. Under this scheme, the AI result served as a strong reference in cases of conflict with the radiologists' diagnoses, and the radiologists made the final decision on whether to modify or adhere to their initial diagnosis. Comparisons were drawn between radiologist-alone and AI-assisted radiologist performance.
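A minimal sketch of this conflict-handling rule is given below; the recommendation labels are our own, and the strong/general asymmetry follows the strategy as the study describes it (a conflicting AI malignant call, backed by its high PPV, carries a strong recommendation, while a conflicting benign call carries only a general one):

```python
def ai_assisted_recommendation(radiologist_dx, ai_dx):
    """Return the action suggested to the radiologist given both calls.
    The radiologist always keeps final decision-making authority; the AI
    output is a reference, never an override. Labels are illustrative."""
    if radiologist_dx == ai_dx:
        return "agree"                                  # no action needed
    if ai_dx == "malignant":
        return "strong recommendation to reconsider"    # AI PPV is high
    return "general suggestion to reconsider"           # AI NPV less reliable
```
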

Statistical methods

The performances of the radiologists and AI were mainly evaluated in terms of the AUC, ACC, Se, Sp, PPV, NPV, and error rates. R software (version 3.4.1; https://www.r‐project.org) was used for statistical analysis. Results with two‐sided P‐values of less than 0.05 were considered to indicate a statistically significant difference. Detailed statistical methods can be found in Appendix F.
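The diagnostic metrics used throughout (ACC, Se, Sp, PPV, NPV) follow directly from the binary confusion matrix; a plain-Python sketch is shown below (the study itself used R, and function and key names here are ours):

```python
def diagnostic_metrics(y_true, y_pred):
    """Compute ACC, Se, Sp, PPV, and NPV from binary labels
    (1 = malignant, 0 = benign)."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(1 for t, p in pairs if t == 1 and p == 1)  # true positives
    tn = sum(1 for t, p in pairs if t == 0 and p == 0)  # true negatives
    fp = sum(1 for t, p in pairs if t == 0 and p == 1)  # false positives
    fn = sum(1 for t, p in pairs if t == 1 and p == 0)  # false negatives
    return {
        "ACC": (tp + tn) / len(pairs),
        "Se": tp / (tp + fn),
        "Sp": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }
```
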

Results

Performance of the artificial intelligence model versus radiologists

On the test set, the AI achieved an AUC of 0.934 (95% CI 0.890-0.978) and an ACC of 91.0% (95% CI 87.1-94.9%). The radiologists' ACC varied from 82.0% to 86.7% (P = 0.116) (Table 2, Fig. 2a). In particular, the residents achieved Se similar to that of the experts (88.4-89.6% vs 88.4-90.2%, P = 0.380) but showed a deficiency in Sp (59.6-63.8% vs 72.3-80.9%, P = 0.034) (Fig. 2b).
Table 2

Detailed performance comparison between the AI and the four radiologists on the testing set

Statistics            ACC                    Se                     Sp                     PPV                    NPV
AI                    0.910 (0.871, 0.949)   0.927 (0.887, 0.967)   0.851 (0.749, 0.953)   0.956 (0.924, 0.988)   0.769 (0.655, 0.884)
Expert 1              0.867 (0.822, 0.913)   0.884 (0.835, 0.933)   0.809 (0.696, 0.921)   0.942 (0.905, 0.979)   0.667 (0.544, 0.789)
Expert 2              0.863 (0.816, 0.909)   0.902 (0.857, 0.948)   0.723 (0.596, 0.851)   0.919 (0.877, 0.961)   0.680 (0.551, 0.809)
Resident 1            0.839 (0.789, 0.888)   0.896 (0.850, 0.943)   0.638 (0.501, 0.776)   0.896 (0.850, 0.943)   0.638 (0.501, 0.776)
Resident 2            0.820 (0.768, 0.872)   0.884 (0.835, 0.933)   0.596 (0.455, 0.736)   0.884 (0.835, 0.933)   0.596 (0.455, 0.736)
P (AI vs Experts)     0.256                  0.419                  0.297                  0.385                  0.453
P (AI vs Residents)   0.021*                 0.406                  0.016*                 0.052                  0.157

ACC, accuracy; NPV, negative predictive value; PPV, positive predictive value; Se, sensitivity; Sp, specificity. Bold fonts in the original table indicate the best performance per column (here, the AI row).

* Statistically significant (P < 0.05).

Figure 2

Performance comparison between AI and radiologists. (a) Error rate (1‐accuracy) comparison between AI and radiologists. (b) Detailed comparison of diagnostic sensitivity and specificity between AI and the radiologists.

By comparison, AI outperformed the residents (AUC: 82.9-84.4%, P = 0.038; ACC: 91.0% vs 82.0-83.9%, P = 0.021) and matched the experts (AUC: 87.2-88.2%, P = 0.438; ACC: 91.0% vs 86.3-86.7%, P = 0.256). Specifically, AI achieved a higher PPV than the residents (95.6% vs 88.4-89.6%, P = 0.052) but one comparable with that of the experts (95.6% vs 91.9-94.2%, P = 0.385). The NPV of the AI was higher than that of all four radiologists, but not significantly (76.9% vs 59.6-68.0%, P = 0.157-0.453). This indicated that AI provides a more reliable diagnosis of malignancy than of benignity (Table 2, Fig. 2).

Performance of radiologists alone versus radiologists with artificial intelligence assistance

The higher diagnostic PPV of AI suggested a more reliable diagnosis of malignancy. The AI assistance strategy was therefore defined to improve the true malignancy rate, especially for the residents. When a radiologist's diagnosis conflicted with AI's malignant prediction, a strong recommendation to modify the diagnosis was given; when it conflicted with AI's benign prediction, only a general suggestion was given. Compared with radiologists alone, radiologists with AI assistance achieved improvements of 7.4-11.0% in sensitivity (P < 0.001-0.015) for both residents and experts, 21.1-37.3% in NPV (P = 0.001-0.031), and 5.1-9.9% in accuracy (P = 0.004-0.080). Expert 1 showed a 4.3% reduction in Sp (P = 0.801) and a 0.7% decrease in PPV (P = 0.998) (Table 3, Fig. 3). With AI assistance, interobserver performance between residents and experts was comparable in terms of ACC (91.0-92.9%, P = 0.904), Se (97.0-99.4%, P = 0.360), Sp (66.0-76.6%, P = 0.671), PPV (91.1-93.5%, P = 0.818), and NPV (86.8-96.9%, P = 0.460) (Table 4, Fig. 3).
Table 3

Performance comparison of the four radiologists between radiologist-alone and AI-assisted reading on the testing set

Statistics (alone/AI assisted)   ACC            Se             Sp             PPV            NPV
Expert 1                         0.867/0.924    0.884/0.970    0.809/0.766    0.942/0.935    0.667/0.878
  P                              0.080          0.006*         0.801          0.998          0.031*
Expert 2                         0.863/0.929    0.902/0.982    0.723/0.745    0.919/0.931    0.680/0.921
  P                              0.038*         0.005*         1.000          0.852          0.014*
Resident 1                       0.839/0.910    0.896/0.970    0.638/0.702    0.896/0.919    0.638/0.868
  P                              0.040*         0.015*         0.661          0.594          0.031*
Resident 2                       0.820/0.919    0.884/0.994    0.596/0.660    0.884/0.911    0.596/0.969
  P                              0.004*         <0.001*        0.670          0.528          0.001*

ACC, accuracy; NPV, negative predictive value; PPV, positive predictive value; Se, sensitivity; Sp, specificity. Bold fonts in the original table indicate the best performance per column.

* Statistically significant (P < 0.05).

Figure 3

Performance validation of the strategy of AI assistance in the testing dataset. Performance comparison between radiologists with AI assistance and radiologists alone.

Table 4

Performance comparison between the four radiologists with AI assistance on the testing set

Statistics   ACC                    Se                     Sp                     PPV                    NPV
Expert 1     0.924 (0.888, 0.960)   0.970 (0.943, 0.996)   0.766 (0.645, 0.887)   0.935 (0.898, 0.972)   0.878 (0.778, 0.978)
Expert 2     0.929 (0.894, 0.964)   0.982 (0.961, 1.000)   0.745 (0.620, 0.869)   0.931 (0.893, 0.968)   0.921 (0.835, 1.000)
Resident 1   0.910 (0.871, 0.949)   0.970 (0.943, 0.996)   0.702 (0.571, 0.833)   0.919 (0.878, 0.960)   0.868 (0.761, 0.976)
Resident 2   0.919 (0.883, 0.956)   0.994 (0.982, 1.000)   0.660 (0.524, 0.795)   0.911 (0.869, 0.952)   0.969 (0.908, 1.000)
P            0.904                  0.360                  0.671                  0.818                  0.460

ACC, accuracy; Se, sensitivity; Sp, specificity. Bold fonts indicate the best performance per column.


Discussion

In this study, we constructed a CEUS-based AI model for differentiating benign from malignant FLLs, which significantly outperformed resident radiologists and matched the performance of experts who had access to complementary clinical information on the patients. Considering the advantage of AI's high diagnostic PPV compared with radiologists, a strategy of AI assistance was developed to improve their true malignancy rate. On the independent testing set, radiologists with AI assistance exhibited improved performance; in particular, residents reached the expert level, and interobserver heterogeneity was thus reduced.

Contrast-enhanced ultrasound is complementary to, and even substitutable for, CT and MR in the characterization of FLLs; its main advantages include the increased temporal resolution of CEUS videos and their ability to show detailed blood perfusion morphology. CEUS videos provide time-sequence information on dynamic blood perfusion, enabling the differentiation of focal nodular hyperplasia from atypical hepatocellular carcinoma. In addition to these visible features, potential pixel-based "features" of the time-sequence information may be recognizable with the aid of deep learning techniques. By applying multiphase video-based images and a deep neural network for model development, the advantages of CEUS could be optimally exploited. Our model achieved a tested AUC of 93.4% and an ACC of 91.0%. Compared with models trained on single-frame images, our model outperformed or matched the previously reported performances of AI-CT (ACC: 82-90%) and AI-MRI (ACC: 88.0-91.9%). For AI-US, our study reported the largest sample size with an independent test dataset. Compared with a previous AI-US study with an independent test dataset, our AI model exhibited better performance (AUC: 93.4% vs 88.1%).

For multiphase imaging analysis, an architecture based on multiple ResNet branches was designed. ResNet was the first network architecture to outperform human experts in the ImageNet Large Scale Visual Recognition Challenge. Its pixel-based convolution and backpropagation design for automatic weight optimization make it powerful in recognizing the distinguishing features of different categories. An ACC of 96.4% was achieved with ResNet in colonoscopy video analysis for polyp detection in a study by Urban et al. Our ResNet-based, video-based AI model achieved an Se of 92.7% and an Sp of 85.1% for FLL differentiation on the test dataset. Its performance was comparable with or even better than the previously reported performance of non-AI CT (Se: 89%, Sp: 94%) and MR (Se: 83%, Sp: 75%). A CEUS-based AI model was also reported in a recent study of FLL differentiation; however, that study used machine learning algorithms based on manually extracted features for model development and reported an Se of 83.3% and an Sp of 62.7%, values lower than those obtained with our deep learning model.

For the application of AI in clinical practice, the authority of decision-making should remain under radiologists' supervision. In this study, we proposed a human-AI interaction strategy for FLL diagnosis, which improved residents' performance and reduced the interobserver heterogeneity associated with CEUS. Because they lack clinical experience, residents can be less confident in their diagnoses, especially for benign lesions in a high-risk liver background, leading to low Sp. In this study, our residents achieved Se similar to that of the experts but significantly lower Sp (59.6-63.8% vs 72.3-80.9%, P = 0.034). By contrast, the AI had PPV and Sp similar to those of the best-performing expert and outperformed the residents. Therefore, the strategy of AI assistance was designed to compensate for the residents' deficiency in Sp by referring them to a more reliable diagnosis of malignancy. In the test procedure, radiologists were informed of AI's high confidence in malignancy diagnoses (PPV comparable with the experts) and low confidence in benignity diagnoses (NPV no better than the residents). This information gave the radiologists evidence for deciding whether to modify a diagnosis that conflicted with the one provided by the AI. In previously reported studies, such a specific human-AI interaction strategy was consistently missing. As shown on the testing dataset, our strategy proved effective. This AI system may also be helpful in radiology resident training programs and in radiologist training at less developed centers.

This study has several limitations. First, we used only image data for AI training, neglecting potentially important information such as patients' clinical data on alpha-fetoprotein, hepatitis, and liver cirrhosis. Although this limitation is mitigated by the intended purpose of the AI, namely, to assist radiologists, a comprehensive AI model integrating CEUS and complementary patient information could enable a further breakthrough in FLL differentiation. Second, although our study reported the largest cohort to date among CEUS studies, the sample size was still small for a deep learning study. Although transfer learning allows the development of an accurate model with a relatively small training dataset, the resulting performance will still be inferior to that of a model trained from random initialization on an extremely large dataset. Future studies using a much larger dataset, ideally including data from multiple centers for training, may further improve the AI model's performance.

Conclusion

In summary, we developed a CEUS-based AI model for differentiating between benign and malignant FLLs, which outperformed our resident radiologists and matched our experts. Consequently, a clinically applicable strategy of AI assistance was developed, which improved the performance of residents to the expert level and thus reduced the interobserver heterogeneity associated with CEUS.
References: 36 in total (10 shown)

1.  Evaluation and accurate diagnoses of pediatric diseases using artificial intelligence.

Authors:  Huiying Liang; Brian Y Tsui; Hao Ni; Carolina C S Valentim; Sally L Baxter; Guangjian Liu; Wenjia Cai; Daniel S Kermany; Xin Sun; Jiancong Chen; Liya He; Jie Zhu; Pin Tian; Hua Shao; Lianghong Zheng; Rui Hou; Sierra Hewett; Gen Li; Ping Liang; Xuan Zang; Zhiqi Zhang; Liyan Pan; Huimin Cai; Rujuan Ling; Shuhua Li; Yongwang Cui; Shusheng Tang; Hong Ye; Xiaoyan Huang; Waner He; Wenqing Liang; Qing Zhang; Jianmin Jiang; Wei Yu; Jianqun Gao; Wanxing Ou; Yingmin Deng; Qiaozhen Hou; Bei Wang; Cuichan Yao; Yan Liang; Shu Zhang; Yaou Duan; Runze Zhang; Sarah Gibson; Charlotte L Zhang; Oulan Li; Edward D Zhang; Gabriel Karin; Nathan Nguyen; Xiaokang Wu; Cindy Wen; Jie Xu; Wenqin Xu; Bochu Wang; Winston Wang; Jing Li; Bianca Pizzato; Caroline Bao; Daoman Xiang; Wanting He; Suiqin He; Yugui Zhou; Weldon Haw; Michael Goldbaum; Adriana Tremoulet; Chun-Nan Hsu; Hannah Carter; Long Zhu; Kang Zhang; Huimin Xia
Journal:  Nat Med       Date:  2019-02-11       Impact factor: 53.440

2.  Development and Validation of a Deep Learning Algorithm for Detection of Diabetic Retinopathy in Retinal Fundus Photographs.

Authors:  Varun Gulshan; Lily Peng; Marc Coram; Martin C Stumpe; Derek Wu; Arunachalam Narayanaswamy; Subhashini Venugopalan; Kasumi Widner; Tom Madams; Jorge Cuadros; Ramasamy Kim; Rajiv Raman; Philip C Nelson; Jessica L Mega; Dale R Webster
Journal:  JAMA       Date:  2016-12-13       Impact factor: 56.272

3. (Review) Imaging for the diagnosis of hepatocellular carcinoma: A systematic review and meta-analysis.

Authors:  Lewis R Roberts; Claude B Sirlin; Feras Zaiem; Jehad Almasri; Larry J Prokop; Julie K Heimbach; M Hassan Murad; Khaled Mohammed
Journal:  Hepatology       Date:  2017-11-29       Impact factor: 17.425

4.  Deep Convolutional Neural Network-based Software Improves Radiologist Detection of Malignant Lung Nodules on Chest Radiographs.

Authors:  Yongsik Sim; Myung Jin Chung; Elmar Kotter; Sehyo Yune; Myeongchan Kim; Synho Do; Kyunghwa Han; Hanmyoung Kim; Seungwook Yang; Dong-Jae Lee; Byoung Wook Choi
Journal:  Radiology       Date:  2019-11-12       Impact factor: 11.105

5.  A deep feature fusion methodology for breast cancer diagnosis demonstrated on three imaging modality datasets.

Authors:  Natalia Antropova; Benjamin Q Huynh; Maryellen L Giger
Journal:  Med Phys       Date:  2017-08-12       Impact factor: 4.071

6.  Dermatologist-level classification of skin cancer with deep neural networks.

Authors:  Andre Esteva; Brett Kuprel; Roberto A Novoa; Justin Ko; Susan M Swetter; Helen M Blau; Sebastian Thrun
Journal:  Nature       Date:  2017-01-25       Impact factor: 49.962

7.  Focal liver lesions segmentation and classification in nonenhanced T2-weighted MRI.

Authors:  Ilias Gatos; Stavros Tsantis; Maria Karamesini; Stavros Spiliopoulos; Dimitris Karnabatidis; John D Hazle; George C Kagadis
Journal:  Med Phys       Date:  2017-05-29       Impact factor: 4.071

8.  Classification of focal liver lesions on ultrasound images by extracting hybrid textural features and using an artificial neural network.

Authors:  Yoo Na Hwang; Ju Hwan Lee; Ga Young Kim; Yuan Yuan Jiang; Sung Min Kim
Journal:  Biomed Mater Eng       Date:  2015       Impact factor: 1.300

9.  Deep Learning with Convolutional Neural Network for Differentiation of Liver Masses at Dynamic Contrast-enhanced CT: A Preliminary Study.

Authors:  Koichiro Yasaka; Hiroyuki Akai; Osamu Abe; Shigeru Kiryu
Journal:  Radiology       Date:  2017-10-23       Impact factor: 11.105

10.  Classification and mutation prediction from non-small cell lung cancer histopathology images using deep learning.

Authors:  Nicolas Coudray; Paolo Santiago Ocampo; Theodore Sakellaropoulos; Navneet Narula; Matija Snuderl; David Fenyö; Andre L Moreira; Narges Razavian; Aristotelis Tsirigos
Journal:  Nat Med       Date:  2018-09-17       Impact factor: 53.440

