
Development of a Deep-Learning-Based Artificial Intelligence Tool for Differential Diagnosis between Dry and Neovascular Age-Related Macular Degeneration.

Tae-Young Heo1, Kyoung Min Kim1, Hyun Kyu Min2, Sun Mi Gu2, Jae Hyun Kim3, Jaesuk Yun2, Jung Kee Min2,3.   

Abstract

The use of deep-learning-based artificial intelligence (AI) is emerging in ophthalmology, with AI-mediated differential diagnosis of neovascular age-related macular degeneration (AMD) and dry AMD a promising methodology for precise treatment strategies and prognosis. Here, we developed deep learning algorithms and predicted diseases using 399 fundus images. Based on feature extraction and classification with fully connected layers, we applied the Visual Geometry Group with 16 layers (VGG16) convolutional neural network model to classify new images. Image-data augmentation in our model was performed using Keras ImageDataGenerator, and the leave-one-out procedure was used for model cross-validation. The prediction and validation results obtained using the AI AMD diagnosis model showed relevant performance and suitability, as well as better diagnostic accuracy than manual review by first-year residents. These results suggest the efficacy of this tool for early differential diagnosis of AMD in situations involving shortages of ophthalmology specialists and other medical devices.


Keywords:  age-related macular degeneration; class activation map; convolutional neural network; cross-validation; retina

Year:  2020        PMID: 32354098      PMCID: PMC7277105          DOI: 10.3390/diagnostics10050261

Source DB:  PubMed          Journal:  Diagnostics (Basel)        ISSN: 2075-4418


1. Introduction

Deep-learning-based artificial intelligence (AI) tools have been adopted by medical experts for disease diagnosis and detection. Furthermore, AI-based diagnosis can be used to augment human analysis in pathology and radiology [1]. AI-based tools have been developed for cancer diagnosis using pathology slides, for preliminary radiology reporting from chest X-rays, and for detection of cardiac dysfunction using electrocardiograms [2,3,4]. Image extraction and analytical algorithms in AI diagnosis are drawing the attention of medical specialists. For example, the use of deep learning tools to analyze photographs of lesions represents a potential methodology for diagnosing several retinal diseases in ophthalmology. Therefore, deep learning AI tools might be useful to ophthalmologists for predicting and treating diabetic retinopathy, age-related macular degeneration (AMD), floaters, and retinitis pigmentosa. Recently, the Food and Drug Administration approved IDx-DR, an AI-based diagnostic system, for the detection of diabetic retinopathy [5]. In diabetic retinopathy, blood vessels become blocked and irregular in diameter [6], which induces fluid leakage and hemorrhaging associated with vision damage. Additionally, angiogenesis is considered to be a pathogenic process in diabetic retinopathy [7]. These pathologies represent potential photographic sources for the development of AI diagnosis tools. AMD is a degenerative eye disease and the leading cause of irreversible vision loss in the elderly [8]. It is a complex, multifactorial disease, and its pathogenesis is not fully understood [9]. Choroidal neovascularization (CNV), vascular leakage, and hemorrhaging are the hallmarks of neovascular AMD (nAMD) [10]. Detection of AMD in its early stages is important for a good prognosis, and the differential diagnosis between dry AMD (dAMD) and nAMD is also critical for appropriate treatment and reduction of disease severity [11].
However, a shortage of ophthalmologists and medical devices for diagnosis represents a potential challenge for the timely detection of diseases. Ophthalmologists can diagnose AMD through eye examinations, such as fundus photography, optical coherence tomography (OCT), fluorescein angiography (FA), and indocyanine green angiography (ICGA) [12], with multimodal imaging also potentially necessary for accurate AMD diagnosis and treatment. However, for diagnostic screening purposes, it is difficult to access all of these imaging modalities. Fundus photography has the limitation of providing only two-dimensional retinal information; however, it is an inexpensive and relatively simple device-based diagnostic tool that is easy to operate. Additionally, images can be saved and used at a later time by different clinicians and researchers. Furthermore, this method results in higher patient compliance due to its short test times and non-invasiveness. Fundus photographs record the appearance of patient retinas, allowing the clinician to detect retinal changes and review the findings with a colleague [13]. AMD-related leakage of fluid and blood can be observed by fundus photography, which is also capable of detecting drusen, mottled appearance, and hemorrhagic detachment. Therefore, fundus photography might be useful for diagnosing AMD in routine eye examinations. In this study, we explored the viability of fundus photography for the development of a deep-learning-based AI diagnostic tool and demonstrated the performance of the proposed AI tool for differentially diagnosing AMD (control vs. dAMD vs. nAMD). Additionally, we compared the diagnostic accuracy of the AI tool with that of ophthalmology residents for AMD.

2. Materials and Methods

2.1. Ethical Approval

The study protocol was reviewed and approved by the Institutional Human Experimentation Committee Review Board of Ulsan University Hospital, Ulsan, Republic of Korea (UUH 2019-12-006, 31 December 2019). The study was conducted in accordance with the ethical standards set forth in the 1964 Declaration of Helsinki.

2.2. Subjects

To select patient groups (nAMD and dAMD), the medical records of patients aged >50 years who had been diagnosed with nAMD or dAMD between March 1, 2015, and July 31, 2019, at the Department of Ophthalmology of Ulsan University Hospital, Ulsan, Republic of Korea, were retrospectively reviewed. All subjects (399 eyes of 378 patients) underwent a complete ophthalmic examination that included best-corrected visual acuity assessment, non-contact tonometry (CT-1P; Topcon Corporation, Tokyo, Japan), and swept-source OCT (DRI OCT-1 Atlantis; Topcon Corporation, Tokyo, Japan). CNV or polypoidal vascular lesions were detected via FA and ICGA (Heidelberg Retina Angiograph Spectralis; Heidelberg Engineering, Heidelberg, Germany). Patients who had undergone previous retinal surgery for conditions such as epiretinal membrane, macular hole, vitreous hemorrhage, or rhegmatogenous retinal detachment (RRD) were excluded. Subjects were also excluded if they had pre-existing ocular diseases (such as glaucoma, uveitis, diabetic retinopathy, and retinal vascular disease, which are known to affect retinal pathophysiology), severe media opacity, or high myopia (axial length ≥ 26.5 mm). To select normal controls, the medical records of patients who had been diagnosed with and surgically treated for various retinal diseases (macular hole, epiretinal membrane, or RRD) were also reviewed. Normal controls were defined based on the absence of lesions, including drusen, according to fundus photography and OCT in the unaffected eyes.

2.3. Imaging Equipment

We used two fundus photography systems (TRC-NW8, Topcon Corporation, Tokyo, Japan, and Daytona, Optos, Inc., Marlborough, MA, USA). The TRC-NW8 retinal camera provides high-quality 16.2-megapixel images, with a 45° field of central macular view. Daytona provides ultra-widefield fundus digital images at 200° of the retina in a single pass. All retinal images were reviewed by a retinal specialist (JKM) to ensure that the photographs were of sufficiently high quality to adequately visualize the retina.

2.4. Convolutional Neural Network (CNN) Modeling

Convolutional neural network (CNN) techniques have recently shown notable advances in various fields, including computer vision and image analysis. We used this method to classify macular degeneration in macular images. We used a modified Visual Geometry Group with 16 layers (VGG16) model [14] (which won the localization task of the ILSVRC-2014 competition) as the deep learning model for classification. VGG16 has a very simple architecture that uses only 3 × 3 convolutional layers and 2 × 2 pooling layers (Figure 1). We initialized the VGG16 model with weights pretrained on the ImageNet dataset (http://www.image-net.org/) and trained the convolutional layers and fully connected layers with macular images. The macular image dataset was divided into two sets, with 30% of the images in each group placed into the test set and the remaining images used for the training set [15]. Training was performed over multiple iterations with a learning rate of 0.000001 and Nadam optimization.
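As a concrete illustration of the VGG16 building blocks described above, the following numpy sketch (not the authors' code; the function names are ours) implements the only two spatial operations the architecture uses: a "same"-padded 3 × 3 convolution and 2 × 2 max pooling.

```python
import numpy as np

def conv3x3(image, kernel):
    """'Same'-padded 3x3 convolution on a single-channel image (stride 1),
    the convolution type used throughout VGG16."""
    h, w = image.shape
    padded = np.pad(image, 1)          # zero-pad by 1 so output size == input size
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(padded[i:i + 3, j:j + 3] * kernel)
    return out

def maxpool2x2(image):
    """2x2 max pooling with stride 2, halving each spatial dimension."""
    h, w = image.shape
    trimmed = image[:h - h % 2, :w - w % 2]
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
```

In the full network, stacks of such convolutions (with learned multi-channel kernels and ReLU activations) alternate with pooling, and the final feature maps feed the fully connected classifier head.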
Figure 1

The proposed convolutional neural network (CNN) architecture (a modified Visual Geometry Group with 16 layers (VGG16) model). The CNN with the modified VGG16 model used 3 × 3 convolutional layers and 2 × 2 pooling layers. Convolutional layers and fully connected layers were trained with macular images.

Class activation map (CAM) visualization was performed to identify the areas most affected by macular degeneration. CAM extracts the feature maps of the final convolutional layer (Conv5_3) of the model trained using macular images and applies the class weights to those feature maps to produce a heatmap over the image.
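The CAM computation can be sketched as follows; the shapes and variable names are illustrative, assuming the final convolutional feature maps and the class weights are already available from the trained model.

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """Compute a CAM heatmap as the class-weighted sum of the final
    convolutional feature maps (shape H x W x C), normalized to [0, 1].

    feature_maps : activations of the last conv layer (e.g., Conv5_3)
    class_weights: weights connecting each channel to the predicted class
    """
    heatmap = np.tensordot(feature_maps, class_weights, axes=([2], [0]))  # (H, W)
    heatmap = np.maximum(heatmap, 0.0)     # keep only positive evidence
    if heatmap.max() > 0:
        heatmap /= heatmap.max()           # normalize for display
    return heatmap
```

The resulting heatmap is then upsampled to the input resolution and overlaid on the fundus image, as in Figure 5.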

2.5. Preprocessing

Each original image had a resolution of 913 × 837 pixels with a 24-bit RGB channel. We first identified the appropriate coordinates for cropping the images so that they were centered on the macula. The coordinates were then adjusted to eliminate unnecessary information, such as the black margin area. All images were cropped based on these fixed, adjusted coordinates, yielding cropped images with a resolution of 500 × 500 pixels. All cropped images were then resized to 244 × 244 pixels as input for the deep learning model. Preprocessed images were generated using various methods with Keras ImageDataGenerator (https://keras.io/) during training.
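A minimal numpy sketch of the cropping and resizing steps above, assuming the macular-center coordinates are known; nearest-neighbor resampling is used here for brevity (the text does not specify the resizing method).

```python
import numpy as np

def crop_and_resize(image, cx, cy, crop=500, out=244):
    """Crop a square region of side `crop` centered on (cx, cy) -- here,
    the macular center -- then resize to `out` x `out` pixels by
    nearest-neighbor sampling (applied per channel for RGB images)."""
    half = crop // 2
    patch = image[cy - half:cy + half, cx - half:cx + half]
    # Nearest-neighbor resize: map each output pixel back to a source pixel.
    idx = (np.arange(out) * crop / out).astype(int)
    return patch[np.ix_(idx, idx)]
```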

2.6. Cross-Validation of Artificial Intelligence (AI)-Based Diagnosis

Cross-validation is a useful technique for evaluating the performance of deep learning models. In cross-validation, the dataset is randomly divided into a training set and a test set, with the training set used to build the model and the test set used to assess its performance by measuring accuracy. In k-fold cross-validation, the dataset is divided randomly into k subsets of equal size, with one used as the test set and the others for training. The procedure is repeated k times so that each subset is used exactly once as the test set. Model performance is determined as the average of the evaluation scores calculated across the k test subsets. Here, we evaluated the performance of the proposed CNN model using 5-fold cross-validation, with performance determined as the average accuracy of the five cross-validation runs for each class comparison.
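The k-fold procedure can be sketched in plain Python; the round-robin fold assignment used here is illustrative (the paper divides the data randomly).

```python
def k_fold_splits(n_samples, k=5):
    """Partition sample indices into k folds; each fold serves once as the
    test set while the remaining k-1 folds form the training set."""
    indices = list(range(n_samples))
    folds = [indices[i::k] for i in range(k)]   # round-robin assignment
    for i in range(k):
        test = folds[i]
        train = [idx for j, f in enumerate(folds) if j != i for idx in f]
        yield train, test

def cross_validated_accuracy(fold_accuracies):
    """Model performance: the average accuracy over the k test folds."""
    return sum(fold_accuracies) / len(fold_accuracies)
```

For example, averaging the five 3-class fold accuracies reported in Table 2 reproduces the overall accuracy of about 0.9086.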

2.7. Comparative Analysis of Accuracy Values of the AI Diagnosis Tool and Residents in Ophthalmology

To compare the performance of AI diagnosis with that of clinical reviewers, two residents in our hospital evaluated the fundus images used to develop the tool. Reviewer 1 was a first-year resident, and reviewer 2 a fourth-year resident, in ophthalmology. For 3-class classification, control, dAMD, and nAMD fundus photos were randomly displayed on a computer screen for 20 s, and the two reviewers interpreted the fundus findings as Normal, dAMD, or nAMD. For 2-class classification, comparisons were divided into three groups (Normal vs. dAMD, Normal vs. nAMD, and dAMD vs. nAMD) under the same time constraint, with the fundus photos again randomly displayed on the screen for 20 s. The two reviewers read the retinal findings as Normal or dAMD in the Normal–dAMD group, as Normal or nAMD in the Normal–nAMD group, and as dAMD or nAMD in the dAMD–nAMD group. Accuracy values for each reviewer were calculated and presented accordingly.

3. Results

3.1. Fundus Image Collection

Eyes (n = 142) from 126 patients were diagnosed with nAMD, and fundus images were collected. Fundus examination of eyes with nAMD can reveal one or more features, such as subretinal and/or intraretinal fluid, subretinal hemorrhage, retinal pigment epithelial detachment, and intraretinal exudate in the macular area (Figure 2). Based on the category of AMD in age-related eye diseases [16], drusen types corresponding to categories 2 and 3 were defined as dAMD (132 eyes from 127 patients) through fundus photography and OCT (Figure 3a,b). Furthermore, images of 125 eyes from 125 patients were collected as controls (Figure 3c,d).
Figure 2

Multimodal images of neovascular age-related macular degeneration in a 61-year-old man. (a) Fundus photography shows subretinal fluid, exudation, and hemorrhage; (b) Optical coherence tomography (OCT) B-scan revealed non-uniform hyper-reflective formations above the retinal pigment epithelium and the presence of intraretinal cysts and subretinal fluid; (c) Fluorescein angiography (FA) demonstrates aspects of a well-defined (white arrow) and an irregular (yellow arrow) hyper-fluorescent lesion; (d) Indocyanine green angiography (ICGA) shows staining of the type 2 choroidal neovascularization (CNV) (white arrow); (e) An OCT angiography image (with the neovascular network) overlaid on the ICGA image.

Figure 3

Fundus photography and optical coherence tomography of dry age-related macular degeneration (dAMD) and control retinas. (a) Numerous soft, yellow drusen in the right eye of a 78-year-old woman; (b) The corresponding OCT image shows multiple deposits accumulating under the retinal pigment epithelium. (c) Normal control fundus photography in the right eye of a 66-year-old man. (d) The corresponding OCT image of the control.

3.2. Data Augmentation

We performed several iterative learning steps using Nadam optimization, recording the loss value of the model at each iteration. The model with the lowest loss value recorded during training was adopted. The images were processed using Keras ImageDataGenerator, with the center of the macula located at the center of the image. Images were generated by randomly shifting them up, down, left, and right, flipping them, and applying rotation and zoom according to a previous report [17] (Figure 4).
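Two of the augmentations listed above (shift and flip) can be sketched in numpy; rotation and zoom, which Keras ImageDataGenerator also provides, are omitted here, and the wrap-around shift is a simplification of the generator's fill behavior.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(image, max_shift=20):
    """Randomly shift, then randomly flip, an image -- illustrating the
    width/height-shift and horizontal/vertical-flip augmentations applied
    during training (rotation and zoom omitted from this sketch)."""
    dy, dx = rng.integers(-max_shift, max_shift + 1, size=2)
    out = np.roll(image, (dy, dx), axis=(0, 1))   # shift (wrap-around for brevity)
    if rng.random() < 0.5:
        out = out[:, ::-1]                        # horizontal flip
    if rng.random() < 0.5:
        out = out[::-1, :]                        # vertical flip
    return out
```

Each training epoch thus sees slightly different variants of the same fundus images, which reduces overfitting on a dataset of this size.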
Figure 4

Image preprocessing. Eye images were preprocessed by Keras ImageDataGenerator. Original images were cropped and resized to 244 × 244 pixels. The training dataset images were generated using various methods, including width shift, height shift, rotation, zoom, horizontal flip, and vertical flip.

3.3. Validation of the Deep-Learning-Based Diagnostic Tool

CAM visualization showed that the convolutional neural network (CNN) successfully identified areas of degeneration (Figure 5). These represent the areas of each image most important to the trained CNN when classifying it as AMD. In the case of dAMD, drusen, the yellow deposits characteristic of dAMD, were correctly identified. For nAMD, areas involving degeneration and bleeding were identified, with pathological changes, such as elevation, observed in the center of the macula. Accordingly, we were able to identify macular morphological changes characteristic of nAMD (Figure 5).
Figure 5

Examples of class activation map (CAM) visualization. CAM visualization of normal, dry age-related macular degeneration (dAMD), and neovascular age-related macular degeneration (nAMD) retinas. CAM extracts the feature map of the last convolution layer (Conv5_3) and shows a heatmap within the image describing the calculated weight of the feature map. (a) dAMD fundus images show drusen (arrow), and (d) heatmap images show drusen identified by the artificial intelligence (AI) tool; (b) Normal fundus images have no drusen, and (e) heatmap images of normal controls show that the AI tool identified the contour of the fovea according to the absence of drusen; (c) nAMD fundus images show bleeding and degenerated areas (green arrows), and (f) heatmap images show identified drusen and other features of degeneration and bleeding; (g–i) Representative images of dAMD, a normal control, and nAMD, respectively; (j) There was no heatmap at the center of dAMD; however, the AI tool detected drusen instead; (k) Heatmap image showing AI identification of the center of the macula in a control, with no degenerated area; (l) Heatmap images of nAMD show that the AI tool identified pathological changes in the macula, such as elevation of the center.

We achieved 90.86% accuracy with preprocessing for three-class classification. Table 1 shows a comparison of accuracy between a preprocessed (w-Pre) model and a non-preprocessed (w/o-Pre) model. The results indicated that the w-Pre model performed better in terms of accuracy, except for the comparison of the control with dAMD, and that preprocessing of fundus images improved classification. Table 2 and Table 3 show the detailed results for each fold. Therefore, we used the modified VGG16 model trained with preprocessed data.
Table 1

Comparison of outcomes according to preprocessing.

Average Accuracy    3-Class                 2-Class
                    Control–dAMD–nAMD       Control–dAMD    Control–nAMD    dAMD–nAMD
w-Pre               0.9086                  0.9192          0.9813          0.9132
w/o-Pre             0.8559                  0.9264          0.9808          0.9063

Data represent calculated accuracy values.

Table 2

Results obtained using five-fold cross-validation with preprocessing.

Folds      3-Class                 2-Class
           Normal–dAMD–nAMD        Normal–dAMD     Normal–nAMD     dAMD–nAMD
Fold 1     0.9756                  0.8846          1.0000          0.9231
Fold 2     0.8864                  1.0000          1.0000          0.8929
Fold 3     0.9535                  0.9259          1.0000          1.0000
Fold 4     0.9318                  0.9286          1.0000          0.9643
Fold 5     0.7955                  0.8571          0.9063          0.7857
Average    0.9086                  0.9192          0.9813          0.9132

Data represent calculated accuracy values.

Table 3

Results obtained using five-fold cross-validation without preprocessing.

Folds      3-Class                 2-Class
           Normal–dAMD–nAMD        Normal–dAMD     Normal–nAMD     dAMD–nAMD
Fold 1     0.8049                  0.8846          0.9667          0.9615
Fold 2     0.8409                  0.9286          1.0000          0.8571
Fold 3     0.8837                  0.9259          1.0000          0.9626
Fold 4     0.8864                  0.9643          1.0000          0.8214
Fold 5     0.8636                  0.9286          0.9375          0.9286
Average    0.8559                  0.9264          0.9808          0.9063

Data represent calculated accuracy values.

Table 2 shows the accuracies and average accuracies obtained for each fold of cross-validation, with the accuracies for two-class classification higher than those for three-class classification. Table 4 shows measurements of sensitivity, specificity, positive predictive value, and negative predictive value, revealing a similar pattern of two-class classification outperforming three-class classification. Additionally, we performed the same validation using ultra-widefield images (Table 5), which showed similar trends. Using ultra-widefield images, the tool achieved two-class accuracies of 0.7584 for normal vs. dAMD, 0.9099 for normal vs. nAMD, and 0.7601 for dAMD vs. nAMD; the three-class accuracy was 0.7321. We also evaluated the performance of medical reviewers and compared it with the outcomes of AI diagnosis (Table 6). The results showed that the AI tool outperformed both a first- and a fourth-year resident in most three-class and two-class comparisons when differentiating between AMD types and controls.
Table 4

Average classification results for each model.

Model                            Accuracy    Sensitivity    Specificity    PPV       NPV
3-class    Control–dAMD–nAMD     0.9086      0.9046         1.0000         1.0000    0.9349
                                             0.8605         0.9394         0.8303    0.9500
                                             0.9571         0.9329         0.8750    0.9786
2-class    Control–dAMD          0.9192      0.9252         0.9167         0.8788    0.9492
           Control–nAMD          0.9813      0.9684         1.0000         1.0000    0.9625
           dAMD–nAMD             0.9132      0.8795         0.9448         0.9318    0.8992

Data represent calculated values. PPV, positive predictive value; NPV, negative predictive value.
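The four metrics reported in Table 4 follow directly from confusion-matrix counts; this small helper (ours, for illustration) makes the definitions explicit.

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV, and NPV computed from
    confusion-matrix counts (true/false positives and negatives)."""
    return {
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

In a binary comparison such as dAMD vs. nAMD, one class is treated as "positive" and the counts are tallied over the test fold before applying these ratios.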

Table 5

Results obtained using five-fold cross-validation and ultra-widefield images.

Folds      3-Class                 2-Class
           Normal–dAMD–nAMD        Normal–dAMD     Normal–nAMD     dAMD–nAMD
Fold 1     0.7885                  0.6071          0.8636          0.8636
Fold 2     0.7885                  0.7143          0.8182          0.7727
Fold 3     0.6481                  0.8571          0.9130          0.6957
Fold 4     0.7500                  0.7857          0.9545          0.7727
Fold 5     0.6852                  0.8276          1.0000          0.6957
Average    0.7321                  0.7584          0.9099          0.7601

Data represent calculated accuracy values.

Table 6

Comparison of differential diagnosis of AMD type between first- and fourth-year residents.

Folds      3-Class: Normal–dAMD–nAMD     2-Class: Normal–dAMD       2-Class: Normal–nAMD       2-Class: dAMD–nAMD
           Reviewer 1   Reviewer 2       Reviewer 1   Reviewer 2    Reviewer 1   Reviewer 2    Reviewer 1   Reviewer 2
Fold 1     0.7317       0.9024           0.9615       0.9231        0.9667       1.0000        0.6923       0.9231
Fold 2     0.7045       0.9091           0.9643       0.8929        0.9375       0.9062        0.8519       0.9259
Fold 3     0.6977       0.8140           0.8519       0.9259        0.9375       0.8750        0.8148       0.9630
Fold 4     0.7955       0.7273           0.9643       0.9286        0.9062       0.9688        0.7500       0.9286
Fold 5     0.7143       0.8000           0.7500       0.9643        0.8750       0.9062        0.7143       0.7143
Average    0.7287       0.8306           0.8984       0.9270        0.9246       0.9312        0.7647       0.8910

Reviewers 1 and 2 represent a first- and fourth-year resident, respectively. Data represent calculated accuracy values.

4. Discussion

Recent advances in deep learning techniques have increased the focus of medical specialists on potential applications of AI-based diagnostic tools. Given the image-extraction features of deep learning algorithms, these techniques are potentially suitable for analyzing photographs from eye examinations. AMD is a leading cause of vision loss, and early detection is important for a good prognosis [18]. Furthermore, differential diagnosis between dAMD and nAMD is critical for suitable treatment [19]. However, given the shortage of ophthalmologists and medical devices, early diagnosis of dAMD and nAMD is challenging. In this study, we developed a deep-learning-based diagnostic tool to detect and differentiate between dAMD and nAMD using fundus photographs. To the best of our knowledge, this represents the first development and application of an AI tool for differential diagnosis of AMD type. Five-fold cross-validation revealed that our AI model showed high accuracy (>0.9) for both three-class and two-class classification, with accuracy comparable or superior to diagnoses by medical reviewers (first- and fourth-year residents). Additionally, differential diagnosis using ultra-widefield images from AMD patients revealed overall accuracies lower than those obtained using conventional fundus images. Unlike conventional fundus images, the ultra-widefield images were not standardized photographs and included unnecessary information (eyelids, light bleed, differing pixel dimensions, etc.), making manual preprocessing of the images necessary. These results suggest that ultra-widefield images, in their current form, are less appropriate for use with deep learning tools. Our AI tool detected features of AMD, such as drusen, bleeding, and elevation of the center of the macula. Notably, bleeding and degeneration of the center of the macula are markers used for nAMD diagnosis [20]. Multimodal imaging (e.g., OCT) is generally necessary for accurate AMD diagnosis and prognostic prediction.
The low reliability of diagnostic imaging equipment can result in a poor diagnosis and prognosis, especially in low-income countries. Therefore, we developed an AI tool for AMD diagnosis that uses only conventional fundus photographs and demonstrated the efficacy of the tool for differential diagnosis between dAMD and nAMD. Our findings support this AI tool as a cost-effective methodology that addresses possible shortages of eye specialists and medical devices required for accurate AMD diagnosis.
