Literature DB >> 32927416

Development and clinical implementation of tailored image analysis tools for COVID-19 in the midst of the pandemic: The synergetic effect of an open, clinically embedded software development platform and machine learning.

Constantin Anastasopoulos, Thomas Weikert, Shan Yang, Ahmed Abdulkadir, Lena Schmülling, Claudia Bühler, Fabiano Paciolla, Raphael Sexauer, Joshy Cyriac, Ivan Nesic, Raphael Twerenbold, Jens Bremerich, Bram Stieltjes, Alexander W Sauter, Gregor Sommer.

Abstract

PURPOSE: During the emerging COVID-19 pandemic, radiology departments faced a substantial increase in chest CT admissions coupled with the novel demand for quantification of pulmonary opacities. This article describes how our clinic implemented an automated software solution for this purpose into an established software platform in 10 days. The underlying hypothesis was that modern academic centers in radiology are capable of developing and implementing such tools by their own efforts and fast enough to meet the rapidly increasing clinical needs in the wake of a pandemic.
METHOD: Deep convolutional neural network algorithms for lung segmentation and opacity quantification on chest CTs were trained using semi-automatically and manually created ground-truth (Ntotal = 172). The performance of the in-house method was compared to an externally developed algorithm on a separate test subset (N = 66).
RESULTS: The final algorithm was available at day 10 and achieved human-like performance (Dice coefficient = 0.97). For opacity quantification, a slight underestimation was seen both for the in-house (1.8 %) and for the external algorithm (0.9 %). In contrast to the external reference, the underestimation for the in-house algorithm showed no dependency on total opacity load, making it more suitable for follow-up.
CONCLUSIONS: The combination of machine learning and a clinically embedded software development platform enabled time-efficient development, instant deployment, and rapid adoption in clinical routine. The algorithm for fully automated lung segmentation and opacity quantification that we developed in the midst of the COVID-19 pandemic was ready for clinical use within just 10 days and achieved human-level performance even in complex cases.
Copyright © 2020 The Author(s). Published by Elsevier B.V. All rights reserved.

Keywords:  COVID-19; Computed tomography; Machine learning; Software

Year:  2020        PMID: 32927416      PMCID: PMC7455238          DOI: 10.1016/j.ejrad.2020.109233

Source DB:  PubMed          Journal:  Eur J Radiol        ISSN: 0720-048X            Impact factor:   4.531


Introduction

Despite knowledge of the spread of the disease in Asia, Europe was overwhelmed by the dynamic of the new coronavirus disease (COVID-19) outbreak in spring 2020. Initial attempts to prevent its spread by geographically confined lockdowns failed, and western countries became alerted by the exponential increase of new infections in Italy, which by the end of March had exceeded those reported by China. In mid-March, the European countries gradually entered a systemic lockdown and urgently prepared their healthcare systems for the challenges to come. Driven by initial reports from China that indicated a higher sensitivity of chest computed tomography (CT) compared to polymerase chain reaction (PCR) in epidemic areas [1,2], imaging was recognized as an important additional diagnostic tool in the wake of the pandemic [3,4]. As a consequence, not only emergency and intensive care units but also radiology departments in Europe had to quickly adapt to the new reality, following recommendations from their colleagues in Asia [5]. In our department, a substantial increase in chest CT admissions for COVID-19 was seen soon after the initial cases had been diagnosed in our hospital by the end of February. At first, CT was used for the differential diagnosis of flu-like symptoms, as had been advocated by early reports from China [6,7]. Soon, however, when the number of patients on the dedicated medical wards increased, our department received inquiries for a standardized method allowing quantification and follow-up of disease burden, supporting both triage towards intensive care and therapy decisions. At the time of the initiation of this study, little evidence was available on the evolution of lung tissue alterations in the course of the disease [8,9], and methodological proposals for quantification of these changes were at a very early stage [10,11].
Meanwhile, the steadily growing literature on this topic has been complemented by several other publications, including recent reports on visual scoring systems [12,13] and first quantitative and deep convolutional neural network (DCNN) approaches [[14], [15], [16]]; in addition, a multicenter initiative for automated diagnosis and quantitative analysis of COVID-19 on imaging has been set up (https://imagingcovid19ai.eu). This article describes the process from the prototypical development of automated artificial intelligence (AI) based software for lung segmentation and quantification of lung opacities in CTs of COVID-19 patients in a research and development environment to its clinical implementation within ten days. We discuss major strengths and weaknesses of our approach and set our results into the context of the current literature. The underlying hypothesis was that modern academic centers in radiology are capable of developing and implementing a clinically useful AI based software for quantification of pulmonary opacities in COVID-19 by their own efforts and with sufficient speed to meet the rapidly increasing clinical needs in the wake of a pandemic.

Material and methods

Hospital and involved personnel

Our hospital is an academic hospital providing maximum care to a metropolitan area of 600,000 inhabitants. For this study, the Department of Radiology was supported by the Department of Research and Analysis, a team of 15 researchers with skills in image processing, data pipelines, deep learning applications and statistics. In the framework of this project, 3 members of the latter department joined the newly created study team of 8 physicians, including 6 residents and 2 staff physicians specialized in cardio-thoracic imaging. The total workforce dedicated to the project during the time frame of 10 days added up to approximately 2 full-time equivalents (FTE) for the scientists and 4 FTE for the physicians.

Patients and datasets

The prospective collection and evaluation of data from subjects with COVID-19 for this project was approved by the local ethics committee (approval number 2020-00566) as part of a study registered on ClinicalTrials.gov on 04/29/2020 (Identifier: NCT04366765). Data from patients actively denying consent for further research use were excluded. Finally, 152 datasets of COVID-19 patients with positive PCR were included (belonging to 146 patients), 23 performed with and 129 without iodine contrast administration. CT scans were performed in our institution on six scanners of four different types (Somatom Force, Edge, Definition Flash, and Definition AS+, all Siemens, Forchheim, Germany). Iterative image reconstruction (ADMIRE/SAPHIRE, Siemens Healthineers, Erlangen, Germany) with a soft tissue kernel (I26f) was used. Fig. 1 shows the increase in the cumulative number of patients from the beginning of March until the end of April 2020 (N = 272). For the training of the two algorithms A1 and A2 as defined below, 45 and 86 datasets were included, respectively, whereby the individual numbers depended on the availability of pre-processed datasets at the time of the training:
Fig. 1

Milestones of our project and cumulative sum of chest CT scans performed in patients with COVID-19 at our department plotted against time in the early pandemic period (day 1 to day 50).

Subset 1, consisting of 45 semi-automatically generated lung segmentations from the early phase of the project (until 03/27/2020), built the basis for training of a preliminary DCNN (A1). A second subset of 86 manually edited segmentations of COVID-19 patients, including manual segmentations on Subset 1 scans (collectively named Subset 2C, acquired until 03/30/2020), accounted for half of the training subset for the final DCNN (A2). Final testing was performed on a subset of 66 manually edited segmentations on scans of COVID-19 patients acquired between 03/31/2020 and 04/14/2020 (Subset 2T). In addition to the data of COVID-19 patients, 141 CT datasets of patients with medical histories other than COVID-19 were included. These were: 86 chest CT datasets (including 16 follow-up examinations) that had been performed for assessment or exclusion of pulmonary infection during the second half of 2019 (Subset 2NC), used together with the 86 scans of Subset 2C in the training of algorithm A2; and 55 chest CT datasets from 2020 with different acquisition protocols (with and without intravenous contrast), used for validating the quantification of pulmonary opacities (see Appendix E1 for further details). The processing of these training and validation subsets was also approved by the local ethics committee (approval number 2020-00595). Patient characteristics for the included datasets are given in Table S1 of Appendix E1.

Development pathway

The technical development, refinement and testing of methods followed a stepwise approach as listed below and visualized in Fig. 1.

Starting point – provisional, semi-automated assessment

First, evaluation with a semi-automated software package was introduced as an interim solution to provide ad hoc quantification of pulmonary opacities identified on CT ([10], CT Pulmo 3D, Syngo.via, VB30A, Siemens Healthineers, Erlangen, Germany). Originally designed for quantification of emphysema, this tool delineates lung contours with restricted options for manual correction. Ten equally spaced threshold-based subranges of negative Hounsfield units (HU) were formed. Based on previous reports on density distribution in acute respiratory distress syndrome [17], the percentage of voxels with HU values between −600 and 0 relative to the entire lung was calculated and considered during radiological reporting. The CT scans from this initial evaluation step processed until 03/26/2020 (N = 45) are hereafter referred to as Subset 1.

Step 1 – training of baseline in-house segmentation method

The 3D segmentations of Subset 1 were exported as binary masks without further manual intervention. For the training of deep learning algorithm 1 (A1), a framework for DCNN semantic segmentation with a U-Net architecture was used [18]. The data was processed with two convolutions in three spatial dimensions (3 × 3 × 3 convolution kernels). The principle of the network was based on the 3D U-Net without batch normalization [19], but we implemented it with only three resolution levels, formed by two pooling and two upsampling layers, to reduce the model complexity and facilitate training. The number of channels per layer was the same as in the first three resolution levels in [19]. The training was performed in TensorFlow with NiftyNet (https://niftynet.io) on a consumer-grade graphics processing unit.
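The reduced architecture described above can be sketched as follows. This is an illustrative reconstruction only, not the authors' NiftyNet configuration: three resolution levels (two pooling and two upsampling stages), 3 × 3 × 3 convolutions, no batch normalization, and channel counts following the first three levels of the original 3D U-Net; patch size, activations and the final sigmoid head are assumptions.

```python
# Hedged sketch of a 3-level 3D U-Net for binary lung segmentation.
import tensorflow as tf
from tensorflow.keras import layers

def conv_block(x, n1, n2):
    # Two 3x3x3 convolutions per resolution level, no batch normalization.
    x = layers.Conv3D(n1, 3, padding="same", activation="relu")(x)
    return layers.Conv3D(n2, 3, padding="same", activation="relu")(x)

def build_unet3d(patch=(32, 32, 32)):
    inp = layers.Input(shape=patch + (1,))
    e1 = conv_block(inp, 32, 64)                          # level 1 (encoder)
    e2 = conv_block(layers.MaxPool3D(2)(e1), 64, 128)     # level 2 (encoder)
    b = conv_block(layers.MaxPool3D(2)(e2), 128, 256)     # level 3 (bottom)
    # Decoder: upsample and concatenate the skip connections.
    d2 = conv_block(layers.Concatenate()([layers.UpSampling3D(2)(b), e2]), 128, 128)
    d1 = conv_block(layers.Concatenate()([layers.UpSampling3D(2)(d2), e1]), 64, 64)
    out = layers.Conv3D(1, 1, activation="sigmoid")(d1)   # binary lung mask
    return tf.keras.Model(inp, out)

model = build_unet3d()
```

With two pooling stages, input patch dimensions only need to be divisible by 4, which keeps the model small enough for consumer-grade GPU training as described in the text.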

Step 2 – refinement of model by training with manual reference segmentations

The manually refined 3D segmentations from this step (N = 238) are collectively referred to as Subset 2, which consists of the previously described Subsets 2C (N = 86), 2NC (N = 86), and 2T (N = 66). Manual reference-standard segmentations of the lung borders were generated in Nora, a medical image viewing/processing and software development platform (www.nora-imaging.com). After discussing the segmentation strategy, three radiologists in training (T.W., L.S., C.B.) segmented the lung borders in each axial slice of the chest CT dataset using a threshold-based annotation pencil, avoiding inclusion of hilar vessels. From Subset 2T, 10 cases were randomly selected for inter-rater comparisons of independent segmentations by all three raters, from which a single estimated ground truth segmentation was computed, as in [16]. The same network architecture as for A1 (Step 1) was used for training of algorithm A2, trained with Subset 2C and Subset 2NC (total N = 172).
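A single estimated ground truth can be fused from several independent binary segmentations, for example by a per-voxel majority vote. This is a simple stand-in for illustration; the label-fusion method actually used in [16] may differ:

```python
import numpy as np

def majority_vote(masks):
    """Fuse independent binary segmentations into one estimated ground
    truth: a voxel is foreground if more than half the raters marked it."""
    stacked = np.stack([np.asarray(m).astype(bool) for m in masks])
    return stacked.sum(axis=0) > (len(masks) / 2)

# Toy 1D "segmentations" by three raters r1-r3:
r1 = np.array([1, 1, 0, 0])
r2 = np.array([1, 0, 0, 1])
r3 = np.array([1, 1, 0, 0])
fused = majority_vote([r1, r2, r3])  # -> [True, True, False, False]
```

The same function applies unchanged to 3D volumes, since the vote is computed element-wise.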

Step 3 – implementation of a third-party lung segmentation algorithm

On 04/04/2020, an independent research group released "COVID-19 Web" on the GitHub platform, which to our knowledge was the first open-source lung segmentation algorithm specifically trained with COVID-19 chest CT datasets (https://github.com/JoHof/lungmask) [15]. Similar to A1 and A2, it is based on the U-Net architecture and had been trained on 40 and 238 datasets from patients with and without COVID-19, respectively (a ratio of about 1:6). We implemented version 0.2.2 (downloaded on April 7th; in the following referred to as A3) in Step 3 as an external reference for our own algorithms A1 and A2.

Data evaluation and statistical analysis

The inference of lung borders was followed by a simple postprocessing step for all three algorithms, during which spurious remote segmentations were excluded, while keeping the largest connected components. Algorithms A1−3 were tested and compared on Subset 2T.
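The postprocessing step described above, discarding spurious remote segmentations while keeping the largest connected components, can be sketched with scipy.ndimage. This is an illustrative reconstruction under the assumption that the two largest components correspond to the lungs, not the authors' exact code:

```python
import numpy as np
from scipy import ndimage

def keep_largest_components(mask, n=2):
    """Keep the n largest connected components of a binary mask
    (e.g. the two lungs), dropping spurious remote segmentations."""
    labeled, num = ndimage.label(mask)
    if num <= n:
        return mask.astype(bool)
    sizes = ndimage.sum(mask, labeled, index=range(1, num + 1))
    keep = np.argsort(sizes)[-n:] + 1  # label ids of the n largest components
    return np.isin(labeled, keep)

# Toy example: two large blobs plus a single spurious voxel.
m = np.zeros((1, 10, 10), dtype=bool)
m[0, 1:4, 1:4] = True   # blob 1 (9 voxels)
m[0, 6:9, 6:9] = True   # blob 2 (9 voxels)
m[0, 0, 9] = True       # spurious remote voxel
cleaned = keep_largest_components(m, n=2)
```

Note that in a case such as a one-sided pneumothorax, a lung may itself split into several components, which is one reason such simple postprocessing cannot repair all segmentation outliers.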

Segmentation performance

Descriptive statistics were used to compare established performance metrics for inter-rater and human/deep learning whole-lung segmentations:
- The Dice similarity coefficient, ranging from 0 to 1, defined as twice the number of common voxels divided by the sum of voxels from each segmentation.
- The maximal Hausdorff distance in mm, defined as the maximum distance between the two segmentation contours.
The Dice coefficient was additionally evaluated for the upper and lower 20 % of the lungs (Appendix E1, Data Supplement S3).
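The two metrics above can be computed directly from their definitions; a minimal sketch (using scipy's directed Hausdorff distance on contour point coordinates, which are assumed to be given in mm):

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice(a, b):
    """Dice similarity coefficient: twice the number of common voxels
    divided by the sum of voxels from each segmentation."""
    a, b = np.asarray(a).astype(bool), np.asarray(b).astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def max_hausdorff(pts_a, pts_b):
    """Symmetric (maximal) Hausdorff distance between two point sets,
    e.g. segmentation contour coordinates in mm."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

a = np.array([1, 1, 1, 0])
b = np.array([1, 1, 0, 0])
d = dice(a, b)  # 2*2 / (3+2) = 0.8

pa = np.array([[0.0, 0.0], [1.0, 0.0]])
pb = np.array([[0.0, 0.0], [3.0, 0.0]])
h = max_hausdorff(pa, pb)  # 2.0
```

The maximal Hausdorff distance is sensitive to single outlying contour points, which is why isolated mis-segmentations such as an included pleural effusion inflate it strongly.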

Threshold-based quantification

We estimated the percentual opacity load (POL-600) in both lungs by thresholding between −600 and 0 HU:

POL-600 = 100 % × (Σi ni) / (Σi Ni)

where ni and Ni denote the voxel count of the lung mask with −600 ≤ HU ≤ 0 and the total voxel count of the lung mask in slice i, respectively. POL-600 derived from each of the algorithms was separately compared to the manual POL-600 in Subset 2T with Bland-Altman analyses in R (v 3.6.3) [20]. Additionally, quantification was computed in 55 non-COVID-19 cases in varying acquisition phases after administration of intravenous iodine contrast (Appendix E1, Data Supplement S4).
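The quantity above reduces to the fraction of lung-mask voxels falling in the HU window, since summing counts over slices is equivalent to counting over the whole volume; a minimal sketch:

```python
import numpy as np

def pol(ct_hu, lung_mask, lower=-600, upper=0):
    """Percentual opacity load: share of lung-mask voxels whose CT
    density lies within [lower, upper] HU, in percent."""
    lung = ct_hu[np.asarray(lung_mask).astype(bool)]
    in_window = np.logical_and(lung >= lower, lung <= upper)
    return 100.0 * in_window.sum() / lung.size

# Toy example: 4 lung voxels, 2 of them opacified (HU in [-600, 0]);
# the last voxel lies outside the lung mask and is ignored.
hu = np.array([-900.0, -850.0, -400.0, -100.0, 50.0])
mask = np.array([1, 1, 1, 1, 0])
load = pol(hu, mask)  # 2 of 4 lung voxels -> 50.0 %
```

The choice of the window boundaries is empirical; as discussed later in the article, the upper threshold of 0 HU may exclude dense consolidations and include small vessels through partial volume effects.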

Implementation and availability

After the development pathway was completed, a data pipeline was set up to route acquired images from the scanner to Nora, where the proposed algorithm (A2) was implemented. This locally hosted software is available to the radiologist during reporting through a web browser. Upon arrival of the image dataset, segmentation and quantification are run automatically or can be triggered by mouse click. In verified COVID-19 cases or in patients with a high pretest probability, the results are exported to the clinical PACS in the form of a visual report and included in the radiological report at the discretion of the radiologist, and are thus accessible to the attending physicians (see video).

Results

The technical development steps 1 through 3 were successfully completed within a period of 10 days: On days 1 and 2, segmentations generated by the semi-automated approach were exported and converted to a suitable format for the training of algorithm A1. The pre-processing, training of A1 and subsequent processing took place on days 3-4. Data preparation for the training of algorithm A2, including thorough manual segmentation, was performed on days 5-8, followed by training of A2. The preliminary results of the deep learning segmentation A2 were available and discussed on day 9. As a side note, the total number of chest CT scans acquired for COVID-19 at our institution had in the meantime exceeded 200. On day 10, we finalized the threshold-based quantification, and the complete pipeline was uploaded to our clinically embedded software development platform Nora. Fig. 2 shows a comparison of the output of algorithms A1-A3. The detailed timeline of the development pathway is given in Table S2 of Appendix E1.
Fig. 2

Segmentation examples of algorithms A1-A3: left basal lung (transversal slice) in an atypical case of COVID-19 (a) with ground-glass opacities (orange arrow), consolidations (green arrow) and a pleural effusion (black line). (b): lung borders, including ground-glass opacity but not consolidation, are segmented with algorithm A1. (c): lung border segmentation including both the ground-glass opacity and the consolidation with algorithm A2. (d): pleural effusion is unexpectedly included in the lung border segmentation with the third-party algorithm A3.


Segmentation performance for whole-lung

The performance metrics are given in Fig. 3 and Table 1. The precision of the deep learning lung tissue segmentation in Subset 2T was excellent for A2 and A3 with mean Dice coefficients of 0.97, while A1 showed a slightly lower mean Dice coefficient of 0.95. The maximal Hausdorff distance showed means of 25, 17 and 28 mm for A1, A2 and A3, respectively. Isolated outliers were observed above the upper quartile, mainly for the preliminary (A1) and the third-party (A3) algorithm, corresponding to the unexpected inclusion of pneumothorax or pleural effusion in the lung segmentation (for an example see Fig. 2). Inter-rater segmentation agreement on the 10 cases was excellent, with mean Dice coefficients of 0.99 for all comparisons (Table 1).
Fig. 3

Boxplots of Dice coefficient (a) and maximal Hausdorff distance (b) for the three algorithms (blue: algorithm A1, orange: algorithm A2 and green: algorithm A3), compared to the manual ground truth (GT) on the test subset. The lowest outlier in the Dice coefficient of all three algorithms occurred in one case with a one-sided pneumothorax.

Table 1

Descriptive statistics for performance metrics Dice coefficient and maximum Hausdorff distance, on the left for comparisons between each algorithm and the human reference standards and on the right for the inter-rater comparisons.

          GT vs algorithm comparison            Inter-rater comparison
          Pair       Dice   max. Hausdorff     Pair       Dice   max. Hausdorff
Mean      GT vs A1   0.95   25.5               r1 vs r2   0.99   17.0
          GT vs A2   0.97   17.4               r1 vs r3   0.99   22.0
          GT vs A3   0.97   28.4               r2 vs r3   0.99   23.7
SD        GT vs A1   0.03   14.2               r1 vs r2   0.01   8.7
          GT vs A2   0.02   15.2               r1 vs r3   0.01   20.8
          GT vs A3   0.02   24.7               r2 vs r3   0.01   21.4
Minimum   GT vs A1   0.78   10.8               r1 vs r2   0.97   7.5
          GT vs A2   0.86   6.3                r1 vs r3   0.97   7.5
          GT vs A3   0.86   10.0               r2 vs r3   0.97   4.9
Median    GT vs A1   0.96   21.5               r1 vs r2   0.99   14.6
          GT vs A2   0.98   11.8               r1 vs r3   0.99   15.6
          GT vs A3   0.97   17.4               r2 vs r3   0.99   17.5
Maximum   GT vs A1   0.97   80.1               r1 vs r2   1      34.7
          GT vs A2   0.98   71.9               r1 vs r3   1      78.3
          GT vs A3   0.98   111.0              r2 vs r3   1      78.3

Abbreviations: GT: ground truth, SD: standard deviation, A1-A3: algorithms 1–3, r1-r3: rater1–3.


Performance in opacity quantification

The results of the threshold-based quantification analysis are displayed in Fig. 4 and Table 2. Lung opacities were diagnosed in forty-four out of sixty-six scans (66 %) from the test dataset 2T. POL-600 of the manual lung segmentations ranged from 5 to 55 %. The algorithms showed mean underestimations of 3.0 %, 1.8 % and 0.9 % for A1, A2 and A3 in Subset 2T, respectively. For A1 and A3, there was a proportional bias in POL-600 towards higher opacity loads (0.35 % and −0.4 % for each 10 % increase in POL-600, respectively), while for A2 the slope of the bias was almost zero (Fig. 4).
Fig. 4

Bland-Altman analyses for opacity quantification (POL-600, in %) derived from manual and deep learning segmentations (a: manual vs A2, b: manual vs A3). POL-600 values in the test subset of patients with COVID-19 ranged from 5 to 55 %. Note that numbers below 7% do not necessarily reflect lung opacities but can also be found in healthy lungs (see Data Supplement S4).

Table 2

Bland-Altman analyses of opacity quantification (in %) between the manual reference standard and each of the 3 algorithms.

Algorithm   Mean bias   Lower limit of agreement   Upper limit of agreement   Prop. bias intercept   Prop. bias slope
A1          3.1         1.2                        5.0                        2.5                    0.035
A2          1.8         0.4                        3.2                        1.8                    0.002
A3          0.9         −0.7                       2.4                        1.5                    −0.04

A1 to A3: deep neural network algorithms 1-3.
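The Bland-Altman quantities in Table 2 (mean bias, 95 % limits of agreement, and the proportional-bias regression of the difference on the mean) can be computed from paired manual/automated measurements along these lines. This is an illustrative sketch of the standard formulas; the paper's analyses were performed in R:

```python
import numpy as np

def bland_altman(manual, auto):
    """Mean bias, 95% limits of agreement, and proportional-bias line
    (difference regressed on the mean of the paired measurements)."""
    manual = np.asarray(manual, dtype=float)
    auto = np.asarray(auto, dtype=float)
    diff = manual - auto                 # underestimation counts as positive
    mean = (manual + auto) / 2.0
    bias = diff.mean()
    sd = diff.std(ddof=1)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    slope, intercept = np.polyfit(mean, diff, 1)
    return bias, loa, slope, intercept

# Hypothetical paired POL-600 values (in %), for illustration only:
manual = [10.0, 20.0, 30.0, 40.0]
auto = [9.0, 18.5, 28.0, 37.5]
bias, loa, slope, intercept = bland_altman(manual, auto)
```

A near-zero slope, as reported for A2, means the underestimation does not grow with opacity load, which is what makes the error predictable across baseline and follow-up scans.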

Implementation was accomplished during the pandemic in April 2020. In the first 6 weeks after implementation, almost 500 chest CTs admitted to our department were processed automatically. These scans were in part from patients with COVID-19 (N ≈ 100) and in part from patients with other diseases from the later period of the COVID-19 pandemic. Processing time per individual scan is approximately 5 min. The quantification results stored in the PACS are used by the attending physicians as a risk stratification tool, particularly as an indicator of deterioration urging transfer to intensive care.

Discussion

In this work, the steps required for the development of a deep learning algorithm for quantification of opacities on chest CT and its clinical integration are outlined. Within just 10 days, two deep neural network segmentation algorithms with different sizes of training datasets were trained, and a threshold-based quantification approach estimating lung opacity load was implemented into the clinical workflow in the midst of the pandemic. The results were compared to manually processed results and to a lung segmentation algorithm developed and released in the same time period by an independent research group. The successful and timely implementation of the described pipeline was partially owed to the prior experience of our team in the implementation of deep learning pipelines and the close collaboration between developers and clinical stakeholders. In contrast to academic benchmark challenges in image processing, where usually a minimum of three months is provided for fine-tuning of the model, an end-to-end machine learning pipeline with incremental steps and sanity checks was applied, a well-established practice in the tech industry [21]. Additionally, the seamless implementation of the third-party algorithm in the software development platform Nora allowed us a head-to-head comparison in real time and is an additional strength of this study. The availability of open-source algorithms is vital for the field, especially on time-critical occasions; our proposed method was therefore made available on an open-source platform. The first deep learning algorithm A1 was deployed as early as 4 days after project initiation. The forty-five chest CTs from patients with COVID-19 available at that time were utilized for its training. Even without prior manual refinement of the segmentations, A1 provided satisfactory overall segmentation performance. Nevertheless, significant progress from the preliminary results of A1 to the second algorithm A2 was seen.
This latter deep learning algorithm, trained with four times the amount of chest CT datasets and after manual refinement of the reference standard, showed a considerable improvement in segmentation performance on the test subset, consisting of chest CTs with a wide spectrum of COVID-19 related opacities. Although comparable to the third-party algorithm (A3), A2 showed on average lower maximal Hausdorff distances, pointing to better agreement with the reference standard segmentation. This might in part be attributed to the fact that the reference standards of the training and test subsets were created by the same human raters [22], but is also reflected by higher segmentation accuracy in specific cases with coexisting pneumothorax or massive pleural effusion. The deep learning segmentation was interlinked with a subsequent opacity quantification step, based on an HU-thresholding method established at our site for COVID-19 related opacities before the deep learning segmentations were introduced. Quantification based on thresholding has previously been used for differentiation of normal lung tissue from opacities, such as ground-glass or consolidations [17], and the lower cut-off of −600 HU implemented here reflects the counterpart of the "well-aerated" lung, which has been correlated to disease severity and clinical outcome in patients with COVID-19 [14]. The average underestimation of automated quantification was minor for both the proposed A2 and the third-party algorithm, whereas the latter showed a negative bias slope towards higher opacity loads. In contrast, quantification bias with A2 did not show a dependency on opacity load, thus making the estimation of error from automated quantification more predictable when comparing baseline and follow-up scans.
The approach proposed in this article quantifies but does not classify lung opacities, as recently shown by an automated differentiation of lung opacities in chest CTs caused by COVID-19 and community-acquired pneumonia [16]. Direct segmentation of affected lung areas has also been proposed as an alternative to approaches using thresholding after segmentation, although the voxel misclassifications reported there might eventually result in a similar degree of opacity underestimation [23]. Factors influencing the distribution of HU values in lung tissue, such as inflation depth and prior contrast administration, were identified in a small sample with no lung opacities (Appendix E1 only). The role of these and other factors, such as partial volume effects from vessels and bronchi, common CT artifacts, or coexisting lung disease, has to be evaluated in more detail. The presented pragmatic approach also harbors some limitations. The first is the selection and curation of datasets, which was strongly dominated by the availability of pre-processed data from the provisional, semi-automated pipeline in the wake of the pandemic. The data used for this project therefore neither represents a complete, consecutively acquired sample of the COVID-19 cohort at our hospital nor a fully random sample. In addition, the reference standard subset of COVID-19 chest CTs for the training of A2 was extended by an equally sized subset of relatively homogeneous scans performed for exclusion of pulmonary infection, with and without iv contrast. Taken together, these inconsistencies in data selection may limit the performance of the algorithm in cases of advanced COVID-19, although for the training subset of the third-party algorithm A3 this portion of chest CTs performed for reasons other than COVID-19 was even lower. On the other hand, this augmentation of the training subset might reduce the selection bias and represents a more diverse sample [24].
An additional limitation is the use of empirical HU thresholds for disease quantification, since clear-cut thresholding, especially the upper threshold of 0 HU, might fail to include dense lung consolidations or might erroneously include small vessels due to partial volume effects. Finally, future contributions to the deep learning algorithm have to take into account coexisting abnormalities and potentially include more datasets from a wider spectrum of diseases in the training and test subsets in a controlled fashion.

Conclusion

Rapid development of reliable lung segmentation for COVID-19 is feasible with DCNNs. Clinically acceptable results can be achieved in only a few days and with fewer than 50 cases available for training, while a fourfold increase in the number of datasets achieved human-level performance. With this technique, fully automated quantification of pulmonary involvement in COVID-19 is possible even in the presence of advanced disease with extensive consolidations. This demonstrates the potential of including machine learning to assist clinical processes in managing the current pandemic and beyond.

CRediT authorship contribution statement

Constantin Anastasopoulos: Conceptualization, Methodology, Software, Visualization, Writing - original draft. Thomas Weikert: Methodology, Data curation, Writing - original draft. Shan Yang: Software, Investigation. Ahmed Abdulkadir: Software, Investigation. Lena Schmülling: Data curation, Validation. Claudia Bühler: Data curation. Fabiano Paciolla: Data curation. Raphael Sexauer: Data curation. Joshy Cyriac: Software. Ivan Nesic: Software. Raphael Twerenbold: Validation. Jens Bremerich: Supervision. Bram Stieltjes: Supervision, Writing - review & editing. Alexander W. Sauter: Conceptualization, Supervision, Writing - review & editing. Gregor Sommer: Conceptualization, Supervision, Writing - original draft, Writing - review & editing.
References (20 in total):

1.  Checklist for Artificial Intelligence in Medical Imaging (CLAIM): A Guide for Authors and Reviewers.

Authors:  John Mongan; Linda Moy; Charles E Kahn
Journal:  Radiol Artif Intell       Date:  2020-03-25

2.  Integrating artificial intelligence into the clinical practice of radiology: challenges and recommendations.

Authors:  Michael P Recht; Marc Dewey; Keith Dreyer; Curtis Langlotz; Wiro Niessen; Barbara Prainsack; John J Smith
Journal:  Eur Radiol       Date:  2020-02-17       Impact factor: 5.315

3.  Automatic lung segmentation in routine imaging is primarily a data diversity problem, not a methodology problem.

Authors:  Johannes Hofmanninger; Florian Prayer; Jeanny Pan; Sebastian Röhrich; Helmut Prosch; Georg Langs
Journal:  Eur Radiol Exp       Date:  2020-08-20

4.  Chest CT Findings in Coronavirus Disease-19 (COVID-19): Relationship to Duration of Infection.

Authors:  Adam Bernheim; Xueyan Mei; Mingqian Huang; Yang Yang; Zahi A Fayad; Ning Zhang; Kaiyue Diao; Bin Lin; Xiqi Zhu; Kunwei Li; Shaolin Li; Hong Shan; Adam Jacobi; Michael Chung
Journal:  Radiology       Date:  2020-02-20       Impact factor: 11.105

5.  COVID-19 patients and the radiology department - advice from the European Society of Radiology (ESR) and the European Society of Thoracic Imaging (ESTI).

Authors:  Marie-Pierre Revel; Anagha P Parkar; Helmut Prosch; Mario Silva; Nicola Sverzellati; Fergus Gleeson; Adrian Brady
Journal:  Eur Radiol       Date:  2020-04-20       Impact factor: 5.315

6.  Interpretation of CT signs of 2019 novel coronavirus (COVID-19) pneumonia.

Authors:  Jing Wu; Junping Pan; Da Teng; Xunhua Xu; Jianghua Feng; Yu-Chen Chen
Journal:  Eur Radiol       Date:  2020-05-04       Impact factor: 5.315

7.  Performance of Radiologists in Differentiating COVID-19 from Non-COVID-19 Viral Pneumonia at Chest CT.

Authors:  Harrison X Bai; Ben Hsieh; Zeng Xiong; Kasey Halsey; Ji Whae Choi; Thi My Linh Tran; Ian Pan; Lin-Bo Shi; Dong-Cui Wang; Ji Mei; Xiao-Long Jiang; Qiu-Hua Zeng; Thomas K Egglin; Ping-Feng Hu; Saurabh Agarwal; Fang-Fang Xie; Sha Li; Terrance Healey; Michael K Atalay; Wei-Hua Liao
Journal:  Radiology       Date:  2020-03-10       Impact factor: 11.105

8.  CO-RADS: A Categorical CT Assessment Scheme for Patients Suspected of Having COVID-19-Definition and Evaluation.

Authors:  Mathias Prokop; Wouter van Everdingen; Tjalco van Rees Vellinga; Henriëtte Quarles van Ufford; Lauran Stöger; Ludo Beenen; Bram Geurts; Hester Gietema; Jasenko Krdzalic; Cornelia Schaefer-Prokop; Bram van Ginneken; Monique Brink
Journal:  Radiology       Date:  2020-04-27       Impact factor: 11.105

9.  Using Artificial Intelligence to Detect COVID-19 and Community-acquired Pneumonia Based on Pulmonary CT: Evaluation of the Diagnostic Accuracy.

Authors:  Lin Li; Lixin Qin; Zeguo Xu; Youbing Yin; Xin Wang; Bin Kong; Junjie Bai; Yi Lu; Zhenghan Fang; Qi Song; Kunlin Cao; Daliang Liu; Guisheng Wang; Qizhong Xu; Xisheng Fang; Shiqin Zhang; Juan Xia; Jun Xia
Journal:  Radiology       Date:  2020-03-19       Impact factor: 11.105

Cited by: 9 in total

1.  Coronavirus disease (COVID-19) cases analysis using machine-learning applications.

Authors:  Ameer Sardar Kwekha-Rashid; Heamn N Abduljabbar; Bilal Alhayani
Journal:  Appl Nanosci       Date:  2021-05-21       Impact factor: 3.869

2.  Automated CT Lung Density Analysis of Viral Pneumonia and Healthy Lungs Using Deep Learning-Based Segmentation, Histograms and HU Thresholds.

Authors:  Andrej Romanov; Michael Bach; Shan Yang; Fabian C Franzeck; Gregor Sommer; Constantin Anastasopoulos; Jens Bremerich; Bram Stieltjes; Thomas Weikert; Alexander Walter Sauter
Journal:  Diagnostics (Basel)       Date:  2021-04-21

3.  A novel deep learning-based quantification of serial chest computed tomography in Coronavirus Disease 2019 (COVID-19).

Authors:  Feng Pan; Lin Li; Bo Liu; Tianhe Ye; Lingli Li; Dehan Liu; Zezhen Ding; Guangfeng Chen; Bo Liang; Lian Yang; Chuansheng Zheng
Journal:  Sci Rep       Date:  2021-01-11       Impact factor: 4.379

4.  (Review) On the Role of Artificial Intelligence in Medical Imaging of COVID-19.

Authors:  Jannis Born; David Beymer; Deepta Rajan; Adam Coy; Vandana V Mukherjee; Matteo Manica; Prasanth Prasanna; Deddeh Ballah; Michal Guindy; Dorith Shaham; Pallav L Shah; Emmanouil Karteris; Jan L Robertus; Maria Gabrani; Michal Rosen-Zvi
Journal:  Patterns (N Y)       Date:  2021-04-30

5.  (Review) Medical image processing and COVID-19: A literature review and bibliometric analysis.

Authors:  Rabab Ali Abumalloh; Mehrbakhsh Nilashi; Muhammed Yousoof Ismail; Ashwaq Alhargan; Abdullah Alghamdi; Ahmed Omar Alzahrani; Linah Saraireh; Reem Osman; Shahla Asadi
Journal:  J Infect Public Health       Date:  2021-11-17       Impact factor: 3.718

6.  Atri-U: assisted image analysis in routine cardiovascular magnetic resonance volumetry of the left atrium.

Authors:  Constantin Anastasopoulos; Shan Yang; Maurice Pradella; Tugba Akinci D'Antonoli; Sven Knecht; Joshy Cyriac; Marco Reisert; Elias Kellner; Rita Achermann; Philip Haaf; Bram Stieltjes; Alexander W Sauter; Jens Bremerich; Gregor Sommer; Ahmed Abdulkadir
Journal:  J Cardiovasc Magn Reson       Date:  2021-11-11       Impact factor: 5.364

7.  Virtual Reality visualization for computerized COVID-19 lesion segmentation and interpretation.

Authors:  Adel Oulefki; Sos Agaian; Thaweesak Trongtirakul; Samir Benbelkacem; Djamel Aouam; Nadia Zenati-Henda; Mohamed-Lamine Abdelli
Journal:  Biomed Signal Process Control       Date:  2021-11-24       Impact factor: 3.880

8.  Automated Detection, Segmentation, and Classification of Pleural Effusion From Computed Tomography Scans Using Machine Learning.

Authors:  Raphael Sexauer; Shan Yang; Thomas Weikert; Julien Poletti; Jens Bremerich; Jan Adam Roth; Alexander Walter Sauter; Constantin Anastasopoulos
Journal:  Invest Radiol       Date:  2022-04-02       Impact factor: 10.065

9.  Considerations on Baseline Generation for Imaging AI Studies Illustrated on the CT-Based Prediction of Empyema and Outcome Assessment.

Authors:  Raphael Sexauer; Bram Stieltjes; Jens Bremerich; Tugba Akinci D'Antonoli; Noemi Schmidt
Journal:  J Imaging       Date:  2022-02-22
