
Diagnostic utility of artificial intelligence for left ventricular scar identification using cardiac magnetic resonance imaging-A systematic review.

Nikesh Jathanna, Anna Podlasek, Albert Sokol, Dorothee Auer, Xin Chen, Shahnaz Jamil-Copley.

Abstract

Background: Accurate, rapid quantification of ventricular scar using cardiac magnetic resonance imaging (CMR) carries importance in arrhythmia management and patient prognosis. Artificial intelligence (AI) has been applied to other radiological challenges with success. Objective: We aimed to assess the AI methodologies used for left ventricular scar identification in CMR, the imaging sequences used for training, and their diagnostic evaluation.
Methods: Following PRISMA recommendations, a systematic search of PubMed, Embase, Web of Science, CINAHL, OpenDissertations, arXiv, and IEEE Xplore was undertaken to June 2021 for full-text publications assessing left ventricular scar identification algorithms. No pre-registration was undertaken. Random-effect meta-analysis was performed to assess Dice Coefficient (DSC) overlap of learning vs predefined thresholding methods.
Results: Thirty-five articles were included for final review. Supervised and unsupervised learning models had similar DSC compared to predefined threshold models (0.616 vs 0.633, P = .14) but had higher sensitivity, specificity, and accuracy. Meta-analysis of 4 studies revealed a standardized mean difference of 1.11 (95% confidence interval -0.16 to 2.38; P = .09; I2 = 98%) favoring learning methods.
Conclusion: Feasibility of applying AI to the task of scar detection in CMR has been demonstrated, but model evaluation remains heterogeneous. Progression toward clinical application requires detailed, transparent, standardized model comparison and increased model generalizability.
© 2021 Published by Elsevier Inc. on behalf of Heart Rhythm Society.


Keywords:  Artificial intelligence; Cardiac scar; Deep learning; Imaging – cardiac magnetic resonance imaging (MRI); Machine learning; Neural networks

Year:  2021        PMID: 35265922      PMCID: PMC8890335          DOI: 10.1016/j.cvdhj.2021.11.005

Source DB:  PubMed          Journal:  Cardiovasc Digit Health J        ISSN: 2666-6936


Background and objectives

Accurate identification of cardiac scar is growing in importance. Previous studies demonstrate that ventricular scar volume/mass is associated with the risk of ventricular arrhythmia episodes and with response to medical and device therapies. Detailed scar delineation can improve electrophysiological procedural outcomes. Consequently, scar metrics are increasingly integrated into clinical practice as a decision aid guiding patient therapy. Cardiac magnetic resonance imaging (CMR) is the current gold standard for tissue characterization. However, expert manual delineation is time-consuming, with significant interoperator variability. Methods to improve objective quantification, such as signal-intensity thresholding and full-width half-maximum techniques, can reduce analysis time and improve reproducibility. However, outcomes vary between these methods and none is preferentially recommended. Moreover, these techniques are often based solely on signal intensity and may require further human postprocessing (Figure 1).
Figure 1

Simplified comparison of distinct cardiac magnetic resonance image segmentation methods. Far-left image demonstrates a short-axis view of transmural septal scar. In subsequent images, green represents epicardial border, magenta endocardial, and purple segmented scar for thresholding (top) and manual (bottom) techniques.
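The thresholding approaches compared in Figure 1 (n-SD above remote myocardium, and full-width half-maximum) reduce to simple intensity rules. A minimal sketch follows, with hypothetical pixel values and a hypothetical remote-myocardium reference region; this is an illustration of the general technique, not any vendor's actual implementation:

```python
from statistics import mean, stdev

def nsd_scar_mask(myocardium, remote, n=5):
    """Label myocardial pixels >= mean(remote) + n*SD(remote) as scar."""
    cutoff = mean(remote) + n * stdev(remote)
    return [px >= cutoff for px in myocardium]

def fwhm_scar_mask(myocardium):
    """Full-width half-maximum: label pixels >= 50% of peak myocardial intensity."""
    cutoff = 0.5 * max(myocardium)
    return [px >= cutoff for px in myocardium]

# Hypothetical signal intensities along one myocardial segment
remote = [10, 12, 11, 9, 10]       # remote (healthy) reference region
myocardium = [10, 11, 60, 55, 12]  # two bright (enhanced) pixels
```

Both rules flag the same two enhanced pixels in this toy example; in practice the chosen rule and reference region materially change the measured scar volume, which is one source of the inter-method variability discussed above.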

Artificial intelligence (AI) is a broad term encompassing a multitude of functions, including the undertaking of tasks normally requiring human intelligence. Subsets including machine learning (ML) and deep learning (DL) have been investigated as solutions to clinical challenges. The use of AI in general medical and cardiac imaging research has developed rapidly owing to technological advances and the data-rich environment of imaging. Models ranging from simple task automation to deep investigative associations have been developed. Classical AI methods typically make decisions based on hand-crafted rules or predefined thresholds, whereas ML models can “learn” rules and associations from patterns statistically discerned from datasets. DL is a subcategory of ML that specifically uses multilayer artificial neural networks for decision-making. Training can be supervised or unsupervised. The proposed advantages of AI tools for scar identification include the rapid, precise execution of routine tasks using the wealth of data available from CMR. We undertook a targeted systematic review of publications describing AI methods for the identification of left ventricular (LV) cardiac scar in CMR to answer the following questions: What AI methods are being employed for ventricular scar assessment on CMR in those with cardiac disease? What CMR sequences are being utilized? What are the diagnostic evaluations of these methods?

Methods

This review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations (Supplemental Appendices A and B). No pre-registration was undertaken.

Search strategy and data sources

We performed a systematic search up to June 2021 of PubMed, Embase, Web of Science, Cumulative Index to Nursing and Allied Health Literature (CINAHL), OpenDissertations, arXiv, and IEEE Xplore. Full Boolean criteria appear in Supplemental Appendix C. Records were imported into Rayyan (Qatar Computing Research Institute, Doha, Qatar) and duplicates were manually removed.

Study selection

Articles underwent independent abstract screening for eligibility and full-text review by at least 2 reviewers (NJ, AP, AS), with conflicts resolved by a third. Full-text articles evaluating AI algorithms for LV scar identification with CMR annotations were included. Exclusion criteria included animal-based studies, algorithms primarily utilizing predefined thresholds, absence of scar annotations, non-CMR imaging, letters to editors, editorials, abstract-only publications/posters, studies not in the English language, and non-LV scar studies.

Data extraction and analysis

We extracted the following variables, where available: AI algorithm characteristics, dataset characteristics, primary cardiac disease, CMR characteristics, ground-truth assessment, and evaluation measures. Owing to the unique complexities of individual algorithms, and to enable study comparison, methodologies were broadly categorized into supervised and unsupervised methods, with further subcategorization where appreciable in the main text. “Fully automated” was defined as requiring no human interaction from image input to result output. CMR sequence categorization accounted for differing nomenclature among manufacturers. Dataset size was assessed through total caseload, with cases defined as individual CMR scans. Primary cardiac disease was broadly categorized, with mixed-etiology studies highlighted. Where required, medians were converted to means. Grouped means were calculated using Cochrane’s formula. The Welch test was used for comparison of means. A random-effects meta-analysis of the continuous primary outcome (Dice coefficient) was expressed as a standardized mean difference with a 95% confidence interval, comparing proposed methods against comparator predefined thresholding methods. The analysis was visualized with a forest plot. Between-study heterogeneity was assessed with the I2 statistic, and funnel plots were used to assess publication bias. The ROBINS-I tool was used by independent assessors to evaluate the risk of bias of each study included in quantitative analysis, with the robvis tool used for visualization. P values were 2-tailed, with values <.05 considered statistically significant. Extracted data are available in Supplemental Appendix C.
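For illustration, the primary outcome (Dice coefficient) and the heterogeneity statistic (I2) reduce to short formulas. The sketch below uses hypothetical masks and Q values and is not the authors' analysis code:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two flat binary masks."""
    overlap = sum(1 for a, b in zip(mask_a, mask_b) if a and b)
    total = sum(map(bool, mask_a)) + sum(map(bool, mask_b))
    return 2 * overlap / total if total else 1.0

def i_squared(q, k):
    """Higgins I^2 (%) from Cochran's Q statistic over k studies."""
    return max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
```

A DSC of 1 means perfect overlap with the ground-truth segmentation and 0 means none; I2 near 100% (as in the pooled result here) indicates that almost all observed variation reflects between-study differences rather than chance.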

Results

Of 6156 results, 35 were included for qualitative analysis. Four contained predefined thresholding comparators for quantitative analysis (Figure 2).
Figure 2

PRISMA flowchart of study selection process.

Articles included were published between 2010 and 2021, with publication numbers increasing in recent years and peaking in 2020 (n = 8).

CMR dataset

Sequences and magnet strength

All studies used CMR; however, 1 study did not specify sequences. All 34 articles describing CMR sequences used late gadolinium–enhanced short-axis imaging as a minimum, most with 2D imaging and breath-holds (82.35%). Of the 6 of 34 (17.65%) using 3D late gadolinium enhancement, 4 used 3D whole-heart imaging and the remainder breath-hold sequences. Additional sequences were used only in combination with 2D late gadolinium enhancement (25/28, 89.29%), most frequently a variation of cine imaging. Ten articles did not declare CMR magnet strength; of the remainder, 12 used only 1.5 Tesla (T), 6 only 3 T, and 7 a combination.

Manufacturers

CMR manufacturer was declared in 25 of 35 articles: 59.3%, 25.9%, and 18.5% for Siemens, Philips, and General Electric, respectively. Only 5 studies used multiple manufacturers; 1 utilized all 3 manufacturers, 2 used Philips and Siemens, and 2 were unclear on designations.

Dataset size

Excluding postprocessing transformations, a total of 3856 human CMR studies in 32 of 35 articles were utilized, with a range of 3–1073 and a median (interquartile range) of 45 (24–143.5). Three articles had unclear datasets. Nine studies employed augmentation to increase the number of examples for model training.

Cardiac diseases

The most common cardiac disease examined was ischemic heart disease (21/35); however, only 14 contained solely ischemic images. The remaining ischemic cohorts were combined with “normal” images (4), tetralogy of Fallot (1), unspecified (1), or a combination (1). Four studies assessed hypertrophic cardiomyopathy (11.4%) and 9 did not specify any disease etiology (25.7%). Intracardiac device presence was not declared.

AI methods

DL and ML methods were represented in 60% and 40% of studies, respectively. DL algorithms were subcategorized as convolutional neural networks based on 2D models (12/21), 3D models (4/21), or other/unspecified (5/21). Most DL approaches used a convolutional neural network with U-Net architecture. ML algorithms, referring here to classical methods using hand-crafted features, comprised 13 different models, with at least 6 studies employing multiple models. Only 4 papers utilized unsupervised primary methods, all classical ML. Twenty-four of 35 ML/DL methods were fully automated, with 9 of 11 nonautomated algorithms utilizing ML methods (Table 1).
Table 1

Summary of reviewed studies

| Author, year [ref] | Supervised learning? | Fully automated? | Final code available? | MRI sequences | Dataset condition cohort |
|---|---|---|---|---|---|
| Abramson, 2020 [13] | Yes | Yes | No | Cine, 2D LGE | Ischemic |
| Brahim, 2020 [14] | Yes | Yes | No | 2D LGE | Mixed – Ischemic, healthy, not specified |
| Brahim, 2021 [15] | Yes | Yes | No | 2D LGE | Mixed – Ischemic, healthy |
| Brahim, 2021 [16] | Yes | Yes | No | 2D LGE | Mixed – Ischemic, healthy |
| Campello, 2020 [17] | Yes | Yes | No | Cine, 2D LGE, T2 | Not specified |
| Carminati, 2015 [18] | No | No | No | 2D LGE | Ischemic |
| Carminati, 2016 [19] | No | No | No | 2D LGE | Ischemic |
| De la Rosa, 2019 [20] | Yes | Yes | No | Cine, 2D LGE | Mixed – Ischemic, healthy |
| Engblom, 2016 [21] | Yes | Yes | No | 2D LGE | Ischemic |
| Fadil, 2021 [22] | Yes | Yes | Yes | Cine, 2D LGE, pre- & postcontrast T1, T2 | Mixed – Ischemic, healthy, not specified |
| Fahmy, 2020 [23] | Yes | Yes | Yes | Cine, 2D LGE | Hypertrophic cardiomyopathy |
| Fahmy, 2021 [24] | Yes | Yes | Yes | 2D LGE | Hypertrophic cardiomyopathy |
| Heidenreich, 2021 [25] | Yes | Yes | Yes | 2D LGE | Ischemic |
| Kotu, 2011 [26] | Yes | Yes | No | 2D LGE | Not specified |
| Kurzendorfer, 2018 [27] | Yes | No | No | 3D LGE | Not specified |
| Larroza, 2017 [28] | Yes | No | Yes | Cine, 2D LGE | Ischemic |
| Larroza, 2018 [29] | Yes | No | No | Cine, 2D LGE | Ischemic |
| Lau, 2018 [30] | Yes | Yes | No | 2D LGE | Not specified |
| Mantilla, 2015 [31] | Yes | Yes | No | 2D LGE | Hypertrophic cardiomyopathy |
| Merino-Caviedes, 2016 [32] | Yes | No | No | Cine, 2D LGE | Hypertrophic cardiomyopathy |
| Metwally, 2010 [33] | No | Yes | No | 2D LGE | Not specified |
| Moccia, 2018 [34] | Yes | Yes | No | 2D LGE | Ischemic |
| Moccia, 2019 [35] | Yes | Yes | No | 2D LGE | Ischemic |
| Moccia, 2020 [36] | Yes | Yes | No | Cine, 2D LGE | Ischemic |
| Morisi, 2015 [37] | Yes | No | No | 3D LGE | Not specified |
| Rajchl, 2014 [38] | No | No | No | 3D LGE (WH) | Mixed – Ischemic, Tetralogy of Fallot |
| Rukundo, 2020 [39] | Yes | Yes | No | 2D LGE | Not specified |
| Wang, 2011 [40] | Yes | Yes | No | 2D LGE | Ischemic |
| Wang, 2020 [41] | Yes | Yes | No | Not specified | Mixed – Ischemic, healthy |
| Zabihollahy, 2018 [42] | Yes | No | No | 3D LGE (WH) | Ischemic |
| Zabihollahy, 2019 [43] | Yes | No | No | 3D LGE (WH) | Ischemic |
| Zabihollahy, 2020 [44] | Yes | Yes | No | 3D LGE (WH) | Ischemic |
| Zhang Z, 2020 [45] | Yes | Yes | No | Cine, 2D LGE, T2 | Not specified |
| Zhang X, 2020 [46] | Yes | Yes | No | Cine, 2D LGE, T2 | Not specified |
| Zhuang, 2019 [47] | Yes | No | No | Cine, 2D LGE, T2 | Not specified |

LGE = late gadolinium enhancement; WH = whole-heart imaging.


Diagnostic evaluation

Ground truth

Ground-truth segmentations were all acquired from human delineation, 9 of which highlighted semiautomated thresholding techniques as segmentation aids. No histological confirmation was undertaken.

Evaluation metrics

All methods were compared, at a minimum, to human-delineated ground truth. Four studies compared the performance of proposed and predefined thresholding methods.19, 22, 43, 44 Twenty-seven different evaluation metrics were reported, with common metrics falling into 3 groups: overlap, distance, and volume metrics (Table 2).
Table 2

Summary of evaluation metrics utilized

Reported evaluation metrics
Overlap
Dice coefficient, Jaccard index/intersection over union, Sensitivity, Specificity, Accuracy, Precision, F-score, Mean BF1, Recall, Segment overlap, Repeatability, True/false positive & negative
Distance
Hausdorff distance, Surface distance, Average contour distance, Root-mean-squared area
Volume
Left ventricular volume, Scar/infarct volume, Scar mass, Absolute volume difference ± normalization, Total volume error, Percentage volume error, Scar as myocardial percentage, Mean absolute error ± normalization, Left ventricular mass
DSC was the most common statistical metric, used in 54% of studies, followed by sensitivity (17.1%). Tables 3 and 4 compare results of 5 evaluation methods in studies with comparable data. No statistically significant difference in DSC was seen between predefined threshold models and supervised and unsupervised learning methods (0.633 vs 0.616, P = .14), with unsupervised methods performing better than supervised (0.732 vs 0.599, P < .05). However, substantial variance in dataset size is present.
Table 3

Comparison of reported Dice coefficient and Hausdorff distance for scar segmentation

| Method | Dice coefficient, mean (SD) | Total test cases | No. of studies | Hausdorff distance, mean | Total test cases | No. of studies |
|---|---|---|---|---|---|---|
| Predefined threshold | 0.633 (0.15) | 306 | 4 | 37.97 | 382 | 2 |
| Supervised and unsupervised learning | 0.616 (0.256) | 1125 | 13 | 18.135 | 230 | 3 |
| Supervised learning | 0.599 (0.264) | 984 | 9 | 18.135 | 230 | 3 |
| Unsupervised learning | 0.732 (0.153) | 141 | 4 | – | – | – |

P = .14 for Dice coefficient, predefined threshold vs supervised and unsupervised learning. P < .05 for Dice coefficient, supervised vs unsupervised learning.

Table 4

Comparison of reported mean sensitivity, specificity, and accuracy for scar segmentation

| Method | Sensitivity | Total test cases | No. of studies | Specificity | Total test cases | No. of studies | Accuracy | Total test cases | No. of studies |
|---|---|---|---|---|---|---|---|---|---|
| Predefined threshold | 83.91 | 160 | 1 | 90.79 | 160 | 1 | 88.51 | 160 | 1 |
| Supervised and unsupervised learning | 91.48 | 82 | 4 | 97.11 | 870 | 3 | 92.99 | 1296 | 6 |
| Supervised learning | 95.09 | 520 | 2 | 98.95 | 520 | 2 | 94.03 | 776 | 4 |
| Unsupervised learning | 83.79 | 372 | 5 | 94.38 | 350 | 3 | 88.4 | 520 | 3 |
Higher sensitivity, specificity, and accuracy were seen in proposed methods, particularly supervised models, compared to predefined thresholding, with an associated lower Hausdorff distance. In the 4 studies directly comparing predefined thresholding with supervised/unsupervised learning, there was weak evidence of a small effect favoring learning models (higher DSC; standardized mean difference 1.11; 95% CI -0.16 to 2.38; P = .09), with high heterogeneity (Figure 3). Visual inspection of funnel plots did not reveal asymmetry, though data were sparse (Figure 4). Studies had low (n = 2) and moderate (n = 2) risk of bias (Supplemental Appendix C).
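The pooled estimate reported here (standardized mean difference with 95% CI and I2) is consistent with the commonly used DerSimonian-Laird random-effects model. The sketch below uses hypothetical per-study effects and variances, not the authors' data or software:

```python
from math import sqrt

def random_effects_smd(effects, variances):
    """DerSimonian-Laird random-effects pooling of standardized mean
    differences; returns (pooled SMD, 95% CI low, 95% CI high, I^2 %)."""
    w = [1.0 / v for v in variances]                       # fixed-effect weights
    sw = sum(w)
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sw  # fixed-effect mean
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))  # Cochran's Q
    df = len(effects) - 1
    c = sw - sum(wi * wi for wi in w) / sw
    tau2 = max(0.0, (q - df) / c) if c > 0 else 0.0        # between-study variance
    ws = [1.0 / (v + tau2) for v in variances]             # random-effects weights
    pooled = sum(wi * y for wi, y in zip(ws, effects)) / sum(ws)
    se = sqrt(1.0 / sum(ws))
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return pooled, pooled - 1.96 * se, pooled + 1.96 * se, i2
```

With only 4 studies and I2 = 98%, the wide confidence interval crossing zero in Figure 3 is the expected behavior of this model: the between-study variance dominates and inflates the pooled standard error.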
Figure 3

Forest plot of supervised and unsupervised learning vs predefined thresholding models.

Figure 4

Funnel plot. Visual analysis suggests no bias, though data points are sparse. SE = standard error; SMD = standardized mean difference.


Discussion

In this article, we reviewed published literature surrounding methods for CMR scar segmentation. As expected, with technological advances, increasing numbers of studies using learning methods have been published internationally, highlighting the subject’s global interest and clinical potential. With the recent rapid expansion of AI, many state-of-the-art reviews exist covering current and potential applications for general cardiology. Our review focuses on progress within 1 specific aspect of cardiology to provide clinically relatable interpretation. For this purpose, it is important to consider evaluation, practicality, and patient/scanner generalizability.

Evaluation

Our review has not demonstrated a clear benefit of supervised/unsupervised learning over predefined thresholding methods for LV scar segmentation. Unsupervised models performed better than other models when considering volume overlap (DSC), but the inverse was seen for sensitivity, specificity, and Hausdorff distance, with supervised methods outperforming others. However, comparisons were limited and significant heterogeneity in evaluation methods was evident. In direct comparisons from 4 studies, there was weak evidence favoring supervised/unsupervised learning methods. Furthermore, it is questionable whether the small differences demonstrated would translate to clinical benefit. Three evaluation categories were present: overlap, distance, and volume metrics. Combinations of metrics are required to assess segmentation models comprehensively. Unfortunately, owing to the absence of clear guidance regarding minimal and/or preferable metrics, many models described to date may not be directly comparable. The use of segmentation challenges mitigates some of these issues with standardized datasets and reporting measures but is associated with its own caveats. Moreover, studies predominantly compared their proposed model to the initial ground truth or to other supervised models, with little comparison to the various clinically utilized non-learning methods. For integration into clinical practice, future research must compare standard clinical applications against novel methods through robust, multimodal, comparable metrics.

Practicality

A noteworthy benefit of the described AI methods is automation. Guidelines recommend objective quantification of cardiac structures and function; hence, full automation of scar assessment is desirable to reduce operator burden. Many existing predefined thresholding algorithms for scar quantification are applied once myocardial borders are delineated, a semiautomatic process. Deviations in myocardial border annotation may lead to scar misevaluation, an issue that extends to all methodologies dependent on predefined ground-truth borders. Consequently, published segmentation results may not be directly transferable clinically when reliant on border segmentation quality. Fully automated AI methods for both myocardial and scar segmentation may produce more reproducible and objective segmentation labels. However, the "black-box" problem of DL decision-making processes being essentially noninterpretable remains a considerable issue. Until this problem is solved, human oversight and manual input for segmentation and correction remain essential to ensure patient safety in the clinical context, currently limiting the potential of a truly automatic approach outside the research environment. Unsupervised methods and rule-based ML algorithms have an advantage in this respect owing to their relatively explainable methodologies, which may be important for clinical confidence.

Generalizability

Clinical use and external generalizability of models require applicability to a variety of patients/scanners and depend on image or model complexity and training datasets. Scar patterns vary across different cardiomyopathies: ischemic scar arises from the subendocardium with epicardial progression, compared with a more heterogeneous distribution in nonischemic cardiomyopathies. With the high prevalence of ischemic heart disease and its associated significant mortality, the predominance of ischemic scar models is understandable and clinically necessary. Nonischemic etiologies have been investigated but more sparsely. Of greater concern is that 25.7% of publications did not specify disease etiology. Training datasets have a significant impact on model performance. Hence, detailed descriptions of dataset disease etiologies are of great importance for clinical utility, especially as studies assessing model transferability across disease cohorts without retraining are sparse. Metadata can vary between manufacturers and magnet strengths, and models trained with specific data may not be generally applicable. Only a small subset of studies included multiple CMR manufacturers to mitigate this risk.21, 22, 23 Training data acquired at various field strengths would allow greater generalizability clinically, but 1.5 T remains the current standard, with 3 T employed mainly by more experienced imaging centers. Similarly, regional manufacturer predominance may exist. Inclusion of multiple manufacturers and field strengths for generalizability is to be commended and should be actively sought; however, the data suggest 1.5 T Siemens scans have the largest data support for clinical application. Small datasets are the main limitation for optimal algorithm creation. Access to data remains a limitation for researchers, explaining in part the large variation in dataset sizes.
Reducing barriers to data accessibility may reduce training/testing data variability and improve model comparability. Collaborative sharing of, and access to, such data is an important consideration. Critically, labeled data are required to avoid time-consuming reanalysis and to promote transparent result comparison. Other, smaller datasets exist in the form of CMR challenges, allowing more standardized model comparison. Further options to improve generalizability within existing datasets include augmentation through image transformations or the use of generative adversarial networks to produce synthetic images.

Conclusion

Feasibility of applying AI to the task of scar segmentation in CMR has been demonstrated. Progression toward clinical application requires dataset transparency, standardized evaluation, and model generalizability.

Funding Sources

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Disclosures

The authors have no conflicts of interest to declare.

Authorship

All authors attest they meet the current ICMJE criteria for authorship.

Patient Consent

Not applicable.

Ethics Statement

This review adhered to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) recommendations (Supplemental Appendices A and B). No pre-registration was undertaken.

Key Findings

Artificial intelligence models have been shown to be feasible for the segmentation of cardiac structures, including scar. There is no clear benefit of supervised or unsupervised learning models over predefined thresholding models. Random-effects analysis may suggest a benefit of learning models, but high heterogeneity exists. Comparison of models to identify superior methodologies remains challenging in humans owing to the lack of a true scar “ground truth” as reference, significant variation in the assessment methods utilized, and limited access to final algorithm code. Standardization of model evaluation, transparency in training/testing data, and highly generalizable models are recommended before transition to clinical practice can be considered.
References (30 in total; first 10 shown)

1. Kotu LP, Engan K, Eftestøl T, Ørn S, Woie L. Segmentation of scarred and non-scarred myocardium in LG enhanced CMR images using intensity-based textural analysis. Conf Proc IEEE Eng Med Biol Soc. 2011.
2. Taylor RJ, Umar F, Panting JR, Stegemann B, Leyva F. Left ventricular lead position, mechanical activation, and myocardial scar in relation to left ventricular reverse remodeling and clinical outcomes after cardiac resynchronization therapy: a feature-tracking and contrast-enhanced cardiovascular magnetic resonance study. Heart Rhythm. 2015.
3. Rajchl M, Yuan J, White JA, et al. Interactive hierarchical-flow segmentation of scar tissue from late-enhancement cardiac MR images. IEEE Trans Med Imaging. 2013.
4. Kramer CM. Role of cardiac MR imaging in cardiomyopathies. J Nucl Med. 2015.
5. Larroza A, López-Lereu MP, Monmeneu JV, et al. Texture analysis of cardiac cine magnetic resonance imaging to detect nonviable segments in patients with chronic myocardial infarction. Med Phys. 2018.
6. Fahmy AS, Neisius U, Chan RH, et al. Three-dimensional deep convolutional neural networks for automated myocardial scar quantification in hypertrophic cardiomyopathy: a multicenter multivendor study. Radiology. 2019.
7. Tülümen E, Rudic B, Ringlage H, et al. Extent of peri-infarct scar on late gadolinium enhancement cardiac magnetic resonance imaging and outcome in patients with ischemic cardiomyopathy. Heart Rhythm. 2021.
8. Wan X, Wang W, Liu J, Tong T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med Res Methodol. 2014.
9. Sterne JAC, Hernán MA, Reeves BC, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ. 2016.
10. Keenan NG, Captur G, McCann GP, et al. Regional variation in cardiovascular magnetic resonance service delivery across the UK. Heart. 2021.
