Mohammad Reza Hosseinzadeh Taher, Fatemeh Haghighi, Ruibin Feng, Michael B Gotway, Jianming Liang.
Abstract
Transfer learning from supervised ImageNet models has been frequently used in medical image analysis. Yet, no large-scale evaluation has been conducted to benchmark the efficacy of newly developed pre-training techniques for medical image analysis, leaving several important questions unanswered. As a first step in this direction, we conduct a systematic study of the transferability of models pre-trained on iNat2021, the most recent large-scale fine-grained dataset, and of 14 top self-supervised ImageNet models, across 7 diverse medical tasks, in comparison with the supervised ImageNet model. Furthermore, we present a practical approach to bridging the domain gap between natural and medical images by continually (pre-)training supervised ImageNet models on medical images. Our comprehensive evaluation yields new insights: (1) models pre-trained on fine-grained data yield distinctive local representations that are more suitable for medical segmentation tasks, (2) self-supervised ImageNet models learn holistic features more effectively than supervised ImageNet models, and (3) continual pre-training can bridge the domain gap between natural and medical images. We hope that this large-scale open evaluation of transfer learning can guide future research in deep learning for medical imaging. In the spirit of open science, all code and pre-trained models are available on our GitHub page: https://github.com/JLiangLab/BenchmarkTransferLearning
Keywords: ImageNet pre-training; Self-supervised learning; Transfer learning
Year: 2021 PMID: 35713581 PMCID: PMC9197759 DOI: 10.1007/978-3-030-87722-4_1
Source DB: PubMed Journal: Domain Adapt Represent Transf Afford Healthc AI Resour Divers Glob Health (2021)