AMMU: A survey of transformer-based biomedical pretrained language models.

Katikapalli Subramanyam Kalyan1, Ajit Rajasekharan2, Sivanesan Sangeetha3.   

Abstract

Transformer-based pretrained language models (PLMs) have started a new era in modern natural language processing (NLP). These models combine the power of transformers, transfer learning, and self-supervised learning (SSL). Following the success of these models in the general domain, the biomedical research community has developed various in-domain PLMs, from BioBERT to the latest BioELECTRA and BioALBERT models. We strongly believe there is a need for a comprehensive survey of the various transformer-based biomedical pretrained language models (BPLMs). In this survey, we start with a brief overview of foundational concepts such as self-supervised learning, the embedding layer, and transformer encoder layers. We discuss core concepts of transformer-based PLMs, including pretraining methods, pretraining tasks, fine-tuning methods, and various embedding types specific to the biomedical domain. We introduce a taxonomy for transformer-based BPLMs and then discuss all the models. We discuss various challenges and present possible solutions. We conclude by highlighting some of the open issues that will drive the research community to further improve transformer-based BPLMs. The list of all the publicly available transformer-based BPLMs along with their links is provided at https://mr-nlp.github.io/posts/2021/05/transformer-based-biomedical-pretrained-language-models-list/.
Copyright © 2021 Elsevier Inc. All rights reserved.

Keywords:  BioBERT; Biomedical pretrained language models; PubMedBERT; Self-supervised learning; Survey; Transformers

Year:  2021        PMID: 34974190     DOI: 10.1016/j.jbi.2021.103982

Source DB:  PubMed          Journal:  J Biomed Inform        ISSN: 1532-0464            Impact factor:   6.317


Citing articles:  2 in total

1.  GeMI: interactive interface for transformer-based Genomic Metadata Integration.

Authors:  Giuseppe Serna Garcia; Michele Leone; Anna Bernasconi; Mark J Carman
Journal:  Database (Oxford)       Date:  2022-06-03       Impact factor: 4.462

2.  Benchmarking for biomedical natural language processing tasks with a domain specific ALBERT.

Authors:  Usman Naseem; Adam G Dunn; Matloob Khushi; Jinman Kim
Journal:  BMC Bioinformatics       Date:  2022-04-21       Impact factor: 3.307
