| Literature DB >> 33746351 |
David Chang, Ivana Balažević, Carl Allen, Daniel Chawla, Cynthia Brandt, Richard Andrew Taylor.
Abstract
Much biomedical and healthcare data is encoded in discrete, symbolic form such as text and medical codes. There is a wealth of expert-curated biomedical domain knowledge stored in knowledge bases and ontologies, but the lack of reliable methods for learning knowledge representation has limited their usefulness in machine learning applications. While text-based representation learning has improved significantly in recent years through advances in natural language processing, attempts to learn biomedical concept embeddings have so far been lacking. A recent family of models, knowledge graph embeddings, has shown promising results on general-domain knowledge graphs, and we explore their capabilities in the biomedical domain. We train several state-of-the-art knowledge graph embedding models on the SNOMED-CT knowledge graph, provide a benchmark with comparison to existing methods and an in-depth discussion of best practices, and make a case for the importance of leveraging the multi-relational nature of knowledge graphs for learning biomedical knowledge representation. The embeddings, code, and materials will be made available to the community.
Year: 2020 PMID: 33746351 PMCID: PMC7971091 DOI: 10.18653/v1/2020.bionlp-1.18
Source DB: PubMed Journal: Proc Conf Assoc Comput Linguist Meet ISSN: 0736-587X
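Note: the abstract describes training knowledge graph embedding models on the SNOMED-CT triples but does not specify a model here. Below is a minimal illustrative sketch of one such model (a TransE-style translational scorer with margin ranking against corrupted triples) in PyTorch; the model choice, embedding dimension, margin, and toy entity/relation counts are assumptions for illustration only, not the authors' configuration.

```python
# Illustrative sketch: a minimal TransE-style knowledge graph embedding model.
# Triples (head, relation, tail) are scored by the L1 distance between
# (head + relation) and tail; smaller score means more plausible.
import torch
import torch.nn as nn

class TransE(nn.Module):
    def __init__(self, num_entities, num_relations, dim=100):
        super().__init__()
        self.ent = nn.Embedding(num_entities, dim)
        self.rel = nn.Embedding(num_relations, dim)
        nn.init.xavier_uniform_(self.ent.weight)
        nn.init.xavier_uniform_(self.rel.weight)

    def score(self, h, r, t):
        # Distance-based plausibility score for the triple (h, r, t).
        return torch.norm(self.ent(h) + self.rel(r) - self.ent(t), p=1, dim=-1)

# Toy sizes stand in for the SNOMED-CT concept and relation vocabularies.
model = TransE(num_entities=1000, num_relations=50)
h, r, t = torch.tensor([0]), torch.tensor([3]), torch.tensor([42])
t_neg = torch.tensor([17])  # randomly corrupted tail as a negative example

# Margin ranking loss: push positive triples to score lower than negatives.
loss = torch.relu(1.0 + model.score(h, r, t) - model.score(h, r, t_neg)).mean()
loss.backward()
```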