| Literature DB >> 35573928 |
Feng-Lei Fan, Jinjun Xiong, Mengzhou Li, Ge Wang.
Abstract
Deep learning, as represented by deep neural networks (DNNs), has recently achieved great success in many important areas dealing with text, images, videos, graphs, and so on. However, the black-box nature of DNNs has become one of the primary obstacles to their wide adoption in mission-critical applications such as medical diagnosis and therapy. Because of the huge potential of deep learning, increasing the interpretability of deep neural networks has recently attracted much research attention. In this paper, we propose a simple but comprehensive taxonomy for interpretability, systematically review recent studies on improving the interpretability of neural networks, describe applications of interpretability in medicine, and discuss possible future research directions for interpretability, such as in relation to fuzzy logic and brain science.
Keywords: Deep learning; interpretability; neural networks; survey
Year: 2021 PMID: 35573928 PMCID: PMC9105427 DOI: 10.1109/trpms.2021.3066428
Source DB: PubMed Journal: IEEE Trans Radiat Plasma Med Sci ISSN: 2469-7303