| Literature DB >> 32745975 |
Wei Shao, Tongxin Wang, Liang Sun, Tianhan Dong, Zhi Han, Zhi Huang, Jie Zhang, Daoqiang Zhang, Kun Huang.
Abstract
With the tremendous development of artificial intelligence, many machine learning algorithms have been applied to the diagnosis of human cancers. Recently, rather than predicting categorical variables (e.g., stages and subtypes) as in cancer diagnosis, several prognosis prediction models based on patients' survival information have been adopted to estimate the clinical outcomes of cancer patients. However, most existing studies treat the diagnosis and prognosis tasks separately. In fact, diagnosis information (e.g., TNM stage) indicates the severity of the disease, which is highly correlated with patient survival. While diagnosis is largely made from histopathological images, recent studies have also demonstrated that integrative analysis of histopathological images and genomic data holds great promise for improving the diagnosis and prognosis of cancers. However, directly combining these two types of data may introduce redundant features that degrade prediction performance, so it is necessary to select informative features from the derived multi-modal data. Based on these considerations, we propose a multi-task multi-modal feature selection method for joint diagnosis and prognosis of cancers. Specifically, we use a task relationship learning framework to automatically discover the relationship between the diagnosis and prognosis tasks, through which we identify image and genomic features that are important for both tasks. In addition, we add a regularization term to ensure that the correlation within the multi-modal data is captured. We evaluate our method on three cancer datasets from The Cancer Genome Atlas project, and the experimental results verify that it achieves better performance on both the diagnosis and prognosis tasks than related methods.
Keywords: Cancer diagnosis; Cancer prognosis; Image genomics; Multi-task multi-modal learning
Year: 2020 PMID: 32745975 DOI: 10.1016/j.media.2020.101795
Source DB: PubMed Journal: Med Image Anal ISSN: 1361-8415 Impact factor: 8.545
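The paper's actual objective (task relationship learning plus a multi-modal correlation regularizer) is not reproduced here. As a minimal, loosely related sketch, the core idea of jointly selecting features for two tasks can be illustrated with an ℓ2,1-penalized multi-task regression (scikit-learn's `MultiTaskLasso`) on synthetic stand-in data. All dimensions, names, and the stand-in tasks below are illustrative assumptions, not the authors' method.

```python
import numpy as np
from sklearn.linear_model import MultiTaskLasso

# Synthetic stand-in data: 200 "patients", 50 features standing in for
# concatenated image + genomic descriptors (sizes are illustrative only).
rng = np.random.default_rng(0)
n_samples, n_features = 200, 50
X = rng.normal(size=(n_samples, n_features))

# Ground truth: only the first 5 features drive BOTH tasks
# (task 0 ~ a "diagnosis" score, task 1 ~ a "prognosis" score).
W_true = np.zeros((n_features, 2))
W_true[:5, 0] = 1.0
W_true[:5, 1] = -1.0
Y = X @ W_true + 0.1 * rng.normal(size=(n_samples, 2))

# The l2,1 (row-wise group-sparse) penalty couples the tasks: each feature
# is either kept for both tasks or dropped for both, mimicking joint
# multi-task feature selection.
model = MultiTaskLasso(alpha=0.1).fit(X, Y)

# Features whose coefficient row is nonzero are selected for both tasks.
selected = np.flatnonzero(np.abs(model.coef_).sum(axis=0) > 1e-8)
print("selected features:", selected)
```

With this setup the informative features are recovered as a shared support across the two tasks; the paper goes further by learning the task relationship itself and regularizing cross-modal correlation rather than fixing a single shared penalty.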