
Representation learning: a review and new perspectives.

Yoshua Bengio, Aaron Courville, Pascal Vincent.

Abstract

The success of machine learning algorithms generally depends on data representation, and we hypothesize that this is because different representations can entangle and hide more or less the different explanatory factors of variation behind the data. Although specific domain knowledge can be used to help design representations, learning with generic priors can also be used, and the quest for AI is motivating the design of more powerful representation-learning algorithms implementing such priors. This paper reviews recent work in the area of unsupervised feature learning and deep learning, covering advances in probabilistic models, autoencoders, manifold learning, and deep networks. This motivates longer term unanswered questions about the appropriate objectives for learning good representations, for computing representations (i.e., inference), and the geometrical connections between representation learning, density estimation, and manifold learning.
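The autoencoder is one of the representation-learning approaches this review covers: a model trained to reconstruct its input through a lower-dimensional code, so that the code captures the dominant factors of variation. As a minimal sketch (not from the paper itself — a hypothetical toy example with made-up data), a linear autoencoder trained by gradient descent recovers a 2-D representation of data lying on a 2-D subspace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points in 5-D that actually lie on a 2-D linear subspace,
# i.e. two underlying explanatory factors generate the observations.
Z = rng.normal(size=(200, 2))
A = rng.normal(size=(2, 5))
X = Z @ A

# Linear autoencoder: code h = x W (the learned representation),
# reconstruction x_hat = h V. Trained on mean squared reconstruction error.
W = rng.normal(scale=0.1, size=(5, 2))
V = rng.normal(scale=0.1, size=(2, 5))
lr = 0.02
for _ in range(5000):
    H = X @ W            # codes (learned 2-D representation)
    X_hat = H @ V        # reconstruction from the codes
    E = X_hat - X        # reconstruction error
    # Gradients of the mean squared error w.r.t. decoder and encoder weights.
    grad_V = H.T @ E / len(X)
    grad_W = X.T @ (E @ V.T) / len(X)
    V -= lr * grad_V
    W -= lr * grad_W

mse = np.mean((X @ W @ V - X) ** 2)
print(f"final reconstruction MSE: {mse:.4f}")
```

Because the data genuinely has only two factors of variation, the 2-D bottleneck can reconstruct it almost perfectly; with a nonlinear encoder and decoder the same objective yields the deep autoencoders surveyed in the paper.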


Year: 2013        PMID: 23787338        DOI: 10.1109/TPAMI.2013.50

Source DB: PubMed        Journal: IEEE Trans Pattern Anal Mach Intell        ISSN: 0162-8828        Impact factor: 6.226


Citing articles: 467 in total

1.  Deep learning for regulatory genomics.

Authors:  Yongjin Park; Manolis Kellis
Journal:  Nat Biotechnol       Date:  2015-08       Impact factor: 54.908

2.  Gene expression inference with deep learning.

Authors:  Yifei Chen; Yi Li; Rajiv Narayan; Aravind Subramanian; Xiaohui Xie
Journal:  Bioinformatics       Date:  2016-02-11       Impact factor: 6.937

3.  Sufficient Forecasting Using Factor Models.

Authors:  Jianqing Fan; Lingzhou Xue; Jiawei Yao
Journal:  J Econom       Date:  2017-08-26       Impact factor: 2.388

4.  Unsupervised Extraction of Stable Expression Signatures from Public Compendia with an Ensemble of Neural Networks.

Authors:  Jie Tan; Georgia Doing; Kimberley A Lewis; Courtney E Price; Kathleen M Chen; Kyle C Cady; Barret Perchuk; Michael T Laub; Deborah A Hogan; Casey S Greene
Journal:  Cell Syst       Date:  2017-07-12       Impact factor: 10.304

5.  Radiomics: a new application from established techniques.

Authors:  Vishwa Parekh; Michael A Jacobs
Journal:  Expert Rev Precis Med Drug Dev       Date:  2016-03-31

6.  [Big data approaches in psychiatry: examples in depression research]. (Review)

Authors:  D Bzdok; T M Karrer; U Habel; F Schneider
Journal:  Nervenarzt       Date:  2018-08       Impact factor: 1.214

7.  Patient Cohort Retrieval using Transformer Language Models.

Authors:  Sarvesh Soni; Kirk Roberts
Journal:  AMIA Annu Symp Proc       Date:  2021-01-25

8.  Shallow Representation Learning via Kernel PCA Improves QSAR Modelability.

Authors:  Stefano E Rensi; Russ B Altman
Journal:  J Chem Inf Model       Date:  2017-08-07       Impact factor: 4.956

9.  Universal approximation with quadratic deep networks.

Authors:  Fenglei Fan; Jinjun Xiong; Ge Wang
Journal:  Neural Netw       Date:  2020-01-18

10.  Deep Learning Models Unveiled Functional Difference Between Cortical Gyri and Sulci.

Authors:  Shu Zhang; Huan Liu; Heng Huang; Yu Zhao; Xi Jiang; Brook Bowers; Lei Guo; Xiaoping Hu; Mar Sanchez; Tianming Liu
Journal:  IEEE Trans Biomed Eng       Date:  2018-09-28       Impact factor: 4.538
