
The Convex Information Bottleneck Lagrangian.

Borja Rodríguez Gálvez, Ragnar Thobaben, Mikael Skoglund.

Abstract

The information bottleneck (IB) problem tackles the issue of obtaining relevant compressed representations T of some random variable X for the task of predicting Y. It is defined as a constrained optimization problem that maximizes the information the representation has about the task, I(T;Y), while ensuring that a certain level of compression r is achieved (i.e., I(X;T) ≤ r). For practical reasons, the problem is usually solved by maximizing the IB Lagrangian (i.e., L_IB(T;β) = I(T;Y) − βI(X;T)) for many values of β ∈ [0,1]. Then, the curve of maximal I(T;Y) for a given I(X;T) is drawn and a representation with the desired predictability and compression is selected. It is known that when Y is a deterministic function of X, the IB curve cannot be explored in this way, and another Lagrangian has been proposed to tackle this problem: the squared IB Lagrangian, L_sq-IB(T;β_sq) = I(T;Y) − β_sq I(X;T)². In this paper, we (i) present a general family of Lagrangians which allow for the exploration of the IB curve in all scenarios; (ii) provide the exact one-to-one mapping between the Lagrange multiplier and the desired compression rate r for known IB curve shapes; and (iii) show that we can approximately obtain a specific compression level with the convex IB Lagrangian for both known and unknown IB curve shapes. This eliminates the burden of solving the optimization problem for many values of the Lagrange multiplier. That is, we prove that we can solve the original constrained problem with a single optimization.
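The abstract's point can be illustrated numerically: on the piecewise-linear IB curve that arises when Y is a deterministic function of X, the linear penalty βI(X;T) picks the same corner for every β < 1, while a convex penalty such as I(X;T)² maps each β to a distinct compression level. The toy curve, grid, and β values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Toy IB curve for deterministic Y = g(X): I(T;Y) = min(I(X;T), H(Y)).
H_Y = 1.0
r = np.linspace(0.0, 2.0, 2001)   # candidate compression levels I(X;T)
f = np.minimum(r, H_Y)            # maximal I(T;Y) attainable at each r

def best_r(beta, u):
    """Compression level maximizing the generalized Lagrangian f(r) - beta * u(r)."""
    return r[np.argmax(f - beta * u(r))]

# Linear penalty (classic IB Lagrangian): every beta < 1 selects the
# same corner r = H(Y), so the curve cannot be explored point by point.
linear = [best_r(b, lambda x: x) for b in (0.6, 0.8)]

# Convex penalty u(r) = r**2 (squared IB Lagrangian): each beta now
# selects a different r (here r = 1 / (2 * beta)), so the mapping
# between the multiplier and the compression rate is one-to-one.
convex = [best_r(b, lambda x: x**2) for b in (0.6, 0.8)]
```

Sweeping β under the linear penalty jumps straight from r = H(Y) to r = 0 at β = 1, whereas the convex penalty traces out intermediate compression levels, which is the behavior the proposed family of Lagrangians generalizes.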


Keywords:  information bottleneck; mutual information; optimization; representation learning

Year:  2020        PMID: 33285873      PMCID: PMC7516537          DOI: 10.3390/e22010098

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


References (7 in total)

1.  Information-based clustering.

Authors:  Noam Slonim; Gurinder Singh Atwal; Gasper Tkacik; William Bialek
Journal:  Proc Natl Acad Sci U S A       Date:  2005-12-13       Impact factor: 11.205

2.  Minimum cross-entropy pattern classification and cluster analysis.

Authors:  J E Shore; R M Gray
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  1982-01       Impact factor: 6.226

3.  Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle.

Authors:  Rana Ali Amjad; Bernhard C Geiger
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2019-04-02       Impact factor: 6.226

4.  The Deterministic Information Bottleneck.

Authors:  D J Strouse; David J Schwab
Journal:  Neural Comput       Date:  2017-04-14       Impact factor: 2.026

5.  Information Dropout: Learning Optimal Representations Through Noisy Computation.

Authors:  Alessandro Achille; Stefano Soatto
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2018-01-10       Impact factor: 6.226

6.  Efficient compression in color naming and its evolution.

Authors:  Noga Zaslavsky; Charles Kemp; Terry Regier; Naftali Tishby
Journal:  Proc Natl Acad Sci U S A       Date:  2018-07-18       Impact factor: 11.205

7.  Toward a unified theory of efficient, predictive, and sparse coding.

Authors:  Matthew Chalk; Olivier Marre; Gašper Tkačik
Journal:  Proc Natl Acad Sci U S A       Date:  2017-12-19       Impact factor: 11.205

Cited by (2 in total)

1.  Information Bottleneck: Theory and Applications in Deep Learning.

Authors:  Bernhard C Geiger; Gernot Kubin
Journal:  Entropy (Basel)       Date:  2020-12-14       Impact factor: 2.524

2.  Artificial Intelligence Algorithm-Based MRI for Differentiation Diagnosis of Prostate Cancer.

Authors:  Rui Luo; Qingxiang Zeng; Huashan Chen
Journal:  Comput Math Methods Med       Date:  2022-06-28       Impact factor: 2.809

