
Can contrastive learning avoid shortcut solutions?

Joshua Robinson, Li Sun, Ke Yu, Kayhan Batmanghelich, Stefanie Jegelka, Suvrit Sra.

Abstract

The generalization of representations learned via contrastive learning depends crucially on which features of the data are extracted. However, we observe that the contrastive loss does not always sufficiently guide which features are extracted, a behavior that can negatively impact performance on downstream tasks via "shortcuts", i.e., by inadvertently suppressing important predictive features. We find that feature extraction is influenced by the difficulty of the so-called instance discrimination task (i.e., the task of discriminating pairs of similar points from pairs of dissimilar ones). Although harder pairs improve the representation of some features, the improvement comes at the cost of suppressing previously well-represented features. In response, we propose implicit feature modification (IFM), a method for altering positive and negative samples in order to guide contrastive models towards capturing a wider variety of predictive features. Empirically, we observe that IFM reduces feature suppression and, as a result, improves performance on vision and medical imaging tasks. Code is available at: https://github.com/joshr17/IFM.
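As a hedged illustration of the abstract's core idea: IFM can be expressed in closed form on the InfoNCE loss by subtracting a perturbation budget ε from the positive logit and adding ε to each negative logit, which makes the instance discrimination task adversarially harder. The sketch below is a minimal NumPy version under that assumption; the function and variable names are illustrative and not taken from the released code at the URL above.

```python
import numpy as np

def info_nce(anchor, positive, negatives, temperature=0.5, epsilon=0.0):
    """InfoNCE loss with optional implicit feature modification (IFM).

    Hedged sketch: epsilon > 0 lowers the positive logit and raises each
    negative logit, the closed-form effect of adversarially perturbing
    embeddings within an epsilon budget. epsilon = 0 recovers plain InfoNCE.
    Inputs are assumed to be unit-normalized embedding vectors.
    """
    pos_logit = anchor @ positive / temperature - epsilon       # scalar
    neg_logits = negatives @ anchor / temperature + epsilon     # (num_neg,)
    logits = np.concatenate(([pos_logit], neg_logits))
    # Numerically stable -log softmax of the positive entry.
    m = logits.max()
    return -(pos_logit - m) + np.log(np.exp(logits - m).sum())

# Usage: the IFM-modified loss is strictly larger than the plain loss,
# reflecting the harder discrimination task the method induces.
rng = np.random.default_rng(0)
unit = lambda v: v / np.linalg.norm(v)
a, p = unit(rng.normal(size=8)), unit(rng.normal(size=8))
negs = rng.normal(size=(5, 8))
negs /= np.linalg.norm(negs, axis=1, keepdims=True)
plain = info_nce(a, p, negs)
harder = info_nce(a, p, negs, epsilon=0.1)
```

Whether ε is applied before or after the temperature division is a convention choice; the sketch applies it to the scaled logits for simplicity.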

Year:  2021        PMID: 35546903      PMCID: PMC9089441     

Source DB:  PubMed          Journal:  Adv Neural Inf Process Syst        ISSN: 1049-5258


  3 in total

1.  Genetic epidemiology of COPD (COPDGene) study design.

Authors:  Elizabeth A Regan; John E Hokanson; James R Murphy; Barry Make; David A Lynch; Terri H Beaty; Douglas Curran-Everett; Edwin K Silverman; James D Crapo
Journal:  COPD       Date:  2010-02       Impact factor: 2.409

2.  Context Matters: Graph-based Self-supervised Representation Learning for Medical Images.

Authors:  Li Sun; Ke Yu; Kayhan Batmanghelich
Journal:  Proc Conf AAAI Artif Intell       Date:  2021-02
