Bringing Your Own View: Graph Contrastive Learning without Prefabricated Data Augmentations.

Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen.

Abstract

Self-supervision has recently been surging at its new frontier of graph learning. It facilitates graph representations beneficial to downstream tasks, but its success can hinge on domain knowledge for handcrafting augmentations or on often-expensive trial and error. Even its state-of-the-art representative, graph contrastive learning (GraphCL), is not completely free of those needs, as it relies on a prefabricated prior reflected in the ad-hoc manual selection of graph data augmentations. Our work aims at advancing GraphCL by answering the following questions: How can the space of graph augmented views be represented? What principle can be relied upon to learn a prior in that space? And what framework can be constructed to learn the prior in tandem with contrastive learning? Accordingly, we extend the prefabricated discrete prior over the augmentation set to a learnable continuous prior in the parameter space of graph generators, assuming that graph priors per se, similar to the concept of image manifolds, can be learned by data generation. Furthermore, to form contrastive views without collapsing to trivial solutions due to the learnability of the prior, we leverage both the information minimization (InfoMin) and information bottleneck (InfoBN) principles to regularize the learned priors. Finally, contrastive learning, InfoMin, and InfoBN are incorporated organically into one bi-level optimization framework. Our principled and automated approach has proven competitive against state-of-the-art graph self-supervision methods, including GraphCL, on benchmarks of small graphs, and has shown even better generalizability on large-scale graphs, without resorting to human expertise or downstream validation. Our code is publicly released at https://github.com/Shen-Lab/GraphCL_Automated.
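
The framework described in the abstract can be pictured as a bi-level loop: an encoder minimizes a contrastive loss over two generated views, while the view generators, which play the role of the learnable continuous prior, are trained against an InfoMin-style objective that discourages the two views from sharing redundant information. Below is a minimal, heavily simplified sketch of that alternating loop. It assumes toy feature vectors in place of actual graphs and graph generators; the ViewGenerator module, all names, and all hyperparameters are hypothetical, the InfoBN regularizer is omitted, and nothing here reproduces the authors' released implementation (see the GitHub link above for that).

```python
# Minimal sketch of bi-level contrastive training with learned views.
# Hypothetical simplification; NOT the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """Standard NT-Xent contrastive loss between two batches of views."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    sim = z1 @ z2.t() / tau                  # pairwise cosine similarities
    targets = torch.arange(z1.size(0))       # positive pairs on the diagonal
    return F.cross_entropy(sim, targets)

class ViewGenerator(nn.Module):
    """Stands in for a graph generator whose parameters define a learnable,
    continuous augmentation prior (here: a noisy feature transform)."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(),
                                 nn.Linear(dim, dim))
    def forward(self, x):
        return self.net(x) + 0.1 * torch.randn_like(x)

dim, batch = 32, 64
encoder = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
gen1, gen2 = ViewGenerator(dim), ViewGenerator(dim)
opt_enc = torch.optim.Adam(encoder.parameters(), lr=1e-3)
opt_gen = torch.optim.Adam(list(gen1.parameters()) + list(gen2.parameters()),
                           lr=1e-3)

for step in range(100):
    x = torch.randn(batch, dim)              # toy inputs standing in for graphs
    # Lower level: the encoder minimizes the contrastive loss on the
    # generated views (generators are frozen via detach here).
    loss_enc = nt_xent(encoder(gen1(x).detach()), encoder(gen2(x).detach()))
    opt_enc.zero_grad(); loss_enc.backward(); opt_enc.step()
    # Upper level (InfoMin flavor): the generators are pushed to reduce the
    # information shared by the two views, i.e. to *maximize* the same loss.
    loss_gen = -nt_xent(encoder(gen1(x)), encoder(gen2(x)))
    opt_gen.zero_grad(); loss_gen.backward(); opt_gen.step()
```

In the paper itself the views come from graph generative models whose parameters constitute the learned prior, and the InfoBN principle supplies a second regularizer alongside InfoMin; the sketch above only illustrates the alternating, bi-level structure of the optimization.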

Keywords:  Graph contrastive learning; graph generative model; information bottleneck; information minimization

Year:  2022        PMID: 35647617      PMCID: PMC9130056          DOI: 10.1145/3488560.3498416

Source DB:  PubMed          Journal:  Proc Int Conf Web Search Data Min


References (4 in total)

1.  Self-Supervised Learning of Graph Neural Networks: A Unified Review.

Authors:  Yaochen Xie; Zhao Xu; Jingtun Zhang; Zhengyang Wang; Shuiwang Ji
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2022-04-27       Impact factor: 6.226

2.  When Does Self-Supervision Help Graph Convolutional Networks?

Authors:  Yuning You; Tianlong Chen; Zhangyang Wang; Yang Shen
Journal:  Proc Mach Learn Res       Date:  2020-07

3.  ZINC 15--Ligand Discovery for Everyone.

Authors:  Teague Sterling; John J Irwin
Journal:  J Chem Inf Model       Date:  2015-11-09       Impact factor: 4.956

4.  MoleculeNet: a benchmark for molecular machine learning.

Authors:  Zhenqin Wu; Bharath Ramsundar; Evan N Feinberg; Joseph Gomes; Caleb Geniesse; Aneesh S Pappu; Karl Leswing; Vijay Pande
Journal:  Chem Sci       Date:  2017-10-31       Impact factor: 9.825

Cited by (1 in total)

1.  Cross-modality and self-supervised protein embedding for compound-protein affinity and contact prediction.

Authors:  Yuning You; Yang Shen
Journal:  Bioinformatics       Date:  2022-09-16       Impact factor: 6.931
