
Neural network interpretation using descrambler groups.

Jake L Amey, Jake Keeley, Tajwar Choudhury, Ilya Kuprov

Abstract

The lack of interpretability and trust is a much-criticized feature of deep neural networks. In fully connected nets, the signaling between inner layers is scrambled because backpropagation training does not require perceptrons to be arranged in any particular order. The result is a black box; this problem is particularly severe in scientific computing and digital signal processing (DSP), where neural nets perform abstract mathematical transformations that do not reduce to features or concepts. We present here a group-theoretical procedure that attempts to bring inner-layer signaling into a human-readable form, the assumption being that this form exists and has identifiable and quantifiable features, for example, smoothness or locality. We applied the proposed method to DEERNet (a DSP network used in electron spin resonance) and managed to descramble it. We found considerable internal sophistication: the network spontaneously invents a bandpass filter, a notch filter, a frequency axis rescaling transformation, frequency-division multiplexing, group embedding, spectral filtering regularization, and a map from harmonic functions into Chebyshev polynomials, all in 10 min of unattended training from a random initial guess.
Copyright © 2021 the Author(s). Published by PNAS.
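The descrambling idea in the abstract can be sketched numerically: because backpropagation fixes no ordering of hidden units, one may insert an orthogonal matrix between layers (leaving the network's function unchanged) and optimize it against a readability criterion such as row-to-row smoothness of the weight matrix. The toy weight matrix, the specific smoothness criterion, the skew-symmetric/matrix-exponential parameterization of the orthogonal group, and the BFGS optimizer below are all illustrative assumptions, not the authors' actual code or network:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)

n, m = 8, 32
t = np.linspace(0.0, 1.0, m)
# A hidden "readable" weight matrix: adjacent rows are similar (smooth),
# like the sinusoidal patterns one would expect in a Fourier-type transform.
W_readable = np.sin(2.0 * np.pi * np.outer(np.linspace(1.0, 3.0, n), t))

# Backprop fixes no perceptron order: model that as a random row permutation.
W_scrambled = W_readable[rng.permutation(n)]

def roughness(W):
    """Sum of squared differences between adjacent rows (smaller = smoother)."""
    return float(np.sum((W[1:] - W[:-1]) ** 2))

def to_orthogonal(params):
    """Map n(n-1)/2 parameters to an orthogonal matrix: exp of a skew-symmetric generator."""
    A = np.zeros((n, n))
    A[np.triu_indices(n, k=1)] = params
    return expm(A - A.T)

def objective(params):
    # Readability criterion evaluated on the descrambled weights Q @ W.
    return roughness(to_orthogonal(params) @ W_scrambled)

# Optimize over the orthogonal group, starting from the identity (params = 0).
res = minimize(objective, np.zeros(n * (n - 1) // 2), method="BFGS")
Q = to_orthogonal(res.x)
W_descrambled = Q @ W_scrambled

print("roughness before:", roughness(W_scrambled))
print("roughness after: ", roughness(W_descrambled))
```

Since the inverse of the scrambling permutation is itself orthogonal, the global minimum of this objective is at most the roughness of the original readable matrix; in practice a local optimizer is only guaranteed to reduce the criterion, not to recover the exact permutation.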


Keywords:  digital signal processing; electron spin resonance; interpretability; machine learning

Year:  2021        PMID: 33500352      PMCID: PMC7865153          DOI: 10.1073/pnas.2016917118

Source DB:  PubMed          Journal:  Proc Natl Acad Sci U S A        ISSN: 0027-8424            Impact factor:   11.205


  3 in total

1.  DEER distance measurements on proteins.

Authors:  Gunnar Jeschke
Journal:  Annu Rev Phys Chem       Date:  2012-01-30       Impact factor: 12.703

2.  Modeling of the N-terminal Section and the Lumenal Loop of Trimeric Light Harvesting Complex II (LHCII) by Using EPR.

Authors:  Niklas Fehr; Carsten Dietz; Yevhen Polyhach; Tona von Hagens; Gunnar Jeschke; Harald Paulsen
Journal:  J Biol Chem       Date:  2015-08-27       Impact factor: 5.157

3.  Deep neural network processing of DEER data.

Authors:  Steven G Worswick; James A Spencer; Gunnar Jeschke; Ilya Kuprov
Journal:  Sci Adv       Date:  2018-08-24       Impact factor: 14.136

  6 in total

1.  Towards autonomous analysis of chemical exchange saturation transfer experiments using deep neural networks.

Authors:  Gogulan Karunanithy; Tairan Yuwen; Lewis E Kay; D Flemming Hansen
Journal:  J Biomol NMR       Date:  2022-05-27       Impact factor: 2.582

2.  (Review) Is There a Need for a More Precise Description of Biomolecule Interactions to Understand Cell Function?

Authors:  Pierre Bongrand
Journal:  Curr Issues Mol Biol       Date:  2022-01-21       Impact factor: 2.976

3.  FID-Net: A versatile deep neural network architecture for NMR spectral reconstruction and virtual decoupling.

Authors:  Gogulan Karunanithy; D Flemming Hansen
Journal:  J Biomol NMR       Date:  2021-04-19       Impact factor: 2.835

4.  Protein functional dynamics from the rigorous global analysis of DEER data: Conditions, components, and conformations.

Authors:  Eric J Hustedt; Richard A Stein; Hassane S Mchaourab
Journal:  J Gen Physiol       Date:  2021-09-16       Impact factor: 4.086

5.  Cross-validation of distance measurements in proteins by PELDOR/DEER and single-molecule FRET.

Authors:  Martin F Peter; Christian Gebhardt; Rebecca Mächtel; Gabriel G Moya Muñoz; Janin Glaenzer; Alessandra Narducci; Gavin H Thomas; Thorben Cordes; Gregor Hagelueken
Journal:  Nat Commun       Date:  2022-07-29       Impact factor: 17.694

6.  Deep-Learning-Assisted Focused Ion Beam Nanofabrication.

Authors:  Oleksandr Buchnev; James A Grant-Jacob; Robert W Eason; Nikolay I Zheludev; Ben Mills; Kevin F MacDonald
Journal:  Nano Lett       Date:  2022-03-24       Impact factor: 12.262

