
Algorithmic Probability-Guided Machine Learning on Non-Differentiable Spaces.

Authors:  Santiago Hernández-Orozco; Hector Zenil; Jürgen Riedel; Adam Uccello; Narsis A Kiani; Jesper Tegnér

Abstract

We show how complexity theory can be introduced in machine learning to help bring together apparently disparate areas of current research. We show that this model-driven approach may require less training data and can potentially be more generalizable, as it shows greater resilience to random attacks. In an algorithmic space, the order of its elements is given by their algorithmic probability, which arises naturally from computable processes. We investigate the shape of a discrete algorithmic space when performing regression or classification using a loss function parametrized by algorithmic complexity, demonstrating that differentiability is not required to achieve results similar to those obtained using differentiable programming approaches such as deep learning. In doing so we use examples which enable the two approaches to be compared (small, given the computational power required for estimations of algorithmic complexity). We find and report that 1) machine learning can successfully be performed on a non-smooth surface using algorithmic complexity; 2) solutions can be found using an algorithmic-probability classifier, establishing a bridge between a fundamentally discrete theory of computability and a fundamentally continuous mathematical theory of optimization methods; 3) an algorithmically directed search technique over non-smooth manifolds can be defined and conducted; and 4) exploitation techniques and numerical methods for algorithmic search can navigate these discrete non-differentiable spaces. We apply these methods to (a) the identification of generative rules from data observations; (b) image classification problems, yielding solutions more resilient to pixel attacks than neural networks; (c) the identification of equation parameters from a small dataset in a continuous ODE system in the presence of noise; and (d) the classification of Boolean NK networks by (i) network topology, (ii) underlying Boolean function, and (iii) number of incoming edges.
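The abstract's central idea, a loss function parametrized by algorithmic complexity that is minimized by discrete search rather than gradient descent, can be illustrated with a minimal sketch. This is not the authors' method: it substitutes zlib compressed length as a crude, computable stand-in for the paper's algorithmic-complexity estimates (which use CTM/BDM), and the target string, penalty weight, and greedy bit-flip search are illustrative assumptions only.

```python
# Sketch of complexity-guided search over a discrete, non-differentiable space.
# Assumption: compressed length approximates algorithmic complexity (a crude
# proxy; the paper uses CTM/BDM estimates instead of compression).
import random
import zlib


def complexity(bits: str) -> int:
    """Complexity proxy: length in bytes after zlib compression."""
    return len(zlib.compress(bits.encode()))


def loss(candidate: str, target: str, lam: float = 0.5) -> float:
    """Fit error (Hamming distance) plus a complexity penalty.
    Defined pointwise on bitstrings; no gradient exists or is needed."""
    error = sum(a != b for a, b in zip(candidate, target))
    return error + lam * complexity(candidate)


def search(target: str, steps: int = 5000, seed: int = 0) -> str:
    """Greedy single-bit-flip search: accept a flip whenever it does
    not increase the loss. Purely discrete, derivative-free descent."""
    rng = random.Random(seed)
    current = "".join(rng.choice("01") for _ in target)
    for _ in range(steps):
        i = rng.randrange(len(current))
        flipped = current[:i] + ("1" if current[i] == "0" else "0") + current[i + 1:]
        if loss(flipped, target) <= loss(current, target):
            current = flipped
    return current


if __name__ == "__main__":
    target = "0101010101010101"  # a highly regular, low-complexity string
    found = search(target)
    print(found, loss(found, target))
```

Because a correct bit flip always lowers the error term by 1 while the compression penalty changes only slightly, the search descends toward the target without any notion of a derivative, which is the point being illustrated.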
Copyright © 2021 Hernández-Orozco, Zenil, Riedel, Uccello, Kiani and Tegnér.

Keywords:  algorithmic causality; explainable AI; generative mechanisms; non-differentiable machine learning; program synthesis

Year:  2021        PMID: 33733213      PMCID: PMC7944352          DOI: 10.3389/frai.2020.567356

Source DB:  PubMed          Journal:  Front Artif Intell        ISSN: 2624-8212


References:  5 in total

1.  Metabolic stability and epigenesis in randomly constructed genetic nets.

Authors:  S A Kauffman
Journal:  J Theor Biol       Date:  1969-03       Impact factor: 2.691

2.  A Decomposition Method for Global Evaluation of Shannon Entropy and Local Estimations of Algorithmic Complexity.

Authors:  Hector Zenil; Santiago Hernández-Orozco; Narsis A Kiani; Fernando Soler-Toscano; Antonio Rueda-Toicen; Jesper Tegnér
Journal:  Entropy (Basel)       Date:  2018-08-15       Impact factor: 2.524

3.  Calculating Kolmogorov complexity from the output frequency distributions of small Turing machines.

Authors:  Fernando Soler-Toscano; Hector Zenil; Jean-Paul Delahaye; Nicolas Gauvrit
Journal:  PLoS One       Date:  2014-05-08       Impact factor: 3.240

4.  Input-output maps are strongly biased towards simple outputs.

Authors:  Kamaludin Dingle; Chico Q Camargo; Ard A Louis
Journal:  Nat Commun       Date:  2018-02-22       Impact factor: 14.919

5.  Algorithmically probable mutations reproduce aspects of evolution, such as convergence rate, genetic memory and modularity.

Authors:  Santiago Hernández-Orozco; Narsis A Kiani; Hector Zenil
Journal:  R Soc Open Sci       Date:  2018-08-29       Impact factor: 2.963
