Literature DB >> 33501243

In Search of Trustworthy and Transparent Intelligent Systems With Human-Like Cognitive and Reasoning Capabilities.

Nikhil R Pal

Abstract

At present we are witnessing tremendous interest in Artificial Intelligence (AI), particularly in Deep Learning (DL)/Deep Neural Networks (DNNs). One reason appears to be the unmatched performance achieved by such systems. This has resulted in enormous hope in these techniques, and they are often viewed as all-cure solutions. But most such systems cannot explain why a particular decision is made (they are black boxes), and they sometimes fail miserably in cases where other systems would not. Consequently, in critical applications such as healthcare and defense, practitioners are reluctant to trust such systems. Although AI systems are often designed taking inspiration from the brain, there has been little attempt to exploit cues from the brain in the true sense. In our opinion, to realize intelligent systems with human-like reasoning ability, we need to exploit knowledge from brain science. Here we discuss a few findings in brain science that may help in designing intelligent systems. We explain the relevance of transparency, explainability, learning from a few examples, and the trustworthiness of an AI system. We also discuss a few ways that may help to achieve these attributes in a learning system.
Copyright © 2020 Pal.

Keywords:  Artificial Intelligence; Deep Neural Networks; deep learning; explainable AI; machine learning; sustainable AI; trustworthy AI

Year:  2020        PMID: 33501243      PMCID: PMC7806014          DOI: 10.3389/frobt.2020.00076

Source DB:  PubMed          Journal:  Front Robot AI        ISSN: 2296-9144


References: 22 in total

1.  Probability Models for Open Set Recognition.

Authors:  Walter J Scheirer; Lalit P Jain; Terrance E Boult
Journal:  IEEE Trans Pattern Anal Mach Intell       Date:  2014-11       Impact factor: 6.226

2.  Long short-term memory.

Authors:  S Hochreiter; J Schmidhuber
Journal:  Neural Comput       Date:  1997-11-15       Impact factor: 2.026

3.  Visualizing the Hidden Activity of Artificial Neural Networks.

Authors:  Paulo E Rauber; Samuel G Fadel; Alexandre X Falcao; Alexandru C Telea
Journal:  IEEE Trans Vis Comput Graph       Date:  2017-01       Impact factor: 4.579

4.  [Review] Using goal-driven deep learning models to understand sensory cortex.

Authors:  Daniel L K Yamins; James J DiCarlo
Journal:  Nat Neurosci       Date:  2016-03       Impact factor: 24.884

5.  Recognition-by-components: a theory of human image understanding.

Authors:  Irving Biederman
Journal:  Psychol Rev       Date:  1987-04       Impact factor: 8.934

6.  How does the brain solve visual object recognition?

Authors:  James J DiCarlo; Davide Zoccolan; Nicole C Rust
Journal:  Neuron       Date:  2012-02-09       Impact factor: 17.173

7.  Human-level concept learning through probabilistic program induction.

Authors:  Brenden M Lake; Ruslan Salakhutdinov; Joshua B Tenenbaum
Journal:  Science       Date:  2015-12-11       Impact factor: 47.728

8.  Identifying Sparse Connectivity Patterns in the brain using resting-state fMRI.

Authors:  Harini Eavani; Theodore D Satterthwaite; Roman Filipovych; Raquel E Gur; Ruben C Gur; Christos Davatzikos
Journal:  Neuroimage       Date:  2014-10-02       Impact factor: 6.556

9.  Interactive machine learning for health informatics: when do we need the human-in-the-loop?

Authors:  Andreas Holzinger
Journal:  Brain Inform       Date:  2016-03-02

10.  Activations of deep convolutional neural networks are aligned with gamma band activity of human visual cortex.

Authors:  Ilya Kuzovkin; Raul Vicente; Mathilde Petton; Jean-Philippe Lachaux; Monica Baciu; Philippe Kahane; Sylvain Rheims; Juan R Vidal; Jaan Aru
Journal:  Commun Biol       Date:  2018-08-08
