
There Is Hope After All: Quantifying Opinion and Trustworthiness in Neural Networks.

Mingxi Cheng, Shahin Nazarian, Paul Bogdan.

Abstract

Artificial Intelligence (AI) plays a fundamental role in the modern world, especially when used as an autonomous decision maker. A common concern is how trustworthy these AIs are. Human operators follow a strict educational curriculum and performance assessment that can be exploited to quantify how much trust we place in them. To quantify the trust of AI decision makers, we must go beyond task accuracy, especially when facing limited, incomplete, misleading, controversial, or noisy datasets. Toward addressing these challenges, we describe DeepTrust, a Subjective Logic (SL)-inspired framework that constructs a probabilistic logic description of an AI algorithm and accounts for the trustworthiness of both the dataset and the inner algorithmic workings. DeepTrust identifies multi-layered neural network (NN) topologies with high projected trust probabilities, even when trained with untrusted data. We show that an uncertain opinion of the data is not always harmful when evaluating an NN's opinion and trustworthiness, whereas a disbelief opinion hurts trust the most; moreover, trust probability does not necessarily correlate with accuracy. DeepTrust also provides a projected trust probability for an NN's prediction, which is useful when the NN generates an over-confident output from problematic datasets. These findings open new analytical avenues for designing and improving NN topologies by optimizing opinion and trustworthiness, along with accuracy, in a multi-objective optimization formulation subject to space and time constraints.
Copyright © 2020 Cheng, Nazarian and Bogdan.
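The abstract's distinction between uncertainty and disbelief follows standard Subjective Logic, where a binomial opinion (belief b, disbelief d, uncertainty u, base rate a, with b + d + u = 1) projects to a trust probability P = b + a·u. The sketch below is a minimal illustration of that projection, not the authors' DeepTrust implementation; the class and example opinion values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Opinion:
    """A Subjective Logic binomial opinion about trustworthiness."""
    belief: float       # b: evidence supporting trust
    disbelief: float    # d: evidence against trust
    uncertainty: float  # u: absence of evidence
    base_rate: float    # a: prior trust probability

    def __post_init__(self):
        # An opinion is valid only if b + d + u = 1.
        assert abs(self.belief + self.disbelief + self.uncertainty - 1.0) < 1e-9

    def projected_probability(self) -> float:
        # Standard SL projection: P = b + a * u.
        # Uncertainty falls back on the prior; disbelief contributes nothing.
        return self.belief + self.base_rate * self.uncertainty

# A fully uncertain (vacuous) opinion projects to the base rate,
# while a disbelief-dominated opinion drags projected trust down,
# matching the abstract's claim that disbelief hurts trust the most.
vacuous = Opinion(belief=0.0, disbelief=0.0, uncertainty=1.0, base_rate=0.5)
distrusted = Opinion(belief=0.1, disbelief=0.7, uncertainty=0.2, base_rate=0.5)
print(vacuous.projected_probability())     # 0.5
print(distrusted.projected_probability())  # 0.2
```

This illustrates why an uncertain opinion of the data "is not always harmful": uncertainty leaves room for the prior, whereas disbelief directly caps the projected trust probability.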

Keywords:  artificial intelligence; deep neural networks; machine learning; subjective logic; trust in AI

Year:  2020        PMID: 33733171      PMCID: PMC7861320          DOI: 10.3389/frai.2020.00054

Source DB:  PubMed          Journal:  Front Artif Intell        ISSN: 2624-8212


  1 in total

1.  The known unknowns: neural representation of second-order uncertainty, and ambiguity.

Authors:  Dominik R Bach; Oliver Hulme; William D Penny; Raymond J Dolan
Journal:  J Neurosci       Date:  2011-03-30       Impact factor: 6.167

  2 in total

1.  Interpretability and fairness evaluation of deep learning models on MIMIC-IV dataset.

Authors:  Chuizheng Meng; Loc Trinh; Nan Xu; James Enouen; Yan Liu
Journal:  Sci Rep       Date:  2022-05-03       Impact factor: 4.996

2.  Reduced false positives in autism screening via digital biomarkers inferred from deep comorbidity patterns.

Authors:  Dmytro Onishchenko; Yi Huang; James van Horne; Peter J Smith; Michael E Msall; Ishanu Chattopadhyay
Journal:  Sci Adv       Date:  2021-10-06       Impact factor: 14.136

