Benchmarking Attention-Based Interpretability of Deep Learning in Multivariate Time Series Predictions.

Domjan Barić; Petar Fumić; Davor Horvatić; Tomislav Lipic

Abstract

The adoption of deep learning models within safety-critical systems cannot rely only on good prediction performance but needs to provide interpretable and robust explanations for their decisions. When modeling complex sequences, attention mechanisms are regarded as the established approach to support deep neural networks with intrinsic interpretability. This paper focuses on the emerging trend of specifically designing diagnostic datasets for understanding the inner workings of attention-mechanism-based deep learning models for multivariate forecasting tasks. We design a novel benchmark of synthetically designed datasets with a transparent underlying generating process of multiple time series interactions with increasing complexity. The benchmark enables empirical evaluation of the performance of attention-based deep neural networks in three different aspects: (i) prediction performance score, (ii) interpretability correctness, and (iii) sensitivity analysis. Our analysis shows that although most models have satisfying and stable prediction performance results, they often fail to give correct interpretability. The only model with both a satisfying performance score and correct interpretability is IMV-LSTM, capturing both autocorrelations and cross-correlations between multiple time series. Interestingly, when evaluating IMV-LSTM on simulated data from statistical and mechanistic models, the correctness of interpretability increases with more complex datasets.

Keywords:  attention mechanism; interpretability; multivariate time series; synthetically designed datasets

Year:  2021        PMID: 33503822     DOI: 10.3390/e23020143

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


Related articles:  5 in total

1.  Medical Professional Enhancement Using Explainable Artificial Intelligence in Fetal Cardiac Ultrasound Screening.

Authors:  Akira Sakai; Masaaki Komatsu; Reina Komatsu; Ryu Matsuoka; Suguru Yasutomi; Ai Dozen; Kanto Shozu; Tatsuya Arakaki; Hidenori Machino; Ken Asada; Syuzo Kaneko; Akihiko Sekizawa; Ryuji Hamamoto
Journal:  Biomedicines       Date:  2022-02-25

2.  Human-Centric AI: The Symbiosis of Human and Artificial Intelligence.

Authors:  Davor Horvatić; Tomislav Lipic
Journal:  Entropy (Basel)       Date:  2021-03-11       Impact factor: 2.524

3.  Prediction of Time Series Gene Expression and Structural Analysis of Gene Regulatory Networks Using Recurrent Neural Networks.

Authors:  Michele Monti; Jonathan Fiorentino; Edoardo Milanetti; Giorgio Gosti; Gian Gaetano Tartaglia
Journal:  Entropy (Basel)       Date:  2022-01-18       Impact factor: 2.524

4.  Attention-Based Deep Recurrent Neural Network to Forecast the Temperature Behavior of an Electric Arc Furnace Side-Wall.

Authors:  Diego F Godoy-Rojas; Jersson X Leon-Medina; Bernardo Rueda; Whilmar Vargas; Juan Romero; Cesar Pedraza; Francesc Pozo; Diego A Tibaduiza
Journal:  Sensors (Basel)       Date:  2022-02-12       Impact factor: 3.576

5.  Artificial Intelligence-Based Human-Computer Interaction Technology Applied in Consumer Behavior Analysis and Experiential Education.

Authors:  Yanmin Li; Ziqi Zhong; Fengrui Zhang; Xinjie Zhao
Journal:  Front Psychol       Date:  2022-04-06
