
An Entropy Metric for Regular Grammar Classification and Learning with Recurrent Neural Networks.

Kaixuan Zhang, Qinglong Wang, C. Lee Giles.

Abstract

Recently, there has been a resurgence of formal language theory in deep learning research. However, most of this research has focused on the practical problem of representing symbolic knowledge with machine learning models; comparatively little has explored the fundamental connection between formal languages and neural networks. To better understand the internal structure of regular grammars and their corresponding complexity, we focus on categorizing regular grammars using both theoretical analysis and empirical evidence. Specifically, motivated by the concentric ring representation, we relax the original order information and introduce an entropy metric that describes the complexity of different regular grammars. Based on this metric, we categorize regular grammars into three disjoint subclasses: polynomial, exponential, and proportional. In addition, we provide several classification theorems for different representations of regular grammars. Our analysis is validated by examining the process of learning grammars with multiple recurrent neural networks. The results show that, as expected, more complex grammars are generally more difficult to learn.
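The record does not reproduce the paper's entropy metric itself, but the polynomial/exponential dichotomy described in the abstract matches the classical entropy of a regular language, h = lim_{n→∞} log2 |L_n| / n, where |L_n| counts the length-n strings accepted by the grammar's DFA. The Python sketch below illustrates that classical quantity only; the DFA encoding and the names count_accepted and entropy_estimate are hypothetical, not the authors' implementation.

# Minimal sketch of the classical entropy of a regular language,
# h = lim log2|L_n| / n: h = 0 corresponds to polynomial growth of
# the language, h > 0 to exponential growth.
from typing import Dict, Set, Tuple
import math

def count_accepted(delta: Dict[Tuple[int, str], int],
                   start: int, accept: Set[int], n: int) -> int:
    """Count the length-n strings accepted by the DFA (delta, start, accept)."""
    counts = {start: 1}                       # paths of length 0 per state
    for _ in range(n):
        nxt: Dict[int, int] = {}
        for (state, _sym), target in delta.items():
            if state in counts:
                nxt[target] = nxt.get(target, 0) + counts[state]
        counts = nxt
    return sum(c for s, c in counts.items() if s in accept)

def entropy_estimate(delta: Dict[Tuple[int, str], int],
                     start: int, accept: Set[int], n: int = 40) -> float:
    """Finite-n estimate of h = log2 |L_n| / n."""
    ln = count_accepted(delta, start, accept, n)
    return math.log2(ln) / n if ln > 0 else 0.0

# Example: binary strings with no "00" substring (missing edges reject).
delta = {(0, '1'): 0, (0, '0'): 1, (1, '1'): 0}
print(entropy_estimate(delta, start=0, accept={0, 1}))

For this example the estimate approaches log2 of the golden ratio (about 0.694), i.e., exponential growth; a polynomially growing language such as 0*1* would give an estimate tending to 0.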

Keywords:  complexity analysis; entropy; recurrent neural network; regular grammar classification

Year:  2021        PMID: 33478020      PMCID: PMC7835824          DOI: 10.3390/e23010127

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


References (7 in total)

1.  First-order versus second-order single-layer recurrent neural networks.

Authors:  M W Goudreau; C L Giles; S T Chakradhar; D Chen
Journal:  IEEE Trans Neural Netw       Date:  1994

2.  A logical calculus of the ideas immanent in nervous activity. 1943.

Authors:  W S McCulloch; W Pitts
Journal:  Bull Math Biol       Date:  1990       Impact factor: 1.758

3.  The Kernel Adaptive Autoregressive-Moving-Average Algorithm.

Authors:  Kan Li; José C Príncipe
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2015-04-28       Impact factor: 10.451

4.  An Empirical Evaluation of Rule Extraction from Recurrent Neural Networks.

Authors:  Qinglong Wang; Kaixuan Zhang; Alexander G Ororbia II; Xinyu Xing; Xue Liu; C Lee Giles
Journal:  Neural Comput       Date:  2018-07-18       Impact factor: 2.026

5.  Stable encoding of large finite-state automata in recurrent neural networks with sigmoid discriminants.

Authors:  C W Omlin; C L Giles
Journal:  Neural Comput       Date:  1996-05-15       Impact factor: 2.026

6.  Learning With Interpretable Structure From Gated RNN.

Authors:  Bo-Jian Hou; Zhi-Hua Zhou
Journal:  IEEE Trans Neural Netw Learn Syst       Date:  2020-02-13       Impact factor: 10.451

7.  Shapley Homology: Topological Analysis of Sample Influence for Neural Networks.

Authors:  Kaixuan Zhang; Qinglong Wang; Xue Liu; C Lee Giles
Journal:  Neural Comput       Date:  2020-05-20       Impact factor: 2.026
