Luis A. Pineda, Rafael Morales.
Abstract
The Entropic Associative Memory (EAM) holds declarative but distributed representations of remembered objects. These are characterized as functions from features to discrete values in an abstract amodal space. Memory objects are registered or remembered through a declarative operation; memory recognition is defined as a logical test, and cues of objects not contained in the memory are rejected directly without search; and memory retrieval is a constructive operation. In its original formulation, the content of the basic memory units or cells was either on or off, so all stored objects had the same weight or strength. In the present weighted version (W-EAM) we introduce a basic learning mechanism whereby the values of the cells used in the representation of an object are reinforced by the memory register operation. As memory cells are shared by different representations, the corresponding associations are reinforced too. The memory system supports a second form of learning: the distributed representation generalizes and renders a large set of potential or latent units that can be used for recognizing novel inputs, which in turn can be used to improve the performance both of the deep neural networks used for modelling perception and action and of the memory operations. This process can be performed recurrently in an open-ended fashion and can be used for long-term learning. An experiment in the phonetic domain was carried out using the Mexican Spanish DIMEx100 Corpus. This corpus was collected in a controlled noise-free environment and transcribed manually by trained human phoneticians, but it consists of a relatively small number of utterances. DIMEx100 was used to produce the initial state of the perceptual and motor modules and to test the performance of the memory system at that state. The incremental learning cycle was then modelled using the Spanish CIEMPIESS Corpus, which consists of a very large number of noisy, untagged speech utterances collected from radio and TV. The results support the viability of the Weighted Entropic Associative Memory for modelling cognitive processes such as phonetic representation and learning, for building applications such as speech recognition and synthesis, and as a computational model of natural memory.
Year: 2022 PMID: 36202864 PMCID: PMC9537336 DOI: 10.1038/s41598-022-20798-0
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
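To make the three memory operations concrete, the following Python sketch implements a weighted associative memory register along the lines described in the abstract. It is an illustrative reconstruction, not the authors' implementation: the grid layout, the unit-increment reinforcement rule, and the weight-proportional sampling are all assumptions based only on the description above.

```python
import numpy as np

class WEAM:
    """Minimal sketch of a weighted entropic associative memory register.

    The memory is an n x m grid: n features and m discrete values per
    feature (a function from features to values marks one cell per
    column). Weights start at zero; registering reinforces the cells a
    cue uses, so cells shared by different representations gain weight,
    reinforcing the corresponding associations too.
    """

    def __init__(self, n_features: int, n_values: int):
        self.weights = np.zeros((n_features, n_values), dtype=np.int64)

    def register(self, cue: np.ndarray) -> None:
        # Memory register: reinforce the cell holding each feature's value.
        self.weights[np.arange(len(cue)), cue] += 1

    def recognize(self, cue: np.ndarray) -> bool:
        # Memory recognition as a logical test, with no search: accept the
        # cue only if every feature/value cell it addresses is on.
        return bool(np.all(self.weights[np.arange(len(cue)), cue] > 0))

    def retrieve(self, cue: np.ndarray):
        # Constructive retrieval: if the cue is recognized, sample a value
        # for each feature in proportion to the learned cell weights, so
        # the result may be a novel object rather than a stored copy.
        # Assumes the cue assigns a value to every feature.
        if not self.recognize(cue):
            return None  # rejected directly, without search
        rng = np.random.default_rng()
        probs = self.weights / self.weights.sum(axis=1, keepdims=True)
        return np.array([rng.choice(len(p), p=p) for p in probs])
```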
Summary of differences between EAM/W-EAM and the associative memory models developed within the Artificial Neural Networks (ANNs) paradigm.
| Property | EAM and W-EAM | ANNs and related models |
|---|---|---|
| Representational format | Declarative but distributed, such that the relation between cells in AMRs and memory contents is not one-to-one: cells are shared by different representations | Sub-symbolic, embedded in numerical matrices |
| Memory operations | Declarative manipulations on cells and columns of AMRs | Matrix addition and multiplication operations |
| Productivity of representation | Productive: the distributed representation gives rise to emerging objects beyond the registered set | The memory is oriented to store and match patterns, and there is no productivity |
| Memory register | Produces the abstraction of the input cue with the content of the memory | Updates a numerical matrix of weights |
| Memory recognition | Tests the inclusion of the cue in the memory through logical material implication | Performs a numerical search until the cue and the product of a matrix operation converge |
| Memory retrieval | Constructive operation that produces novel objects on the basis of a complete or a partial cue and the memory content | Reproductive or photographic operation that reproduces a previously stored object on the basis of a complete or a partial cue |
| Demarcation between auto and hetero-associativity | Weak | Strong |
| Rejection of cues not contained in the memory | Direct without search | Rejection by failing to find, implementing a form of the closed-world assumption |
| Parallelism | Direct parallel manipulations of cells and columns | Parallel computation of matrix operations |
| Main functional parameter | Entropy (no energy function is used) | Energy function (the entropy has no functional role) |
| Memory capacity and use | A function of the entropy | Depends on the number of local minima of the energy function |
| Demand of memory and processing resources | Low | High |
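The recognition and rejection rows contrast a direct logical test with an iterative numerical search. The following side-by-side sketch is illustrative only: the Hopfield update rule stands in for the ANN column, and neither function is taken from the paper.

```python
import numpy as np

def eam_recognize(weights: np.ndarray, cue: np.ndarray) -> bool:
    # Direct logical test: the cue is accepted iff every feature/value
    # cell it addresses is on; rejection needs no search or iteration.
    return bool(np.all(weights[np.arange(len(cue)), cue] > 0))

def hopfield_recall(W: np.ndarray, cue: np.ndarray, max_iters: int = 100) -> np.ndarray:
    # ANN-style recall, for contrast: iterate a matrix operation until
    # the state stops changing; a cue can only be 'rejected' by failing
    # to settle on a stored pattern (a form of the closed-world assumption).
    state = np.where(cue >= 0, 1, -1)
    for _ in range(max_iters):
        nxt = np.where(W @ state >= 0, 1, -1)
        if np.array_equal(nxt, state):
            break
        state = nxt
    return state
```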
Figure 1. System architecture.
Consonants of the Mexbet-22 phonetic alphabet for Mexican Spanish.
Vowels of the Mexbet-22 phonetic alphabet for Mexican Spanish.
Parameter values of the six scenarios.
| Scenario | ι | κ | ξ | σ |
|---|---|---|---|---|
| I | 0 | 0 | 0 | 0.5 |
| II | 0.3 | 0 | 0 | 0.5 |
| III | 0 | 1.5 | 0 | 0.5 |
| IV | 0 | 0 | 0 | 0.1 |
| V | 0.3 | 1.5 | 0 | 0.1 |
| VI | 0.3 | 1.5 | 1 | 0.1 |
Figure 2. Memory scenario I.
Figure 3. Memory scenario V.
Figure 4. Phoneme Error Rate of the strings recognized by the AMRs and by the classifier, respectively, in relation to the correct string, for the six scenarios.
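The Phoneme Error Rate in Fig. 4, and the Net2Utt/Mem2Utt-style distances in the table below, are consistent with plain Levenshtein distances over phone sequences. Here is a self-contained sketch; treating 'r(' as a single Mexbet phone is an assumption based on the transcriptions shown.

```python
def tokenize(s: str) -> list[str]:
    # Split a Mexbet transcription into phones; 'r(' (the flap) is
    # assumed to be the only multi-character symbol that occurs here.
    phones, i = [], 0
    while i < len(s):
        if s.startswith("r(", i):
            phones.append("r(")
            i += 2
        else:
            phones.append(s[i])
            i += 1
    return phones

def edit_distance(a: list[str], b: list[str]) -> int:
    # Levenshtein distance: insertions, deletions and substitutions cost 1.
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        curr = [i]
        for j, pb in enumerate(b, 1):
            curr.append(min(prev[j] + 1, curr[-1] + 1, prev[j - 1] + (pa != pb)))
        prev = curr
    return prev[-1]

def per(hypothesis: str, reference: str) -> float:
    # Phoneme Error Rate: edit distance normalized by reference length.
    ref = tokenize(reference)
    return edit_distance(tokenize(hypothesis), ref) / len(ref)
```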
Example of a recognized utterance and the distances from the strings recognized by the memory and the network to the original phonetic transcription.
| Concept | String | Parameters |
|---|---|---|
| Utterance’s orthographic transcription | Este es el resultado de ese trabajo | |
| Phonetic transcription | esteselresultadodesetr(abaxo | Length: 27 |
| Network output | fpndddgieegeeeeeigggsssssttttdeeeeedsssddggeeeeer(r(llrrrrrrrraaaaaessssssffuuuubblllllllldpptttttr(r(aaaaaaaaaaaaooooooooodddddddddddeeeeeeeeeeeeeennngsssseeeeedppppppttr(r(r(r(aaaaaaabbboaaaaaaaaaxxxxxxxxxxxxxxxxxxooooooooafdsdddfpppp | Length: 225; Net2Utt: 199 |
| Memory output | dddieeeeeeiigsssstttdeeeeesssdieeeer(r(llr(rrrrrrraaaar(ssssuuuubbllllllllptttttr(r(aaaaaaaaaaaaoooooooooddddddddddeeeeeeeeeeessseeeeppppttr(r(r(r(aaaaaaabbbaaaaaaaaxxxxxxxxxxxxxxxoooooopp | Length: 178; Mem2Utt: 153; Net2Mem: 51 |
| Simplified network output | ndiegigstdeser(aesfubltr(r(addeensptr(r(abaxad | Length: 41; SNet2Utt: 20 |
| Simplified memory output | digstdeesdier(ar(ubltr(r(addeesetr(r(abaxop | Length: 37; SMem2Utt: 18; SNet2SMem: 14 |
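The "simplified" rows above compress the frame-level outputs to much shorter strings. One plausible reading, consistent with the reduced lengths shown, is that runs of identical consecutive phones are collapsed; the exact rule is not given here, and the paper may additionally filter short spurious runs.

```python
from itertools import groupby

def collapse_runs(phones: list[str]) -> list[str]:
    # Collapse each run of identical consecutive phones to one symbol,
    # e.g. ['t', 't', 'r(', 'r(', 'a'] -> ['t', 'r(', 'a'].
    return [phone for phone, _ in groupby(phones)]
```

Applied as collapse_runs(tokenize(memory_output)), this turns a frame-by-frame sequence into the kind of condensed string shown in the simplified rows.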
Figure 5. Corpora enrichment for the six scenarios.
Figure 6. Performance of the classifiers along the five learning stages for the six scenarios.
Figure 7. Performance of the autoencoders along the five learning stages for the six scenarios.
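Figures 5 to 7 trace the recurrent learning cycle described in the abstract: memory-accepted utterances from the untagged corpus enrich the training data, and the retrained perceptual models feed back into the memory. Below is a hedged sketch of that loop, building on the WEAM sketch above; the method names (encode, retrain) and the acceptance policy are illustrative assumptions, not the paper's procedure.

```python
def learning_cycle(memory, classifier, encode, untagged_corpus, stages=5):
    # Open-ended learning loop: at each stage, novel utterances the
    # memory accepts are registered (reinforcing shared cells) and added
    # to the corpus used to retrain the perceptual model.
    corpus = []
    for _ in range(stages):
        for utterance in untagged_corpus:
            cue = encode(utterance)      # features from the classifier
            if memory.recognize(cue):    # latent units accept novel input
                memory.register(cue)     # reinforce the cells the cue uses
                corpus.append(utterance)
        classifier.retrain(corpus)       # hypothetical retraining step
    return classifier, memory
```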
Figure 8. Scenario at the sixth stage of the learning process, built upon scenario V.
Figure 9. Phonetic memories at the initial (left) and final (right) states of the learning process for scenario V.