
Massively parallel architectures for large scale neural network simulations.

Y Fujimoto, N Fukuda, T Akabane.

Abstract

A toroidal lattice architecture (TLA) and a planar lattice architecture (PLA) are proposed as massively parallel neurocomputer architectures for large-scale simulations. The performance of these architectures is almost proportional to the number of node processors, and they adopt the most efficient two-dimensional processor connections for wafer-scale integration (WSI) implementation. They also address the connectivity problem, the performance degradation caused by the data-transmission bottleneck, and the load-balancing problem for efficient parallel processing in large-scale neural network simulations. The general neuron model is defined, and an implementation of the TLA with transputers is described. A Hopfield neural network and a multilayer perceptron have been implemented and applied to the traveling salesman problem and to identity mapping, respectively. Proof is given that performance increases almost in proportion to the number of node processors.
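The abstract's core ideas, mapping neurons onto a two-dimensional toroidal grid of node processors and balancing the load across them, can be illustrated with a minimal sketch. This is not the paper's actual partitioning scheme; the function names (`partition_neurons`, `torus_neighbors`) and the round-robin assignment are assumptions chosen only to show the flavor of the approach:

```python
# Illustrative sketch (hypothetical, not the paper's scheme): distribute
# the neurons of a network over a grid_size x grid_size toroidal grid of
# node processors, and enumerate each processor's wraparound neighbors.

def partition_neurons(num_neurons, grid_size):
    """Assign neurons to processors round-robin, so no processor holds
    more than one neuron beyond any other (simple load balancing)."""
    assignment = {}
    for n in range(num_neurons):
        proc = (n % grid_size, (n // grid_size) % grid_size)
        assignment.setdefault(proc, []).append(n)
    return assignment

def torus_neighbors(row, col, grid_size):
    """North/south/west/east neighbors with wraparound (toroidal links),
    so every processor has exactly four communication partners."""
    return [
        ((row - 1) % grid_size, col),
        ((row + 1) % grid_size, col),
        (row, (col - 1) % grid_size),
        (row, (col + 1) % grid_size),
    ]

if __name__ == "__main__":
    assign = partition_neurons(num_neurons=100, grid_size=4)
    sizes = [len(v) for v in assign.values()]
    print(max(sizes) - min(sizes))    # imbalance is at most 1 neuron
    print(torus_neighbors(0, 0, 4))   # edge processor wraps to the far side
```

The wraparound links are what distinguish the toroidal lattice from the planar one: boundary processors communicate with the opposite edge rather than sitting on a shorter path, which keeps communication distances uniform across the grid.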

Year:  1992        PMID: 18276485     DOI: 10.1109/72.165590

Source DB:  PubMed          Journal:  IEEE Trans Neural Netw        ISSN: 1045-9227


  1 in total

1.  Parallelizing Backpropagation Neural Network Using MapReduce and Cascading Model.

Authors:  Yang Liu; Weizhe Jing; Lixiong Xu
Journal:  Comput Intell Neurosci       Date:  2016-04-27
