Jinsong Wei, Zhibin Wang, Ye Li, Jikai Lu, Hao Jiang, Junjie An, Yiqi Li, Lili Gao, Xumeng Zhang, Tuo Shi, Qi Liu.
Abstract
Realization of spiking neural network (SNN) hardware with high energy efficiency and high integration density may provide a promising solution to the data-processing challenges of the future internet of things (IoT) and artificial intelligence (AI). Recently, the design of multi-core reconfigurable SNN chips based on resistive random-access memory (RRAM) has drawn great attention owing to the unique properties of RRAM, e.g., high integration density, low power consumption, and processing-in-memory (PIM); an RRAM-based SNN chip may therefore offer further improvements in integration and energy efficiency. The design of such a chip faces the following problems: significant delay in pulse transmission due to complex logic control and inter-core communication; the high risk of a mixed digital, analog, and RRAM design; and the non-ideal characteristics of analog circuits and RRAM. To effectively bridge the gap between device, circuit, algorithm, and architecture, this paper proposes a simulation model, FangTianSim, which covers the analog neuron circuit, the RRAM model, and the multi-core architecture, and whose accuracy is at the clock level. The model can be used to verify the functionality, delay, and power consumption of an SNN chip; this information can not only validate the rationality of the architecture but also guide the chip design. To map different network topologies onto the chip, an SNN representation format, an interpreter, and an instruction generator are designed. Finally, the function of FangTianSim is verified on a liquid state machine (LSM), a fully connected neural network (FCNN), and a convolutional neural network (CNN).
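FangTianSim itself is a clock-level SystemC model; purely as a language-neutral illustration of what "clock-level, multi-core, with spike routing" means, a minimal discrete-time SNN simulation loop might look like the sketch below. All class, function, and variable names here are our assumptions, not FangTianSim's API.

```python
class Core:
    """Toy SNN core: accumulates incoming spikes per neuron and fires on a
    threshold. A hypothetical stand-in for FangTianSim's analog-neuron /
    RRAM-dendrite cores, not the actual SystemC model."""
    def __init__(self, n_neurons, threshold, routes):
        self.v = [0.0] * n_neurons   # membrane potential per neuron
        self.threshold = threshold
        self.routes = routes         # neuron_id -> (dst_core, dst_neuron, delay)
        self.inbox = []              # neuron ids hit by spikes this step

    def receive(self, neuron_id):
        self.inbox.append(neuron_id)

    def tick(self):
        """Integrate one clock step; return the ids of neurons that fired."""
        for nid in self.inbox:
            self.v[nid] += 1.0
        self.inbox = []
        fired = [nid for nid, v in enumerate(self.v) if v >= self.threshold]
        for nid in fired:
            self.v[nid] = 0.0        # reset after firing
        return fired

def simulate(cores, num_steps, input_spikes):
    """Clock-level loop: deliver pending spike events, tick every core, and
    route new spikes between cores as (step, core, neuron) address events."""
    spike_log, pending = [], list(input_spikes)
    for step in range(num_steps):
        arriving = [e for e in pending if e[0] == step]
        pending = [e for e in pending if e[0] != step]
        for _, core_id, neuron_id in arriving:
            cores[core_id].receive(neuron_id)
        for core_id, core in enumerate(cores):
            for nid in core.tick():
                spike_log.append((step, core_id, nid))
                if nid in core.routes:
                    dst_core, dst_neuron, delay = core.routes[nid]
                    pending.append((step + delay, dst_core, dst_neuron))
    return spike_log
```

Because every core advances one clock at a time and inter-core spikes are plain timestamped events, such a loop can report exactly the quantities the abstract mentions: functional spike traces, per-event delays, and (given per-event energy numbers) power estimates.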
Keywords: RRAM (memristor); SystemC; analog circuits; simulator; spiking neural network (SNN)
Year: 2022 PMID: 35126046 PMCID: PMC8811373 DOI: 10.3389/fnins.2021.806325
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
FIGURE 1 SNN chip architecture. Each SNN core in the simulator consists of analog neurons, RRAM dendrites, SRAM axons, and a digital control circuit. The output of each neuron is converted into an address signal by an address event representation (AER) circuit. The entire SNN chip consists of 63 such SNN cores and a RISC-V instruction-set microcontroller.
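The analog neurons in Figure 1 (and the LIF type named in the SNN_JSON table) follow leaky integrate-and-fire dynamics; in discrete time, one update step can be sketched as below. The parameter defaults and the refractory handling (cf. REF/REFT in the configuration list) are illustrative, not the chip's actual circuit values.

```python
def lif_step(v, i_in, v_rest=0.0, v_th=1.0, tau=10.0, ref_period=2, refractory=0):
    """One discrete-time step of a leaky integrate-and-fire (LIF) neuron.
    Returns (new_v, fired, refractory_counter). All parameter values are
    illustrative defaults, not taken from the chip."""
    if refractory > 0:
        return v_rest, False, refractory - 1   # hold at rest while refractory
    v = v + (v_rest - v) / tau + i_in          # leak toward rest, integrate input
    if v >= v_th:
        return v_rest, True, ref_period        # fire, reset, enter refractory
    return v, False, 0
```

With a constant input current the neuron charges, fires, sits out its refractory period, and repeats, which is the spike train the AER circuit would then encode as addresses.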
FIGURE 2 Software and hardware for the SNN. The software part mainly comprises the neural-network description and algorithm, and the hardware part mainly comprises the NoC-based interconnect and the neurons' dynamic behavior; the two interact through a dedicated tool chain.
FIGURE 3 (A) Software system. (B) Hardware system. The software and hardware systems share one tool chain, composed of SNN_JSON, a JSON interpreter, and an instruction generator. After the tool chain generates machine code, the code is transmitted to the system through the instruction-delivery module.
FIGURE 4 (A) Convolution kernels. (B) Convolution process. (C) Mapping of the convolution onto a memristor array. A 2 × 2 convolution kernel slides successively over the 3 × 3 input feature map and outputs a 2 × 2 feature map; the convolution layer can be mapped onto RRAM, but this wastes a lot of array area.
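One way to see the area cost that the Figure 4C caption alludes to is to unroll the convolution into the single matrix–vector product a crossbar computes in one shot: each output pixel becomes one row of a mostly-zero weight matrix. A minimal sketch for the 2 × 2-kernel, 3 × 3-input case (function and variable names are ours, not the paper's):

```python
def conv_as_crossbar(image3x3, kernel2x2):
    """Map a 2x2 convolution over a 3x3 image onto one matrix-vector product,
    as a crossbar would: one row of the weight matrix per output pixel.
    The zero entries correspond to unused ('wasted') RRAM cells."""
    # Build the 4x9 unrolled weight matrix: one row per output pixel.
    W = [[0.0] * 9 for _ in range(4)]
    for out_r in range(2):
        for out_c in range(2):
            row = out_r * 2 + out_c
            for kr in range(2):
                for kc in range(2):
                    W[row][(out_r + kr) * 3 + (out_c + kc)] = kernel2x2[kr][kc]
    x = [v for r in image3x3 for v in r]   # flatten the input image
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]
```

Of the 4 × 9 = 36 cells in this unrolled matrix, only 16 hold kernel weights; the remaining zero cells are the wasted array area the caption mentions.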
Configuration list of SNN_JSON.
| Class | Parameter | Value | Comment |
| RegList | Sw | 4 | Low-level duration of the input pulse |
| RegList | Pw | 14 | High-level duration of the input pulse |
| RegList | REF | 1 | Type of refractory period |
| RegList | REFT | 0 | Length of the refractory period |
| RegList | timer_window | 200 | Time-window size |
| RegList | timer_step | 2 | Time step |
| RegList | output_core | 0 | Label of the output core |
| Layer1 | Type | LIF | Type of neuron |
| Layer1 | Core | 0 | Hardware core holding this independent layer |
| Layer1 | Neuron | 0–31 | Indices of the neurons |
| Layer1 | bottom_core | 1 | Label of the next layer's core |
| Layer1 | bottom_synapse | 0–31 | Labels of the dendrites in the next hardware core |
| Layer0 | Type | LIF | Type of neuron |
| Layer0 | Core | 1 | Hardware core holding this independent layer |
| Layer0 | Neuron | 0–31 | Indices of the neurons |
| Layer0 | bottom_core | 4 | Label of the next layer's core |
| Layer0 | bottom_synapse | 32–63 | Labels of the dendrites in the next hardware core |
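The configuration list above suggests what an SNN_JSON file and the first pass of its interpreter might look like. The sketch below is an assumption about the overall JSON structure (in particular, encoding the 0–31-style ranges as `[lo, hi]` pairs); only the field names come from the table.

```python
import json

# Hypothetical SNN_JSON description mirroring the configuration table;
# the JSON layout is our assumption, not the paper's actual format.
snn_json = json.loads("""
{
  "RegList": {"Sw": 4, "Pw": 14, "REF": 1, "REFT": 0,
              "timer_window": 200, "timer_step": 2, "output_core": 0},
  "Layer1": {"Type": "LIF", "Core": 0, "Neuron": [0, 31],
             "bottom_core": 1, "bottom_synapse": [0, 31]},
  "Layer0": {"Type": "LIF", "Core": 1, "Neuron": [0, 31],
             "bottom_core": 4, "bottom_synapse": [32, 63]}
}
""")

def layer_mappings(desc):
    """For each layer, extract (core, neuron range, next core, dendrite range),
    i.e. the information an instruction generator would need to place the
    layer on a hardware core and wire it to the next one."""
    out = {}
    for name, layer in desc.items():
        if name == "RegList":        # global registers, not a layer
            continue
        out[name] = (layer["Core"], tuple(layer["Neuron"]),
                     layer["bottom_core"], tuple(layer["bottom_synapse"]))
    return out
```

An interpreter pass like this would feed the instruction generator, which turns the per-layer placement and wiring into the machine code delivered to the chip.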
Simulation results of FangTianSim.
| Network | Data set | Latency | Total energy | Energy (pulse) | Energy (RRAM) | Energy (SRAM) | Recognition accuracy |
| LSM | FSDD | 5.6 ms | 5.74 mJ | 34.4 μJ | 1.4 μJ | 5.7 mJ | 83% |
| FCNN | MNIST | 48.5 μs | 9.02 μJ | 0.249 μJ | 0.001 μJ | 8.77 μJ | 82% |
| CNN | MNIST | 76 μs | 12.45 μJ | 0.95 μJ | 1.4 μJ | 10.01 μJ | 95% |
FIGURE 5 CNN for MNIST. Operation of a three-layer network on the chip. (A) In the first layer, the convolution results produced by each slide of the kernel are placed into a separate core. (B) The second layer convolves the data of four adjacent cores simultaneously. Finally, all generated data are routed to one core for the fully connected stage.