Gabriel A. Fonseca Guerra, Steve B. Furber
Abstract
Constraint satisfaction problems (CSPs) are at the core of numerous scientific and technological applications. However, CSPs belong to the NP-complete complexity class, for which the existence (or not) of efficient algorithms remains a major open question in computational complexity theory. In the face of this fundamental difficulty, heuristics and approximation methods are used to approach instances of NP (e.g., decision and hard optimization problems). The human brain efficiently handles CSPs both in perception and behavior using spiking neural networks (SNNs), and recent studies have demonstrated that the noise embedded within an SNN can be used as a computational resource to solve CSPs. Here, we provide a software framework for the implementation of such noisy neural solvers on the SpiNNaker massively parallel neuromorphic hardware, further demonstrating their potential to implement a stochastic search that solves instances of P and NP problems expressed as CSPs. This facilitates the exploration of new optimization strategies and the understanding of the computational abilities of SNNs. We demonstrate the basic principles of the framework by solving difficult instances of the Sudoku puzzle and of the map coloring problem, and explore its application to spin glasses. The solver works as a stochastic dynamical system that is attracted by the configuration that solves the CSP. The noise allows an optimal exploration of the space of configurations, looking for the satisfiability of all the constraints; if applied discontinuously, it can also force the system to leap to a new random configuration, effectively causing a restart.
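The stochastic-search idea in the abstract can be illustrated, independently of the neuromorphic substrate, with a minimal noisy min-conflicts sketch for the map coloring CSP; in the SNN solver the spike noise plays the analogous exploratory role, and the occasional random leap corresponds to the restart mechanism. The function names and the noise parameter below are illustrative choices, not part of the paper's framework.

```python
import random

def noisy_csp_search(variables, domains, conflicts, steps=10000, noise=0.1, seed=0):
    """Greedy conflict reduction with occasional random 'noise' moves,
    so the search keeps exploring instead of freezing in a local minimum.
    Returns a satisfying assignment, or None if no solution was found."""
    rng = random.Random(seed)
    assign = {v: rng.choice(domains[v]) for v in variables}
    for _ in range(steps):
        bad = [v for v in variables if conflicts(v, assign)]
        if not bad:
            return assign                       # every constraint satisfied
        v = rng.choice(bad)
        if rng.random() < noise:
            assign[v] = rng.choice(domains[v])  # random leap (exploration)
        else:                                   # greedy conflict reduction
            assign[v] = min(domains[v],
                            key=lambda d: conflicts(v, {**assign, v: d}))
    return None

# Toy instance: 3-coloring the graph of bordering Australian regions.
edges = {("WA", "NT"), ("WA", "SA"), ("NT", "SA"), ("NT", "Q"),
         ("SA", "Q"), ("SA", "NSW"), ("SA", "V"), ("Q", "NSW"), ("NSW", "V")}
regions = ["WA", "NT", "SA", "Q", "NSW", "V", "T"]
doms = {r: ["red", "green", "blue"] for r in regions}

def n_conflicts(v, assign):
    """Number of violated border constraints that involve variable v."""
    return sum(1 for a, b in edges if v in (a, b) and assign[a] == assign[b])

solution = noisy_csp_search(regions, doms, n_conflicts)
print(solution)
```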
Keywords: SpiNNaker; constraint satisfaction; spiking neural networks; spiking neurons; stochastic search
Year: 2017 PMID: 29311791 PMCID: PMC5742150 DOI: 10.3389/fnins.2017.00714
Source DB: PubMed Journal: Front Neurosci ISSN: 1662-453X Impact factor: 4.677
Figure 1 (A) Solution to the map coloring problem of the world with four colors and of Australia and Canada with three colors (insets). (B) The graph of bordering countries from (A). The plots of the entropy H (top), mean firing rate ν (middle), and states count Ω (bottom) vs. simulation time are shown in (C,D) for the world and Australia maps, evidencing the convergence of the network to satisfying stationary distributions. In the entropy curve, red codes for changes of state between successive time bins, green for no change, and blue for the network satisfying the CSP. In the states count line, black dots mark the exploration of new states; the dots are yellow if the network returns to previously visited states. (E) Population activity for four randomly chosen CSP variables from (A); each line represents a color domain.
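The diagnostics plotted in (C,D) can be recomputed offline from binned network states. The sketch below assumes one simple reading, in which a "state" is the tuple of currently winning values and H is the Shannon entropy over the visited states; this may differ in detail from the paper's estimator.

```python
import math
from collections import Counter

def entropy(counts):
    """Shannon entropy (in bits) of a distribution given as raw counts."""
    total = sum(counts)
    return -sum(c / total * math.log2(c / total) for c in counts if c)

def states_count(state_sequence):
    """Omega(t): number of distinct network states visited up to each bin."""
    seen, omega = set(), []
    for s in state_sequence:
        seen.add(s)
        omega.append(len(seen))
    return omega

# Toy trajectory of per-bin network states (winning value per variable):
traj = [("r", "g"), ("r", "b"), ("r", "g"), ("g", "g"), ("r", "g"), ("r", "g")]
print(states_count(traj))                 # [1, 2, 2, 3, 3, 3]
print(entropy(Counter(traj).values()))
```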
Figure 2 Spiking neural network solution to Sudoku puzzles. (A–C) Temporal dependence of the network entropy H, firing rate ν, and states count Ω for the easy (G), hard (H), and AI escargot (I) puzzles. The color code is the same as that of Figure 1. In (G–I), red is used for clues and blue for digits found by the solver. (D,F) Activity of a randomly selected cell from (A,C), respectively, evidencing competition between the digits; the lines correspond to a smoothing spline fit. (E) Schematic representation of the network architecture for the puzzle in (A).
Figure 3 Spiking neural network simulation of Ising spin systems. (A,B) Two 2-dimensional spin glass quenched states obtained with interaction probabilities p = 0.5 and p = 0.1. The results for the 3-dimensional lattices for CSPs of 1,000 spins with ferromagnetic and antiferromagnetic coupling constants are shown in (D,E), respectively. (C) Temporal dependence of the network entropy H, firing rate ν, and states count Ω during the stochastic search for the system in (D). (F) Illustrates the origin of frustrated interactions in spin glasses. (G) The result for the 1-dimensional chain. The parameters for the SNNs used are shown in Table 1.
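The frustration illustrated in panel (F) can be reproduced with the textbook minimal example, an antiferromagnetic triangle, using the standard Ising energy; this is an illustration of the concept, not code from the paper.

```python
from itertools import product

def ising_energy(spins, couplings):
    """E = -sum_{<i,j>} J_ij * s_i * s_j for Ising spins s_i in {-1, +1}."""
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

# Antiferromagnetic triangle (J = -1 on every edge): the minimal frustrated
# system. No configuration can anti-align all three pairs at once.
J = {(0, 1): -1, (1, 2): -1, (0, 2): -1}
energies = {s: ising_energy(s, J) for s in product((-1, 1), repeat=3)}
print(min(energies.values()))   # -1: one bond is always left unsatisfied
# A fully satisfied triangle would reach -3; frustration forbids it.
```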
Network sizes of the SNN solvers of the CMP, Sudoku, and Spin Systems.
| Network | Neurons | Synapses | Variables | Domain size |
| --- | --- | --- | --- | --- |
| World CMP | 212,400 | 14,422,300 | 193 | 4 |
| Australia CMP | 450 | 22,920 | 7 | 3 |
| Canada CMP | 810 | 39,480 | 13 | 3 |
| Sudoku easy | 36,675 | 86,154,125 | 81 | 9 |
| Sudoku hard | 36,675 | 86,154,125 | 81 | 9 |
| AI escargot | 36,675 | 86,153,250 | 81 | 9 |
| AF ring | 1,050 | 975,500 | 10 | 2 |
| Spin 2D lattices | 10,050 | 2,160,000 | 100 | 2 |
| Spin AF 3D lattices | 100,050 | 31,050,000 | 1,000 | 2 |
| Spin FM 3D lattices | 100,050 | 31,050,000 | 1,000 | 2 |
Figure 4 Histograms of the convergence time to a solution for the Sudoku, map coloring, and spin system problems of Figures 1–3. For each histogram, data from 100 simulations were used. The mean μ, standard deviation σ, skewness γ1, success ratio ξ, and best convergence time t are indicated for each problem. The success ratio is defined as the number of simulations that converged to satisfaction over the total number of simulations.
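The quoted statistics can be recomputed from raw convergence times with only the standard library. The skewness estimator below is the ordinary moment-based γ1, which is an assumption, since the record does not state which estimator the paper uses.

```python
import math

def convergence_stats(times, n_runs):
    """times: convergence times of the runs that reached satisfaction;
    n_runs: total number of simulations.
    Returns (mu, sigma, gamma1, xi, best)."""
    n = len(times)
    mu = sum(times) / n
    sigma = math.sqrt(sum((t - mu) ** 2 for t in times) / n)
    gamma1 = sum((t - mu) ** 3 for t in times) / (n * sigma ** 3)
    xi = n / n_runs          # success ratio: solved runs over total runs
    return mu, sigma, gamma1, xi, min(times)

# Hypothetical data: 5 of 6 runs converged, at these times (in seconds).
mu, sigma, g1, xi, best = convergence_stats([1.2, 1.5, 1.1, 2.8, 1.4], 6)
print(round(mu, 3), round(xi, 3), best)
```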
Simulation parameters for the SNN solvers of the CMP, Sudoku, and Spin Systems.
| World CMP | 10 | [−0.08, 0.0] | [−0.08, 0.0] | 0.3 |
| Australia CMP | 1 (1) | [−1.2, −1.5] | [1.2, 1.4] | 0.2 |
| Canada CMP | 1 (1) | [−1.2, −1.5] | [1.2, 1.4] | 0.17 |
| Sudoku easy | 1 (0) | [−0.08, 0.0] | [−0.08, 0.0] | 0.3 |
| Sudoku hard | 1 (0) | [−0.08, 0.0] | [−0.08, 0.0] | 0.3 |
| AI escargot | 1 (0) | [−0.03, −0.02] | [−0.03, −0.02] | 0.3 |
| AF ring | 1 (0) | [−0.2, 0.0] | [−0.2, 0.0] | 0.0 |
| Spin 2D lattices | 1 (1) | [−0.2, 0.0] | [−0.2, 0.0] | 0.0 |
| Spin AF 3D lattices | 1 (0) | [−0.2, 0.0] | [−0.2, 0.0] | 0.0 |
| Spin FM 3D lattices | 1 (0) | [−0.2, 0.0] | [−0.2, 0.0] | 0.0 |
Translation of a CSP into an SNN

Given a CSP defined by:

    X = {x_1, ..., x_N}    # the set of variables
    D                      # the domain of values available to each variable
    S                      # the subsets of variables constrained together
    R                      # the relations imposed on the subsets in S
    C = <S, R>             # the set of constraints

    n = size_of_ensemble   # neurons in each competing population

    # One population per (variable, value) pair, each with its own noise sources.
    for each variable x_i in X:
        for each value d_i in the domain of x_i:
            population[x_i][d_i] = create an SNN with n neurons
            noise_exc[x_i][d_i] = create a set of stimulation populations
            apply_stimuli(noise_exc[x_i][d_i], population[x_i][d_i])
            noise_inh[x_i][d_i] = create a set of dissipation populations
            apply_dissipation(noise_inh[x_i][d_i], population[x_i][d_i])

    # Lateral inhibition between the domain populations of each variable.
    for each variable x_i in X:
        for each pair of values d_i != d_j in the domain of x_i:
            inhibitory(population[x_i][d_i], population[x_i][d_j])

    # Wire each constraint between the populations of the variables it involves.
    for each constraint c_i in C:
        read subset s_i of variables constrained by c_i
        for each pair of variables (x_i, x_j) in s_i:
            for each value d_i shared by their domains:
                if c_i is inhibitory:
                    inhibition(population[x_i][d_i], population[x_j][d_i])
                else:
                    excitation(population[x_i][d_i], population[x_j][d_i])
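The translation above can be sketched as plain Python that only builds the wiring diagram of the solver (which populations exist and which connections are excitatory or inhibitory), leaving out the noise sources and the spiking simulation itself; the real framework instantiates these structures on SpiNNaker, so the function and names below are illustrative stand-ins rather than the framework's API.

```python
def build_csp_network(variables, domains, constraints, n=25):
    """Return the structure of the SNN solver: one population of n neurons
    per (variable, value) pair, plus the list of connections.
    constraints: iterable of ((x_i, x_j), kind) with kind in
    {'inhibitory', 'excitatory'}."""
    populations = {(x, d): n for x in variables for d in domains[x]}
    connections = []
    # Lateral inhibition between the value populations of the same variable,
    # so they compete in a winner-take-all fashion.
    for x in variables:
        for d1 in domains[x]:
            for d2 in domains[x]:
                if d1 != d2:
                    connections.append(((x, d1), (x, d2), "inhibitory"))
    # Map each constraint onto synapses between equal-value populations.
    for (xi, xj), kind in constraints:
        for d in domains[xi]:
            connections.append(((xi, d), (xj, d), kind))
    return populations, connections

# Two variables that must differ (e.g., two bordering countries):
doms = {"A": ["r", "g", "b"], "B": ["r", "g", "b"]}
pops, conns = build_csp_network(["A", "B"], doms, [(("A", "B"), "inhibitory")])
print(len(pops), len(conns))   # 6 populations, 12 lateral + 3 constraint edges
```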