Qianqian Ren1, Xiyu Liu1, Minghe Sun2. 1. Academy of Management Science, Business School, Shandong Normal University, Jinan, China. 2. College of Business, The University of Texas at San Antonio, San Antonio, TX, USA.
Abstract
Weighted spiking neural P systems with anti-spikes (AWSN P systems) are proposed by adding anti-spikes to spiking neural P systems with weighted synapses. Anti-spikes act as inhibitory spikes in the communication between neurons. Both spikes and anti-spikes are used in the rule expressions. An illustrative example is given to show the working process of the proposed AWSN P systems. The Turing universality of the proposed P systems as number generating and accepting devices is proved. Finally, a universal AWSN P system having 34 neurons is proved to work as a function computing device by using standard rules, and one having 30 neurons is proved to work as a number generator.
Membrane computing, introduced by Păun [1], is a branch of nature-inspired computing. It provides a rich computational framework for biomolecular computing. Models of membrane computing are inspired by the structures and functions of living cells. The resulting models are distributed and parallel computing devices, usually called P systems [2]. There are three main classes of P systems: cell-like P systems, tissue-like P systems [3], and neural-like P systems [4]. Neural-like P systems, inspired by the ways information is stored and processed in the human nervous system, combine neurons and membrane computing; the most widely known among them are spiking neural P systems (SN P systems) [5]. An SN P system consists of a group of neurons located at the nodes of a directed graph, and neurons send spikes to adjacent neurons through synapses, i.e., links in the graph. There is only one type of object, i.e., spikes, in the neurons.

With different biological features and mathematical motivations, many variants of SN P systems have emerged. Some of them change the synapses between neurons, such as SN P systems with rules on synapses [6], SN P systems with multiple channels [7], and SN P systems with thresholds [8], while others change the communication rules, such as SN P systems with communication on request [9], SN P systems with polarizations [10], and SN P systems with inhibitory rules [11]. Various new variants of SN P systems are provided in [12, 13]. Recently, some new variants of neural-like P systems inspired by SN P systems have been proposed, such as those reported in [14]. In addition, many publications in the literature address the computational power of SN P systems as function computing devices and as number generating/accepting devices. Pǎun [18] proved the small universality of SN P systems.
Pan [19] proved the small universality of SN P systems with communication on request by using 14 neurons, and more details are available in [20, 21]. Since SN P systems were proposed, many scholars have explored their applications, such as skeletonizing image processing [22, 23], optimization problems [24], fault diagnosis [25-27], and working models [28].

Inspired by inhibitory spikes in the communication between neurons, a new type of SN P systems was proposed by adding anti-spikes to SN P systems, called spiking neural P systems with anti-spikes (ASN P systems) [29]. In ASN P systems, each neuron contains multiple copies of the symbolic object a (a spike) or ā (an anti-spike) and processes information by spiking rules and forgetting rules. The annihilating rule exists in each neuron and is always applied first, so a and ā cannot coexist in any neuron. Researchers have proposed different variants of ASN P systems, such as ASN P systems with multiple channels [30], ASN P systems with rules on synapses [31], and asynchronous ASN P systems [32]. The computational power of ASN P systems as number generating and accepting devices, as well as function computing devices, has also been proved [33]. In [34], SN P systems with weighted synapses were proposed, where the weights represent the numbers of synapses between connected neurons. Based on the above, a new variant of SN P systems, called weighted spiking neural P systems with anti-spikes (AWSN P systems), is proposed in this work. In these systems, neurons receive spikes or anti-spikes from their connected neurons, and the numbers of spikes or anti-spikes they receive are determined by the weights of the synapses. Only one type of object, i.e., spikes or anti-spikes, exists in each neuron, with standard rules as in SN P systems.
These systems use spiking rules of the form E/a^c⟶a^p; d (called standard rules if p = 1 and extended rules otherwise), where E is a regular expression over a, and c, p, and d are integers with c ≥ p ≥ 1 and d ≥ 0. The meaning of a spiking rule is that c spikes are consumed and p spikes are generated after d time periods. SN P systems also have forgetting rules of the form a^s⟶λ, where s is a positive integer, meaning that s spikes are dissolved, i.e., removed from a neuron.

The rest of this article is organized as follows. In Section 2, the basic knowledge of register machines is given. The definition of AWSN P systems is given, and an example is presented to show their working process, in Section 3. By simulating register machines, the computational power of AWSN P systems as natural number generating and accepting devices is proved in Section 4. In Section 5, the universality of these systems as function computing devices and number generating devices is obtained by using 34 neurons and 30 neurons, respectively. Remarks and future research directions are given in Section 6.
2. Prerequisites
The universality of the systems is proved by simulating a register machine M. A register machine is a construct M = (m, H, l0, lh, I), where m is the number of registers, H is the set of instruction labels, l0 and lh are the starting and halting labels, and I is the set of instructions of the following forms:

li : (ADD(r), lj, lk) (add 1 to register r and then go to instruction lj or lk, chosen nondeterministically)

li : (SUB(r), lj, lk) (if register r is not empty, subtract 1 from it and go to lj; otherwise, go to lk)

lh : HALT (the halting instruction)

A register machine has two modes: a generating mode and an accepting mode. In the generating mode, M generates a set of numbers, denoted by Ngen(M), in the following way. All registers start empty, and M starts the computational process from the instruction labeled l0. When M reaches lh, the computation ends with the result stored in register 1. If the computation does not stop, no number is generated. A set of numbers accepted by a register machine, denoted Nacc(M), is defined in the accepting mode: only the input register is nonempty at the beginning, and the machine then works in a way similar to the generating mode. As register machines remain universal in the accepting mode with deterministic ADD instructions, these can be written as li : (ADD(r), lj). Register machines can compute any set of Turing computable numbers, represented by NRE (see, e.g., [6]).

Generally, a universal register machine is used to compute Turing computable functions for the purpose of analyzing the computational power of a system. A universal register machine M = (8, H, l0, lh, I), with 8 registers and 23 instructions, was proposed by Minsky [35]: if φx(y) = M(g(x), y) for all natural numbers x and y, where g is a fixed recursive function, then M is universal. Compared with the register machine M′ shown in Figure 1, M does not have instructions l22 and l23, and its final result is placed in register 0.
Since the result is stored in register 0, which is involved in SUB instructions, register 0 cannot serve as the output register. Hence, register 8 is added and used to store the result without being involved in any SUB instruction. In general, in order to analyze the universality of a system, i.e., to verify that the system is equivalent to a Turing machine, the universal register machine M′ shown in Figure 1, denoted by M′ = (9, H, l0, lh, I) and consisting of 9 registers and 25 instructions, is simulated by the system.
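The register-machine semantics above can be made concrete with a short interpreter. The sketch below is an illustrative aid, not part of the paper; the instruction encoding and the `choose` parameter (which resolves the nondeterministic ADD branch) are our own assumptions.

```python
# Minimal register-machine interpreter (generating mode), sketched from the
# definition above. Labels and registers are small integers; the result is
# read from register 1 when the halting label is reached.

def run_register_machine(instructions, l0, lh, num_regs, choose=lambda a, b: a):
    """instructions: dict label -> ('ADD', r, lj, lk) | ('SUB', r, lj, lk) | ('HALT',).
    `choose` resolves the nondeterministic ADD branch (default: first branch)."""
    regs = [0] * num_regs
    label = l0
    while label != lh:
        op = instructions[label]
        if op[0] == 'ADD':
            _, r, lj, lk = op
            regs[r] += 1
            label = choose(lj, lk)   # nondeterministic choice of next label
        elif op[0] == 'SUB':
            _, r, lj, lk = op
            if regs[r] > 0:
                regs[r] -= 1         # register nonempty: subtract and go to lj
                label = lj
            else:
                label = lk           # register empty: go to lk
    return regs[1]                   # result is stored in register 1

# Example: add 3 to register 1, then halt.
prog = {
    0: ('ADD', 1, 1, 1),
    1: ('ADD', 1, 2, 2),
    2: ('ADD', 1, 3, 3),
    3: ('HALT',),
}
print(run_register_machine(prog, 0, 3, 2))  # -> 3
```

The `('HALT',)` entry is never executed; the loop simply stops when the halting label is reached, mirroring lh : HALT above.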
Figure 1
The universal register machine M′.
3. Weighted Spiking Neural P Systems with Anti-spikes
3.1. Definition
The proposed AWSN P system is a construct of the form

Π = (O, σ1, σ2, …, σm, syn, in, out), where:

O = {a, ā} is the alphabet, where the symbol a is a spike and ā is an anti-spike.

σ1, σ2, …, σm are neurons of the form σi = (ni, Ri) for 1 ≤ i ≤ m, where ni ≥ 0 is the initial number of spikes stored in σi, and Ri is the set of rules used in σi, of the following forms:

(1) spiking rules E/b^c⟶b′^p; d, where E is a regular expression over a or ā, b, b′ ∈ {a, ā}, c ≥ p ≥ 1, and d ≥ 0 is the delay in time units;

(2) forgetting rules b^s⟶λ, where b ∈ {a, ā} and s ≥ 1.

syn ⊆ {1, …, m} × {1, …, m} × W represents the synapses, where W = {1, …, n} is the set of weights. For any (i, j, n) ∈ syn, 1 ≤ i, j ≤ m, i ≠ j, and n ∈ W.

in and out are the input neuron and output neuron, respectively.

In an AWSN P system, each neuron has one or more spiking rules, some also have forgetting rules, and either spikes or anti-spikes (but not both) exist in each neuron. If there are k spikes or anti-spikes in neuron σi, b^k ∈ L(E), and k ≥ c, the spiking rule E/b^c⟶b′^p; d can be applied. If E = b^c, the spiking rule is called pure and can be written as b^c⟶b′^p; d. The spiking rule is interpreted as follows: c spikes or anti-spikes are removed from neuron σi and the neuron fires; p spikes are generated after d time periods (as usual in membrane computing, all neurons in a system Π work in parallel under an assumed global clock), and p × n spikes are sent to each neuron σj (i ≠ j) with (i, j, n) ∈ syn. If a spiking rule with delay d ≥ 1 is used in neuron σi at time t, the neuron is closed from time t to t + d and does not receive any spikes or anti-spikes; it opens again at time t + d. If d = 0, the spikes are emitted immediately, which means the neuron can receive spikes or anti-spikes from upstream neurons without delay. If a forgetting rule b^s⟶λ is used in a neuron, s spikes or anti-spikes are removed from that neuron.
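As a concrete reading of the applicability condition just stated (b^k ∈ L(E) and k ≥ c), the following sketch checks whether a spiking rule covers a neuron's current contents. The tuple encoding and names are illustrative assumptions; E is checked as a Python regular expression over the flattened spike string.

```python
import re

def rule_applicable(rule, symbol, count):
    """rule = (E, b, c, b2, p, d): applicable iff the neuron holds `count`
    copies of `symbol`, symbol == b, b^count matches E, and count >= c."""
    E, b, c, _b2, _p, _d = rule
    return symbol == b and count >= c and re.fullmatch(E, b * count) is not None

# Example rule: (a^2)+ / a^2 -> a; 0, i.e., fire on any even positive
# number of spikes, consuming two of them.
rule = ("(aa)+", "a", 2, "a", 1, 0)
print(rule_applicable(rule, "a", 4))  # True: a^4 matches (aa)+ and 4 >= 2
print(rule_applicable(rule, "a", 3))  # False: a^3 does not match (aa)+
```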
Spiking rules and forgetting rules must be applied if their conditions are met, but the choice among rules is nondeterministic when the conditions of several rules are met in the same neuron. However, the annihilating rule must be applied first in each neuron. Through these rules, transitions between configurations occur. Any sequence of transitions starting from the initial configuration is called a computation. A computation halts when it reaches a configuration where all neurons are open and no rule can be used. To compute a function f : N^k⟶N, k natural numbers n1, n2, …, nk are introduced into the system by reading a binary sequence z = 10^(n1−1)10^(n2−1)1…0^(nk−1)1 from the environment. That is, the input neuron of Π receives a spike in a step corresponding to a 1 in z and receives nothing in a step corresponding to a 0. The input neuron receives exactly k + 1 spikes and receives no more spikes after the last one. The result of the computation is encoded in the distance between two spikes: the computation halts, with exactly two spikes sent out, immediately after outputting the second spike. Hence, the system generates a spike train of the form 0^b10^(r−1)1 for some b ≥ 0 and r = f(n1, …, nk); no spike is emitted for a nonspecified number of steps from the beginning of the computation until the first spike is output. Let Ngen(Π) and Nacc(Π) be the sets of numbers generated and accepted by Π, respectively, and let NαAWSNP, with α ∈ {gen, acc}, denote the family of sets of numbers generated or accepted by AWSN P systems with at most m neurons and a maximum of n rules in a neuron.
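The input/output conventions just described can be illustrated with a short sketch; the function names are ours, and the encodings follow the spike-train forms 10^(n1−1)1… (input) and 0^b10^(r−1)1 (output) given above.

```python
# Hedged sketch of the I/O conventions of AWSN P systems: inputs are read as
# a binary spike train, and the output number is the distance between the
# only two output spikes.

def encode_input(numbers):
    """k numbers yield k+1 spikes ('1'), with n_i - 1 silent
    steps ('0') between consecutive spikes."""
    train = "1"
    for n in numbers:
        train += "0" * (n - 1) + "1"
    return train

def decode_output(train):
    """Result r = distance between the first and second output spike."""
    first = train.index("1")
    second = train.index("1", first + 1)
    return second - first

print(encode_input([3, 2]))      # -> 100101
print(decode_output("0010001"))  # -> 4  (b = 2 silent steps, then r = 4)
```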
3.2. An Illustrative Example
An example, shown graphically in Figure 2, is given to explain the working process of an AWSN P system. The results of each step are shown in Table 1. A positive number in the table represents the number of spikes in a neuron, and a negative number represents the number of anti-spikes; for example, 2 means two spikes and −2 means two anti-spikes.
Figure 2
An example of the AWSN P system.
Table 1
The results of the example.
Step	σ1	σ2	σ3	σ4
t	2	2	0	0
t + 1	1	−1	−3	0
t + 2	2	0	−3	2
t + 3	1	−3	−6	2
t + 4	1	−3	0	3 (fires)
The system has four neurons as shown in Figure 2. Assume that each of neurons σ1 and σ2 initially has two spikes, and neurons σ3 and σ4 are empty. Suppose that the rule in neuron σ1 is used at time t, generating one anti-spike and sending three anti-spikes to each of neurons σ2 and σ3, because the weight of the synapses to these neurons is 3. In σ2, two anti-spikes and the two spikes disappear immediately because the annihilating rule is applied first, leaving one anti-spike. The rule in σ2 then generates spikes, sending two to neuron σ4 and one to neuron σ1, so the rule in σ1 can be applied again. Neuron σ3 receives six anti-spikes from σ1 through two uses of the rule in σ1, so the rule in σ3 fires. Neuron σ4 collects three spikes (two from neuron σ2 and one from σ3) and sends one spike to the environment.
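The annihilation bookkeeping in this walkthrough can be sketched by representing each neuron's contents as a signed count (positive for spikes, negative for anti-spikes), so that the annihilating rule reduces to signed addition. The function and network encoding below are illustrative, not from the paper.

```python
def deliver(contents, produced, synapses):
    """Send `produced` spikes (negative = anti-spikes) from a firing neuron
    along weighted synapses: target j with weight w receives produced * w,
    and annihilation of a with anti-a happens implicitly via signed addition."""
    for j, w in synapses:
        contents[j] += produced * w
    return contents

# Mirror the first step of the example: neuron 1 consumes one spike and fires
# one anti-spike over weight-3 synapses to neurons 2 and 3.
contents = {1: 2, 2: 2, 3: 0, 4: 0}
contents[1] -= 1                     # the rule in neuron 1 consumes one spike
deliver(contents, -1, [(2, 3), (3, 3)])
print(contents)                      # -> {1: 1, 2: -1, 3: -3, 4: 0}
```

The resulting counts match the t + 1 row of Table 1.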
4. Computational Models
4.1. Generating Mode
Theorem 1. NgenAWSNP2 = NRE.
Proof
A register machine M = (m, H, l0, lh, I) is considered. M is simulated by an AWSN P system consisting of three modules: ADD, SUB, and OUTPUT. In the simulation, a register r of M corresponds to a neuron σr, and the number n contained in register r is the number of spikes contained in neuron σr. An instruction li in H corresponds to a neuron σli. Furthermore, the modules require some auxiliary neurons in addition to the neurons σr and σli. The simulation of an ADD or SUB instruction begins at neuron σli. Modules ADD and SUB work by sending spikes to σlj and σlk as the rules in neuron σli fire; neuron σli sends a spike to either σlj or σlk, with the choice made nondeterministically. When a spike arrives at neuron σlh, the computation in M stops, and the module OUTPUT begins to send the result stored in register 1 to the environment. At the beginning of the simulation, neuron σl0 has one spike and no other neuron has any spikes.

(a) Module ADD (shown in Figure 3). Assume that an ADD instruction li : (ADD(r), lj, lk) has to be simulated at time t, one spike is in neuron σli, and the rule a⟶a can be used. Neuron σli sends one spike each to neuron σr and two auxiliary neurons. In one auxiliary neuron, the rules a⟶ā and a⟶a are chosen nondeterministically for use at time t + 1, so there are two cases depending on this choice. If a⟶a is chosen, the neuron sends a spike toward σlj, so σlj generates one spike by using its rule. If a⟶ā is chosen, the neuron sends an anti-spike toward σlj and σlk; then σlk fires and generates one spike by using its rule, while the rule in σlj cannot be used because of the annihilating rule, so σlj remains empty. After one spike is added to σr, register r is increased by 1 and the instruction lj or lk is activated. Therefore, the ADD instruction is simulated correctly by the module ADD.
Figure 3
Module ADD: simulating the ADD instruction li : (ADD(r), lj, lk).
(b) Module SUB (shown in Figure 4). Suppose that neuron σli has one spike. After its rule is enabled at time t, each of two auxiliary neurons receives two anti-spikes, and σr receives one anti-spike. The rest of the computation can be divided into two cases according to the number of spikes contained in σr.
Figure 4
Module SUB: simulating the SUB instruction li : (SUB(r), lj, lk).
(1) Neuron σr has at least one spike. Neuron σr receives one anti-spike from neuron σli, but this anti-spike disappears immediately by annihilating one spike in σr. Therefore, the rule in neuron σr is not used at time t + 1. At the same time, an auxiliary neuron opens to receive one anti-spike, and then its rule fires, generating one spike and sending three spikes toward σlj and two spikes toward σlk. In σlj, two of the spikes are annihilated by the two anti-spikes received earlier, and one spike is left. Simultaneously, in neuron σlk the two spikes are annihilated immediately and no spike is left.

(2) Neuron σr has no spike. Neuron σr gets one anti-spike from σli, and its rule can be applied at time t + 1. Simultaneously, an auxiliary neuron gets one anti-spike from σli; hence, one spike is annihilated there at the next step, and its rule cannot be used because it does not hold any anti-spikes. At the same time, neuron σlj receives five spikes, among which two are used to annihilate the two anti-spikes received earlier; thus the rule a^2⟶λ in σlj can be applied. Neuron σlk receives one spike that annihilates the one anti-spike received earlier, and then the rule in σlk is enabled to generate one spike a. Therefore, the SUB instruction is simulated correctly by module SUB.

(c) Module OUTPUT (shown in Figure 5). Assume that σlh of the system Π accumulates one spike at time t and that neuron σ1 has n spikes, where n is the number stored in register 1 of M. When the rule in σlh fires at time t, it sends one spike to σ1. At this moment, σ1 has an odd number of spikes and its rule fires. At time t + 1, σ1 sends one spike each to σout and an auxiliary neuron, so neuron σout has one spike, an odd number. At time t + 2, neuron σout fires, sending one spike to the environment. At the same time, the rules in σ1 and the auxiliary neuron are used, and both send one spike a to σout. For the following n − 1 steps, until neuron σ1 has no spikes left, the number of spikes in σout stays even. Then the rule in σ1 stops being used, and the auxiliary neuron has one spike. Neuron σout receives one more spike at time t + n + 2, making its number of spikes odd, and fires a second time. Therefore, the number computed by the AWSN P system is the distance between the first two times neuron σout fires, that is, (t + n + 2) − (t + 2) = n. The module OUTPUT works correctly.
Figure 5
Module OUTPUT.
4.2. Accepting Mode
Theorem 2. NaccAWSNP2 = NRE.

The proof of this theorem is similar to that of Theorem 1. A register machine M = (m, H, l0, lh, I) is considered, simulated by three modules: ADD, SUB, and INPUT. Module SUB is the same as that shown in Figure 4.

(1) Module ADD (shown in Figure 6). Assume that an ADD instruction li : (ADD(r), lj) has to be simulated at time t and that one spike is in neuron σli; then the rule a⟶a can be used. Neuron σli sends one spike each to neurons σr and σlj. In this way, the number of spikes in σr increases by 1 and the instruction lj is activated. Hence, the ADD instruction is simulated correctly by this module.
Figure 6
Module ADD: simulating the ADD instruction li : (ADD(r), lj).
(2) Module INPUT (shown in Figure 7). The function of module INPUT is to read the spike train 10^(n−1)1 and compute the number n as the time between receiving the two spikes. When neuron σin receives the first spike at time t, three auxiliary neurons receive one spike each, and the rules in two of them can be applied at time t + 1. At time t + 2, neuron σ1 gets one spike, and, at the same time, these two auxiliary neurons receive one spike from each other. Therefore, in the next n − 1 time periods, the rules in these two neurons can continue to be used, and σ1 gets n − 1 spikes during this period. When neuron σin receives the second spike at step t + n, each of the two auxiliary neurons receives one more spike at step t + n + 1, so both have two spikes and can no longer fire to send spikes to neuron σ1. In the whole process, neuron σ1 receives (n − 1) + 1 = n spikes, i.e., the number n is stored in register 1.
Figure 7
Module INPUT.
From the descriptions above of the three modules, it is clear that the system can correctly simulate the register machine M. The proof is complete.
5. A Small Universal AWSN P System
5.1. The Universality as Function Computing Devices
Theorem 3. There is a universal AWSN P system having 34 neurons that can be used to perform function computing.

A general framework of the system Π′ used to simulate the universal register machine M′ is shown in Figure 8; Π′ is a universal AWSN P system. It consists of 8 modules: ADD, SUB, ADD-ADD, SUB-ADD-1, SUB-ADD-2, SUB-SUB, INPUT, and OUTPUT. The modules SUB, OUTPUT, and ADD are the same as those in Figures 4–6, respectively. The module INPUT is shown in Figure 9.
Figure 8
General framework of the universal AWSN P system.
Figure 9
Module INPUT.
Module INPUT works as follows. When neuron σin gets a spike from the environment, the rule a⟶a fires; one spike is sent to each of three auxiliary neurons, and two spikes are sent to a fourth one. Then one of the auxiliary neurons sends one spike to σ1 and to another auxiliary neuron, which fires in turn. Before neuron σin receives more spikes from the environment, two of the auxiliary neurons receive one spike from each other in each time period, and neuron σ1 receives g(x) spikes in total.

When σin receives the second spike, the three auxiliary neurons get one spike each and the fourth gets two spikes. The neuron now holding four spikes uses its rule to send two spikes onward; the receiving neuron, holding six spikes, then uses its rule to produce one spike in each step. In this way, two auxiliary neurons receive one spike from each other in each step before σin receives the third spike from the environment, and neuron σ2 has y spikes at the end. When neuron σin receives the third spike, the three auxiliary neurons get one spike each, while the fourth receives two spikes. As a result, one neuron has an odd number of spikes and its rule cannot be applied, while another has three spikes; the rule a^3⟶a fires, generating one spike and sending it to σl0. In this way, the instruction l0 can be simulated in the next step.

As in the proofs of Theorems 1 and 2, the system uses the following numbers of neurons:

9 neurons for the 9 registers
25 neurons for the 25 labels
5 neurons for the module INPUT
1 neuron for each of the 14 SUB instructions, 14 in total
2 neurons for the module OUTPUT

Therefore, 55 neurons are used in total. This number can be decreased by exploiting relationships between some instructions of the register machine M′.
The following modules are given to reduce the number of neurons used in the computation. The SUB-ADD instruction sequences can be divided into two cases, depending on the number of spikes in register r1 (the register involved in the SUB instruction). Modules SUB-ADD-1 and SUB-ADD-2, shown in Figures 10 and 11, simulate the SUB and ADD instructions sequentially. The working process of module SUB-ADD-1 is similar to that of module SUB. When the rule in neuron σli is used and σr1 contains at least one spike, one branch neuron cannot fire, while the other fires on receiving one anti-spike and then sends one spike onward; at the end of the computation, the register neuron σr2 has gained one spike, the neuron of the next label has one spike, and the remaining neurons are empty. When σr1 is empty, the complementary branch is activated in the same way. Thus, each pair of instructions li : (SUB(r1), lj, lk) and lj : (ADD(r2), lg) can share a common neuron when r1 ≠ r2, and there are 6 such pairs in M′:
Figure 10
Module SUB-ADD-1: the sequence of the ADD and SUB instructions l : (ADD(r2), l) and l : (SUB(r1), l, l).
Figure 11
Module SUB-ADD-2: the sequence of the ADD and SUB instructions l20 : (ADD(0), l0) and l15 : (SUB(3), l18, l20).
By using this module, 6 neurons can be saved. In the same way, the module SUB-ADD-2 shown in Figure 11 can simulate the two instructions l15 and l20, so one more neuron can be saved. The module ADD-ADD shown in Figure 12 can simulate instructions l17 and l21; in this way, another neuron can be saved.
Figure 12
Module ADD-ADD: the sequence of ADD and ADD instructions l17 : (ADD(2), l21) and l21 : (ADD(3), l18).
The SUB instructions can share a common neuron when the registers involved are different, as shown in Figure 13. Assume that the simulation of the SUB instruction li : (SUB(r1), lj, lk) starts at time t. When neuron σli gets a spike, its rule fires and, at time t + 1, sends one anti-spike to σr1 and two anti-spikes each to two auxiliary neurons. The shared neuron receives an anti-spike at time t + 2. The remaining neurons work in the same way as those in module SUB shown in Figure 4: the shared neuron sends three spikes toward σlj and two spikes toward σlk, where forgetting rules are applied. Thus, the instruction li : (SUB(r1), lj, lk) is correctly simulated by this module. The process when starting with the other instruction li′ is similar to that described above.
Figure 13
Module SUB-SUB with r1 ≠ r2.
Two SUB instructions dealing with the same register, as shown in Figure 14, can also be proved to work correctly in a similar way. Assume that the instruction li : (SUB(r1), lj, lk) is simulated and one spike is contained in neuron σli. The process divides into two cases according to the number of spikes in neuron σr1. When σr1 has at least one spike, the working process of the system is similar to that of module SUB. When σr1 is empty, the rule in neuron σlj cannot be used; the auxiliary neurons are all empty, and neuron σlk contains one spike. All SUB instructions can be simulated correctly by the module, and therefore all SUB modules can share a common neuron.
Figure 14
Module SUB-SUB with r1=r2.
From the above description of the numbers of neurons saved, the system uses the following:

9 neurons for the 9 registers
17 neurons for the 17 labels
5 neurons for the module INPUT
1 neuron for all the 14 SUB instructions
2 neurons for the module OUTPUT

A total of 21 neurons are saved, and the number of neurons in the system decreases from 55 to 34. The proof is complete.
5.2. The Small Universality as Number Generator
A small universal AWSN P system working as a number generator is considered. The process of simulating a universal number generator is similar to that of simulating a function computing device; the difference lies in the module INPUT. The system starts with the spike train 10^(g(x)−1)1 from the environment and ends with neuron σ1 receiving g(x) spikes. The system is then loaded with an arbitrary number k, and neuron σ2 receives k spikes. The number k is also output at the same time as the spike train 10^(k−1)1, with g(x) in register 1 and k in register 2. Since the output module is not required, that is to say, register 8 is not required, the register machine M is simulated instead of M′. If the computation in M halts, the computation of the system also halts.

Furthermore, the modules INPUT and OUTPUT can be combined. The module INPUT-OUTPUT is shown in Figure 15, and an example is used to demonstrate its feasibility. The label l′ can also be saved because of module INPUT-OUTPUT. The spike train 101 is used in module INPUT-OUTPUT, where g(x) = 2 and k = 4. The computation follows the working processes of the modules described above. The results of each step are shown in Table 2.
Figure 15
Module INPUT-OUTPUT.
Table 2
The computation process of the module INPUT-OUTPUT.
Step	σin	σf1	σf2	σf3	σf4	σf5	σ1	σ2	σout	σl0
t	1	0	0	0	2	0	0	0	0	0
t+1	0	1	1	0	2	0	0	0	0	0
t+2	1	1	1	1	2	6	1	0	2 (fire)	0
t+3	0	2	2	1	1	8	2	1	3	0
t+4	0	2	2	0	0	4	2	2	3	0
t+5	0	2	2	0	−1	2	2	3	3	0
t+6	0	2	2	0	0	1	2	4	4 (fire)	1
Assume that σin has one spike at time t and neuron σf4 has two spikes. At time t + 1, σf1 and σf2 each receive one spike. From the structure shown in Figure 15, neurons σf1 and σf2 receive one spike from each other at each step until they stop firing. Then σin receives the second spike. Each of neurons σ1 and σf3 receives one spike, σf5 receives six spikes, and σout receives two spikes, so that neurons σf5 and σout can fire. At time t + 3, both σf1 and σf2 have two spikes, but they cannot fire again. σf5 receives six spikes but also two anti-spikes, which, together with the four spikes remaining in it, leave neuron σf5 with eight spikes. In addition, neuron σout receives two spikes again, so that it contains three spikes. Neuron σf4 has only one spike because the received anti-spike annihilates one spike. At time t + 4, neuron σf3 is empty after receiving an anti-spike. σf5 receives two anti-spikes, so that four spikes remain in it; the number of spikes is even, and its rule can fire. At the next step, σf4 receives one anti-spike and fires. Neuron σf5 consumes two spikes and can still fire. At time t + 6, neurons σ2 and σout each receive one spike from σf5, so there are 4 spikes in σout, meeting the condition for firing. Neuron σl0 also gets one spike.

The string is read through neuron σin, and g(x) spikes are stored in register 1 when the reading stops. At the same time, the output number (t + 6) − (t + 2) = 4 is the same as the number stored in register 2. Neuron σl0 is activated and starts simulating the register machine through the modules ADD and SUB.
Therefore, through this process, the module INPUT-OUTPUT works correctly. This system contains the following:

8 neurons for the 8 registers
14 neurons for the 14 labels (lh is saved, and 8 neurons are saved by the modules SUB-ADD and ADD-ADD)
1 neuron for the 13 SUB instructions
7 neurons for the module INPUT-OUTPUT

Therefore, there is a universal AWSN P system having 30 neurons that can be used to perform number generation.
6. Conclusions
In this work, a variant of SN P systems, called AWSN P systems, is proposed. Because of the use of anti-spikes, the proposed systems are more biologically plausible than SN P systems, modeling inhibitory spikes in the communication between neurons. An example is used to illustrate the working process of these systems. Their computational universality is then proved in the generating mode and the accepting mode, respectively. Finally, the Turing universality of small AWSN P systems is proved: a function computing device can be realized by using 34 neurons. Compared with the small universal SN P system using anti-spikes introduced by Song [17], the AWSN P system uses 13 fewer neurons; compared with the SN P systems with weighted synapses introduced by Pan [34], it uses 4 fewer neurons. The small universality of the AWSN P system as a number generator is established with 30 neurons, 6 fewer than in Pan's work [34].

The computational universality is proved for AWSN P systems with standard rules. Three types of spiking rules, a⟶a, a⟶ā, and ā⟶a, all time-dependent, and one type of forgetting rule, a⟶λ, are used. There are several future research directions. One direction is to investigate whether the computational power remains the same if only one or two types of spiking rules are used, or if the forgetting rules are not used, and whether AWSN P systems perform as well when the spiking rules are not time-dependent. These open problems need further study. Another future research direction is the application of the proposed systems. There have been studies using SN P systems with learning functions for letter recognition [36]; if a learning function were introduced into AWSN P systems, they might perform better in letter recognition.
Because the use of anti-spikes improves the ability of AWSN P systems to represent and process information, these systems may be able to solve more practical problems, which requires further research.