
An improved DBSCAN algorithm based on cell-like P systems with promoters and inhibitors.

Yuzhen Zhao1, Xiyu Liu1, Xiufeng Li1.   

Abstract

Density-based spatial clustering of applications with noise (DBSCAN) is a clustering algorithm that can find clusters of arbitrary shape while removing noise points. Membrane computing is a novel research branch of bio-inspired computing, which seeks to discover new computational models and frameworks from biological cells. The obtained parallel and distributed computing models are usually called P systems. In this work, the DBSCAN algorithm is improved by using the parallel evolution mechanism and hierarchical membrane structure of cell-like P systems with promoters and inhibitors, where promoters and inhibitors are utilized to regulate the parallelism of object evolution. Experimental results show that the proposed algorithm performs well in big data cluster analysis. The time complexity is improved to O(n), compared with O(n²) for conventional DBSCAN. The results give some hints on improving conventional algorithms by using the hierarchical framework and parallel evolution mechanism of membrane computing models.


Year:  2018        PMID: 30557333      PMCID: PMC6296794          DOI: 10.1371/journal.pone.0200751

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


1 Introduction

Cluster analysis is the process of partitioning a dataset into several clusters, such that intra-cluster data are similar and inter-cluster data are dissimilar. Cluster analysis is widely used in the fields of business intelligence [1, 2], Web search [3, 4], security [5, 6], biology [7, 8] and so on [9, 10] to discover implicit patterns or knowledge. As a subfield of data mining, cluster analysis can also be used as a stand-alone tool to obtain the data distribution, observe the characteristics of each cluster, deeply analyse special clusters, compress data (a cluster obtained by cluster analysis can be seen as a group) and so on. Furthermore, it can be used as a preprocessing step for other algorithms, which then operate on the resulting clusters or selected attributes [11].

The density-based spatial clustering of applications with noise (DBSCAN) algorithm is a density-based clustering algorithm, which clusters data points with sufficiently large density [12], and many significant improvements to it have been proposed [13-20]. The DBSCAN algorithm can recognize clusters of arbitrary shape, including oval clusters and "s"-shaped clusters; furthermore, noise points can be removed from the clusters. However, for big data processing, particularly for big data cluster analysis, the computational efficiency of DBSCAN needs to be improved.

Cell-like P systems with promoters and inhibitors are abstracted from the structure and function of the living cell. They have three main components: the membrane structure, multisets of objects evolving in a synchronous maximally parallel manner, and evolution rules. Objects in P systems evolve under a maximally parallel mechanism, regulated by promoters and inhibitors, such that the systems compute efficiently [21]. Therefore, cell-like P systems with promoters and inhibitors are a suitable tool to improve the computational efficiency of DBSCAN.
In this work, the DBSCAN algorithm is improved by using the parallel evolution mechanism and hierarchical structure of cell-like P systems with promoters and inhibitors. As a result, a so-called DBSCAN-CPPI algorithm is obtained. Specifically, core objects in the dataset are detected in parallel, regulated by a set of promoters and inhibitors. In addition, n + 1 membranes are used to store the detection results, and a specific output membrane is used to output the clustering result. Experimental results based on the Iris database of the UC Irvine Machine Learning Repository [22] and the banana database show that the proposed algorithm performs well in data clustering: it achieves an accuracy of 81.33% (the same as conventional DBSCAN), while the time complexity is reduced from O(n²) to O(n).

2 Preliminaries

In this section, some basic concepts and notions in DBSCAN and cell-like P systems with promoters and inhibitors are recalled [12, 23].

2.1 The DBSCAN algorithm

Density-based spatial clustering of applications with noise, known as DBSCAN for short, is a density-based clustering algorithm, which clusters data points having sufficiently large density. The following notions are used.

The ϵ neighborhood of an object is the space within the radius ϵ (ϵ > 0) centered at this object.

Core object: an object q is a core object if the number of objects in its ϵ neighborhood is greater than or equal to the threshold MinPts.

Directly density-reachable: an object p is directly density-reachable from a core object q if and only if p is in the ϵ neighborhood of q.

Density-reachable: an object p is density-reachable from an object q if and only if there is a sequence of objects p1, p2, …, pk such that p1 = q, pk = p, and each pi+1 is directly density-reachable from pi.

Noise: an object is a noise point if it does not belong to any cluster of the dataset.

The general procedure of DBSCAN is as follows.

Input: the dataset containing n objects, the neighborhood radius ϵ, the density threshold MinPts.

Step 1. All objects in the dataset are marked as "unvisited".

Step 2. An unvisited object p is chosen randomly, its mark is changed to "visited", and the number of objects in the ϵ neighborhood of p is counted to check whether p is a core object. If p is not a core object, it is marked as a noise point; otherwise, a new cluster C is built and p is added to this cluster. The objects that are in the ϵ neighborhood of p and do not belong to other clusters are added to this cluster, too.

Step 3. For each object p′ in cluster C, if p′ is unvisited, its mark is changed to "visited", and the number of objects in the ϵ neighborhood of p′ is counted to check whether p′ is a core object. If p′ is a core object, the objects that are in the ϵ neighborhood of p′ and do not belong to other clusters are added to cluster C.

Step 4. Steps 2 and 3 are repeated until all objects are visited.

Output: the clustering result.

Since dissimilarity is measured by the distance between two objects, the algorithm can be applied to various types of objects.
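The procedure above can be sketched in a few lines of Python. The following minimal implementation is illustrative only (the function and variable names are ours, not from the paper); it expands each cluster with a queue, which is equivalent to Steps 2-3:

```python
import math
from collections import deque

def dbscan(points, eps, min_pts):
    """Minimal DBSCAN sketch: returns one label per point
    (0, 1, ... for clusters, -1 for noise)."""
    n = len(points)
    # Precompute each point's eps-neighborhood (including itself).
    neigh = [[j for j in range(n)
              if math.dist(points[i], points[j]) <= eps]
             for i in range(n)]
    labels = [None] * n          # None = unvisited
    cluster = 0
    for i in range(n):
        if labels[i] is not None:
            continue
        if len(neigh[i]) < min_pts:      # not a core object
            labels[i] = -1               # tentatively noise
            continue
        # Grow a new cluster from core object i (Steps 2-3).
        labels[i] = cluster
        queue = deque(neigh[i])
        while queue:
            j = queue.popleft()
            if labels[j] == -1:          # border point, was marked noise
                labels[j] = cluster
            if labels[j] is not None:
                continue
            labels[j] = cluster
            if len(neigh[j]) >= min_pts:  # j is itself a core object
                queue.extend(neigh[j])    # so its neighbors join the cluster
        cluster += 1
    return labels
```

Border points that were tentatively marked as noise are relabelled when a later cluster reaches them, but their neighborhoods are not expanded, matching the standard algorithm.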

2.2 Cell-like P systems with promoters and inhibitors

Biological systems, such as cells, tissues, and human brains, exhibit deep computational intelligence. Biologically inspired computing, or bio-inspired computing for short, focuses on abstracting computing ideas from biological systems to construct computing models and algorithms [24-29]. Membrane computing is a novel research branch of bio-inspired computing, initiated by Gh. Păun in 2002, which seeks to discover new computational models from the study of biological cells, particularly of cellular membranes [23, 30]. The obtained models are distributed and parallel bio-inspired computing devices, usually called P systems. Three classes of P systems are mainly investigated: cell-like P systems [23], tissue P systems [31], and neural-like P systems [32] (and their variants, see e.g. [33-40]). It has been proved that many P systems are universal, that is, they can compute whatever a Turing machine can compute [41-46]. The parallel evolution mechanism of variants of P systems has been found to perform well in computation, even in solving computationally hard problems [47-51].

A cell-like P system with promoters and inhibitors consists of three main components: the hierarchical membrane structure, objects, and evolution rules. The membranes divide such a system into separated regions, in which objects (information carriers) and evolution rules (by which objects evolve into new objects) reside. Objects are represented by symbols from an alphabet or by strings of symbols. Evolution rules are executed in a non-deterministic and maximally parallel way in each membrane. A cell-like P system with promoters and inhibitors is defined by the following components.

– O is the alphabet, which includes all objects of the system.
– μ is a rooted tree (the membrane structure).
– wi describes the initial objects in membrane i; the symbol λ denotes the empty string, indicating that membrane i contains no objects.
– Ri is the set of rules in membrane i, each of the form u → v|α, where u is a string composed of objects in O, and v is a string over {ahere, aout, ainj | a ∈ O, 1 ≤ j ≤ t} (ahere means object a remains in membrane i, where the subscript here can be omitted; aout means object a goes out to the enclosing membrane; ainj means object a goes into the inner membrane j). The symbol α ∈ {z, ¬z′} is a promoter or an inhibitor: a rule with promoter z can be executed only when z appears in the membrane, and a rule with inhibitor z′ cannot be executed while z′ appears.
– ρ defines a partial order (priority) relation over the rules: when rules compete for objects, the rule with higher priority is executed first.
– iout is the membrane in which the computation result is placed.

In the system, rules are executed in a non-deterministic maximally parallel manner in each membrane. That is, at any step, if more than one rule can be executed but the objects in the membrane can only support some of them, a maximal multiset of rules is executed. Each P system contains a global clock as the timer, and the execution of one rule takes one time unit. The computation halts when no rule can be executed anywhere in the system. The computational result is represented by the types and numbers of specified objects in a specified membrane. Because objects in a P system evolve in a maximally parallel manner, the system computes very efficiently. For more details one can refer to [23].
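The rule semantics just described can be made concrete with a small sketch. The encoding below is our own (the `Rule` class and function names are assumptions, not notation from the paper); it models a single region as a multiset and applies rules greedily until none is applicable, which approximates one maximally parallel step; the priority relation ρ is not modelled:

```python
from collections import Counter
from dataclasses import dataclass
from typing import Optional

@dataclass
class Rule:
    """Hypothetical encoding of an evolution rule u -> v|alpha."""
    lhs: Counter                    # multiset u, consumed on application
    rhs: Counter                    # multiset v, produced after the step
    promoter: Optional[str] = None  # usable only if this object is present
    inhibitor: Optional[str] = None # blocked while this object is present

def applicable(rule: Rule, region: Counter) -> bool:
    if rule.promoter is not None and region[rule.promoter] == 0:
        return False
    if rule.inhibitor is not None and region[rule.inhibitor] > 0:
        return False
    return all(region[obj] >= k for obj, k in rule.lhs.items())

def max_parallel_step(region: Counter, rules: list) -> Counter:
    """One maximally parallel step in a single region: rules keep firing
    (consuming objects) until no rule is applicable; products only become
    available after the whole step, as in P systems."""
    produced = Counter()
    progress = True
    while progress:
        progress = False
        for rule in rules:
            if applicable(rule, region):
                region.subtract(rule.lhs)     # consume u
                produced.update(rule.rhs)     # v appears after the step
                progress = True
    region.update(produced)
    return +region                            # drop zero counts
```

Note that the promoter is only checked, never consumed, which matches the regulatory (rather than reactive) role of promoters and inhibitors.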

3 The improved DBSCAN algorithm based on cell-like P systems with promoters and inhibitors

In this section, the DBSCAN algorithm is improved by using the parallel evolution mechanism and hierarchical membrane structure of cell-like P systems with promoters and inhibitors, where promoters and inhibitors are utilized to regulate the parallelism of object evolution. The obtained algorithm is called DBSCAN-CPPI for short. Before introducing DBSCAN-CPPI, two matrices, the distance matrix and the dissimilarity matrix, are defined. Assume the dataset with n objects is X = {x1, x2, …, xn}, and the Euclidean distance is used to define dissimilarity. The distance matrix collects the distances dij between any two objects xi and xj. The dissimilarity matrix, denoted by D, is obtained from the distance matrix: if all elements of the distance matrix are integers, D equals the distance matrix; otherwise, each element fij of D is obtained by multiplying the corresponding distance by 100 and rounding off, thus yielding a natural number.
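The construction of the integer dissimilarity matrix D can be sketched as follows (an illustrative helper under our own naming, using Euclidean distance as in the text):

```python
import math

def dissimilarity_matrix(points):
    """Build the integer dissimilarity matrix D used by DBSCAN-CPPI:
    Euclidean distances, scaled by 100 and rounded when any distance
    is non-integer."""
    n = len(points)
    dist = [[math.dist(points[i], points[j]) for j in range(n)]
            for i in range(n)]
    if all(d == int(d) for row in dist for d in row):
        return [[int(d) for d in row] for row in dist]
    # otherwise multiply by 100 and round off to obtain natural numbers
    return [[round(100 * d) for d in row] for row in dist]
```

For example, points (0, 0) and (3, 4) give the integer matrix [[0, 5], [5, 0]], while points (0, 0) and (1, 1) give [[0, 141], [141, 0]] after scaling.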

3.1 The cell-like P system for improving DBSCAN

In general, for a clustering problem with n points, the dissimilarity matrix D, a neighborhood radius ϵ and a density threshold MinPts, a membrane structure with n + 3 membranes labelled by 0, 1, …, n + 2 is used as the framework for DBSCAN-CPPI, which is shown in Fig 1.
Fig 1

Membrane structure for the improved DBSCAN algorithm.

The dataset of objects to be processed is placed in membrane 0. Whether each point is a core object or not is determined in a parallel manner, using the parallel evolution mechanism of cell-like P systems. The detection results for the n objects are stored in membranes 1, 2, …, n, respectively. After that, using the maximally parallel mechanism, the detection results of the n objects can be read and moved into target membranes by the evolution rules. The clustering result is stored in membrane n + 2. Hence, compared with the conventional DBSCAN algorithm, the time spent on determining whether an object is a core object is reduced, since the results can be read directly in membrane 0. The cell-like P system with promoters and inhibitors for DBSCAN-CPPI is as follows.

– O = {xi, ai, Wij, W′ij, bi, cij, Ai, θ, θij, φk, β, E | 1 ≤ i, j ≤ n};
– μ = [0 [1]1 [2]2 … [n+2]n+2 ]0;
– w0 = θ, w1 = … = wn+2 = λ;
– iout = n + 2;
– ρ = {ri > rj | i < j};
– R0 is the set of rules in membrane 0.

Generally, r1, r2, …, r6 are used to find all core objects and their neighbors. Initially, x1, x2, …, xn are placed into membrane 0, and the system starts its computation. With xi in membrane 0, r1 generates fij copies of Wij and ϵ copies of W′ij, where ϵ is the neighborhood radius and fij represents the dissimilarity between xi and xj. The values fij are taken from D, and the value of ϵ is set by the user. After the execution of r1, objects Wij and W′ij are present, so that r2 can be used to cancel them pairwise. Two cases arise. If fij ≥ ϵ, then after using r2 there remain fij − ϵ copies of Wij; the remaining Wij are consumed in one step by applying r6 in parallel in membrane 0. This means xj is outside the neighborhood radius of xi. If fij < ϵ, then after the application of r2 there remain ϵ − fij copies of W′ij in membrane 0, which means xj is within the neighborhood radius of xi. In this case, r3 is applied to generate bi and cij.

Objects bi work as counters that count the number of points in the neighborhood of xi, and objects cij mark that xj is in the neighborhood of xi. The value of MinPts is set initially to define the minimal number of neighbors a core object should have. If there are at least MinPts copies of bi in membrane 0, meaning the number of neighbors of xi is large enough for it to become a core object, then r4 is used to generate Ai to distinguish the core object xi from the others. If the number of bi is less than MinPts, then xi is not a core object and bi is consumed by r5.

Rules r7, r8, …, r11 are used to separate objects into different clusters. An object Ai is chosen arbitrarily as a core object to build a new cluster i. By using r8, its neighbors aj that do not belong to other clusters are put into membrane i. If there are other core objects in its neighborhood, this process is repeated. When no further object belongs to cluster i, another core object Aj is chosen arbitrarily to build another cluster j. Object θ is an auxiliary object used to control the cycles. The remaining objects are put into membrane n + 1 as noise points by using r12. Objects β and φ1 are then placed into membranes 1 to n + 1 accordingly.

– R1, R2, …, Rn are the sets of rules in membranes 1, 2, …, n. Each membrane i, 1 ≤ i ≤ n, has the same set of rules: the object ai in the current membrane is appended to the end of the string β, while the auxiliary object φk controls the cycles.
– Rn+1 is the set of rules in membrane n + 1. An object ai in membrane n + 1 is a noise point, and E is added at the beginning of its string.
– Rn+2, the set of rules in membrane n + 2, is empty. Membrane n + 2 is used to output the final clustering result and therefore contains no rules.
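What rules r1 to r6 compute in parallel can be restated in conventional code. The sketch below (our own naming, not the system's rules themselves) reproduces the net effect of the cancellation of Wij by W′ij: xj is a neighbor of xi iff fij < ϵ, and xi is a core object iff it has at least MinPts neighbors:

```python
def detect_core_objects(D, eps, min_pts):
    """Net effect of rules r1-r6: for every pair (i, j), f_ij copies of
    W_ij cancel against eps copies of W'_ij, so x_j is a neighbor of x_i
    iff f_ij < eps; b_i counts neighbors, and x_i is a core object iff
    b_i >= MinPts."""
    n = len(D)
    # survivors of the pairwise cancellation (r2/r3)
    neighbors = [[j for j in range(n) if j != i and D[i][j] < eps]
                 for i in range(n)]
    # r4 marks core objects, r5 discards the counters otherwise
    core = [len(neighbors[i]) >= min_pts for i in range(n)]
    return core, neighbors
```

In the P system all n² pairs are processed simultaneously, which is where the speed-up over the sequential neighborhood scan of conventional DBSCAN comes from; the loop above is only a sequential rendering of that parallel step.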

3.2 An example

An example is used to show how the system works. Four data points (1, 1), (1, 2), (3, 2), (3, 3) are considered. Let ϵ = 2 and MinPts = 1. In this example, the squared Euclidean distance is chosen as the distance measure. Since all pairwise squared distances are integers, the dissimilarity matrix D4×4 is as follows.

D4×4 =
| 0 1 5 8 |
| 1 0 4 5 |
| 5 4 0 1 |
| 8 5 1 0 |

The computational process is shown in Table 1.
Table 1

The computational process of the example.

step | membrane 0 | membrane 1 | membrane 3 | membrane 6
0 | θ, x1, x2, x3, x4 (r1) | | |
1 | θ, a1, W12, W13^5, W14^8, W′12^2, W′13^2, W′14^2, a2, W21, W23^4, W24^5, W′21^2, W′23^2, W′24^2, a3, W31^5, W32^4, W34, W′31^2, W′32^2, W′34^2, a4, W41^8, W42^5, W43, W′41^2, W′42^2, W′43^2 (r2) | | |
2 | θ, a1, W13^3, W14^6, W′12, a2, W23^2, W24^3, W′21, a3, W31^3, W32^2, W′34, a4, W41^6, W42^3, W′43 (r3) | | |
3 | θ, a1, W13^3, W14^6, b1, c12, a2, W23^2, W24^3, b2, c21, a3, W31^3, W32^2, b3, c34, a4, W41^6, W42^3, b4, c43 (r4) | | |
4 | θ, a1, W13^3, W14^6, A1, c12, a2, W23^2, W24^3, A2, c21, a3, W31^3, W32^2, A3, c34, a4, W41^6, W42^3, A4, c43 (r6) | | |
5 | θ, a1, A1, c12, a2, A2, c21, a3, A3, c34, a4, A4, c43 (r7) | | |
6 | θ11, c12, a2, A2, c21, a3, A3, c34, a4, A4, c43 (r9) | a1 | |
7 | θ11, θ12, c21, a3, A3, c34, a4, A4, c43 (r10) | a1, a2 | |
8 | θ11, c21, a3, A3, c34, a4, A4, c43 (r11) | a1, a2 | |
9 | θ, c21, a3, A3, c34, a4, A4, c43 (r7) | a1, a2 | |
10 | θ33, c21, c34, a4, A4, c43 (r9) | a1, a2 | a3 |
11 | θ33, θ34, c21, c43 (r10) | a1, a2 | a3, a4 |
12 | θ33, c21, c43 (r11) | a1, a2 | a3, a4 |
13 | θ, c21, c43 (r13) | a1, a2 | a3, a4 |
14 | c21, c43 | a1, a2, φ1, β (r14) | a3, a4, φ1, β (r15) |
15 | c21, c43 | a1, a2, φ2, βa1 (r14) | a3, a4, φ2, β (r15) |
16 | c21, c43 | a1, a2, φ3, βa1a2 (r15) | a3, a4, φ3, β (r14) |
17 | c21, c43 | a1, a2, φ4, βa1a2 (r15) | a3, a4, φ4, βa3 (r14) |
18 | c21, c43 | a1, a2, φ5, βa1a2 (r16) | a3, a4, φ5, βa3a4 (r16) |
19 | c21, c43 | a1, a2 | a3, a4 | βa1a2, βa3a4
The four data points are divided into two clusters by the P system.
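The outcome of Table 1 can be reproduced with a short script (illustrative; the variable names are ours): building the dissimilarity matrix from the four points, detecting the core objects, and grouping density-reachable points yields the two clusters {x1, x2} and {x3, x4}.

```python
points = [(1, 1), (1, 2), (3, 2), (3, 3)]
eps, min_pts = 2, 1

# squared Euclidean dissimilarities, as in the example (all integers)
D = [[(p[0] - q[0])**2 + (p[1] - q[1])**2 for q in points] for p in points]

# neighbors (f_ij < eps) and core objects (at least MinPts neighbors)
neigh = [[j for j in range(4) if j != i and D[i][j] < eps] for i in range(4)]
core = [len(nb) >= min_pts for nb in neigh]

# group density-reachable points: simple expansion over neighbor links
clusters = []
unassigned = set(range(4))
while unassigned:
    seed = min(unassigned)
    group, frontier = set(), {seed}
    while frontier:
        i = frontier.pop()
        group.add(i)
        if core[i]:                       # only core objects expand
            frontier |= set(neigh[i]) - group
    clusters.append(sorted(group))
    unassigned -= group
print(clusters)   # [[0, 1], [2, 3]] -> clusters {x1, x2} and {x3, x4}
```

The printed grouping corresponds to the two output strings βa1a2 and βa3a4 in membrane 6 at step 19.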

3.3 Time complexity analysis

In this subsection, the time cost of DBSCAN-CPPI in the worst case is analyzed. Initially, 6 steps are needed to find all core objects and their neighbors by using r1 to r6 in a maximally parallel manner. Then, 3 steps are needed to put a core object and its neighbors into the corresponding cluster. In the worst case, all n objects are core objects, so 3n steps are needed to separate the n objects into different clusters. Subsequently, 2 steps (using r10 and r11) are needed to remove the auxiliary objects, 1 step is needed to find the noise points, and 1 step is needed to activate the rules in membranes 1, 2, …, n + 1. Up to this point, the time cost is 6 + 3n + 2 + 1 + 1 = 3n + 10 steps. The rules in membranes 1, 2, …, n + 1 are executed in parallel. By using r17 and r18, each object ai is appended to the string β in its corresponding membrane, which costs n steps. After that, by using r19, the string β is passed into the output membrane n + 2, which costs 1 step. Hence, n + 1 steps are needed to output the result. The total time is (3n + 10) + (n + 1) = 4n + 11 steps, so the time complexity is O(n). Comparison results between DBSCAN-CPPI and conventional/improved DBSCAN algorithms are shown in Table 2.
Table 2

Comparison results of the time complexity of some proposed DBSCAN algorithms.

algorithm | time complexity
DBSCAN [12] | O(n²)
Rough-DBSCAN [13] | O(n + k²)
DBSCAN using a pruning technique on bit vectors [14] | O(knmk + (1 − p) ∗ (n − 1) ∗ m)
A prototype-based modified DBSCAN [15] | max{O(nT), O(K′ ∗ tqm)}
G-DBSCAN [16] | O(n²)
BDE-DBSCAN [17] | O(n log n)
SS-DBSCAN [18] | O(n log n)
DBSCAN based on grid cell [19] | O(n + mk²)
DBSCAN with Spark [20] | O(n + Km)
DBSCAN-CPPI | O(n)
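The step count derived above can be checked mechanically. The helper below (our own naming) simply restates the bookkeeping of the worst case, in which all n objects are core objects:

```python
def worst_case_steps(n: int) -> int:
    """Worst-case number of steps of DBSCAN-CPPI, following the
    accounting in the text."""
    find_cores = 6          # r1-r6, executed in a maximally parallel manner
    separate   = 3 * n      # 3 steps per cluster, up to n clusters
    cleanup    = 2 + 1 + 1  # remove auxiliaries, mark noise, activate membranes
    output     = n + 1      # append objects to beta, send result to membrane n+2
    return find_cores + separate + cleanup + output   # = 4n + 11
```

The linear total 4n + 11 is what justifies the O(n) entry for DBSCAN-CPPI in Table 2.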

4 Experiments and analysis

4.1 Illustrative experiment

Take eighteen data points (4, 5), (3.7, 7), (4.5, 8), (4.5, 3), (5, 4), (5, 6), (5.5, 8), (6, 2.8), (6, 4), (6, 5.5), (6.5, 2), (7, 3), (10, 7), (10, 12), (11, 6), (11, 8), (12, 6.5), (12.5, 8) shown in Fig 2 as an example. Let ϵ = 5 and MinPts = 5.
Fig 2

The data points to be clustered.

The conventional DBSCAN algorithm is first used to cluster the data points. Two clusters are obtained, as shown in Fig 3. The proposed DBSCAN-CPPI is then tested on the same data points and obtains the same result as conventional DBSCAN.
Fig 3

The two clusters formed by the conventional algorithm.

4.2 Applied experiments

In this subsection, the Iris database and the banana database are used in the experiments.

The Iris database

The Iris database of the UC Irvine Machine Learning Repository [22] is used to test DBSCAN-CPPI. This database contains 150 records, numbered in order from 1 to 150. Each record contains four Iris property values and the corresponding Iris species. The records are divided into three species: data 1 to 50, data 51 to 100, and data 101 to 150, respectively. In the experiments, ϵ is set to 17 and MinPts to 5. The proposed DBSCAN-CPPI is tested by clustering the Iris database, and the clustering result is shown in Table 3. In this work, the cluster accuracy is defined as the ratio between the number of records that are correctly clustered and the total number of records in the database. The cluster accuracy obtained by DBSCAN-CPPI is 81.33%, which is as good as conventional DBSCAN.
Table 3

The 3 clusters and noise points on the Iris database using the DBSCAN-CPPI algorithm.

Cluster | Serial numbers of the data in the corresponding cluster
1 | 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,43,44,45,46,47,48,49,50
2 | 51,52,53,54,55,56,57,59,60,62,64,66,67,68,70,72,74,75,76,77,78,79,80,81,82,83,85,86,87,89,90,91,92,93,95,96,97,98,100
3 | 71,73,84,102,103,104,105,111,112,113,114,116,117,121,122,124,125,126,127,128,129,130,133,134,137,138,139,140,141,142,143,144,145,146,147,148,149,150
Noise points | 23,42,58,61,63,65,69,88,94,99,101,106,107,108,109,110,115,118,119,120,123,131,132,135,136
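The accuracy measure defined above admits a direct implementation: correctly clustered records over all records, with noise-labelled records never counting as correct. This is one plausible reading of the definition; the helper and its naming are ours (with -1 denoting noise):

```python
def cluster_accuracy(predicted, truth, noise=-1):
    """Fraction of records whose predicted cluster matches the true one;
    records marked as noise are never counted as correct."""
    correct = sum(1 for p, t in zip(predicted, truth)
                  if p == t and p != noise)
    return correct / len(truth)
```

For instance, with predicted labels [0, 0, -1, 1] against true labels [0, 0, 0, 1], the accuracy is 3/4 = 0.75, since the record marked as noise actually belongs to cluster 0.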

The banana database

This database, consisting of two banana-shaped clusters (shown in Fig 4), is used to test DBSCAN-CPPI. It contains 1000 records, numbered from 1 to 1000. Each record contains 2 property values, and the records are separated into two clusters: data 1 to 500 and data 501 to 1000, respectively. The value of ϵ is set to 26 and the value of MinPts to 10. The clustering result is shown in Fig 5 (yellow points are noise points; blue and red points represent the two clusters, respectively), and the accuracy is 87.00%, which is as good as conventional DBSCAN.
Fig 4

The banana shaped database.

Fig 5

The 2 clusters and noise points with DBSCAN algorithm.

4.3 Algorithm analysis

In this subsection, the sensitivity and clustering quality of DBSCAN-CPPI, compared with the classic k-means algorithm, are considered.

Sensitivity analysis

In the initialization of DBSCAN-CPPI, the values of ϵ and MinPts need to be set, which is usually done by experience. In the following, the relationship between different values of the two parameters and the accuracy is analyzed. The results are shown in Figs 6 and 7.
Fig 6

The cluster accuracy of different parameter values in the Iris database obtained by DBSCAN-CPPI.

Fig 7

The cluster accuracy of different parameter values in the banana database obtained by DBSCAN-CPPI.

From Figs 6 and 7, it can be seen that DBSCAN-CPPI is sensitive to the values of the two parameters. According to the simulation results, the best result on the Iris database is obtained when ϵ = 17 and MinPts = 3, 4, 5, 6, 7, and the best result on the banana database is obtained when ϵ = 26 and MinPts = 2, 3, …, 14.

Clustering quality analysis

We compare the clustering quality of DBSCAN-CPPI with that of the k-means algorithm on the Iris database. The clustering result of the k-means algorithm on the Iris database is shown in Table 4, with a cluster accuracy of 89.33%.
Table 4

The 3 clusters with k-means algorithm.

Cluster | Serial numbers of the data in the corresponding cluster
1 | 1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36,37,38,39,40,41,42,43,44,45,46,47,48,49,50
2 | 51,52,54,55,56,57,58,59,60,61,62,63,64,65,66,67,68,69,70,71,72,73,74,75,76,77,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,99,100,102,107,114,115,120,122,124,127,128,134,139,143,147,150
3 | 53,78,101,103,104,105,106,108,109,110,111,112,113,116,117,118,119,121,123,125,126,129,130,131,132,133,135,136,137,138,140,141,142,144,145,146,148,149
In the clustering result of the k-means algorithm, thirteen objects that should be in cluster 3 are placed in cluster 2, and two objects belonging to cluster 2 are placed in cluster 3. In contrast, with DBSCAN-CPPI, no object is placed in a wrong cluster. The k-means algorithm is also used on the banana database. The clustering result is shown in Fig 8 (yellow points are points assigned to wrong clusters). The cluster accuracy is 75.10%.
Fig 8

The 2 clusters with k-means algorithm on banana database.

The accuracy of DBSCAN-CPPI on the banana database is 11.9 percentage points higher than that of the k-means algorithm. The k-means algorithm divides the "two bananas" in the middle, so many points are misclassified, while DBSCAN-CPPI marks 124 points as noise points and misclassifies only 6 points.

5 Conclusions

In this work, an improved DBSCAN algorithm, named DBSCAN-CPPI, is proposed by using the parallel evolution mechanism and hierarchical membrane structure of cell-like P systems with promoters and inhibitors. The time complexity is improved to O(n), compared with O(n²) for conventional DBSCAN. Experimental results based on the Iris database and the banana database show that (1) DBSCAN-CPPI performs well on these two databases: it can find clusters of arbitrary shape, and the clustering results are better especially when the clusters are not spherical; (2) DBSCAN-CPPI is suitable for big data cluster analysis due to its low time complexity. The results give some hints on improving conventional algorithms by using the hierarchical framework and parallel evolution mechanism of membrane computing models. For further research, it is of interest to use neural-like membrane computing models, see e.g. [52-55], to improve the DBSCAN algorithm. A possible way is to use the memory mechanism of neural computing models to store some potential clustering results and then select the best one as the computing result. Some other algorithms can also be improved by using the parallel evolution mechanism and hierarchical membrane structure [56, 57].