
A Two-Parameter Fractional Tsallis Decision Tree.

Jazmín S De la Cruz-García1, Juan Bory-Reyes2, Aldo Ramirez-Arellano1.   

Abstract

Decision trees are decision support data mining tools that create, as the name suggests, a tree-like model. The classical C4.5 decision tree, based on the Shannon entropy, uses a simple algorithm: compute the gain ratio from this entropy measure and split the attributes accordingly. The Tsallis and Renyi entropies (instead of Shannon) can be employed to generate a decision tree with better results. In practice, the entropic index parameter of these entropies is tuned to outperform the classical decision trees. However, this tuning is carried out by testing a range of values for a given database, which is time-consuming and unfeasible for massive data. This paper introduces a decision tree based on a two-parameter fractional Tsallis entropy. We propose a constructionist approach to the representation of databases as complex networks, which enables an efficient computation of the parameters of this entropy using the box-covering algorithm and renormalization of the complex network. The experimental results support the conclusion that the two-parameter fractional Tsallis entropy is a more sensitive measure than its parametric Renyi, Tsallis, and Gini index precedents for a decision tree classifier.

Keywords:  Gini index; complex networks; decision trees; two-parameter Tsallis entropy

Year:  2022        PMID: 35626457      PMCID: PMC9141694          DOI: 10.3390/e24050572

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.738


1. Introduction

Entropy is a measure of the unpredictability of the state of a physical system: the amount of information that would be needed to specify its micro-structure in full. Claude Elwood Shannon [1] defined an entropy measure for the amount of information in a digital system in the context of communication theory; it has since been applied in a variety of fields such as information theory, complex networks, and data mining. The most widely used form of the Shannon entropy is $H = -\sum_{i=1}^{N} p_i \ln p_i$, where N is the number of possibilities and $p_i$ is the probability of the i-th one. Two celebrated generalizations of Shannon entropy are the Renyi [2] and Tsallis [3] entropies. Alfred Renyi proposed a universal formula to define a family of entropy measures, given by the expression $R_q = \frac{1}{1-q}\ln\sum_{i=1}^{N} p_i^q$ [2], where q denotes the order of the moments. Constantino Tsallis proposed the q-logarithm, defined by $\ln_q x = \frac{x^{1-q}-1}{1-q}$, to introduce a physical entropy given by $S_q = \frac{1}{q-1}\left(1 - \sum_{i=1}^{N} p_i^q\right)$ [3]. Tsallis entropy can be rewritten [4,5,6] as $S_q = -\hat{D}_q \sum_{i=1}^{N} p_i^t \,\big|_{t=1}$, where $\hat{D}_q f(t) = \frac{f(qt) - f(t)}{qt - t}$, $q \neq 1$, stands for the Jackson [7] q-derivative of a function f, which reflects that it is an extension of Shannon entropy. The Renyi and Tsallis entropy measures depend on the parameter q, which describes their deviation from the standard Shannon entropy; both converge to Shannon entropy in the limit $q \to 1$. For complex network applications [8] and data mining techniques [9,10,11,12,13,14,15,16,17], the parameter q varies over a range of values. On the other hand, the computation of the entropic index q of the Tsallis entropy was implemented for physics applications in [18,19,20,21,22,23,24,25]. Shannon and Tsallis entropies can be obtained by applying the standard derivative or the q-derivative, respectively, to the same generating function $\sum_{i=1}^{N} p_i^t$ with respect to the variable t and then letting $t \to 1$. This approach can be used to reveal different entropy measures based on the actions of appropriate fractional-order differentiation operators [26,27,28,29,30,31,32].
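The convergence of both generalizations to the Shannon entropy as q approaches 1 can be checked numerically. The following is a minimal Python sketch of the textbook formulas above (not code from the paper):

```python
import math

def shannon(p):
    """Shannon entropy H = -sum p_i ln p_i (natural logarithm)."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def renyi(p, q):
    """Renyi entropy R_q = ln(sum p_i^q) / (1 - q), for q != 1."""
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

def tsallis(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1), for q != 1."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

p = [0.5, 0.25, 0.25]
# Both generalized entropies approach the Shannon value as q -> 1.
for q in (0.9, 0.99, 0.999):
    print(q, renyi(p, q), tsallis(p, q))
print("Shannon:", shannon(p))
```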
The major goal of this work is to introduce a new decision tree based on a two-parameter fractional Tsallis entropy. This new kind of tree is tested on twelve databases for a classification task. The structure of the paper is as follows. Section 2 focuses attention on the notion of two-parameter fractional Tsallis entropy. In Section 3, two-parameter fractional Tsallis decision trees and a constructionist approach to the representation of databases as complex networks are introduced. The basic facts on the box-covering algorithm of a complex network are reviewed. Finally, we compute an approximation set of the parameters q, α, and β of the two-parameter fractional Tsallis entropy. Section 4 is concerned with the testing of two-parameter fractional Tsallis decision trees on twelve databases. Next, the approximations of the q values are tested on the Renyi and Tsallis entropies. Discussion of the findings of this study and concluding remarks are offered in Section 5.

2. Two-Parameter Fractional Tsallis Entropy

Based on the actions of fractional-order differentiation operators, several entropy measures of fractional order are introduced in [26,27,28,29,30,33,34,35,36,37,38,39,40,41,42,43]. Following this approach, in [22] the two-parameter fractional Tsallis entropy is introduced by merging two typical examples of fractional entropies: a first fractional entropy of order α, given in (6), and a second of order β, given in (7), where Γ(·) denotes the gamma function. Combining (6) with (7) yields the two-parameter fractional relative entropy (10) of [31], defined for the two orders α and β. The entropy (10) reduces to (6) or to (7) when one of the parameters is fixed at its limit value, and the Shannon entropy is recovered in the limit of both parameters. Analogously, two extra parameter-dependent Tsallis entropies, (11) and (12), are introduced in [22]. Combining these entropies, and motivated by (10), we obtain the two-parameter fractional Tsallis entropy (13) of [22]. Note that the Tsallis entropy is recovered when α = β = 1. This implies that the non-extensivity of the Tsallis entropy [44] forces the two-parameter fractional Tsallis entropy to be non-extensive as well.

3. Parametric Decision Trees

A decision tree is a supervised data mining technique that creates a tree-like structure, where each non-leaf node tests a given attribute [45]. The outcome gives us the path to reach a leaf node, where the classification label is found. For example, let (x = 3, y = 1) be a tuple to be classified by the decision tree of Figure 1. Testing the condition at the root node, we must follow the left path to reach the next test and finally arrive at the leaf node with the classification label “a”.
Figure 1

A decision tree for the classification task.

In general, the cornerstone of the construction process of decision trees is the evaluation of all attributes to find the best node, and the best split condition on this node, to classify the tuples with the lowest error rate. This evaluation is carried out by the information gain on each attribute a [45], $G(a, c) = I(D) - I_c(D)$ (14), where $I_c(D)$ is the entropy of the database D after being partitioned by the condition c on a given attribute a, and $I(D)$ is the entropy before the partition. The tree’s construction evaluates several partition conditions c on all attributes of the database, then chooses the attribute–condition pair with the highest value. Once a pair is chosen, the process evaluates the partitioned database recursively using a different attribute–condition pair. The reader is referred to [45] for details on decision tree construction and the computation of (14).
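As an illustration of (14), here is a small, hypothetical sketch of the information gain of a numeric split condition, with a pluggable entropy so that any of the measures discussed below could be substituted; the function and field names are ours, not the paper’s:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr, threshold):
    """Gain of the split condition `attr <= threshold`:
    entropy before the split minus the size-weighted entropy after it."""
    left = [y for x, y in zip(rows, labels) if x[attr] <= threshold]
    right = [y for x, y in zip(rows, labels) if x[attr] > threshold]
    n = len(labels)
    after = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(labels) - after

rows = [{"x": 1}, {"x": 2}, {"x": 3}, {"x": 4}]
labels = ["a", "a", "b", "b"]
print(information_gain(rows, labels, "x", 2))  # perfect split -> gain 1.0
```

A tree builder would evaluate this gain over all candidate attribute–threshold pairs and recurse on the winner.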

3.1. Renyi and Tsallis Decision Trees

In classical decision trees, I in (14) denotes the Shannon entropy; however, other entropies such as Renyi or Tsallis can replace it. Thus, (14) can be written using the Renyi entropy (2), yielding (15), and using the Tsallis entropy (4), yielding (16). The parametric decision trees generated by (15) or (16) have been studied in [9,10,11,12,13,14].

3.2. Two-Parameter Fractional Tsallis Decision Tree

Following a similar fashion, a two-parameter fractional decision tree can be induced by the information gain obtained by rewriting (14) using (13). An alternative informativeness measure for constructing decision trees is the Gini index, or Gini coefficient, calculated by $G = 1 - \sum_{i=1}^{N} p_i^2$, which can be deduced from the Tsallis entropy (4) by setting q = 2 [14]. On the other hand, the two-parameter fractional Tsallis entropy with q = 2 and α = β = 1 reduces to the Gini index. Hence, Gini decision trees are a particular case of both Tsallis and two-parameter fractional Tsallis trees. The main issue with Renyi and Tsallis decision trees is the estimation of the q-value needed to obtain a better classification than the one produced by the classical decision trees. Trial and error is the accepted approach for this purpose: several values in a given interval are tested, and the classification rates are compared. This approach becomes unfeasible for two-parameter fractional Tsallis decision trees, since q, α, and β must all be tuned. A representation of the database as a complex network is introduced to face this issue. This representation lets us compute q, α, and β following the approach in [22], which is the basis for determining the fractional decision tree parameters.
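The reduction of the Tsallis entropy to the Gini index at q = 2 is easy to verify numerically. A minimal sketch, assuming only the two textbook formulas:

```python
def tsallis(p, q):
    """Tsallis entropy S_q = (1 - sum p_i^q) / (q - 1)."""
    return (1.0 - sum(pi ** q for pi in p)) / (q - 1.0)

def gini(p):
    """Gini index G = 1 - sum p_i^2."""
    return 1.0 - sum(pi * pi for pi in p)

# At q = 2 the denominator is 1, so the two expressions coincide exactly.
p = [0.5, 0.3, 0.2]
assert abs(tsallis(p, 2.0) - gini(p)) < 1e-12
print(gini(p))  # 1 - (0.25 + 0.09 + 0.04) = 0.62
```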

3.3. Network’s Construction

A network is a powerful tool to model the relationships among entities or parts of a system. When those relationships are complex, i.e., exhibit properties that cannot be found by examining single components, what emerges is called a complex network. Thus, networks as the skeleton of complex systems [46] have attracted considerable attention in different areas of science [47,48,49,50,51]. Following this approach, a representation of the relationships among the attributes (system entities) of a database (system) as a network is obtained. The attribute’s name is concatenated before the value of a given row, to distinguish equal values that might appear in different attributes. Consider the first record of the database shown at the top of Figure 2: its name, phone, and ZIP values become the first three nodes (for instance, the node ZIP.08510 for its ZIP value). These nodes belong to the same record, so they must be connected; see the dotted lines of the network in the middle of Figure 2. We next consider the second record, whose name and phone values are added as new nodes to the network; note that its ZIP node was already added in the previous step. We may now add the links between these three nodes. This procedure is repeated for each record in the database.
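A minimal sketch of this construction follows. The record values are hypothetical (only PHONE.54-76-90 and ZIP.08510 appear in the text), chosen so that, as described later, the ZIP node is shared by all three records and the phone node by the first and third:

```python
from itertools import combinations

def database_to_network(records):
    """Build an undirected graph with one node per 'ATTRIBUTE.value' string;
    all nodes of the same record are pairwise connected (a clique per record)."""
    nodes, edges = set(), set()
    for record in records:                      # record: dict attribute -> value
        names = [f"{a}.{v}" for a, v in record.items()]
        nodes.update(names)
        for u, v in combinations(sorted(names), 2):
            edges.add((u, v))                   # sorted pair deduplicates edges
    return nodes, edges

records = [
    {"NAME": "Ann",  "PHONE": "54-76-90", "ZIP": "08510"},
    {"NAME": "Bob",  "PHONE": "11-22-33", "ZIP": "08510"},
    {"NAME": "Carl", "PHONE": "54-76-90", "ZIP": "08510"},
]
nodes, edges = database_to_network(records)
print(len(nodes), len(edges))  # 6 nodes, 8 edges
```

Shared attribute values (here the ZIP node) are what link different records together in the resulting network.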
Figure 2

Network construction from a database. The nodes in the same color belong to the same box for a given box diameter l.

The outcome is a complex network that exhibits non-trivial topological features [52], which cannot be predicted by analyzing single nodes, as occurs in random graphs or lattices [53].

3.4. Computation of Two-Parameter Fractional Tsallis Decision Tree Parameters

By the technique introduced in [22], the parameters α and β of the two-parameter fractional Tsallis decision tree are defined on the network representation of the database via (19) and (20), in terms of the number of nodes in each box b obtained by the box-covering algorithm [54], the number of nodes n of the network, and the average degree of the nodes of the box. Similarly, two values of β are computed via (21) and (22) [22], in terms of the diameter of the box b, the diameter δ of the network, and the number of links among the boxes. The computation of these quantities will be explained later. Inspired by the right-hand term of (19) and (20) (named α_l), together with the fact that Nb(l)/n is a normalized measure of the number of boxes needed to cover the network [20], an approximation of the q-value for the two-parameter fractional decision tree is introduced in (23). Similarly, from the right-hand term of (21) and (22) (named β_l), a second approximation of the q-value is given in (24), where Nb(l) is the minimum number of boxes of diameter l needed to cover the network of n nodes and diameter δ. The process to compute the minimum number of boxes of diameter l to cover the network G is shown in Figure 3. A dual network (G′) is created with only the nodes of the original network; see Figure 3b. Then, links are added to G′ following the rule: two nodes i, j of the dual network are connected if the distance between them in G is greater than or equal to l. In our example, l = 3, and node one is selected to start. Node one will be connected in G′ with nodes five and six, since their distances are four and three. The procedure is repeated with the remaining nodes to obtain the dual network shown in Figure 3b. Next, the nodes are colored so that two directly connected nodes in G′ never share a color. Finally, the node colors of G′ are mapped back to G; see Figure 3c. The minimum number of boxes to cover the network for a given l equals the number of colors in G, and the nodes with the same color belong to the same box. The resulting Nb(l) values of the example are shown in Table 1.
For details of the box-covering algorithm, the reader is referred to [54].
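The dual-network coloring just described can be sketched with a greedy sequential coloring; note that greedy coloring only approximates the minimum number of boxes, and the 6-node path network below is a hypothetical stand-in, not the exact network of Figure 3:

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src by breadth-first search."""
    dist, q = {src: 0}, deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def box_covering(adj, l):
    """Greedy box covering: build a dual graph linking node pairs whose
    distance in the original network is >= l, then color it greedily;
    nodes sharing a color form one box of diameter < l."""
    nodes = sorted(adj)
    dist = {u: bfs_dist(adj, u) for u in nodes}
    dual = {u: {v for v in nodes
                if v != u and dist[u].get(v, float("inf")) >= l}
            for u in nodes}
    color = {}
    for u in nodes:                      # smallest color unused by dual neighbors
        taken = {color[v] for v in dual[u] if v in color}
        color[u] = next(c for c in range(len(nodes)) if c not in taken)
    boxes = {}
    for u, c in color.items():
        boxes.setdefault(c, set()).add(u)
    return list(boxes.values())

# Hypothetical 6-node path network 1-2-3-4-5-6.
adj = {i: set() for i in range(1, 7)}
for i in range(1, 6):
    adj[i].add(i + 1); adj[i + 1].add(i)
print(len(box_covering(adj, 3)))  # Nb(3) = 2 boxes of diameter < 3
```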
Figure 3

Box covering of a network for l = 3. (a) Original network. (b) Dual network. (c) Colouring process. (d) Mapping colours to the original network.

Table 1

The results of Nb(l) and δ from the network of Figure 3, and the “pseudo matrix” of α_{l,i}.

l | Nb(l) | δ     | α_{l,1} | α_{l,2} | α_{l,3}
1 | 6     | -     | -       | -       | -
2 | 3     | 0.107 | α_{2,1} | α_{2,2} | α_{2,3}
3 | 2     | 0.107 | α_{3,1} | α_{3,2} | -
4 | 2     | 0.143 | α_{4,1} | α_{4,2} | -
5 | 1     | -     | -       | -       | -
Now, we are ready to compute the average degree of the boxes. Following the previous example for l = 3, two boxes were found; see Figure 4a. The average degree of a box is the average number of links per node among the nodes of that box; for this reason, the link between nodes four and six is omitted in this computation. The average degree of the second box is computed similarly. The number of links among boxes is the degree of each node of the renormalized network; see the network of Figure 4b.
Figure 4

Renormalization of a network. (a) Grouping nodes into boxes. (b) Converting boxes into supernodes.

In our example, the renormalization converts each box into a supernode, preserving the connections between boxes. On the other hand, it is known that Nb(1) = n and Nb(δ + 1) = 1: in the first case, each box contains a single node, and in the second, one box containing all nodes covers the network. For this reason, α and β are not defined for l = 1 and l = δ + 1, respectively. This forces 2 ≤ l ≤ δ, as was stated in (19)–(24). Additionally, note that the right-hand terms of (19) and (20) (α_l) and of (21) and (22) (β_l) are “pseudo matrices”, where each row has Nb(l) values; see Table 1. Consequently, q_α and q_β are also “pseudo matrices”. The network represents the relationships between the attribute values (nodes) of each record and the relationships between different database records. For example, the dotted lines in Figure 2 show the relationships between the first record’s attribute values. The links of the node ZIP.08510 are relationships among the three records, and the links of PHONE.54-76-90 are relationships between the first and third ones. The box-covering algorithm groups these relationships into boxes (network records). The network in the middle of Figure 2 shows that the three boxes (in orange, green, and blue) coincide with the number of records in the database. However, the attribute values of each box do not coincide entirely with the records in the database, since box-covering finds the minimum number of mutually exclusive boxes with the maximum number of attributes. The nodes in each box (network record) are enough to differentiate the records in the database. For example, the first network record consists of name, phone, and ZIP values (nodes in orange). The second record in the database can be differentiated from the first by its name and phone (the values of those attributes form the second network record, in green). The third one can be distinguished from the two others by its name (the third network record, in blue).
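The renormalization step described above (each box collapsed into a supernode, inter-box links preserved) can be sketched as follows; the path network and its two boxes are hypothetical stand-ins, not the paper’s Figure 4 example:

```python
def renormalize(adj, boxes):
    """Collapse each box into one supernode; two supernodes are linked when
    any pair of their member nodes is linked in the original network."""
    owner = {u: i for i, box in enumerate(boxes) for u in box}
    super_adj = {i: set() for i in range(len(boxes))}
    for u, nbrs in adj.items():
        for v in nbrs:
            if owner[u] != owner[v]:            # keep only inter-box links
                super_adj[owner[u]].add(owner[v])
    return super_adj

# Hypothetical 6-node path 1-2-3-4-5-6 covered by two boxes of diameter < 3.
adj = {i: set() for i in range(1, 7)}
for i in range(1, 6):
    adj[i].add(i + 1); adj[i + 1].add(i)
print(renormalize(adj, [{1, 2, 3}, {4, 5, 6}]))  # {0: {1}, 1: {0}}
```

The degree of each supernode in the returned graph is the number of links its box has to other boxes.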
The cost of differentiating the first network record, measured by the average degree of its box, is the highest; meanwhile, the lowest is for the third. Thus, α measures the local differentiation cost of the network records. On the other hand, β measures the global differentiation cost, through the links among boxes. For example, the global cost for the first network record is two, and one for the second and third; see the renormalized network at the bottom of Figure 2. It means that the first network record needs to be differentiated from two network records, while the second and third only need to be distinguished from the first. Note that α_l and β_l, for a given l, rely on the network topology, which captures the relationships of the records and their values. Finally, q_α is the ratio between the network records (the normalized number of boxes Nb(l)/n) and the local differentiation cost; meanwhile, q_β is the ratio between the network records and the global differentiation cost.

4. Methodology

Twelve databases (from biological, technological, and social disciplines) from the UCI repository [55] were used in the experiments; see Table 2. Their numbers of records, attributes, and classes are representative. Once a network was obtained from a database, the q, α, and β parameters of the fractional Tsallis decision tree were approximated by the following four sets: {⟨qα⟩, ⟨α1⟩, ⟨β1⟩}, {⟨qα⟩, ⟨α2⟩, ⟨β2⟩}, {⟨qβ⟩, ⟨α1⟩, ⟨β1⟩}, and {⟨qβ⟩, ⟨α2⟩, ⟨β2⟩}, where ⟨·⟩ denotes the average value of the pseudo matrices obtained by (19)–(24).
Table 2

Database and network features. N = nominal, U = numerical, M = mixed.

Database      | Records | Attributes | Type | Classes | Balanced | Nodes  | Edges
Breast Cancer | 699     | 9          | N    | 2       | No       | 737    | 1276
Car           | 1728    | 6          | N    | 4       | Yes      | 25     | 70
Cmc           | 1473    | 9          | M    | 3       | No       | 74     | 264
Glass         | 214     | 10         | U    | 7       | No       | 1159   | 1743
Haberman      | 306     | 3          | U    | 2       | No       | 94     | 395
Hayes         | 160     | 5          | N    | 3       | No       | 150    | 186
Image         | 2310    | 19         | U    | 7       | Yes      | 12,705 | 24,411
Letter        | 20,000  | 16         | U    | 16      | Yes      | 282    | 2700
Scale         | 625     | 4          | N    | 3       | No       | 23     | 90
Vehicle       | 946     | 18         | U    | 4       | Yes      | 1434   | 8064
Wine          | 178     | 13         | U    | 3       | No       | 1279   | 2239
Yeast         | 1484    | 9          | M    | 10      | No       | 1917   | 4907
The network can be obtained from a raw database or after it has been discretized. Since the classification—measured by the area under the receiver operating characteristic curve (AUROC) and the Matthews correlation coefficient (MCC)—was better using the approximations computed on the networks from discretized databases, only these approximations are reported. The attribute discretization of a database can be found in [56]; the technique is unsupervised and uses equal-frequency binning. The discretized databases were used only to obtain the networks, so the classification task was carried out using the original databases. The networks obtained from the discretized and non-discretized databases turned out to be different; see Figure 5.
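A minimal sketch of unsupervised equal-frequency binning of one numeric attribute; the discretization in [56] may differ in details such as tie handling and bin count selection:

```python
def equal_frequency_bins(values, k):
    """Equal-frequency discretization: rank the values and cut the ranking
    into k bins holding (roughly) the same number of records each."""
    n = len(values)
    order = sorted(range(n), key=lambda i: values[i])  # indices by value
    bins = [0] * n
    for rank, i in enumerate(order):
        bins[i] = min(rank * k // n, k - 1)            # bin id 0 .. k-1
    return bins

# Six values cut into three bins of two records each.
print(equal_frequency_bins([7.0, 1.0, 3.0, 9.0, 5.0, 2.0], 3))
# -> [2, 0, 1, 2, 1, 0]
```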
Figure 5

The networks from (a) non-discretized and (b) discretized vehicle database.

The classification task was performed by the classical, Renyi, Tsallis, Gini, and two-parameter fractional Tsallis decision trees on each database. We used 10-fold cross-validation repeated ten times to calculate the AUROC and MCC. The best AUROC and MCC values produced by one of the four sets of parameters used to approximate q, α, and β of the fractional Tsallis decision trees were chosen and compared with those of the classical and Gini decision trees. In the same way, ⟨qα⟩ or ⟨qβ⟩ was chosen for the q parameter of the Renyi and Tsallis trees, and their AUROCs and MCCs were compared with those of the classical trees. It is known that decision trees can produce non-normally distributed AUROC and MCC measures [57]. Hence, normality was verified by the Kolmogorov–Smirnov test, and the measures were compared using a t-test or a Mann–Whitney U test, according to their normality [10,57,58,59].

5. Applications

The approximations of the q, α, and β parameters computed on the discretized databases are shown in Table 3. Table 4 shows the AUROC and MCC of the classical and two-parameter fractional Tsallis decision trees and the result of the statistical comparison. In addition, the values of the parameters of the fractional Tsallis decision trees are reported.
Table 3

The parameters of the fractional Tsallis decision tree were obtained using the networks from discretized databases.

Database      | ⟨qα⟩  | ⟨qβ⟩  | ⟨α1⟩  | ⟨β1⟩  | ⟨α2⟩  | ⟨β2⟩
Breast Cancer | 0.173 | 0.189 | 1.147 | 1.134 | 0.853 | 0.866
Car           | 0.303 | 0.347 | 1.137 | 1.120 | 0.863 | 0.880
Cmc           | 0.169 | 0.185 | 1.152 | 1.138 | 0.848 | 0.862
Glass         | 0.171 | 0.187 | 1.154 | 1.141 | 0.846 | 0.859
Haberman      | 0.344 | 0.420 | 1.333 | 1.273 | 0.667 | 0.727
Hayes         | 0.269 | 0.310 | 1.231 | 1.200 | 0.769 | 0.800
Image         | 0.117 | 0.123 | 1.056 | 1.054 | 0.944 | 0.946
Letter        | 0.155 | 0.165 | 1.05  | 1.047 | 0.950 | 0.953
Scale         | 0.352 | 0.421 | 1.217 | 1.182 | 0.783 | 0.818
Vehicle       | 0.092 | 0.096 | 1.106 | 1.101 | 0.894 | 0.899
Wine          | 0.119 | 0.127 | 1.147 | 1.138 | 0.853 | 0.862
Yeast         | 4.574 | 5.081 | 1.003 | 1.003 | 0.997 | 0.997
Table 4

The AUROC and MCC of classical (CT) and two-parameter fractional Tsallis decision trees (TFTT) and their parameters q, α, β. + means that the AUROC or MCC is statistically greater than that of CT.

Database      | CT AUROC | TFTT AUROC | CT MCC | TFTT MCC | q     | α     | β     | Param. set
Breast Cancer | 0.959    | 0.964 +    | 0.889  | 0.967 +  | 0.173 | 0.853 | 0.866 | ⟨qα⟩, ⟨α2⟩, ⟨β2⟩
Car           | 0.981    | 0.982      | 0.892  | 0.912 +  | 0.347 | 0.863 | 0.880 | ⟨qβ⟩, ⟨α2⟩, ⟨β2⟩
Cmc           | 0.691    | 0.714 +    | 0.315  | 0.349 +  | 0.169 | 1.152 | 1.138 | ⟨qα⟩, ⟨α1⟩, ⟨β1⟩
Glass         | 0.794    | 0.874 +    | 0.56   | 0.673 +  | 0.171 | 1.154 | 1.141 | ⟨qα⟩, ⟨α1⟩, ⟨β1⟩
Haberman      | 0.579    | 0.610 +    | 0.18   | 0.156    | 0.344 | 1.333 | 1.273 | ⟨qα⟩, ⟨α1⟩, ⟨β1⟩
Hayes         | 0.869    | 0.895 +    | 0.578  | 0.645 +  | 0.269 | 1.231 | 1.200 | ⟨qα⟩, ⟨α1⟩, ⟨β1⟩
Image         | 0.994    | 0.992      | 0.982  | 0.978    | 0.123 | 1.056 | 1.054 | ⟨qβ⟩, ⟨α1⟩, ⟨β1⟩
Letter        | 0.969    | 0.974 +    | 0.912  | 0.934 +  | 0.155 | 0.950 | 0.953 | ⟨qα⟩, ⟨α2⟩, ⟨β2⟩
Scale         | 0.845    | 0.861 +    | 0.678  | 0.703 +  | 0.421 | 1.217 | 1.182 | ⟨qβ⟩, ⟨α1⟩, ⟨β1⟩
Vehicle       | 0.762    | 0.755      | 0.395  | 0.387    | 0.092 | 0.894 | 0.899 | ⟨qα⟩, ⟨α2⟩, ⟨β2⟩
Wine          | 0.968    | 0.977 +    | 0.933  | 0.957 +  | 0.119 | 1.147 | 1.138 | ⟨qα⟩, ⟨α1⟩, ⟨β1⟩
Yeast         | 0.743    | 0.733      | 0.462  | 0.463    | 4.574 | 0.997 | 0.997 | ⟨qα⟩, ⟨α2⟩, ⟨β2⟩
The two-parameter fractional Tsallis decision tree outperforms the AUROC and MCC of the classical trees for eight databases. The statistical results of the two measures disagree for Car and Haberman. The AUROC of the two-parameter fractional Tsallis tree was equal to that of the classical trees for Car, Image, Vehicle, and Yeast; meanwhile, for Haberman, Image, Vehicle, and Yeast, the MCC of both trees showed no difference. Tsallis entropy is a non-extensive measure [60], as is the two-parameter fractional Tsallis entropy [22]; on the contrary, Shannon entropy is extensive. The super-extensive property is given by q < 1, and the sub-extensive property by q > 1. Note that the approximations of the q parameter are below one for all the databases except Yeast; see Table 3. Thus, they can be considered candidates for being named super-extensive databases. We say that a database is super-extensive if q < 1 and this value produces a better classification (AUROC, MCC, or another measure) than the classical trees (based on Shannon entropy). Similarly, a database is sub-extensive if q > 1 and this value produces a better classification. Otherwise, the database is extensive, since in this case the Shannon entropy (the cornerstone of classical trees) is a less complex measure than the two-parameter fractional Tsallis entropy, and hence Shannon entropy must be preferred; the two-parameter fractional Tsallis trees still produce classifications equal to or better than those of the classical trees. Following those conditions, based on MCC, Breast Cancer, Car, Cmc, Glass, Hayes, Letter, Scale, and Wine are super-extensive; meanwhile, Haberman, Image, Vehicle, and Yeast can be classified as extensive. The AUROC and MCC of the Renyi and Tsallis decision trees are compared with the baseline of the classical ones. The ⟨qα⟩ and ⟨qβ⟩ were tested as the entropic index of both parametric decision trees. The parameters of Renyi (qr) and Tsallis (qt) that produce the better AUROCs and MCCs are reported in Table 5.
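The labeling rule just described can be stated compactly. A minimal sketch (the function name is ours; the usage values are taken from Tables 3 and 4, MCC comparison):

```python
def extensivity(q, fractional_tree_better):
    """Label a database: super-extensive if q < 1 and the fractional tree
    classifies better than the classical one, sub-extensive if q > 1 and it
    classifies better, and extensive otherwise."""
    if fractional_tree_better and q < 1:
        return "super-extensive"
    if fractional_tree_better and q > 1:
        return "sub-extensive"
    return "extensive"

print(extensivity(0.173, True))   # Breast Cancer -> super-extensive
print(extensivity(4.574, False))  # Yeast -> extensive
```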
The results show that the AUROC of the Renyi trees was better than that of the classical trees for Breast Cancer, Glass, Letter, and Yeast, and worse for Cmc and Haberman. The results are quite similar for MCC, where Car’s classification also outperforms that of the classical tree; on the contrary, the MCC for the Vehicle database was statistically less than that of the classical tree. The Tsallis AUROCs were better for Cmc, Glass, Haberman, Hayes, and Wine, and worse for Yeast, than those of the classical trees. Additionally, the MCCs of Car, Cmc, Glass, and Scale were higher, and that of Yeast lower, than the classical trees’ MCCs. Based on MCC, Car, Cmc, Glass, and Scale are super-extensive, a subset of the classification obtained by the two-parameter fractional Tsallis trees.
Table 5

AUROC and MCC of classical (CT), Renyi (RT), and Tsallis (TT) decision trees. + means that the AUROC or MCC is statistically greater than that of CT, and − means the opposite.

Database      | CT AUROC | RT AUROC | TT AUROC | CT MCC | RT MCC  | TT MCC  | qr           | qt
Breast Cancer | 0.959    | 0.971 +  | 0.963    | 0.889  | 0.901 + | 0.887   | ⟨qα⟩ = 0.173 | ⟨qα⟩ = 0.173
Car           | 0.981    | 0.983    | 0.982    | 0.892  | 0.906 + | 0.912 + | ⟨qα⟩ = 0.303 | ⟨qβ⟩ = 0.347
Cmc           | 0.691    | 0.676 −  | 0.712 +  | 0.315  | 0.256   | 0.35 +  | ⟨qβ⟩ = 0.185 | ⟨qα⟩ = 0.169
Glass         | 0.794    | 0.838 +  | 0.835 +  | 0.56   | 0.622 + | 0.599 + | ⟨qβ⟩ = 0.187 | ⟨qα⟩ = 0.171
Haberman      | 0.579    | 0.500 −  | 0.610 +  | 0.18   | 0.024   | 0.152   | ⟨qα⟩ = 0.344 | ⟨qα⟩ = 0.334
Hayes         | 0.869    | 0.869    | 0.895 +  | 0.578  | 0.579   | 0.587   | ⟨qα⟩ = 0.269 | ⟨qα⟩ = 0.269
Image         | 0.994    | 0.997    | 0.995    | 0.982  | 0.984   | 0.978   | ⟨qα⟩ = 0.117 | ⟨qβ⟩ = 0.123
Letter        | 0.969    | 0.980 +  | 0.967    | 0.912  | 0.939 + | 0.913   | ⟨qβ⟩ = 0.165 | ⟨qα⟩ = 0.155
Scale         | 0.845    | 0.839    | 0.857    | 0.678  | 0.651   | 0.706 + | ⟨qβ⟩ = 0.421 | ⟨qβ⟩ = 0.421
Vehicle       | 0.762    | 0.776    | 0.748    | 0.395  | 0.297 − | 0.371   | ⟨qβ⟩ = 0.096 | ⟨qα⟩ = 0.092
Wine          | 0.968    | 0.963    | 0.976 +  | 0.933  | 0.923   | 0.924   | ⟨qα⟩ = 0.119 | ⟨qα⟩ = 0.119
Yeast         | 0.743    | 0.789 +  | 0.578 −  | 0.462  | 0.505 + | 0.098 − | ⟨qβ⟩ = 5.081 | ⟨qα⟩ = 4.574
Finally, the Gini and the two-parameter fractional Tsallis decision trees are compared using the AUROC and MCC. The results, shown in Table 6, indicate that two-parameter fractional Tsallis trees outperform the AUROC of Gini trees in six databases, and the MCC in ten. This underpins the fact that Gini trees are a particular case of two-parameter fractional Tsallis trees with q = 2, α = β = 1. In summary, two-parameter fractional Tsallis trees yield better classifications than the classical and Gini trees.
Table 6

AUROC and MCC of Gini decision trees (GT) and two-parameter fractional Tsallis decision trees (TFTT). + means that the AUROC or MCC is statistically greater than that of GT.

Database      | GT AUROC | TFTT AUROC | GT MCC | TFTT MCC
Breast Cancer | 0.963    | 0.964      | 0.888  | 0.967 +
Car           | 0.981    | 0.982      | 0.897  | 0.912 +
Cmc           | 0.58     | 0.714 +    | 0.357  | 0.349
Glass         | 0.712    | 0.874 +    | 0.437  | 0.673 +
Haberman      | 0.52     | 0.61 +     | 0.068  | 0.156 +
Hayes         | 0.871    | 0.895 +    | 0.655  | 0.645
Image         | 0.988    | 0.992      | 0.946  | 0.978 +
Letter        | 0.962    | 0.974 +    | 0.894  | 0.934 +
Scale         | 0.866    | 0.861      | 0.654  | 0.703 +
Vehicle       | 0.71     | 0.755 +    | 0.294  | 0.387 +
Wine          | 0.932    | 0.977      | 0.847  | 0.957 +
Yeast         | 0.728    | 0.733      | 0.414  | 0.463 +

6. Conclusions

This paper introduces two-parameter fractional Tsallis decision trees underpinned by fractional-order entropies. The three parameters of this new decision tree need to be tuned to produce better classifications than the classical ones. The trial-and-error approach is the standard method to adjust the entropic index of Renyi and Tsallis decision trees; however, it is unfeasible for two-parameter fractional Tsallis trees. From a representation of the database as a complex network, it was possible to determine a set of values for the parameters q, α, and β based on this network. The experimental results on twelve databases show that the proposed values yield better classifications (AUROC, MCC) for eight of them; for the four remaining, the classification was equal to that produced by the classical trees. Moreover, two values (⟨qα⟩, ⟨qβ⟩) were tested in the Renyi and Tsallis decision trees. The results show that Renyi outperforms the classical trees in four (AUROC) and five (MCC) out of twelve databases; similarly, the Tsallis decision trees produced better classifications for five (AUROC) and four (MCC) databases. The classification was worse in three and one databases for Renyi and Tsallis, respectively. The overall results suggest that the two parametric decision trees together outperform the classical trees in seven databases, which is less favorable than the eight databases improved by the two-parameter fractional Tsallis decision trees. In addition, the databases with a better classification using Tsallis decision trees are a subset of those for which two-parameter fractional Tsallis trees produced a better classification. This supports the conjecture that the two-parameter fractional Tsallis entropy is a finer measure than parametric entropies such as Renyi and Tsallis. The approximation technique for the tree parameters introduced here is a valuable alternative for practitioners.
Furthermore, the network classification based on the non-extensive properties of Tsallis and two-parameter fractional Tsallis entropies reveals that the relationships between the records and their attribute values (modeled by a network) are complex. Such complex relationships are better measured by two-parameter fractional Tsallis entropy, the cornerstone of the proposed decision tree. The results pave the way for using the two-parameter Tsallis fractional entropy in other data mining techniques such as K-means, generic MST, Kruskal MST, and algorithms for dimension reduction in the future. Our research has the limitation that the databases used in the experiments are not large enough to reveal the reduction in time compared with the trial-and-error approach to set the tree parameters. However, we may conjecture that our method works in large databases, which will be the scope of future research.