
Multi-Objective Evolutionary Rule-Based Classification with Categorical Data.

Fernando Jiménez1, Carlos Martínez1, Luis Miralles-Pechuán2, Gracia Sánchez1, Guido Sciavicco3.   

Abstract

The ease of interpretation of a classification model is essential for the task of validating it. Sometimes it is required to clearly explain the classification process of a model's predictions. Models which are inherently easier to interpret can be effortlessly related to the context of the problem, and their predictions can be, if necessary, ethically and legally evaluated. In this paper, we propose a novel method to generate rule-based classifiers from categorical data that can be readily interpreted. Classifiers are generated using a multi-objective optimization approach focusing on two main objectives: maximizing the performance of the learned classifier and minimizing its number of rules. The multi-objective evolutionary algorithms ENORA and NSGA-II have been adapted to optimize the performance of the classifier based on three different machine learning metrics: accuracy, area under the ROC curve, and root mean square error. We have extensively compared the classifiers generated with our proposed method against classifiers generated with classical methods such as PART, JRip, OneR and ZeroR. The experiments have been conducted in full training mode, in 10-fold cross-validation mode, and in train/test splitting mode. To make the results reproducible, we have used the well-known and publicly available datasets Breast Cancer, Monk's Problem 2, Tic-Tac-Toe-Endgame, Car, kr-vs-kp and Nursery. After performing an exhaustive statistical test on our results, we conclude that the proposed method is able to generate highly accurate and easy-to-interpret classification models.

Keywords:  categorical data; interpretable machine learning; multi-objective evolutionary algorithms; rule-based classifiers

Year:  2018        PMID: 33265773      PMCID: PMC7513209          DOI: 10.3390/e20090684

Source DB:  PubMed          Journal:  Entropy (Basel)        ISSN: 1099-4300            Impact factor:   2.524


1. Introduction

Supervised Learning is the branch of Machine Learning (ML) [1] focused on modeling the behavior of systems found in the environment. Supervised models are created from a set of past records, each of which usually consists of an input vector labeled with an output. A supervised model is an algorithm that approximates the function that maps inputs to outputs [2]. The best models are those that predict the output of new inputs most accurately. Thanks to modern computing capabilities, and to the digitization of ever-increasing quantities of data, supervised learning techniques nowadays play a leading role in many applications. The first classification systems date back to the 1990s; in those days, researchers focused on both precision and interpretability, and the systems to be modeled were relatively simple. Years later, when it became necessary to model more difficult behaviors, researchers focused on developing ever more precise models, leaving interpretability aside. Artificial Neural Networks (ANN) [3] and, more recently, Deep Learning Neural Networks (DLNN) [4], as well as Support Vector Machines (SVM) [5] and Instance-based Learning (IBL) [6], are archetypical examples of this approach. A DLNN, for example, is a large mesh of ordered nodes arranged in a hierarchical manner and composed of a huge number of variables. DLNNs are capable of modeling very complex behaviors, but it is extremely difficult to understand the logic behind their predictions, and similar considerations can be drawn for SVMs and IBLs, although the underlying principles are different. These models are known as black-box methods. While there are applications in which knowing the rationale behind a prediction is not necessarily relevant (e.g., predicting a currency's future value, whether or not a user clicks on an advert, or the amount of rain in a certain area), there are other situations where the interpretability of a model plays a key role.
The interpretability of a classification system refers to its ability to explain its behavior in a way that is easily understandable by a user [7]. In other words, a model is considered interpretable when a human is able to understand the logic behind its predictions. Interpretable classification models thus allow external validation by an expert. Additionally, there are certain disciplines, such as medicine, where it is essential to provide information about decision making for ethical and human reasons. Likewise, when a public institution asks an authority for permission to investigate an alleged offender, or when the CEO of a company wants to make a difficult decision that can seriously change the direction of the company, some kind of explanation to justify these decisions may be required. In these situations, using transparent (also called grey-box) models is recommended. While there is a general consensus on how the performance of a classification system is measured (popular metrics include accuracy, area under the ROC curve, and root mean square error), there is no universally accepted metric for the interpretability of models. Nor is there an ideal balance between the interpretability and the performance of a classification system; the right trade-off depends on the specific application domain. However, the rule of thumb says that the simpler a classification system is, the easier it is to interpret. Rule-based Classifiers (RBC) [8,9] are among the most popular interpretable models, and some authors define the degree of interpretability of an RBC as the number of its rules or as the number of axioms that the rules contain. These metrics tend to reward models whose rules are as few and as simple as possible [10,11]. In general, RBCs are classification learning systems that achieve a high level of interpretability because they are based on a human-like logic.
Rules follow a very simple IF-conditions-THEN-class schema: the fewer rules a model has, and the fewer conditions and attributes its rules have, the easier it is for a human to understand the logic behind each classification. In fact, RBCs are so natural in some applications that they are used to interpret other classification models, such as Decision Trees (DT) [12]. RBCs also constitute the basis of more complex classification systems based on fuzzy logic [13], such as LogitBoost or AdaBoost [14]. Our approach treats the conflict between accuracy and interpretability as a multi-objective optimization problem. We define a solution as a set of rules (that is, a classifier), and establish two objectives to be optimized: interpretability and accuracy. We solve this problem by applying multi-objective evolutionary algorithms (MOEA) [15,16] as meta-heuristics, and, in particular, two known algorithms: NSGA-II [15] and ENORA [17]. Both are state-of-the-art evolutionary algorithms which have been applied, and compared, on several occasions [18,19,20]. NSGA-II is very well known and has the advantage of being available in many implementations, while ENORA generally achieves a higher performance. In the current literature, MOEAs are mainly used for learning RBCs based on fuzzy logic [18,21,22,23,24,25,26]. However, fuzzy RBCs are designed for numerical data, from which fuzzy sets are constructed and represented by linguistic labels. In this paper, on the contrary, we are interested in RBCs for categorical data, for which a novel approach is necessary. This paper is organized as follows. In Section 2, we introduce multi-objective constrained optimization, the evolutionary algorithms ENORA and NSGA-II, and the well-known rule-based classifier learning systems PART, JRip, OneR and ZeroR. In Section 3, we describe the structure of an RBC for categorical data, and we propose the use of multi-objective optimization for the task of learning a classifier.
In Section 4, we show the results of our experiments, performed on the well-known, publicly accessible datasets Breast Cancer, Monk's Problem 2, Tic-Tac-Toe-Endgame, Car, kr-vs-kp and Nursery. The experiments allow a comparison between the performance of the classifiers learned by our technique and that of classifiers learned by PART, JRip, OneR and ZeroR, as well as a comparison between ENORA and NSGA-II for the purposes of this task. In Section 5, the results are analyzed and discussed, before concluding in Section 6. Appendix A and Appendix B show the tables of the statistical test results. Appendix C lists the symbols and the nomenclature used in the paper.

2. Background

2.1. Multi-Objective Constrained Optimization

The term optimization [27] refers to the selection of the best element, with regard to some criteria, from a set of alternative elements. Mathematical programming [28] deals with the theory, algorithms, methods and techniques to represent and solve optimization problems. In this paper, we are interested in a class of mathematical programming problems called multi-objective constrained optimization problems [29], which can be formally defined, for l objectives and m constraints, as follows:

    minimize/maximize  f_i(x),       i = 1, ..., l
    subject to         g_j(x) <= 0,  j = 1, ..., m        (1)

where the functions f_i (usually called objectives) and g_j are arbitrary functions. Optimization problems can be naturally separated into two categories: those with discrete variables, which we call combinatorial, and those with continuous variables. In combinatorial problems, we are looking for objects from a finite, or countably infinite, set, where objects are typically integers, sets, permutations, or graphs. In problems with continuous variables, instead, we look for real parameters belonging to some continuous domain. In Equation (1), x = (x_1, ..., x_n) represents the set of decision variables, where D_k is the domain of each variable x_k, k = 1, ..., n. Now, let X be the set of all feasible solutions to Equation (1). We want to find the subset of X called the non-dominated set (or Pareto optimal set). A solution x ∈ X is non-dominated if there is no other solution in X that dominates it, where a solution x dominates a solution y if and only if there exists an objective i (1 <= i <= l) for which f_i(x) improves f_i(y), and, for every objective i (1 <= i <= l), f_i(y) does not improve f_i(x). In other words, x dominates y if and only if x is better than y for at least one objective, and not worse than y for any other objective. Once the set of non-dominated solutions of Equation (1) is available, the most satisfactory one can be chosen by applying a preference criterion.
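The dominance relation and the non-dominated set can be sketched in a few lines of Python. This is an illustrative sketch only: the objective vectors are hypothetical, and all objectives are assumed to be minimized.

```python
def dominates(a, b):
    """True if objective vector a dominates b (all objectives minimized):
    a is no worse than b everywhere and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated(points):
    """Return the Pareto-optimal subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Toy objective vectors: (3, 4) is dominated by (2, 3), the rest form the front.
front = non_dominated([(1, 5), (2, 3), (3, 4), (4, 1)])
```

For maximized objectives, the comparison signs are simply reversed (or the objective is negated), which is how a mixed maximize/minimize formulation such as Equation (1) is usually handled in practice.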
When all the functions f_i and g_j are linear, the problem is a linear programming problem [30], which is the classical mathematical programming problem and for which extremely efficient algorithms to obtain the optimal solution exist (e.g., the simplex method [31]). When any of the functions is non-linear, we have a non-linear programming problem [32]. A non-linear programming problem in which the objectives are arbitrary functions is, in general, intractable. In principle, any search algorithm can be used to solve combinatorial optimization problems, although it is not guaranteed that it will find an optimal solution. Metaheuristic methods such as evolutionary algorithms [33] are typically used to find approximate solutions for complex multi-objective optimization problems, including feature selection and fuzzy classification.

2.2. The Multi-Objective Evolutionary Algorithms ENORA and NSGA-II

The MOEAs ENORA [17] and NSGA-II [15] use a (μ + λ) strategy (Algorithm 1) with μ = λ = N, where μ corresponds to the number of parents, λ refers to the number of children, and N is the population size, with binary tournament selection (Algorithm 2) and a rank function based on Pareto fronts and crowding (Algorithms 3 and 4). The difference between NSGA-II and ENORA lies in how the ranking of the individuals in the population is calculated. In ENORA, each individual belongs to a slot (as established in [34]) of the objective search space, and the rank of an individual in a population is the non-domination level of the individual in its slot. In NSGA-II, on the other hand, the rank of an individual in a population is the non-domination level of the individual in the whole population. Both ENORA and NSGA-II use the same non-dominated sorting algorithm, fast non-dominated sorting [35]. It compares each solution with the rest of the solutions and stores the results so as to avoid duplicate comparisons between every pair of solutions. For a problem with l objectives and a population with N solutions, this method needs to conduct O(lN²) objective comparisons, which means that it has a time complexity of O(lN²) [36]. However, ENORA distributes the population in up to N slots; therefore, the time complexity of ENORA is O(lN²) in the worst case (all individuals falling into one slot) and O(lN) in the best case (one individual per slot).
The (μ + λ) evolution strategy (Algorithm 1) can be summarized as follows:

    Initialize P with N individuals
    Evaluate all individuals of P
    while the stopping condition is not met do
        Q ← ∅
        while |Q| < N do
            Parent1 ← Binary tournament selection from P
            Parent2 ← Binary tournament selection from P
            Child1, Child2 ← Crossover(Parent1, Parent2)
            Offspring1 ← Mutation(Child1)
            Offspring2 ← Mutation(Child2)
            Evaluate Offspring1 and Offspring2
            Q ← Q ∪ {Offspring1, Offspring2}
        end while
        R ← P ∪ Q
        P ← the N best individuals from R according to the rank-crowding function in R
    end while
    return the non-dominated individuals from P

Binary tournament selection (Algorithm 2) picks two individuals I and J at random from P and returns I if I is better than J according to the rank-crowding function in population P, and J otherwise; under the rank-crowding function (Algorithm 3), I is better than J if I has a lower rank than J or, when the ranks are equal, a larger crowding distance. The main reason ENORA and NSGA-II behave differently is as follows. NSGA-II never selects the individual dominated by the other in the binary tournament, while, in ENORA, the individual dominated by the other may be the winner of the tournament. Figure 1 shows this behavior graphically. For example, if individuals B and C are selected for a binary tournament with NSGA-II, individual B beats C because B dominates C. Conversely, with ENORA, individual C beats B because C has a better rank in its slot than B has in its own. In this way, ENORA allows the individuals in each slot to evolve towards the Pareto front, encouraging diversity. Even though in ENORA the individuals of each slot may not be the best of the whole population, this approach generates a better hypervolume than that of NSGA-II throughout the evolution process.
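The binary tournament with rank-crowding comparison can be sketched as follows. The encoding is an assumption for illustration: individuals are addressed by index, and `rank` and `crowding` are precomputed dictionaries keyed by those indices.

```python
import random

def binary_tournament(pop, rank, crowding):
    """Binary tournament in the style of Algorithm 2: draw two individuals
    at random; the one with the better (lower) rank wins, and ties are
    broken in favour of the larger crowding distance."""
    i, j = random.randrange(len(pop)), random.randrange(len(pop))
    if rank[i] < rank[j]:
        return pop[i]
    if rank[j] < rank[i]:
        return pop[j]
    return pop[i] if crowding[i] >= crowding[j] else pop[j]
```

In NSGA-II the ranks would come from fast non-dominated sorting over the whole population, while in ENORA they would be computed within each individual's slot; the tournament itself is the same.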
Figure 1

Rank assignment of individuals with ENORA vs. NSGA-II.

ENORA is our MOEA, on which we have been working intensively over the last decade. We have applied ENORA to constrained real-parameter optimization [17], fuzzy optimization [37], fuzzy classification [18], feature selection for classification [19] and feature selection for regression [34]. In this paper, we apply it to rule-based classification. The NSGA-II algorithm was designed by Deb et al. and has proved to be a very powerful and fast algorithm in multi-objective optimization contexts of all kinds. Most researchers in multi-objective evolutionary computation use NSGA-II as a baseline against which to compare the performance of their own algorithms. Although NSGA-II was developed in 2002, it remains a state-of-the-art algorithm, and it is still a challenge to improve on it. A recently updated version for many-objective optimization problems, called NSGA-III, is also available [38]. The crowding distance of an individual I (Algorithm 4) is computed as follows:

    for j = 1 to l do
        upper_j ← value of the jth objective for the individual higher adjacent in the jth objective to I
        lower_j ← value of the jth objective for the individual lower adjacent in the jth objective to I
    end for
    for j = 1 to l do
        if I has no higher adjacent or no lower adjacent individual in the jth objective then
            return ∞
        end if
    end for
    return Σ_{j=1}^{l} (upper_j − lower_j) / (max_j − min_j)

where max_j and min_j are the maximum and minimum values of the jth objective in the population.
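The crowding distance computation above can be sketched for a whole front at once. This is a standard formulation, not code from the paper: boundary solutions get infinite distance, and interior solutions accumulate the normalized gap between their neighbours along each objective.

```python
def crowding_distance(front):
    """Crowding distances for a list of objective vectors (one front).
    Boundary solutions in any objective get infinite distance; interior
    ones sum the normalized gaps between their adjacent neighbours."""
    n = len(front)
    dist = [0.0] * n
    for j in range(len(front[0])):
        order = sorted(range(n), key=lambda i: front[i][j])
        lo, hi = front[order[0]][j], front[order[-1]][j]
        dist[order[0]] = dist[order[-1]] = float('inf')
        if hi == lo:
            continue  # degenerate objective: all values equal
        for k in range(1, n - 1):
            i = order[k]
            dist[i] += (front[order[k + 1]][j] - front[order[k - 1]][j]) / (hi - lo)
    return dist
```

Individuals with larger crowding distance sit in less populated regions of the front, so preferring them in the tournament preserves diversity.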

2.3. PART

PART (Partial DT Method [39]) is a widely used rule learning algorithm developed at the University of Waikato in New Zealand [40]. Experiments show that it is a very efficient algorithm in terms of both computational performance and results. PART combines the divide-and-conquer strategy typical of decision tree learning with the separate-and-conquer strategy [41] typical of rule learning, as follows. A decision tree is first constructed (using the C4.5 algorithm [42]), and the leaf with the highest coverage is converted into a rule. Then, the instances covered by that rule are discarded, and the process starts over. The result is an ordered set of rules, completed by a default rule that applies to instances that do not match any previous rule.

2.4. JRip

JRip is a fast and optimized Weka implementation of the well-known RIPPER (Repeated Incremental Pruning to Produce Error Reduction) algorithm [43]. RIPPER was proposed in [44] as a more efficient version of the incremental reduced error pruning (IREP) rule learner developed in [45]. IREP and RIPPER work in a similar manner. They begin with a default rule and, using a training dataset, attempt to learn rules that predict exceptions to the default. Each learned rule is a conjunction of propositional literals, and each literal corresponds to a split of the data based on the value of a single feature. This family of algorithms, similar to decision trees, has the advantage of being easy to interpret, and experiments show that JRip is particularly efficient on large datasets. RIPPER and IREP use a strategy based on the separate-and-conquer method to generate an ordered set of rules extracted directly from the dataset. The classes are examined one by one, prioritizing those with more elements. These algorithms are based on four basic steps (growing, pruning, optimizing and selecting) applied repeatedly to each class until a stopping condition is met [44]. These steps can be summarized as follows. In the growing phase, rules are created by adding conditions one at a time until the stopping criterion is satisfied (in the Weka implementation, the procedure selects the condition with the highest information gain). In the pruning phase, redundancy is eliminated and long rules are shortened. In the optimization phase, the rules generated in the previous steps are improved (if possible) by adding new attributes or new rules. Finally, in the selection phase, the best rules are kept and the others discarded.

2.5. OneR

OneR (One Rule) is a very simple, yet reasonably accurate, classifier based on frequency tables. First, OneR generates a set of rules for each attribute of the dataset, and then it selects a single attribute's rule set, namely the one with the lowest error rate [46]. The rules are created from a frequency table constructed for each predictor against the class, and numerical attributes are first discretized into categorical values.
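The OneR procedure is simple enough to sketch in full. This is our own minimal rendering for categorical data, not the Weka implementation: each attribute gets a frequency table mapping values to majority classes, and the attribute with the fewest training errors wins.

```python
from collections import Counter, defaultdict

def one_r(rows, labels):
    """Minimal OneR sketch: for every attribute, map each attribute value
    to its majority class via a frequency table, then keep the single
    attribute whose rule set makes the fewest training errors."""
    best = None
    for a in range(len(rows[0])):
        table = defaultdict(Counter)
        for row, y in zip(rows, labels):
            table[row[a]][y] += 1
        rules = {v: c.most_common(1)[0][0] for v, c in table.items()}
        errors = sum(rules[row[a]] != y for row, y in zip(rows, labels))
        if best is None or errors < best[0]:
            best = (errors, a, rules)
    return best  # (error count, attribute index, value -> class rules)
```

The resulting model is a single one-attribute rule set, which is why OneR sits near the extreme "interpretable but weak" end of the spectrum discussed in the introduction.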

2.6. ZeroR

Finally, ZeroR (Zero Rules [40]) is a classifier learner that creates no rules and uses no input attributes. ZeroR simply predicts the most frequent value of the class. Such a classifier is obviously the simplest possible one, and its capabilities are limited to the prediction of the majority class. In the literature, it is not used for practical classification tasks, but as a generic baseline against which to measure the performance of other classifiers.

3. Multi-Objective Optimization for Categorical Rule-Based Classification

In this section, we propose a general schema for an RBC specifically designed for categorical data. Then, we propose and describe a multi-objective optimization solution to obtain optimal categorical RBCs.

3.1. Rule-Based Classification for Categorical Data

Let φ be a classifier composed of M rules R_k, where each rule R_k, 1 <= k <= M, has the following structure:

    R_k: IF x_1 = a_1k AND ... AND x_p = a_pk THEN y = c_k

where, for 1 <= j <= p, the attribute x_j takes values in a set V_j (a_jk ∈ V_j is called an antecedent), and y takes values in {1, ..., w} (c_k is called the consequent). Now, let e = (e_1, ..., e_p) be an observed example, with e_j ∈ V_j, for each 1 <= j <= p. We propose maximum matching as the reasoning method, where the compatibility degree of the rule R_k for the example e (denoted by Comp(R_k, e)) is calculated as the number of attributes whose value coincides with that of the corresponding antecedent in R_k, that is:

    Comp(R_k, e) = Σ_{j=1}^{p} δ(a_jk, e_j)

where δ(a, b) = 1 if a = b, and δ(a, b) = 0 otherwise. The association degree Assoc(c, e) for the example e with a class c is computed by adding the compatibility degrees for the example of each rule whose consequent is equal to the class c, that is:

    Assoc(c, e) = Σ_{k : c_k = c} Comp(R_k, e).

Therefore, the classification (or output) of the classifier φ for the example e corresponds to the class whose association degree is maximum, that is:

    φ(e) = arg max_{c ∈ {1, ..., w}} Assoc(c, e).
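The maximum-matching inference just described can be illustrated with a small sketch. The encoding (rules as tuples of antecedent values plus a consequent) and the toy weather-style rules are our own illustration, not the paper's notation.

```python
def classify(rules, example):
    """Maximum-matching inference: each rule is (antecedents, consequent),
    where antecedents is a tuple of required attribute values. A rule's
    compatibility degree is the number of matching attributes; each class's
    association degree sums the compatibilities of its rules, and the class
    with the largest degree is returned."""
    degrees = {}
    for antecedents, consequent in rules:
        compat = sum(a == x for a, x in zip(antecedents, example))
        degrees[consequent] = degrees.get(consequent, 0) + compat
    return max(degrees, key=degrees.get)

# Toy rules over two attributes (outlook, humidity).
rules = [(("sunny", "high"), "no"),
         (("rainy", "high"), "no"),
         (("sunny", "low"), "yes")]
```

Note that, unlike a sequential rule list (as in PART or JRip), every rule votes for its class in proportion to how many antecedents it matches, so no default rule is needed.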

3.2. A Multi-Objective Optimization Solution

Let D be a dataset of K instances with p categorical input attributes x_1, ..., x_p and a categorical output attribute y. Each input attribute x_j can take a category v ∈ V_j, 1 <= j <= p, and the output attribute can take a class c ∈ {1, ..., w}. The problem of finding an optimal classifier φ, as described in the previous section, can be formulated as an instance of the multi-objective constrained problem in Equation (1) with two objectives and two constraints:

    maximize    f_1(φ) = Performance(φ, D)
    minimize    f_2(φ) = NR(φ)
    subject to  NR(φ) >= w
                NR(φ) <= M_max        (3)

In the problem (Equation (3)), the function f_1 is a performance measure of the classifier φ over the dataset D, the function f_2 = NR(φ) is the number of rules of the classifier φ, and the two constraints limit the number of rules of the classifier to the interval [w, M_max], where w is the number of classes of the output attribute and M_max is given by the user. Objectives f_1 and f_2 are in conflict: the fewer rules the classifier has, the fewer instances it can cover, that is, a simpler classifier has less capacity for prediction. There is, therefore, an intrinsic conflict between the problem objectives (maximize performance and minimize model complexity) which cannot easily be aggregated into a single objective. Both objectives are typically optimized simultaneously in many other classification systems, such as neural networks or decision trees [47,48]. Figure 2 shows the Pareto front of a dummy binary classification problem described as in Equation (3), where f_1 is maximized. This front is composed of three non-dominated solutions (three possible classifiers) with two, three and four rules, respectively. The solutions with five and six rules are dominated (both by the solution with four rules).
Figure 2

A Pareto front of a binary classification problem as formulated in Equation (3), where f_1 is maximized and f_2 is minimized.

Both ENORA and NSGA-II have been adapted to solve the problem described in Equation (3) with variable-length representation based on a Pittsburgh approach, uniform random initialization, binary tournament selection, constraint handling, ranking based on non-domination level with crowding distance, and self-adaptive variation operators. The self-adaptive variation operators work on different levels of the classifier: rule crossover, rule incremental crossover, rule incremental mutation, and integer mutation.

3.2.1. Representation

We use a variable-length representation based on a Pittsburgh approach [49], where each individual I of a population contains a variable number M_I of rules, and each rule R_k, 1 <= k <= M_I, is codified in the following components: integer values b_jk associated with the antecedents, for 1 <= j <= p and 1 <= k <= M_I, and an integer value c_k associated with the consequent, for 1 <= k <= M_I. Additionally, to carry out self-adaptive crossover and mutation, each individual has two discrete parameters d and e associated with crossover and mutation, where d ∈ {0, ..., n_c}, e ∈ {0, ..., n_m}, n_c is the number of crossover operators, and n_m is the number of mutation operators. The values of d and e for self-adaptive variation are randomly generated from {0, ..., n_c} and {0, ..., n_m}, respectively. Table 1 summarizes the representation of an individual.
Table 1

Chromosome coding for an individual I.

Codification for Rule Set                                Codification for Adaptive Crossover and Mutation
Antecedents                     Consequent               Associated Crossover    Associated Mutation
b_11  b_21  ...  b_p1           c_1
...                                                      d                       e
b_1M  b_2M  ...  b_pM           c_M
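The chromosome of Table 1 can be mirrored by a small Python structure. This is a sketch under assumed names (the field names are ours, chosen to follow the table): a variable-length list of rules, each rule being antecedent category indices plus a class index, together with the self-adaptive operator codes.

```python
from dataclasses import dataclass, field

@dataclass
class Individual:
    """Pittsburgh-style chromosome sketch: a variable number of rules,
    each rule a pair (antecedent indices, consequent class index), plus
    the self-adaptive operator codes d (crossover) and e (mutation),
    where 0 means 'no operator'."""
    rules: list = field(default_factory=list)  # each rule: ([b_1, ..., b_p], c)
    d: int = 0                                 # associated crossover operator
    e: int = 0                                 # associated mutation operator
```

Keeping the operator codes inside the chromosome is what makes the variation self-adaptive: they are inherited, and occasionally resampled, along with the rules themselves.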

3.2.2. Constraint Handling

The constraints NR(φ) >= w and NR(φ) <= M_max are satisfied by means of specialized initialization and variation operators, which always generate individuals with a number of rules between w (the number of classes) and the user-given maximum M_max.

3.2.3. Initial Population

The initial population (Algorithm 5) is randomly generated under the following conditions. Individuals are uniformly distributed with respect to the number of rules, with values between w and the maximum number of rules, and with the additional constraint that there must be at least one individual for each number of rules (Steps 4–8); this ensures an adequate initial diversity in the search space in terms of the second objective of the optimization model. Moreover, all individuals contain at least one rule for each output class between 1 and w (Steps 16–20). The procedure can be summarized as follows:

    for i = 1 to N do
        I ← new Individual
        if a rule count in [w, M_max] is still unrepresented then
            M_I ← the smallest unrepresented rule count
        else
            M_I ← Random(w, M_max)
        end if
        for k = 1 to M_I do                          {random rule R_k}
            for j = 1 to p do                        {random antecedents}
                b_jk ← Random(1, |V_j|)
            end for
            if k <= w then                           {every class covered}
                c_k ← k
            else
                c_k ← Random(1, w)                   {random consequent}
            end if
        end for
        d ← Random(0, n_c)                           {random values for adaptive variation}
        e ← Random(0, n_m)
        P ← P ∪ {I}
    end for
    return P
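The two initialization conditions can be sketched as follows. This is an illustrative rendering under assumed parameter names (`domain_sizes[j]` is the number of categories of attribute j; rules are plain tuples rather than a chromosome class).

```python
import random

def initial_population(N, w, max_rules, domain_sizes):
    """Initialization sketch: rule counts are spread over [w, max_rules]
    so that each count appears at least once (when N allows), and each
    individual's first w rules receive consequents 1..w so that every
    output class is covered."""
    counts = list(range(w, max_rules + 1))
    pop = []
    for i in range(N):
        m = counts[i] if i < len(counts) else random.randint(w, max_rules)
        rules = []
        for k in range(m):
            antecedents = [random.randint(1, s) for s in domain_sizes]
            consequent = k + 1 if k < w else random.randint(1, w)
            rules.append((antecedents, consequent))
        pop.append(rules)
    return pop
```

Spreading the rule counts at initialization seeds the population across the whole range of the second objective, so the algorithm does not have to rediscover small or large classifiers later.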

3.2.4. Fitness Functions

Since the optimization model encompasses two objectives, each individual must be evaluated with two fitness functions, which correspond to the objective functions f_1 and f_2 of the problem (Equation (3)). The selection of the best individuals is performed using the Pareto concept in a binary tournament.

3.2.5. Variation Operators

We use self-adaptive crossover and mutation, which means that the selection of the operators is made by means of an adaptive technique. As we have explained (cf. Section 3.2.1), each individual I has two integer parameters d and e that indicate which crossover or mutation is carried out. In our case, there are two crossover operators and two mutation operators, so that n_c = n_m = 2. Note that the value 0 indicates that no crossover or no mutation is performed. Self-adaptive variation (Algorithm 6) generates two children from two parents by self-adaptive crossover (Algorithm 7) and self-adaptive mutation (Algorithm 8). Self-adaptive crossover of two individuals and self-adaptive mutation of an individual are similar to each other. First, with a probability p_v, the values of d and e are replaced by a random value; additionally, in the case of crossover, the crossover code of the second individual is replaced by that of the first. Then, the crossover indicated by d or the mutation indicated by e is performed. In summary, if an individual comes from a given crossover or a given mutation, that specific crossover and mutation are preserved in its offspring with high probability, so the value of p_v must be small enough to ensure a controlled evolution. Although the probability of each crossover and mutation operator is not explicitly represented, it can be computed as the ratio of the individuals whose d or e parameter selects that operator. As the population evolves, individuals carrying more successful types of crossover and mutation become more common, so the probability of selecting the more successful crossover and mutation types increases. Using self-adaptive crossover and mutation operators helps to maintain diversity in the population and to sustain the convergence capacity of the evolutionary algorithm, while also eliminating the need to set an a priori probability for each operator.
In other approaches (e.g., [50]), the probabilities of crossover and mutation vary depending on the fitness value of the solutions. Both ENORA and NSGA-II have been implemented with two crossover operators, rule crossover (Algorithm 9) and rule incremental crossover (Algorithm 10), and two mutation operators, rule incremental mutation (Algorithm 11) and integer mutation (Algorithm 12). Rule crossover randomly exchanges two rules selected from the parents, and rule incremental crossover adds to each parent a rule randomly selected from the other parent if its number of rules is less than the maximum number of rules. On the other hand, rule incremental mutation adds a new random rule to the individual if its number of rules is less than the maximum number of rules, while integer mutation carries out a uniform mutation of a random antecedent belonging to a randomly selected rule. The operators can be summarized as follows:

    Self-adaptive variation (Algorithm 6):
        (Child1, Child2) ← Self-adaptive crossover(Parent1, Parent2)
        Offspring1 ← Self-adaptive mutation(Child1)
        Offspring2 ← Self-adaptive mutation(Child2)
        return Offspring1, Offspring2

    Self-adaptive crossover (Algorithm 7):
        if a random Bernoulli variable with probability p_v takes the value 1 then
            d ← Random(0, n_c)
        end if
        carry out the type of crossover specified by d
        {0: no crossover; 1: rule crossover; 2: rule incremental crossover}

    Self-adaptive mutation (Algorithm 8):
        if a random Bernoulli variable with probability p_v takes the value 1 then
            e ← Random(0, n_m)
        end if
        carry out the type of mutation specified by e
        {0: no mutation; 1: rule incremental mutation; 2: integer mutation}

    Rule crossover (Algorithm 9): exchange two randomly selected rules between the parents.
    Rule incremental crossover (Algorithm 10): if a parent has fewer than M_max rules, add to it a rule randomly selected from the other parent.
    Rule incremental mutation (Algorithm 11): if the individual has fewer than M_max rules, add a new random rule.
    Integer mutation (Algorithm 12): uniformly mutate a random antecedent of a randomly selected rule.
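Two of the operators are simple enough to sketch directly. This is an illustrative rendering under an assumed encoding (individuals as plain lists of rules), not the paper's implementation.

```python
import random

def rule_crossover(parent1, parent2):
    """Rule crossover sketch: swap one randomly chosen rule between
    copies of the two parents and return the two children."""
    c1, c2 = list(parent1), list(parent2)
    i, j = random.randrange(len(c1)), random.randrange(len(c2))
    c1[i], c2[j] = c2[j], c1[i]
    return c1, c2

def rule_incremental_mutation(ind, max_rules, new_rule):
    """Rule incremental mutation sketch: append a (caller-supplied) new
    rule only if the size constraint still allows it."""
    return ind + [new_rule] if len(ind) < max_rules else ind
```

Note that both operators respect the constraint handling of Section 3.2.2 by construction: swapping rules never changes the rule counts, and the incremental operator refuses to exceed the maximum.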

4. Experiment and Results

To ensure the reproducibility of the experiments, we have used publicly available datasets. In particular, we have designed two sets of experiments, one using the Breast Cancer [51] dataset, and the other using the Monk’s Problem 2 [52] dataset.

4.1. The Breast Cancer Dataset

Breast Cancer encompasses 286 instances. Each instance corresponds to a patient who suffered from breast cancer and is described by nine attributes. The class to be predicted is binary and represents whether the patient suffered a recurrent cancer event. In this dataset, 85 instances are positive and 201 are negative. Table 2 summarizes the attributes of the dataset. Nine instances present some missing values; in the pre-processing phase, these have been replaced by the mode of the corresponding attribute.
Table 2

Attribute description of the Breast Cancer dataset.

#    Attribute Name    Type           Possible Values
1    age               categorical    10–19, 20–29, 30–39, 40–49, 50–59, 60–69, 70–79, 80–89, 90–99
2    menopause         categorical    lt40, ge40, premeno
3    tumour-size       categorical    0–4, 5–9, 10–14, 15–19, 20–24, 25–29, 30–34, 35–39, 40–44, 45–49, 50–54, 55–59
4    inv-nodes         categorical    0–2, 3–5, 6–8, 9–11, 12–14, 15–17, 18–20, 21–23, 24–26, 27–29, 30–32, 33–35, 36–39
5    node-caps         categorical    yes, no
6    deg-malig         categorical    1, 2, 3
7    breast            categorical    left, right
8    breast-quad       categorical    left-up, left-low, right-up, right-low, central
9    irradiat          categorical    yes, no
10   class             categorical    no-recurrence-events, recurrence-events

4.2. The Monk’s Problem 2 Dataset

In July 1991, the monks of Corsendonk Priory attended a summer course that was being held in their priory, namely the 2nd European Summer School on Machine Learning. After a week, the monks could not yet clearly identify the best ML algorithms, or which algorithms to avoid in which cases. For this reason, they decided to create the three so-called Monk's problems, and used them to determine which ML algorithms were the best. These problems, rather simple and completely artificial, later became famous (because of their peculiar origin), and have been used as a benchmark for many algorithms on several occasions. In particular, in [53], they have been used to test the performance of state-of-the-art (at that time) learning algorithms such as AQ17-DCI, AQ17-HCI, AQ17-FCLS, AQ14-NT, AQ15-GA, Assistant Professional, mFOIL, ID5R, IDL, ID5R-hat, TDIDT, ID3, AQR, CN2, WEB CLASS, ECOBWEB, PRISM, Backpropagation, and Cascade Correlation. For our research, we have used the Monk's Problem 2, which contains six categorical input attributes and a binary output attribute, summarized in Table 3. The target concept associated with the Monk's Problem 2 is the binary outcome of the logical formula:

    EXACTLY TWO of the six attributes take their first value

In this dataset, the original training and testing sets were merged to allow other sampling procedures. The set contains a total of 601 instances, and no missing values.
Table 3

Attribute description of the MONK’s Problem 2 dataset.

#    Attribute Name    Type           Possible Values
1    head_shape        categorical    round, square, octagon
2    body_shape        categorical    round, square, octagon
3    is_smiling        categorical    yes, no
4    holding           categorical    sword, balloon, flag
5    jacket_color      categorical    red, yellow, green, blue
6    has_tie           categorical    yes, no
7    class             categorical    yes, no

4.3. Optimization Models

We have conducted different experiments with different optimization models to calculate the overall performance of our proposed technique and to see the effect of optimizing different objectives for the same problem. First, we have designed a multi-objective constrained optimization model based on the accuracy:

    maximize    f_1(φ) = ACC(φ, D)
    minimize    f_2(φ) = NR(φ)
    subject to  NR(φ) >= w,  NR(φ) <= M_max        (4)

where ACC(φ, D) is the proportion of correctly classified instances (both true positives and true negatives) among the total number of instances [54] obtained with the classifier φ for the dataset D. ACC is defined as:

    ACC(φ, D) = (1/K) Σ_{i=1}^{K} δ(φ(e_i), y_i)

where K is the number of instances of the dataset D, and δ(φ(e_i), y_i) is the result of the classification of the instance i in D with the classifier φ, that is, δ(φ(e_i), y_i) = 1 if the predicted value φ(e_i) of the ith instance coincides with the corresponding true value y_i in D, and 0 otherwise. Our second optimization model is based on the area under the ROC curve:

    maximize    f_1(φ) = AUC(φ, D)
    minimize    f_2(φ) = NR(φ)
    subject to  NR(φ) >= w,  NR(φ) <= M_max        (5)

where AUC(φ, D) is the area under the ROC curve obtained with the classifier φ on the dataset D. The ROC (Receiver Operating Characteristic) curve [55] is a graphical representation of the sensitivity versus one minus the specificity of a classifier as the discrimination threshold varies. Such a curve can be used to generate statistics that summarize the performance of a classifier, and it has been shown in [54] to be a simple, yet complete, empirical description of the decision threshold effect, indicating all possible combinations of the relative frequencies of the various kinds of correct and incorrect decisions. The area under the ROC curve can be computed as follows [56]:

    AUC = ∫_0^1 Se(t) d(1 − Sp(t))

where Se (sensitivity) is the proportion of positive instances classified as positive by the classifier in D, Sp (specificity) is the proportion of negative instances classified as negative by the classifier in D, and t is the discrimination threshold.
Finally, our third constrained optimization model is based on the root mean square error, where RMSE(C, D) is defined as the square root of the mean square error obtained with a classifier C on the dataset D:

RMSE(C, D) = √( (1/K) Σ_{i=1}^{K} (ŷ_i − y_i)² ),

where ŷ_i is the predicted value of the ith instance for the classifier C, and y_i is the corresponding output value in the dataset D. Accuracy, area under the ROC curve, and root mean square error are all well-accepted measures for evaluating the performance of a classifier, so it is natural to use them as fitness functions. In this way, we can establish which one behaves better in the optimization phase, and we can compare the results with those in the literature.
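The three measures above are standard and can be computed directly from predictions. The following minimal sketch (our illustration, not the paper's implementation) computes accuracy and RMSE from numerically encoded labels, and approximates the AUC by the trapezoidal rule over a list of ROC points:

```python
def accuracy(y_true, y_pred):
    # ACC: proportion of correctly classified instances, (1/K) * sum of c_i,
    # with c_i = 1 when the prediction matches the true label and 0 otherwise.
    return sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)

def rmse(y_true, y_pred):
    # RMSE: square root of the mean squared prediction error over K instances
    # (labels numerically encoded, e.g. 0/1 for a binary class).
    k = len(y_true)
    return (sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / k) ** 0.5

def auc_from_points(roc_points):
    # Trapezoidal approximation of the area under a ROC curve given as
    # (FPR, TPR) points sorted by increasing FPR, i.e. by 1 - specificity.
    area = 0.0
    for (x0, y0), (x1, y1) in zip(roc_points, roc_points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area
```

For hard-label classifiers, such as the rule-based ones considered here, the ROC curve degenerates to a few points, which is why the trapezoidal approximation above suffices.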

4.4. Choosing the Best Pareto Front

To compare the performance of ENORA and NSGA-II as metaheuristics for this particular optimization task, we use the hypervolume metric [57,58], which measures, simultaneously, the diversity and the optimality of the non-dominated solutions. The main advantage of the hypervolume over other standard measures, such as the error ratio, the generational distance, the maximum Pareto-optimal front error, the spread, the maximum spread, or the chi-square-like deviation, is that it can be computed without knowing the optimal population, which is not always available [15]. The hypervolume HV(P) is defined as the volume of the search space dominated by a population P, that is, the volume of the union of the regions dominated by the individuals in ND(P), where ND(P) is the set of non-dominated individuals of P. Subsequently, the hypervolume ratio (HVR) is defined as the ratio of the volume of the non-dominated search space over the volume of the entire search space S:

HVR(P) = HV(P) / HV(S),

where HV(S) is the volume of the search space. Computing the HVR requires reference points that identify the minimum and maximum values of each objective; for RBC optimization, as proposed in this work, such minimum and maximum reference points are set for each objective in the multi-objective optimization models in Equations (4)–(6). A first single execution of all six models (three driven by ENORA, and three driven by NSGA-II), over both datasets, has been designed for the purpose of showing the shape of the final Pareto fronts and comparing the hypervolume ratios of the models. The results of this single execution, with population size equal to 50 and 20,000 generations (1,000,000 evaluations in total), are shown in Figure 3 and Figure 4 (by default, the maximum number of rules is set to 10, to which we add 2 because both datasets have a binary class).
Regarding the configuration of the number of generations and the size of the population, our criterion was the following: once the number of evaluations is set to 1,000,000, we can either use a population size of 100 individuals and 10,000 generations, or a population size of 50 individuals and 20,000 generations. The first configuration (100 × 10,000) allows a greater diversity with respect to the number of rules of the classifiers, while the second one (50 × 20,000) allows a better adjustment of the classifier parameters and, therefore, a greater precision. Given that the maximum number of rules of the classifiers is not greater than 12, we consider 50 individuals sufficient to represent, on average, four classifiers for each number of rules (4 × 12 = 48 ∼ 50). Thus, we prefer the second configuration (50 × 20,000), because having more generations increases the chances of building classifiers with a higher precision.
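For the bi-objective models used here, the hypervolume and the HVR admit a simple two-dimensional computation. The sketch below is our illustration (function and reference-point names are assumptions, not the paper's code): it treats both objectives as minimized, filters the non-dominated points, and sweeps the front against the maximum reference corner.

```python
def hypervolume_2d(front, ref_max):
    # front: list of (f1, f2) pairs, both objectives to be minimized.
    # ref_max: the (f1_max, f2_max) corner of the search space.
    nd = sorted(set(front))  # sort by f1, then f2
    pareto, best_f2 = [], float("inf")
    for f1, f2 in nd:  # keep only non-dominated points
        if f2 < best_f2:
            pareto.append((f1, f2))
            best_f2 = f2
    # Right-to-left sweep: each point contributes a rectangle up to ref_max.
    volume, prev_f1 = 0.0, ref_max[0]
    for f1, f2 in reversed(pareto):
        volume += (prev_f1 - f1) * (ref_max[1] - f2)
        prev_f1 = f1
    return volume

def hvr(front, ref_min, ref_max):
    # Hypervolume ratio: dominated volume over the whole search-space volume.
    total = (ref_max[0] - ref_min[0]) * (ref_max[1] - ref_min[1])
    return hypervolume_2d(front, ref_max) / total
```

With higher-dimensional objective spaces the sweep no longer suffices and dedicated algorithms are needed, but for two objectives (number of rules versus a performance metric) this direct computation is exact.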
Figure 3

Pareto fronts of one execution of ENORA and NSGA-II on the Breast Cancer dataset, and their respective HVR. Note that, in the multi-objective classification models where the performance objective is maximized (accuracy and area under the ROC curve), the objective function has been converted to minimization for a better understanding of the Pareto front.

Figure 4

Pareto fronts of one execution of ENORA and NSGA-II on the Monk’s Problem 2 dataset, and their respective HVR. Note that, in the multi-objective classification models where the performance objective is maximized (accuracy and area under the ROC curve), the objective function has been converted to minimization for a better understanding of the Pareto front.

Experiments were executed on an x64-based PC with one Intel64 Family 6 Model 60 Stepping 3 GenuineIntel processor at 3201 MHz and 8131 MB of RAM. Table 4 shows the run time of each method over both datasets. Note that, although ENORA has lower algorithmic complexity than NSGA-II, it took longer in the experiments. This is because the evaluation time of individuals in ENORA is higher than in NSGA-II: ENORA maintains more diversity and therefore evaluates classifiers with more rules.
Table 4

Run times of ENORA and NSGA-II for Breast Cancer and Monk’s Problem 2 datasets.

Method  Breast Cancer  Monk’s Problem 2
ENORA-ACC  244.92 s  428.14 s
ENORA-AUC  294.75 s  553.11 s
ENORA-RMSE  243.30 s  414.42 s
NSGA-II-ACC  127.13 s  260.83 s
NSGA-II-AUC  197.07 s  424.83 s
NSGA-II-RMSE  134.87 s  278.19 s
From these results, we can deduce, first, that ENORA maintains a higher diversity of the population and achieves a better hypervolume ratio than NSGA-II, and, second, that using accuracy as the first objective generates better fronts than using the area under the ROC curve, which, in turn, performs better than using the root mean square error.

4.5. Comparing Our Method with Other Classifier Learning Systems (Full Training Mode)

To perform an initial comparison between the performance of the classifiers obtained with the proposed method and those obtained with classical methods (PART, JRip, OneR and ZeroR), we executed the six models again in full training mode. The parameters were configured as in the previous experiment (population size equal to 50 and 20,000 generations), except for the maximum number of rules, which was set to 2 for the Breast Cancer dataset and to 9 for Monk’s Problem 2. Observe that such a low maximum leads, in practice, to a single-objective search for the Breast Cancer dataset. In fact, preliminary experiments showed that the classical classifier learning systems tend to return very small, although not very precise, sets of rules on Breast Cancer, which justifies our choice; on the other hand, the classical rule learners return more diverse sets of rules on Monk’s Problem 2, which justifies choosing a higher maximum in that case. To decide, a posteriori, which individual is chosen from the final front, we used the default criterion: the individual with the best value on the first objective is returned. In the case of Monk’s Problem 2, that individual has seven rules. The comparison is shown in Table 5 and Table 6, which report, for each classifier, the following information: number of rules, percent correct, true positive rate, false positive rate, precision, recall, F-measure, Matthews correlation coefficient, area under the ROC curve, area under the precision-recall curve, and root mean square error. On the Breast Cancer dataset, the best result emerged from the proposed method, namely from the optimization model driven by NSGA-II with root mean square error as the first objective (see Table 7); only PART was able to achieve similar, although slightly worse, results, and at the price of having 15 rules, making the system clearly not interpretable.
In the case of the Monk’s Problem 2 dataset, PART returned a model with 47 rules, which, although very accurate, is not interpretable by any standard. The best interpretable result is the one with seven rules returned by ENORA driven by the root mean square error (see Table 8). The experiments for the classical learners were conducted using their default parameters.
Table 5

Comparison of the performance of the learning models in full training mode—Breast Cancer dataset.

Learning Model  Number of Rules  Percent Correct  TP Rate  FP Rate  Precision  Recall  F-Measure  MCC  ROC Area  PRC Area  RMSE
ENORA-ACC  2  79.02  0.790  0.449  0.796  0.790  0.762  0.455  0.671  0.697  0.458
ENORA-AUC  2  75.87  0.759  0.374  0.751  0.759  0.754  0.402  0.693  0.696  0.491
ENORA-RMSE  2  77.62  0.776  0.475  0.778  0.776  0.744  0.410  0.651  0.680  0.473
NSGA-II-ACC  2  77.97  0.780  0.501  0.805  0.780  0.738  0.429  0.640  0.679  0.469
NSGA-II-AUC  2  75.52  0.755  0.368  0.749  0.755  0.752  0.399  0.693  0.696  0.495
NSGA-II-RMSE  2  79.37  0.794  0.447  0.803  0.794  0.765  0.467  0.673  0.700  0.454
PART  15  78.32  0.783  0.397  0.773  0.783  0.769  0.442  0.777  0.793  0.398
JRip  3  76.92  0.769  0.471  0.762  0.769  0.740  0.389  0.650  0.680  0.421
OneR  1  72.72  0.727  0.563  0.703  0.727  0.680  0.241  0.582  0.629  0.522
ZeroR  -  70.27  0.703  0.703  0.494  0.703  0.580  0.000  0.500  0.582  0.457
Table 6

Comparison of the performance of the learning models in full training mode—Monk’s Problem 2 dataset.

Learning Model  Number of Rules  Percent Correct  TP Rate  FP Rate  Precision  Recall  F-Measure  MCC  ROC Area  PRC Area  RMSE
ENORA-ACC  7  75.87  0.759  0.370  0.753  0.759  0.745  0.436  0.695  0.680  0.491
ENORA-AUC  7  68.71  0.687  0.163  0.836  0.687  0.687  0.523  0.762  0.729  0.559
ENORA-RMSE  7  77.70  0.777  0.360  0.777  0.777  0.762  0.481  0.708  0.695  0.472
NSGA-II-ACC  7  68.38  0.684  0.588  0.704  0.684  0.597  0.203  0.548  0.580  0.562
NSGA-II-AUC  7  66.38  0.664  0.175  0.830  0.664  0.661  0.497  0.744  0.715  0.580
NSGA-II-RMSE  7  68.71  0.687  0.591  0.737  0.687  0.595  0.226  0.548  0.583  0.559
PART  47  94.01  0.940  0.087  0.940  0.940  0.940  0.866  0.980  0.979  0.218
JRip  1  65.72  0.657  0.657  0.432  0.657  0.521  0.000  0.500  0.549  0.475
OneR  1  65.72  0.657  0.657  0.432  0.657  0.521  0.000  0.500  0.549  0.585
ZeroR  -  65.72  0.657  0.657  0.432  0.657  0.521  0.000  0.500  0.549  0.475
Table 7

Rule-based classifier obtained with NSGA-II-RMSE for Breast Cancer dataset.

Rule  Antecedents  Consequent
R1: IF age = 50–59 AND inv-nodes = 0–2 AND node-caps = no
AND deg-malig = 1 AND breast = right AND breast-quad = left-low THEN class = no-recurrence-events
R2: IF age = 60–69 AND inv-nodes = 18–20 AND node-caps = yes
AND deg-malig = 3 AND breast = left AND breast-quad = right-up THEN class = recurrence-events
Table 8

Rule-based classifier obtained with ENORA-RMSE for Monk’s Problem 2 dataset.

Rule  Antecedents  Consequent
R1: IF head_shape = round AND body_shape = round AND is_smiling = no
AND holding = sword AND jacket_color = red AND has_tie = yes THEN class = yes
R2: IF head_shape = octagon AND body_shape = round AND is_smiling = no
AND holding = sword AND jacket_color = red AND has_tie = no THEN class = yes
R3: IF head_shape = round AND body_shape = round AND is_smiling = no
AND holding = sword AND jacket_color = yellow AND has_tie = yes THEN class = yes
R4: IF head_shape = round AND body_shape = round AND is_smiling = no
AND holding = sword AND jacket_color = red AND has_tie = no THEN class = yes
R5: IF head_shape = square AND body_shape = square AND is_smiling = yes
AND holding = flag AND jacket_color = yellow AND has_tie = no THEN class = no
R6: IF head_shape = octagon AND body_shape = round AND is_smiling = yes
AND holding = balloon AND jacket_color = blue AND has_tie = no THEN class = no
R7: IF head_shape = octagon AND body_shape = octagon AND is_smiling = yes
AND holding = sword AND jacket_color = green AND has_tie = no THEN class = no
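A rule base such as the one in Table 8 is applied by matching an instance against the antecedent conjunctions. The sketch below is our illustration of one simple inference scheme (first matching rule wins, with the majority class as default when no rule fires); the paper's exact inference mechanism may differ, and only rules R1 and R5 of Table 8 are encoded here:

```python
# Each rule: (antecedent dict of attribute -> required value, consequent class).
RULES = [
    ({"head_shape": "round", "body_shape": "round", "is_smiling": "no",
      "holding": "sword", "jacket_color": "red", "has_tie": "yes"}, "yes"),   # R1
    ({"head_shape": "square", "body_shape": "square", "is_smiling": "yes",
      "holding": "flag", "jacket_color": "yellow", "has_tie": "no"}, "no"),   # R5
]

def classify(instance, rules, default="no"):
    # Return the consequent of the first rule whose antecedent is fully
    # satisfied by the instance; otherwise return the default class.
    for antecedent, consequent in rules:
        if all(instance.get(attr) == val for attr, val in antecedent.items()):
            return consequent
    return default
```

Because every antecedent in these classifiers mentions all attributes, the match test is a plain conjunction of equality checks over categorical values.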

4.6. Comparing Our Method with Other Classifier Learning Systems (Cross-Validation and Train/Test Percentage Split Mode)

To test the capabilities of our methodology in a more significant way, we proceeded as follows. First, we designed a cross-validated experiment for the Breast Cancer dataset, in which we repeated a 10-fold cross-validation learning process [59] three times and considered the average values of the performance metrics percent correct, area under the ROC curve, and serialized model size over all results. Second, we designed a train/test percentage split experiment for the Monk’s Problem 2 dataset, in which we repeated ten times a 66% (training) versus 33% (testing) split and considered, again, the average results of the same metrics. Finally, we performed a statistical test over the results, to understand whether they show any statistically significant difference. An execution of our methodology, and of the standard classical learners, has been performed to obtain the models to be tested under precisely the same conditions as the experiment in Section 4.5. It is worth observing that using two different types of evaluation allows us to make sure that our results are not influenced by the type of experiment. The results of the experiments are shown in Table 9 and Table 10.
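The evaluation protocol of repeated k-fold cross-validation can be sketched as follows; the learner and metric are stand-in callables, and the splitting below is plain (unstratified), which is a simplification of what evaluation suites such as Weka actually do:

```python
import random

def repeated_cv_score(instances, learn, metric, folds=10, repeats=3, seed=0):
    # Average a metric over repeats x folds train/test partitions.
    rng = random.Random(seed)
    scores = []
    for _ in range(repeats):
        idx = list(range(len(instances)))
        rng.shuffle(idx)  # a fresh random partition per repetition
        for f in range(folds):
            test_idx = set(idx[f::folds])
            train = [instances[i] for i in idx if i not in test_idx]
            test = [instances[i] for i in test_idx]
            model = learn(train)
            scores.append(metric(model, test))
    return sum(scores) / len(scores)
```

The percentage-split experiment is the degenerate case of a single random train/test partition, repeated ten times with different shuffles.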
Table 9

Comparison of the performance of the learning models in 10-fold cross-validation mode (three repetitions)—Breast Cancer dataset.

Learning Model  Percent Correct  ROC Area  Serialized Model Size
ENORA-ACC  73.45  0.61  9554.80
ENORA-AUC  70.16  0.62  9554.63
ENORA-RMSE  72.39  0.60  9557.77
NSGA-II-ACC  72.50  0.60  9556.20
NSGA-II-AUC  70.03  0.61  9555.70
NSGA-II-RMSE  73.34  0.60  9558.60
PART  68.92  0.61  55,298.13
JRip  71.82  0.61  7664.07
OneR  67.15  0.55  1524.00
ZeroR  70.30  0.50  915.00
Table 10

Comparison of the performance of the learning models in split mode—Monk’s problem 2 dataset.

Learning Model  Percent Correct  ROC Area  Serialized Model Size
ENORA-ACC  76.69  0.70  9586.50
ENORA-AUC  72.82  0.79  9589.30
ENORA-RMSE  75.66  0.68  9585.30
NSGA-II-ACC  70.07  0.59  9590.60
NSGA-II-AUC  67.08  0.70  9619.70
NSGA-II-RMSE  67.63  0.54  9565.90
PART  73.51  0.79  73,115.90
JRip  64.05  0.50  5956.90
OneR  65.72  0.50  1313.00
ZeroR  65.72  0.50  888.00
The statistical tests aim to verify whether there are significant differences among the means of each metric: percent correct, area under the ROC curve, and serialized model size. We proceeded as follows. First, we checked the normality of each sample by means of the Shapiro–Wilk test. Then, if the normality and sphericity conditions were met, we applied one-way repeated measures ANOVA; otherwise, we applied the Friedman test. In the latter case, when statistically significant differences were detected, we applied the Nemenyi post-hoc test to locate them. Table A1, Table A2, Table A3, Table A4, Table A5, Table A6, Table A7, Table A8, Table A9, Table A10, Table A11 and Table A12 in Appendix A show the results of the performed tests for the Breast Cancer dataset for each of the three metrics, and Table A13, Table A14, Table A15, Table A16, Table A17, Table A18, Table A19, Table A20, Table A21, Table A22, Table A23 and Table A24 in Appendix B show the results for the Monk’s Problem 2 dataset.
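The Friedman test ranks the algorithms within each repetition and compares their mean ranks. A minimal pure-Python computation of its chi-square statistic (tie-aware ranking, without the final p-value lookup, for which a chi-square distribution would be needed) is sketched below:

```python
def friedman_statistic(results):
    # results[i][j]: metric value of algorithm j on repetition/block i.
    # Only the within-block ranks matter, not the raw values.
    n, k = len(results), len(results[0])
    rank_sums = [0.0] * k
    for block in results:
        order = sorted(range(k), key=lambda j: block[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            # extend the group while the next value ties with the current one
            while j + 1 < k and block[order[j + 1]] == block[order[i]]:
                j += 1
            avg = (i + j) / 2.0 + 1.0  # average rank of the tied positions
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for jj in range(k):
            rank_sums[jj] += ranks[jj]
    # chi-square statistic: 12/(n k (k+1)) * sum R_j^2 - 3 n (k+1)
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3.0 * n * (k + 1)
```

In practice one would compare the statistic against the chi-square distribution with k − 1 degrees of freedom (e.g., via scipy.stats), and follow up with the Nemenyi procedure on the same rank sums.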
Table A1

Shapiro–Wilk normality test p-values for percent correct metric—Breast Cancer dataset.

Algorithm  p-Value  Null Hypothesis
ENORA-ACC  0.5316  Not Rejected
ENORA-AUC  0.3035  Not Rejected
ENORA-RMSE  0.7609  Not Rejected
NSGA-II-ACC  0.1734  Not Rejected
NSGA-II-AUC  0.3802  Not Rejected
NSGA-II-RMSE  0.6013  Not Rejected
PART  0.0711  Not Rejected
JRip  0.5477  Not Rejected
OneR  0.316  Not Rejected
ZeroR  3.818 × 10^-6  Rejected
Table A2

Friedman p-value for percent correct metric—Breast Cancer dataset.

p-Value  Null Hypothesis
Friedman  5.111 × 10^-4  Rejected
Table A3

Nemenyi post-hoc procedure for percent correct metric—Breast Cancer dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART JRip OneR
ENORA-AUC  0.2597  -  -  -  -  -  -  -  -
ENORA-RMSE  0.9627  0.9627  -  -  -  -  -  -  -
NSGA-II-ACC  0.9981  0.8047  1.0000  -  -  -  -  -  -
NSGA-II-AUC  0.2951  1.0000  0.9735  0.8386  -  -  -  -  -
NSGA-II-RMSE  1.0000  0.2169  0.9436  0.9960  0.2486  -  -  -  -
PART  0.1790  1.0000  0.9186  0.6997  1.0000  0.1461  -  -  -
JRip  0.9909  0.8956  1.0000  1.0000  0.9186  0.9840  0.8164  -  -
OneR  0.0004  0.6414  0.0451  0.0108  0.5961  0.0002  0.7546  0.0212  -
ZeroR  0.2377  1.0000  0.9538  0.7803  1.0000  0.1973  1.0000  0.8783  0.6709
Table A4

Summary of statistically significant differences for percent correct metric—Breast Cancer dataset.

ENORA-ACC ENORA-RMSE NSGA-II-ACC NSGA-II-RMSE JRip
OneR ENORA-ACC ENORA-RMSE NSGA-II-ACC NSGA-II-RMSE JRip
Table A5

Shapiro–Wilk normality test p-values for area under the ROC curve metric—Breast Cancer dataset.

Algorithm  p-Value  Null Hypothesis
ENORA-ACC  0.6807  Not Rejected
ENORA-AUC  0.3171  Not Rejected
ENORA-RMSE  0.6125  Not Rejected
NSGA-II-ACC  0.0871  Not Rejected
NSGA-II-AUC  0.5478  Not Rejected
NSGA-II-RMSE  0.6008  Not Rejected
PART  0.6066  Not Rejected
JRip  0.2978  Not Rejected
OneR  0.4531  Not Rejected
ZeroR  0.0000  Rejected
Table A6

Friedman p-value for area under the ROC curve metric—Breast Cancer dataset.

p-Value  Null Hypothesis
Friedman  8.232 × 10^-10  Rejected
Table A7

Nemenyi post-hoc procedure for area under the ROC curve metric—Breast Cancer dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART JRip OneR
ENORA-AUC  1.0000  -  -  -  -  -  -  -  -
ENORA-RMSE  0.9972  0.9990  -  -  -  -  -  -  -
NSGA-II-ACC  0.9999  1.0000  1.0000  -  -  -  -  -  -
NSGA-II-AUC  1.0000  1.0000  1.0000  1.0000  -  -  -  -  -
NSGA-II-RMSE  0.9990  0.9997  1.0000  1.0000  1.0000  -  -  -  -
PART  0.9999  1.0000  1.0000  1.0000  1.0000  1.0000  -  -  -
JRip  1.0000  1.0000  0.9992  1.0000  1.0000  0.9998  1.0000  -  -
OneR  0.0041  0.0062  0.0790  0.0323  0.0281  0.0582  0.0345  0.0067  -
ZeroR  3.8 × 10^-7  7.2 × 10^-7  4.6 × 10^-5  9.8 × 10^-6  7.8 × 10^-6  2.7 × 10^-5  1.1 × 10^-5  8.1 × 10^-7  0.6854
Table A8

Summary of statistically significant differences for area under the ROC curve metric—Breast Cancer dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART JRip
OneR ENORA-ACC ENORA-AUC - NSGA-II-ACC NSGA-II-AUC - PART JRip
ZeroR ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART JRip
Table A9

Shapiro–Wilk normality test p-values for serialized model size metric—Breast Cancer dataset.

Algorithm  p-Value  Null Hypothesis
ENORA-ACC  5.042 × 10^-5  Rejected
ENORA-AUC  2.997 × 10^-7  Rejected
ENORA-RMSE  4.762 × 10^-4  Rejected
NSGA-II-ACC  4.88 × 10^-6  Rejected
NSGA-II-AUC  2.339 × 10^-7  Rejected
NSGA-II-RMSE  2.708 × 10^-6  Rejected
PART  0.3585  Not Rejected
JRip  9.086 × 10^-3  Rejected
OneR  1.007 × 10^-7  Rejected
ZeroR  0.0000  Rejected
Table A10

Friedman p-value for serialized model size metric—Breast Cancer dataset.

p-Value  Null Hypothesis
Friedman  2.2 × 10^-16  Rejected
Table A11

Nemenyi post-hoc procedure for serialized model size metric—Breast Cancer dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART JRip OneR
ENORA-AUC  0.9998  -  -  -  -  -  -  -  -
ENORA-RMSE  0.0053  0.0004  -  -  -  -  -  -  -
NSGA-II-ACC  0.3871  0.0942  0.8872  -  -  -  -  -  -
NSGA-II-AUC  0.8872  0.4894  0.3871  0.9988  -  -  -  -  -
NSGA-II-RMSE  4.1 × 10^-5  1.3 × 10^-6  0.9860  0.2169  0.0244  -  -  -  -
PART  4.7 × 10^-9  5.6 × 10^-11  0.1973  0.0013  3.3 × 10^-5  0.8689  -  -  -
JRip  0.2712  0.6997  1.2 × 10^-8  7.0 × 10^-5  0.0025  6.3 × 10^-12  6.9 × 10^-14  -  -
OneR  0.0062  0.0546  1.5 × 10^-12  5.5 × 10^-8  5.5 × 10^-6  8.3 × 10^-14  8.3 × 10^-14  0.9584  -
ZeroR  1.9 × 10^-5  0.0004  7.3 × 10^-14  8.6 × 10^-12  2.3 × 10^-9  8.5 × 10^-14  < 2 × 10^-16  0.2377  0.9584
Table A12

Summary of statistically significant differences for serialized model size metric—Breast Cancer dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART
ENORA-RMSE ENORA-ACC NSGA-II-AUC -----
NSGA-II-RMSE ENORA-ACC ENORA-AUC -- NSGA-II-AUC --
PART ENORA-ACC ENORA-AUC - NSGA-II-ACC NSGA-II-AUC --
JRip -- JRip JRip JRip JRip JRip
OneR OneR - OneR OneR OneR OneR OneR
ZeroR ZeroR ZeroR ZeroR ZeroR ZeroR ZeroR ZeroR
Table A13

Shapiro–Wilk normality test p-values for percent correct metric—Monk’s Problem 2 dataset.

Algorithm  p-Value  Null Hypothesis
ENORA-ACC  0.6543  Not Rejected
ENORA-AUC  0.6842  Not Rejected
ENORA-RMSE  0.0135  Rejected
NSGA-II-ACC  0.979  Not Rejected
NSGA-II-AUC  0.382  Not Rejected
NSGA-II-RMSE  0.0486  Rejected
PART  0.5671  Not Rejected
JRip  0.075  Rejected
OneR  4.672 × 10^-6  Rejected
ZeroR  4.672 × 10^-6  Rejected
Table A14

Friedman p-value for percent correct metric—Monk’s Problem 2 dataset.

p-Value  Null Hypothesis
Friedman  1.292 × 10^-7  Rejected
Table A15

Nemenyi post-hoc procedure for percent correct metric—Monk’s Problem 2 dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART JRip OneR
ENORA-AUC  0.8363  -  -  -  -  -  -  -  -
ENORA-RMSE  1.0000  0.9471  -  -  -  -  -  -  -
NSGA-II-ACC  0.1907  0.9902  0.3481  -  -  -  -  -  -
NSGA-II-AUC  0.0126  0.6294  0.0342  0.9958  -  -  -  -  -
NSGA-II-RMSE  0.0126  0.6294  0.0342  0.9958  1.0000  -  -  -  -
PART  0.8714  1.0000  0.9631  0.9841  0.5769  0.5769  -  -  -
JRip  2.1 × 10^-6  0.0048  1.0 × 10^-5  0.1341  0.6806  0.6806  0.0036  -  -
OneR  0.0001  0.0743  0.0006  0.6032  0.9875  0.9875  0.0601  0.9984  -
ZeroR  0.0001  0.0743  0.0006  0.6032  0.9875  0.9875  0.0601  0.9984  1.0000
Table A16

Summary of statistically significant differences for percent correct metric—Monk’s Problem 2 dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE PART
NSGA-II-AUC ENORA-ACC - ENORA-RMSE -
NSGA-II-RMSE ENORA-ACC - ENORA-RMSE -
JRip ENORA-ACC ENORA-AUC ENORA-RMSE PART
OneR ENORA-ACC - ENORA-RMSE -
ZeroR ENORA-ACC - ENORA-RMSE -
Table A17

Shapiro–Wilk normality test p-values for area under the ROC curve metric—Monk’s Problem 2 dataset.

Algorithm  p-Value  Null Hypothesis
ENORA-ACC  0.4318  Not Rejected
ENORA-AUC  0.7044  Not Rejected
ENORA-RMSE  0.0033  Rejected
NSGA-II-ACC  0.3082  Not Rejected
NSGA-II-AUC  0.0243  Rejected
NSGA-II-RMSE  0.7802  Not Rejected
PART  0.1641  Not Rejected
JRip  0.3581  Not Rejected
OneR  0.0000  Rejected
ZeroR  0.0000  Rejected
Table A18

Friedman p-value for area under the ROC curve metric—Monk’s Problem 2 dataset.

p-Value  Null Hypothesis
Friedman  1.051 × 10^-8  Rejected
Table A19

Nemenyi post-hoc procedure for area under the ROC curve metric—Monk’s Problem 2 dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART JRip OneR
ENORA-AUC  0.8363  -  -  -  -  -  -  -  -
ENORA-RMSE  1.0000  0.7054  -  -  -  -  -  -  -
NSGA-II-ACC  0.8870  0.0539  0.9556  -  -  -  -  -  -
NSGA-II-AUC  1.0000  0.8544  1.0000  0.8713  -  -  -  -  -
NSGA-II-RMSE  0.5504  0.0084  0.7054  0.9999  0.5239  -  -  -  -
PART  0.7054  1.0000  0.5504  0.0269  0.7295  0.0036  -  -  -
JRip  0.0238  2.3 × 10^-5  0.0482  0.6806  0.0211  0.9471  7.0 × 10^-6  -  -
OneR  0.0084  4.7 × 10^-6  0.0186  0.4715  0.0073  0.8363  1.4 × 10^-6  1.0000  -
ZeroR  0.0084  4.7 × 10^-6  0.0186  0.4715  0.0073  0.8363  1.4 × 10^-6  1.0000  1.0000
Table A20

Summary of statistically significant differences for area under the ROC curve metric—Monk’s Problem 2 dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC PART
NSGA-II-RMSE - ENORA-AUC -----
PART --- PART - PART -
JRip ENORA-ACC ENORA-AUC ENORA-RMSE - NSGA-II-AUC - PART
OneR ENORA-ACC ENORA-AUC ENORA-RMSE - NSGA-II-AUC - PART
ZeroR ENORA-ACC ENORA-AUC ENORA-RMSE - NSGA-II-AUC - PART
Table A21

Shapiro–Wilk normality test p-values for serialized model size metric—Monk’s Problem 2 dataset.

Algorithm  p-Value  Null Hypothesis
ENORA-ACC  4.08 × 10^-5  Rejected
ENORA-AUC  0.0002  Rejected
ENORA-RMSE  0.0094  Rejected
NSGA-II-ACC  0.0192  Rejected
NSGA-II-AUC  0.0846  Rejected
NSGA-II-RMSE  0.0037  Rejected
PART  0.9721  Not Rejected
JRip  0.0068  Rejected
OneR  0.0000  Rejected
ZeroR  0.0000  Rejected
Table A22

Friedman p-value for serialized model size metric—Monk’s Problem 2 dataset.

p-Value  Null Hypothesis
Friedman  2.657 × 10^-13  Rejected
Table A23

Nemenyi post-hoc procedure for serialized model size metric—Monk’s Problem 2 dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART JRip OneR
ENORA-AUC  1.0000  -  -  -  -  -  -  -  -
ENORA-RMSE  1.0000  1.0000  -  -  -  -  -  -  -
NSGA-II-ACC  1.0000  1.0000  1.0000  -  -  -  -  -  -
NSGA-II-AUC  0.9925  0.9696  0.9984  0.9841  -  -  -  -  -
NSGA-II-RMSE  0.8870  0.9556  0.7966  0.9267  0.2622  -  -  -  -
PART  0.2824  0.1752  0.3957  0.2246  0.9015  0.0027  -  -  -
JRip  0.1752  0.2824  0.1110  0.2246  0.0084  0.9752  1.0 × 10^-5  -  -
OneR  0.0211  0.0431  0.0110  0.0304  0.0004  0.6552  1.5 × 10^-7  0.9993  -
ZeroR  0.0012  0.0031  0.0006  0.0020  1.0 × 10^-5  0.1907  1.3 × 10^-9  0.9015  0.9993
Table A24

Summary of statistically significant differences for serialized model size metric—Monk’s Problem 2 dataset.

ENORA-ACC ENORA-AUC ENORA-RMSE NSGA-II-ACC NSGA-II-AUC NSGA-II-RMSE PART
PART ----- NSGA-II-RMSE -
JRip ---- JRip - JRip
OneR OneR OneR OneR OneR OneR - OneR
ZeroR ZeroR ZeroR ZeroR ZeroR ZeroR - ZeroR

4.7. Additional Experiments

Finally, we show the results of the evaluation with 10-fold cross-validation for the Monk’s Problem 2 dataset and for the following four other datasets: the Tic-Tac-Toe-Endgame dataset, with 9 input attributes, 958 instances, and a binary class (Table 11).
Table 11

Attribute description of the Tic-Tac-Toe-Endgame dataset.

#  Attribute Name  Type  Possible Values
1  top-left-square  categorical  x, o, b
2  top-middle-square  categorical  x, o, b
3  top-right-square  categorical  x, o, b
4  middle-left-square  categorical  x, o, b
5  middle-middle-square  categorical  x, o, b
6  middle-right-square  categorical  x, o, b
7  bottom-left-square  categorical  x, o, b
8  bottom-middle-square  categorical  x, o, b
9  bottom-right-square  categorical  x, o, b
10  class  categorical  positive, negative
Car dataset, with 6 input attributes, 1728 instances, and 4 output classes (Table 12).
Table 12

Attribute description of the Car dataset.

#  Attribute Name  Type  Possible Values
1  buying  categorical  vhigh, high, med, low
2  maint  categorical  vhigh, high, med, low
3  doors  categorical  2, 3, 4, 5-more
4  persons  categorical  2, 4, more
5  lug_boot  categorical  small, med, big
6  safety  categorical  low, med, high
7  class  categorical  unacc, acc, good, vgood
Chess (King-Rook vs. King-Pawn) (kr-vs-kp), with 36 input attributes, 3196 instances, and binary class (Table 13).
Table 13

Attribute description of the kr-vs-kp dataset.

#  Attribute Name  Type  Possible Values
1  bkblk  categorical  t, f
2  bknwy  categorical  t, f
3  bkon8  categorical  t, f
4  bkona  categorical  t, f
5  bkspr  categorical  t, f
6  bkxbq  categorical  t, f
7  bkxcr  categorical  t, f
8  bkxwp  categorical  t, f
9  blxwp  categorical  t, f
10  bxqsq  categorical  t, f
11  cntxt  categorical  t, f
12  dsopp  categorical  t, f
13  dwipd  categorical  g, l
14  hdchk  categorical  t, f
15  katri  categorical  b, n, w
16  mulch  categorical  t, f
17  qxmsq  categorical  t, f
18  r2ar8  categorical  t, f
19  reskd  categorical  t, f
20  reskr  categorical  t, f
21  rimmx  categorical  t, f
22  rkxwp  categorical  t, f
23  rxmsq  categorical  t, f
24  simpl  categorical  t, f
25  skach  categorical  t, f
26  skewr  categorical  t, f
27  skrxp  categorical  t, f
28  spcop  categorical  t, f
29  stlmt  categorical  t, f
30  thrsk  categorical  t, f
31  wkcti  categorical  t, f
32  wkna8  categorical  t, f
33  wknck  categorical  t, f
34  wkovl  categorical  t, f
35  wkpos  categorical  t, f
36  wtoeg  categorical  n, t, f
37  class  categorical  won, nowin
Nursery dataset, with 8 input attributes, 12,960 instances, and 5 output classes (Table 14).
Table 14

Attribute description of the Nursery dataset.

#  Attribute Name  Type  Possible Values
1  parents  categorical  usual, pretentious, great_pret
2  has_nurs  categorical  proper, less_proper, improper, critical, very_crit
3  form  categorical  complete, completed, incomplete, foster
4  children  categorical  1, 2, 3, more
5  housing  categorical  convenient, less_conv, critical
6  finance  categorical  convenient, inconv
7  social  categorical  nonprob, slightly_prob, problematic
8  health  categorical  recommended, priority, not_recom
9  class  categorical  not_recom, recommend, very_recom, priority, spec_prior
We have used the ENORA algorithm together with the ACC and RMSE objective functions in this case, because these combinations produced the best results for the Breast Cancer and Monk’s Problem 2 datasets evaluated in 10-fold cross-validation (population size equal to 50, 20,000 generations, and a maximum number of rules dependent on the number of classes). Table 15 shows the results of the best combination, ENORA-ACC or ENORA-RMSE, together with the results of the classical rule-based classifiers.
Table 15

Comparison of the performance of the learning models in 10-fold cross-validation mode—Monk’s Problem 2, Tic-Tac-Toe-Endgame, Car, kr-vs-kp and Nursery datasets.

Learning Model  Number of Rules  Percent Correct  TP Rate  FP Rate  Precision  Recall  F-Measure  MCC  ROC Area  PRC Area  RMSE
Monk’s problem 2
ENORA-ACC  7  77.70  0.777  0.360  0.777  0.777  0.762  0.481  0.708  0.695  0.472
PART  47  79.53  0.795  0.253  0.795  0.795  0.795  0.544  0.884  0.893  0.380
JRip  1  62.90  0.629  0.646  0.526  0.629  0.535  −0.034  0.478  0.537  0.482
OneR  1  65.72  0.657  0.657  0.432  0.657  0.521  0.000  0.500  0.549  0.586
ZeroR  -  65.72  0.657  0.657  0.432  0.657  0.521  0.000  0.491  0.545  0.457
Tic-Tac-Toe-Endgame
ENORA-ACC/RMSE  2  98.33  0.983  0.031  0.984  0.983  0.983  0.963  0.976  0.973  0.129
PART  49  94.26  0.943  0.076  0.942  0.943  0.942  0.873  0.974  0.969  0.220
JRip  9  97.81  0.978  0.031  0.978  0.978  0.978  0.951  0.977  0.977  0.138
OneR  1  69.94  0.699  0.357  0.701  0.699  0.700  0.340  0.671  0.651  0.548
ZeroR  -  65.35  0.653  0.653  0.427  0.653  0.516  0.000  0.496  0.545  0.476
Car
ENORA-RMSE  14  86.57  0.866  0.089  0.866  0.866  0.846  0.766  0.889  0.805  0.259
PART  68  95.78  0.958  0.016  0.959  0.958  0.958  0.929  0.990  0.979  0.1276
JRip  49  86.46  0.865  0.064  0.881  0.865  0.870  0.761  0.947  0.899  0.224
OneR  1  70.02  0.700  0.700  0.490  0.700  0.577  0.000  0.500  0.543  0.387
ZeroR  -  70.02  0.700  0.700  0.490  0.700  0.577  0.000  0.497  0.542  0.338
kr-vs-kp
ENORA-RMSE  10  94.87  0.949  0.050  0.950  0.949  0.949  0.898  0.950  0.927  0.227
PART  23  99.06  0.991  0.010  0.991  0.991  0.991  0.981  0.997  0.996  0.088
JRip  16  99.19  0.992  0.008  0.992  0.992  0.992  0.984  0.995  0.993  0.088
OneR  1  66.46  0.665  0.350  0.675  0.665  0.655  0.334  0.657  0.607  0.579
ZeroR  -  52.22  0.522  0.522  0.273  0.522  0.358  0.000  0.499  0.500  0.500
Nursery
ENORA-ACC  15  88.41  0.884  0.055  0.870  0.884  0.873  0.824  0.915  0.818  0.2153
PART  220  99.21  0.992  0.003  0.992  0.992  0.992  0.989  0.999  0.997  0.053
JRip  131  96.84  0.968  0.012  0.968  0.968  0.968  0.957  0.993  0.974  0.103
OneR  1  70.97  0.710  0.137  0.695  0.710  0.702  0.570  0.786  0.632  0.341
ZeroR  -  33.33  0.333  0.333  0.111  0.333  0.167  0.000  0.500  0.317  0.370

5. Analysis of Results and Discussion

The results of our tests allow for several considerations. The first interesting observation is that NSGA-II identifies fewer solutions than ENORA on the Pareto front, which implies less diversity and therefore a worse hypervolume ratio, as shown in Figure 3 and Figure 4. This is not surprising: on several other occasions [19,34,60], ENORA has been shown to maintain a higher diversity in the population than other well-known evolutionary algorithms, with a generally positive influence on the final results. Comparing the results in full training mode against the results in cross-validation or in splitting mode makes it evident that our solution produces classification models that are more resilient to over-fitting. For example, the classifier learned by PART on Monk’s Problem 2 presents a 94.01% accuracy in full training mode that drops to 73.51% in splitting mode. A similar behavior, although with a more contained drop in accuracy, is shown by the classifier learned on the Breast Cancer dataset; at the same time, the classifier learned by ENORA driven by accuracy shows only a 5.57% drop in one case, and even an improvement in the other case (see Table 5, Table 6, Table 9, and Table 10). This phenomenon is easily explained by looking at the number of rules: the more rules in a classifier, the higher the risk of over-fitting; PART produces very accurate classifiers, but at the price of adding many rules, which affects not only the interpretability of the model but also its resilience to over-fitting. Full training results seem to indicate that, when the optimization model is driven by RMSE, the classifiers are more accurate; nevertheless, they are also more prone to over-fitting, indicating that, on average, the optimization models driven by accuracy are preferable.
From the statistical tests (whose results are shown in Appendix A and Appendix B) we conclude that, among the six variants of the proposed optimization model, there are no statistically significant differences, which suggests that the advantages of our method do not depend directly on a specific evolutionary algorithm or on the specific performance measure used to drive the evolution. Significant statistical differences between our method and very simple classical methods such as OneR were expected. Significant statistical differences between our method and a well-consolidated one such as PART have not been found, but the price to be paid for using PART in order to obtain results similar to ours is a very high number of rules (15 vs. 2 in one case and 47 vs. 7 in the other case). We would like to highlight that both the Breast Cancer dataset and the Monk’s Problem 2 dataset are difficult to approximate with interpretable classifiers, and that none of the analyzed classifiers obtains high accuracy rates using the cross-validation technique; even powerful black-box classifiers, such as Random Forest and Logistic, obtain low success rates in 10-fold cross-validation for these datasets. However, ENORA obtains a better balance (trade-off) between precision and interpretability than the rest of the classifiers. For the rest of the analyzed datasets, the accuracy obtained using ENORA is substantially higher. For example, for the Tic-Tac-Toe-Endgame dataset, ENORA obtains a 98.33% success percentage with only two rules in cross-validation, while PART obtains 94.26% with 49 rules, and JRip obtains 97.81% with nine rules (see Table 15). With respect to the results obtained on the datasets Car, kr-vs-kp and Nursery, better success percentages can be obtained if the maximum number of evaluations is increased; however, better success percentages imply a greater number of rules, to the detriment of the interpretability of the models.

6. Conclusions and Future Work

In this paper, we have proposed a novel technique for learning classifiers from categorical data. Our proposal defines the problem of learning a classifier as a multi-objective optimization problem, solved by suitably adapting an evolutionary algorithm to this task; our two objectives are minimizing the number of rules (for better interpretability of the classifier) and maximizing a performance metric. Depending on the particular metric chosen, (slightly) different optimization models arise. We first tested our proposal on two publicly available datasets, Breast Cancer (in which each instance represents a patient who has suffered from breast cancer, described by nine attributes, and the class to be predicted is whether the patient experienced a recurrence) and Monk’s Problem 2 (an artificial, well-known dataset in which the class to be predicted represents a logical function), using two different evolutionary algorithms, namely ENORA and NSGA-II, and three different choices of performance metric, i.e., accuracy, the area under the ROC curve, and the root mean square error. Additionally, we have shown the results of the evaluation in 10-fold cross-validation on the publicly available Tic-Tac-Toe-Endgame, Car, kr-vs-kp and Nursery datasets. Our initial motivation was to design a classifier learning system that produces interpretable, yet accurate, classifiers: since interpretability is a direct function of the number of rules, we conclude that this objective has been achieved. As an aside, observe that our approach allows the user to decide, beforehand, on a maximum number of rules; this can also be done in PART and JRip, but only indirectly.
Finally, the idea underlying our approach is that multiple classifiers are explored at the same time in the same execution, which allows us to choose the best compromise between the performance and the interpretability of a classifier a posteriori. As future work, we envisage that our methodology can benefit from an embedded feature selection mechanism. In fact, all attributes are (ideally) used in every rule of a classifier learned by our optimization model. By simply relaxing this constraint, and by suitably re-defining the first objective of the optimization model (e.g., by minimizing the sum of the lengths of all rules, or a similar measure), the resulting classifiers will naturally present rules that use more features as well as rules that use fewer (clearly, the implementation must be adapted to obtain an initial population in which the classifiers have rules of different lengths, as well as mutation operators that allow a rule to grow or to shrink). Although this approach does not follow the classical definition of feature selection (in which a subset of features is selected to reduce the dataset over which a classifier is learned), it is natural to imagine that it may produce classifiers that are even more accurate and, at the same time, more interpretable. Currently, we are implementing our own version of multi-objective differential evolution (MODE) for rule-based classification, for inclusion in the Weka open source software issued under the GNU General Public License. The implementation of other algorithms, such as MOEA/D, their adaptation to the Weka development platform, and subsequent analysis and comparison are planned for future work.
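At selection time, the two-objective formulation described above (minimize the number of rules, maximize a performance metric) reduces to Pareto dominance between candidate classifiers. The following is a minimal sketch, not the authors' implementation; the (error, rule-count) pairs are purely illustrative:

```python
# Illustrative Pareto-dominance check over two minimized objectives:
# classification error (1 - accuracy) and number of rules.

def dominates(a, b):
    """True if candidate a = (error, n_rules) Pareto-dominates b."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(population):
    """Keep only the non-dominated (error, n_rules) candidates."""
    return [p for p in population
            if not any(dominates(q, p) for q in population if q != p)]

# Hypothetical candidates: accurate-but-large vs. small-but-coarser classifiers
candidates = [(0.05, 15), (0.08, 7), (0.08, 9), (0.12, 2), (0.20, 1)]
front = pareto_front(candidates)
# (0.08, 9) is dominated by (0.08, 7); the rest trade error against rules
```

The surviving front is exactly the set S of the nomenclature tables: the user then picks a compromise between performance and interpretability a posteriori.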
Table A25

Nomenclature table (Part I).

Symbol          Definition

Equation (1): Multi-objective constrained optimization
x_k             k-th decision variable
x               Set of decision variables
f_i(x)          i-th objective function
g_j(x)          j-th constraint
l > 0           Number of objectives
m > 0           Number of constraints
w > 0           Number of decision variables
X               Domain of each decision variable x_k
X^w             Domain of the set of decision variables
F               Set of all feasible solutions
S               Set of non-dominated solutions, or Pareto optimal set
D(x, x')        Pareto domination function

Equation (2): Rule-based classification for categorical data
D               Dataset
x_i             i-th categorical input attribute in the dataset D
x               Categorical input attributes in the dataset D
y               Categorical output attribute in the dataset D
{1, ..., v_i}   Domain of the i-th categorical input attribute in the dataset D
{1, ..., w}     Domain of the categorical output attribute in the dataset D
p > 0           Number of categorical input attributes in the dataset D
Γ               Rule-based classifier
R_i^Γ           i-th rule of classifier Γ
b_ij^Γ          Category for the j-th categorical input attribute in the i-th rule of classifier Γ
c_i^Γ           Category for the categorical output attribute in the i-th rule of classifier Γ
φ_i^Γ(x)        Compatibility degree of the i-th rule of classifier Γ for the example x
μ_ij^Γ(x)       Result of the i-th rule of classifier Γ and the j-th categorical input attribute x_j
λ_c^Γ(x)        Association degree of classifier Γ for the example x with the class c
η_ic^Γ(x)       Result of the i-th rule of classifier Γ for the example x with the class c
f^Γ(x)          Classification (output) of the classifier Γ for the example x

Equation (3): Multi-objective constrained optimization problem for rule-based classification
F_D(Γ)          Performance objective function of the classifier Γ on the dataset D
NR(Γ)           Number of rules of the classifier Γ
M_max           Maximum number of rules allowed for classifiers

Equations (4)–(6): Optimization models
ACC_D(Γ)        Accuracy: proportion of correctly classified instances with the classifier Γ on the dataset D
K               Number of instances in the dataset D
T_D(Γ, i)       Result of the classification of the i-th instance in the dataset D with the classifier Γ
ĉ_i^Γ           Predicted value for the i-th instance in the dataset D with the classifier Γ
c_i^D           Corresponding true value for the i-th instance in the dataset D
AUC_D(Γ)        Area under the ROC curve obtained with the classifier Γ on the dataset D
S_D(Γ, t)       Sensitivity: proportion of positive instances classified as positive with the classifier Γ on the dataset D
1 − E_D(Γ, t)   Specificity: proportion of negative instances classified as negative with the classifier Γ on the dataset D
t               Discrimination threshold
RMSE_D(Γ)       Square root of the mean square error obtained with the classifier Γ on the dataset D
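Under a crisp reading of the rule-evaluation symbols above (μ_ij as an exact-match test, φ_i as the minimum over the matched attributes, λ_c as the maximum compatibility among rules predicting class c, and f as the class with the highest association degree), the evaluation chain can be sketched as follows. This is a hedged illustration, not the paper's implementation; the rule encoding and the example rules are hypothetical:

```python
# Crisp sketch of evaluating a rule-based classifier Γ on a categorical
# example x: mu -> phi (per rule) -> lambda (per class) -> f (argmax).

def classify(rules, x):
    """rules: list of (antecedent, class) pairs, where antecedent maps
    attribute index j to the required category b_ij; x maps j to its value."""
    assoc = {}  # lambda_c: association degree of each class c
    for antecedent, c in rules:
        # phi_i = min_j mu_ij, with mu_ij = 1 iff x_j matches category b_ij
        phi = min(1 if x[j] == b else 0 for j, b in antecedent.items())
        assoc[c] = max(assoc.get(c, 0), phi)
    # f(x): the class with the highest association degree
    return max(assoc, key=assoc.get)

# Hypothetical toy rules over two categorical attributes
rules = [({0: 'sunny', 1: 'high'}, 'no'),
         ({0: 'overcast'}, 'yes'),
         ({1: 'normal'}, 'yes')]
print(classify(rules, {0: 'overcast', 1: 'normal'}))  # 'yes'
```

Note that in the paper's model every rule (ideally) constrains all p attributes; partial antecedents are allowed here only to keep the toy example short, anticipating the relaxed variant discussed as future work.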
Table A26

Nomenclature table (Part II).

Symbol          Definition

Equations (7) and (8): Hypervolume metric
P               Population
Q_P             Set of non-dominated individuals of P
v_i             Volume of the search space dominated by the individual i
HV(P)           Hypervolume: volume of the search space dominated by the population P
H(P)            Volume of the search space not dominated by the population P
HVR(P)          Hypervolume ratio: ratio of H(P) to the volume of the entire search space
VS              Volume of the search space
F_D^lower       Minimum value for the objective F_D
F_D^upper       Maximum value for the objective F_D
NR^lower        Minimum value for the objective NR
NR^upper        Maximum value for the objective NR
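For two minimized objectives, the hypervolume of a non-dominated front with respect to a reference (nadir) point can be computed with a simple sweep. The sketch below is illustrative only; the front coordinates and the reference point are assumptions, not values from the paper:

```python
# 2-D hypervolume sweep for two minimized objectives (e.g. normalized
# F_D and NR), measured against a reference (nadir) point.

def hypervolume_2d(front, ref):
    """front: non-dominated (f1, f2) points, both minimized; ref: nadir point.
    On a clean front, sorting by ascending f1 yields descending f2."""
    pts = sorted(front)
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        # Horizontal slab newly dominated once this point enters the sweep
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

front = [(0.1, 0.8), (0.3, 0.4), (0.6, 0.2)]
hv = hypervolume_2d(front, ref=(1.0, 1.0))  # 0.54 for this toy front
```

Dividing this volume by the volume of the entire (normalized) search space then yields the hypervolume ratio used to compare the fronts of ENORA and NSGA-II.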
