
RLMD-PA: A Reinforcement Learning-Based Myocarditis Diagnosis Combined with a Population-Based Algorithm for Pretraining Weights.

Seyed Vahid Moravvej1,2, Roohallah Alizadehsani3, Sadia Khanam4, Zahra Sobhaninia1, Afshin Shoeibi5, Fahime Khozeimeh3, Zahra Alizadeh Sani6, Ru-San Tan7,8, Abbas Khosravi3, Saeid Nahavandi3,9, Nahrizul Adib Kadri10, Muhammad Mokhzaini Azizan11, N Arunkumar12, U Rajendra Acharya13,14,15.   

Abstract

Myocarditis is inflammation of the heart muscle that is becoming more prevalent, especially amid the COVID-19 pandemic. Noninvasive cardiac magnetic resonance (CMR) imaging can be used to diagnose myocarditis, but interpretation is time-consuming and requires expert physicians. Computer-aided diagnostic systems can facilitate the automatic screening of CMR images for triage. This paper presents an automatic model for myocarditis classification based on a deep reinforcement learning approach, called reinforcement learning-based myocarditis diagnosis combined with a population-based algorithm (RLMD-PA), that we evaluated using the Z-Alizadeh Sani myocarditis dataset of CMR images prospectively acquired at Omid Hospital, Tehran. The model addresses the imbalanced classification problem inherent to the CMR dataset and formulates classification as a sequential decision-making process. The policy architecture is based on a convolutional neural network (CNN). To implement this model, we first apply the artificial bee colony (ABC) algorithm to obtain initial values for the RLMD-PA weights. Next, the agent receives a sample at each step and classifies it. For each classification act, the agent receives a reward from the environment, with the reward for the minority class greater than that for the majority class. Eventually, the agent finds an optimal policy under the guidance of a suitable reward function and learning environment. Experimental results based on standard performance metrics show that RLMD-PA achieves high accuracy for myocarditis classification, indicating that the proposed model is suitable for myocarditis diagnosis.
Copyright © 2022 Seyed Vahid Moravvej et al.


Year:  2022        PMID: 35833074      PMCID: PMC9262570          DOI: 10.1155/2022/8733632

Source DB:  PubMed          Journal:  Contrast Media Mol Imaging        ISSN: 1555-4309            Impact factor:   3.009


1. Introduction

Myocarditis is a condition that causes inflammation of the heart muscle [1]. It can affect heart pump function as well as electrical activation and conduction, resulting in heart failure and arrhythmia, respectively. The etiology is diverse, including infection (e.g., viral infections such as COVID-19 and parvovirus) [2], systemic inflammatory and autoimmune diseases, and drug reactions. Symptoms of myocarditis include chest pain, fatigue, and shortness of breath [3]. Patients with suspected myocarditis should seek cardiology advice for early diagnosis and treatment. Endomyocardial biopsy, an invasive procedure, is recommended in severe cases to confirm the diagnosis and to guide treatment [4]. Management comprises supportive measures, symptomatic heart failure therapy, antimicrobials for identified infective agents, and immunosuppression for severe inflammation. Early diagnosis and prompt institution of treatment can significantly reduce morbidity and mortality. Noninvasive cardiac imaging with cardiovascular magnetic resonance imaging (MRI) [5] can help clinch the diagnosis. However, MRI requires expert interpretation, which is manually intensive and subject to operator bias. In this regard, automated diagnostic systems that employ various machine learning and data mining algorithms can be developed to solve medical image classification problems efficiently [6]. They can be applied to reporting workflows to screen images automatically, saving physicians time, reducing errors, and enhancing diagnostic accuracy. Deep models have demonstrated excellent performance in diverse applications, including natural language processing [7-9], computer vision, and medical image analysis [10, 11]. Deep learning-based algorithms converge to suitable weights that minimize the error between the real and predicted outputs. Typically, deep models use gradient-based algorithms such as backpropagation to learn the weights.
However, such optimization methods are sensitive to initial weights and may become trapped in local minima [12]. This issue is mainly encountered during classification [13]. Some researchers have shown that population-based meta-heuristic (PBMH) algorithms [14, 15] may help to overcome this problem [16]. Among PBMH algorithms, the ABC algorithm is one of the most effective optimizers [17, 18]. It emulates the foraging behavior of bees in nature and, unlike traditional optimization algorithms, dispenses with the need to calculate gradients, thereby reducing the probability of getting stuck in local optima [19]. Classification performance in many machine learning algorithms may be adversely affected by imbalanced classification [20], which occurs when one class contains disproportionately more data than the others [21]. While imbalanced models may still attain reasonable detection rates for majority samples, their performance on minority samples is weak, as minority class specimens can be difficult to identify due to their rarity and randomness. Moreover, misclassification of minority class samples can incur high costs. Methods have been proposed to address the problem at two levels [22]: the data level and the algorithmic level. At the data level [23-25], training data are manipulated to balance the class distribution by oversampling the minority class and/or undersampling the majority class [26]. For instance, the synthetic minority oversampling technique (SMOTE) generates new samples by linear interpolation between adjoining minority samples [24], whereas NearMiss undersamples majority samples using the nearest neighbor algorithm [25]. Of note, oversampling and undersampling risk overfitting and loss of valuable information, respectively [27]. At the algorithmic level, the importance of the minority class can be raised using techniques [28-32] that include cost-sensitive learning, ensemble learning, and decision threshold adjustment.
In cost-sensitive learning, different misclassification costs are assigned to each class in the loss function, with a higher cost allocated to minority class misclassification. Ensemble learning systems train several subclassifiers and then apply voting or combination to obtain better results. Threshold adjustment techniques train the classifier on the imbalanced dataset and modify the decision threshold at test time. Deep learning-based methods have also been suggested for imbalanced data classification [33-35]. The authors in Reference [36] introduced a new loss function for deep networks that could capture classification errors from both minority and majority classes. Reference [37] introduces a method that could learn the unique features of an imbalanced dataset while maintaining intercluster and interclass margins. To the best of our knowledge, only one work [3] based on deep learning models has been proposed for the diagnosis of myocarditis. The authors developed an algorithm for classifying images based on a CNN and the k-means algorithm [38] with the following workflow: after the data preprocessing stage, the images were placed in several clusters, and each cluster was considered a class that the CNN classified. The algorithm was repeated for different clusterings, and all the results were combined for the final decision. The main problem with this method was that it treated the image matrix as a vector in k-means, which discarded the relationship between a pixel and its neighbors. This paper presents a method based on the ABC algorithm and reinforcement learning, called RLMD-PA, that we believe addresses the abovementioned problems. The RLMD-PA model poses the classification problem as a guessing game embodied in a sequential decision-making process. At each step, the agent receives an environmental state represented by a training instance and then executes a classification under the direction of a policy.
If the agent classifies correctly, it is given a positive reward and, otherwise, a negative one. The minority class is rewarded more than the majority class. The agent's goal is to accumulate as many rewards as possible during the sequential decision-making process, that is, to classify the samples as correctly as possible. The main contributions of this article are as follows: (1) we cast the classification of medical images as a sequential decision-making process and present a reinforcement learning-based algorithm for imbalanced classification; (2) instead of random weight initialization, we develop an encoding strategy and compute optimal initial values using the ABC algorithm; and (3) this work is based on a new well-annotated MRI dataset acquired from Tehran's Omid Hospital, which we have named the Z-Alizadeh Sani myocarditis dataset and made publicly downloadable. The rest of the article is structured as follows: the second section gives a brief overview of the ABC algorithm and how it works. The third section introduces the proposed model. The fourth section presents the evaluation criteria, dataset, and analysis of the results. The last section states the conclusions and future works.

2. Background

2.1. Artificial Bee Colony Algorithm

Artificial bee colony (ABC), introduced by Karaboga and Basturk [39], is one of the most efficient algorithms for optimizing numerical problems. It is straightforward, robust, and population-based [19]. The algorithm emulates the intelligent foraging behavior of bees to arrive at the optimal solution. A list of food sources is maintained that the bees refine over time toward the best positions. The algorithm involves three groups of bees: employed bees, onlooker bees, and scout bees. Employed bees discover the positions of food sources, whereas onlooker bees wait in the hive for information about food source positions shared by the employed bees. Onlooker bees use this information to select food source positions. Once an employed bee has exhausted its food source, it becomes a scout bee and searches for new positions randomly. The number of employed bees equals the number of unemployed (onlooker and scout) bees. The steps of the ABC algorithm are as follows. Initialization: in the first step, an initial population S of size C is formed from positions (solutions) generated as

s_i^j = s_min^j + rand(0, 1) × (s_max^j − s_min^j), (1)

where i represents the i-th position, each solution s_i has D dimensions, D is the number of parameters to be optimized, and s_min and s_max are the smallest and largest allowed values of s, respectively. Employed bee phase: at this point, new solutions are generated by searching the neighborhood of the current candidate solutions:

v_i^j = s_i^j + φ × (s_i^j − s_k^j), (2)

where k ≠ i indexes a randomly chosen solution and φ is a random number picked from the interval [0, 1]. The candidate solution v_i is obtained by changing only one element of s_i. To keep the population size constant, the quality of each new solution is evaluated: if it is better than the previous one, it replaces it; otherwise, the previous solution is kept.
Onlooker bee phase: for the onlooker bee update, one solution is stochastically selected from the candidate solutions according to the probability

p_i = fit_i / Σ_{c=1}^{C} fit_c, (3)

where fit_i is the fitness of the i-th solution: the more fit a solution is, the higher the chance it will be selected. A neighborhood search is then performed on the selected solution as in equation (2); if the new candidate scores higher than the current solution, it replaces it. This process is repeated for all onlooker bees in population S. Scout bee phase: a solution that does not improve after some repetitions can trap the algorithm in a local optimum [40]. To prevent this, once a solution has not improved after t iterations, the algorithm discards it, and a new solution is generated according to equation (1). Termination condition: although different stopping conditions can be defined, in this study the algorithm terminates after MaxItr iterations. The complete ABC algorithm is given in Algorithm 1.
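As a concrete illustration, the phases above can be sketched as a minimal ABC loop minimizing an arbitrary fitness function. This is a sketch only: the colony size, abandonment limit, and sphere test function below are illustrative assumptions, not values from this paper, and the 1/(1 + f) selection weight assumes a non-negative fitness.

```python
import random

def abc_minimize(fitness, dim, smin, smax, colony=20, limit=20, max_itr=200, seed=1):
    """Minimal artificial bee colony sketch: employed, onlooker, and scout phases."""
    rng = random.Random(seed)
    new_pos = lambda: [smin + rng.random() * (smax - smin) for _ in range(dim)]
    pop = [new_pos() for _ in range(colony)]            # initialization
    fits = [fitness(s) for s in pop]
    trials = [0] * colony                               # stagnation counters

    def search_neighborhood(i):
        """Perturb one element of solution i relative to a random partner k."""
        k = rng.choice([c for c in range(colony) if c != i])
        j = rng.randrange(dim)
        v = pop[i][:]
        v[j] += rng.random() * (pop[i][j] - pop[k][j])
        fv = fitness(v)
        if fv < fits[i]:                                # greedy replacement
            pop[i], fits[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    for _ in range(max_itr):
        for i in range(colony):                         # employed bee phase
            search_neighborhood(i)
        total = sum(1.0 / (1.0 + f) for f in fits)
        for _ in range(colony):                         # onlooker bee phase
            r, acc = rng.random() * total, 0.0
            for i in range(colony):
                acc += 1.0 / (1.0 + fits[i])            # fitter -> more likely
                if acc >= r:
                    search_neighborhood(i)
                    break
        for i in range(colony):                         # scout bee phase
            if trials[i] > limit:
                pop[i], trials[i] = new_pos(), 0
                fits[i] = fitness(pop[i])
    best = min(range(colony), key=fits.__getitem__)
    return pop[best], fits[best]
```

For example, `abc_minimize(lambda s: sum(x * x for x in s), dim=3, smin=-5.0, smax=5.0)` drives the sphere function toward its minimum at the origin without ever computing a gradient.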

2.2. Reinforcement Learning

Reinforcement learning [41] is an important branch of machine learning that spans many domains. It can achieve relatively good classification results because it can effectively learn the salient features of noisy data. In Reference [42], the authors defined classification as a sequential decision problem in which several agents interact with the environment to learn an optimal policy function. Due to the complex interaction between the agents and the environment, the run time was inordinately long. The model presented in [43] is a reinforcement learning-based classifier for noisy text data. The proposed structure comprises two components: a sample selector and a relation classifier. The former selects high-quality sentences from the noisy data by following the agent's policy, whereas the latter learns from the selected clean data and returns a delayed reward to the sample selector as feedback. Finally, the model yields both a superior classifier and a higher-quality dataset. The authors in Reference [44] proposed a solution for time series data in which the reward function and Markov process are explicitly defined. In various specific applications [45-48], reinforcement learning has been applied to learn efficient features. These models promote features that are valuable for classification, which leads to higher rewards that guide the agent to select more worthy features. To date, limited work has been done on reinforcement learning for the classification of imbalanced data. In Reference [44], an ensemble pruning technique that adopted reinforcement learning to select subclassifiers was proposed. However, the model underperformed when the amount of data was increased, because it is difficult to choose classifiers when there are too many subclassifiers.

3. The Proposed Solution

The overall structure of the proposed model is shown in Figure 1. We address two critical issues in the classification. First, we arrange all the learnable weights of our model in a vector, obtain initial values for these weights with ABC, and then apply backpropagation for the rest of the training. Second, as mentioned, our classifier, like most, suffers from imbalanced data; to address this, we employ reinforcement learning [49]. These concepts are detailed in the following sections.
Figure 1

Overall process of RLMD-PA.

3.1. Pretraining Phase

Weight initialization is an essential part of training deep models: incorrect initial values can prevent the model from converging. The proposed model has a deep network with weights θ that need to be optimized. In this section, we present an encoding strategy and a fitness function for the ABC algorithm.

3.2. Encoding Strategy

In our work, the encoding strategy arranges the CNN and feed-forward weights in a vector that is treated as the position of a bee in the ABC algorithm. Choosing a suitable arrangement is a challenge; after a few experiments, we settled on the encoding strategy described here. Figure 2 illustrates the encoding of an example three-layer CNN with three filters in each layer and a feed-forward network with three hidden layers. Note that all weight matrices are stored in the vector row-wise.
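A minimal sketch of such an encoding: each weight matrix is flattened row-wise and concatenated into a single position vector, and an inverse mapping restores the matrices. The tiny layer shapes below are illustrative, not the paper's actual architecture.

```python
def encode(matrices):
    """Flatten each weight matrix row-wise and concatenate into one vector (a bee position)."""
    return [w for m in matrices for row in m for w in row]

def decode(vector, shapes):
    """Recover weight matrices with the given (rows, cols) shapes from a position vector."""
    matrices, idx = [], 0
    for rows, cols in shapes:
        matrices.append([vector[idx + r * cols: idx + (r + 1) * cols] for r in range(rows)])
        idx += rows * cols
    return matrices

# Round-trip example with two small "layers"
w1 = [[0.1, 0.2], [0.3, 0.4]]   # 2 x 2 weight matrix
w2 = [[1.0, 2.0, 3.0]]          # 1 x 3 weight matrix
vec = encode([w1, w2])
```

The ABC algorithm then searches over such vectors, and `decode` maps the best position back onto the network's layers before backpropagation takes over.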
Figure 2

Placement of weights in a vector.

3.3. Fitness Function

The fitness function, which measures the quality of a solution in the ABC algorithm, is defined as follows [12]:

Fitness = (1/N) Σ_{i=1}^{N} (y_i − ŷ_i)^2, (4)

where N is the total number of samples, and y_i and ŷ_i are the target and predicted labels for the i-th sample, respectively.
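The fitness computation can be sketched in a few lines, assuming a mean-squared-error form over targets and predictions (an assumption consistent with common ABC pretraining setups; lower values indicate fitter solutions):

```python
def fitness(targets, predictions):
    """Mean squared error between target and predicted labels; lower is fitter."""
    n = len(targets)
    return sum((y - p) ** 2 for y, p in zip(targets, predictions)) / n
```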

4. Classification

Due to the difference in the amount of data between our two classes, we face an imbalanced classification problem. To address this, we use the imbalanced classification Markov decision process (ICMDP) to construct a sequential decision problem. In reinforcement learning, an agent tries to learn an optimal policy by performing a series of actions in the environment while maximizing its cumulative reward. In our model, a sample of the dataset is presented to the agent at each time step and classified. The environment then sends an immediate reward to the agent: a positive reward corresponds to a correct classification, whereas an incorrect classification yields a negative one. By maximizing cumulative rewards, the agent arrives at the optimal policy. Let D = {(x_1, y_1), (x_2, y_2), (x_3, y_3),…, (x_N, y_N)} be the imbalanced set of images with N samples, where x_i is the i-th image and y_i is its corresponding label. The settings are as follows. Policy π: the policy π is a mapping function S ⟶ A, where S and A are the sets of states and actions, respectively; that is, π(s) is the action a performed in state s. Here, π is realized as the classifier model with weights θ. State s: each state s_t corresponds to a sample x_t from the dataset D. The first sample x_1 is taken as the initial state s_1. So that the model does not learn a particular order, D is shuffled in each episode. Action a: action a_t predicts the label of x_t. Since the classification is binary, a_t ∈ {0, 1}, where zero represents the minority class and one represents the majority class. Reward r: the reward reflects the quality of an action. An agent making a correct classification receives a positive reward; otherwise, it receives a negative reward. The magnitude of this reward should not be the same for both classes; a carefully calibrated reward for each class and action can significantly improve model performance.
In this work, the reward for an action is defined according to the following equation [27]:

r(s_t, a_t, y_t) = +1 if a_t = y_t and s_t ∈ D_min; −1 if a_t ≠ y_t and s_t ∈ D_min; +λ if a_t = y_t and s_t ∈ D_maj; −λ if a_t ≠ y_t and s_t ∈ D_maj, (5)

where D_min and D_maj represent the minority and majority classes, that is, healthy and sick, respectively, and λ is a value in the interval [0, 1]. The majority-class reward ±λ is smaller in magnitude than the minority-class reward ±1 because the minority class becomes more critical due to its fewer data; in effect, we ascribe more importance to the minority class so that its influence approximates that of the majority class. The importance of the value of λ is examined in the results section. Terminal E: the training process completes at a terminal state in every training episode. An episode is the transition trajectory from an initial state to a terminal state, namely, {(s_1, a_1, r_1), (s_2, a_2, r_2), (s_3, a_3, r_3),…, (s_T, a_T, r_T)}. In our case, an episode stops when all the training data have been classified or when a sample of the minority class is misclassified. Transition probability P: the agent moves from state s_t to the next state s_{t+1} according to the order in which the data are read; the transition probability is p(s_{t+1} | s_t, a_t). In ICMDP, the policy function reports the probability of each label for a given sample:

π(a | s) = P(a_t = a | s_t = s). (6)

In reinforcement learning, the goal is to maximize the discounted cumulative reward, that is, to attain a high value of the following expression:

G_t = Σ_{k=0}^{∞} γ^k r_{t+k}. (7)

Equation (7) is termed the return, which accumulates all the reward values the agent collects while searching the space. The discount factor γ ∈ (0, 1] [50] weights the effect of each reward. The function Q measures the quality of a state-action combination:

Q^π(s, a) = E[G_t | s_t = s, a_t = a]. (8)

Equation (8) expands according to the Bellman equation [51]:

Q^π(s, a) = E[r_t + γ Q^π(s_{t+1}, a_{t+1}) | s_t = s, a_t = a]. (9)

By maximizing the function Q under π, more cumulative reward can be achieved. The optimal policy π* is assessed by considering the function Q as follows:

π*(s) = argmax_a Q*(s, a). (10)

By combining the two equations (9) and (10), the optimal function Q is expressed as follows [27]:

Q*(s, a) = E[r_t + γ max_{a′} Q*(s_{t+1}, a′) | s_t = s, a_t = a]. (11)

In a low-dimensional state space, the function Q can easily be represented by a table.
However, the tabular technique is inadequate when the state space is continuous. To solve this problem, Q-learning algorithms with function approximation are used. In these algorithms, the tuple (s, a, r, s′) arising from equation (11) is saved in an experience replay memory M. The agent draws a mini-batch B from M and executes gradient descent on these data according to the following loss:

L(θ) = E_{(s, a, r, s′) ∈ B} [(y − Q(s, a; θ))^2], (12)

where y is an estimate of the function Q expressed as follows [27]:

y = r, if end; y = r + γ max_{a′} Q(s′, a′; θ), otherwise, (13)

where s′ is the state following s, a′ is the action performed in s′, and end indicates whether the agent has made a wrong classification for the minority class. Finally, the policy weights θ are updated as follows:

θ ← θ − α ∇_θ L(θ). (14)

In conclusion, the optimal function Q can be achieved by minimizing the loss function presented in equation (12). Notably, the optimal policy π* derived from Q is the optimal model for the proposed classifier.
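The class-dependent reward and the minority-error termination rule can be illustrated with a tabular Q-learning toy. This is a sketch only: the paper trains a CNN policy on images, whereas here states are discrete symbols, and λ, γ, the learning rate, and the exploration rate are assumed values.

```python
import random

LAMBDA, GAMMA, ALPHA, EPS = 0.5, 0.1, 0.5, 0.1  # assumed hyperparameters

def reward(action, label, minority):
    """+/-1 for minority-class samples, +/-lambda for majority-class samples."""
    if label == minority:
        return 1.0 if action == label else -1.0
    return LAMBDA if action == label else -LAMBDA

def train(data, minority=0, episodes=300, seed=0):
    rng = random.Random(seed)
    q = {}  # Q-table over (state, action); the paper uses a CNN instead
    for _ in range(episodes):
        rng.shuffle(data)  # avoid learning a particular sample order
        for i, (state, label) in enumerate(data):
            if rng.random() < EPS:                        # epsilon-greedy action
                action = rng.randrange(2)
            else:
                action = max((0, 1), key=lambda a: q.get((state, a), 0.0))
            r = reward(action, label, minority)
            end = label == minority and action != label   # minority error ends episode
            if end or i + 1 == len(data):
                target = r                                # terminal: no bootstrap
            else:
                nxt = data[i + 1][0]                      # next sample is the next state
                target = r + GAMMA * max(q.get((nxt, a), 0.0) for a in (0, 1))
            old = q.get((state, action), 0.0)
            q[(state, action)] = old + ALPHA * (target - old)
            if end:
                break
    return q

def classify(q, state):
    """Greedy policy derived from the learned Q-values."""
    return max((0, 1), key=lambda a: q.get((state, a), 0.0))
```

On a toy imbalanced dataset where a few samples of state "A" carry the minority label and many samples of state "B" carry the majority label, the larger minority reward steers the greedy policy toward classifying both states correctly.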

4.1. Overall Algorithm

We devised the simulation environment as described above. The structure of the policy network depends on the complexity and number of training samples. Given the structure of the training samples, the network output size equals the number of data classes, which is 2. The general training algorithm of the RLMD-PA model is displayed in Algorithm 2. In this algorithm, the policy weights are first initialized using the ABC algorithm, and then the agent continues the training process until an optimal policy is reached. Action selection follows a greedy policy, which is evaluated by Algorithm 3. The algorithm is repeated for E episodes, which is set to 18,000 in this paper. At each step, the policy network weights are stored.

5. Empirical Evaluation

5.1. Dataset

Cardiac magnetic resonance (CMR) imaging [52] allows for comprehensive anatomical and functional evaluation of the heart as well as detailed tissue characterization [53]. It is the preeminent imaging modality for noninvasive diagnosis of myocarditis without biopsy. The Lake Louise criteria (LLC) [54] are benchmark criteria for diagnosing myocarditis using CMR [55], based on the presence of myocardial necrosis, edema, and hyperemia. The presence of late gadolinium enhancement confirms myocardial necrotic damage. T2-weighted images uncover areas of interstitial edema, which indicate an inflammatory response. T1-weighted images before and after contrast can depict hyperemia in the myocardial tissue. Fulfilling two of the three LLC confers 80% accuracy for diagnosing myocarditis [56]. This article presents a model for identifying myocarditis that considers all three LLC. A one-year CMR research project on myocarditis was conducted from September 2016 at Omid Hospital in Tehran, Iran, where we performed CMR on patients who were clinically suspected to have myocarditis (e.g., chest pain, elevated troponin, negative functional imaging and/or coronary angiographic findings, and suspected viral etiology) and for whom the treating physician assessed that CMR would likely affect clinical management (e.g., ongoing symptoms, ongoing myocardial injury evidenced by persistent ECG abnormalities, and presence of ventricular dysfunction). The protocol was approved by the local ethics committee. CMR examination was performed on a 1.5-Tesla system [57]. All cases were scanned with body coils in the standard supine position. T1-weighted images were acquired in axial views. Shortly after gadolinium injection, the T1-weighted sequences were repeated. After approximately 10-15 minutes, late gadolinium enhancement [58] sequences were performed in standard left ventricular short- and long-axis views. Table 1 summarizes the CMR sequence parameters [3].
Table 1

Characteristics of the Z-Alizadeh Sani myocarditis dataset.

Protocol | TE (ms) | TR (ms) | NF | Slice thickness (mm) | Concatenation and slice number | NE | Breath-hold time (s)
CINE_segmented (true FISP) long axis (LAX) | 1.15 | 33.60 | 15 | 7 | 3 | 1 | 8
CINE_segmented (true FISP) short axis (SAX) | 1.11 | 31.92 | 15 | 7 | 15 | 1 | 8
T2-weighted (TIRM) LAX, precontrast | 5 | 2800 | Noncine | 10 | 3 | 1 | 9
T2-weighted (TIRM) SAX, precontrast | 5 | 2800 | Noncine | 10 | 5 | 1 | 10
T1 relative-weighted TSE (Trigger)-AXIA-dark blood pre- and postcontrast | 24 | 525 | Noncine | 8 | 5 | 1 | 7
Late-GD enhancement LGE (high-resolution PSIR) SAX and LAX | 3.16 | 666 | Noncine | 8 | 1 | 1 | 7

TE: echo time; TR: repetition time; NF: number of frames; NE: number of excitations.

A total of 586 patients were identified who had positive evidence of myocarditis on the CMR images, which might show one or more areas of disease. A total of 307 healthy subjects were included as controls. We chose eight CMR images from each patient or control subject for the analysis, which were one long-axis image and one short-axis image acquired using each of the following four CMR sequences: late gadolinium enhancement, perfusion, T2-weighted, and steady-state free precession. The final CMR dataset comprises 4,686 and 2,449 samples from sick (i.e., myocarditis) and healthy subjects, respectively. Figure 3 shows example images obtained from this dataset. It may be noted that in this study, analysis is performed at the image level, and not at the patient level. In other words, prediction is based on a single image regardless of how many images are available for each patient.
Figure 3

Typical healthy and myocarditis images obtained from the Z-Alizadeh Sani myocarditis dataset. The yellow lines indicate the location of myocarditis.

Institutional approval was granted to use the patient datasets in research studies for diagnostic and therapeutic purposes. Approval was granted on the grounds of existing datasets. Informed consent was obtained from all of the patients in this study. All methods were carried out in accordance with relevant guidelines and regulations. Ethical approval for using these data was obtained from Tehran Omid Hospital.

5.2. Metrics

To evaluate the classification performance of the proposed model, we used six standard performance metrics, namely, accuracy, recall, precision, F-measure, specificity, and G-means [59], defined as follows:

Accuracy = (TP + TN) / (TP + TN + FP + FN)
Recall = TP / (TP + FN)
Precision = TP / (TP + FP)
F-measure = (2 × Precision × Recall) / (Precision + Recall)
Specificity = TN / (TN + FP)
G-means = sqrt(Recall × Specificity)

where TP, TN, FN, and FP are the numbers of true positives, true negatives, false negatives, and false positives, respectively. The F-measure and G-means are commonly applied to evaluate imbalanced classification [27], which matches our dataset's sample distribution and the rationale for our proposed method. In addition, note that our prediction is per image. In this way, the intelligent myocarditis classification system can effectively screen entire CMR studies and flag individual images for scrutiny by physician readers. For this purpose, a low FP count and high recall are desired.
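These definitions translate directly into code; a small sketch computing all six metrics from the confusion-matrix counts:

```python
import math

def metrics(tp, tn, fp, fn):
    """Standard classification metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)              # a.k.a. sensitivity
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "recall": recall,
        "precision": precision,
        "f_measure": 2 * precision * recall / (precision + recall),
        "specificity": specificity,
        "g_means": math.sqrt(recall * specificity),
    }
```

For an imbalanced dataset such as ours, `f_measure` and `g_means` are the most informative entries, since accuracy alone can look strong even when the minority class is poorly detected.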

5.3. Details of Model

This work used Python and the PyTorch framework; the code was written in Jupyter notebooks. The CNN has five two-dimensional convolution layers with 128, 64, 32, 16, and 8 filters, respectively. The kernel size, stride, and padding in each layer are 3, 2, and 1 in both dimensions, respectively. Each convolution layer is followed by a max-pooling layer of size 2 × 2. The three fully connected layers have 128, 64, and 32 hidden units, respectively. To prevent overfitting, dropout with a probability of 0.4 and early stopping are employed. In every experiment, the batch size is set to 64. The images in the dataset are gray-scale, and pixel intensities are mapped to the range [0, 1]. The images come in different sizes and are all resized to 100 × 100 for analysis.
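As a sanity check on these dimensions, the spatial output size of one convolution stage (kernel 3, stride 2, padding 1) followed by a 2 × 2 max pool follows the standard convolution arithmetic; a quick sketch (the helper names are our own, not from the paper):

```python
def conv_out(n, kernel=3, stride=2, padding=1):
    """Spatial output size of a 2-D convolution along one dimension."""
    return (n + 2 * padding - kernel) // stride + 1

def stage_out(n):
    """One conv stage (kernel 3, stride 2, padding 1) followed by a 2 x 2 max pool."""
    return conv_out(n) // 2

# A 100 x 100 input is roughly quartered per stage: conv -> 50, pool -> 25
```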

5.4. Experimental Results

While standard techniques like data augmentation and weighted loss functions [60] can sometimes correct imbalanced data distributions, they are not applicable in all situations; in our experiments, data augmentation and a weighted loss function did not improve our model, which is not unexpected. We used k-fold cross-validation (k = 5, denoted 5-CV) in all our implementations: the entire dataset is divided into k subsets, k − 1 subsets are used for training, and the remaining subset is used for testing. This procedure is iterated k times so that every subset is used exactly four times for training and once for testing. All metrics are reported as mean, standard deviation, median, minimum, and maximum. First, we compared our proposed method with the only published work in this field, CNN-KCL [3]. Next, to investigate the contributions of the two distinct components ABC and RL in our model, we compared the performance of a basic model without ABC and RL, that is, CNN + random weight, against the models CNN + ABC and CNN + RL, which use ABC and RL for training, respectively. The evaluation results of our RLMD-PA model, together with the aforementioned comparisons on the Z-Alizadeh Sani myocarditis dataset, are presented in Tables 2 and 3. Overall, the RLMD-PA model reduces the error by more than 43%. In terms of the means of all performance metrics, the RLMD-PA model outperforms the CNN-KCL method as well as the CNN + random weight, CNN + ABC, and CNN + RL combinations of its components. Both ABC and RL individually improve on the basic CNN network across all assessed performance metrics, which supports combining initial-weight optimization and reinforcement learning. For better visualization, the results are illustrated in Figure 4. In terms of training time, the best model was obtained after 100 iterations in 2 hours, whereas CNN-KCL required 350 iterations and 5 hours.
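The 5-CV split can be sketched as a plain index partition (scikit-learn's `KFold` does the same job; the helper below is a hypothetical stand-in written without dependencies):

```python
import random

def kfold_splits(n_samples, k=5, seed=0):
    """Yield (train_indices, test_indices) pairs for k-fold cross-validation."""
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)          # shuffle once, then partition
    folds = [idx[f::k] for f in range(k)]     # k near-equal, disjoint folds
    for f in range(k):
        test = folds[f]
        train = [i for g in range(k) if g != f for i in folds[g]]
        yield train, test
```

Each index lands in exactly one test fold and in k − 1 training folds, matching the "four times for training, once for testing" scheme described above.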
Table 2

5-CV classification performances (accuracy, recall, and precision) obtained for automated myocarditis detection using various combinations of deep learning models with the Z-Alizadeh Sani myocarditis dataset.

Method | Accuracy (Min / Median / Max / Mean / Std. dev.) | Recall (Min / Median / Max / Mean / Std. dev.) | Precision (Min / Median / Max / Mean / Std. dev.)
CNN-KCL [3] | 0.783 / 0.811 / 0.846 / 0.810 / 0.024 | 0.732 / 0.738 / 0.807 / 0.751 / 0.032 | 0.704 / 0.752 / 0.789 / 0.745 / 0.032
CNN + random weight | 0.755 / 0.770 / 0.807 / 0.772 / 0.021 | 0.695 / 0.713 / 0.755 / 0.717 / 0.213 | 0.666 / 0.685 / 0.737 / 0.691 / 0.029
CNN + ABC | 0.799 / 0.803 / 0.845 / 0.815 / 0.020 | 0.741 / 0.766 / 0.814 / 0.771 / 0.027 | 0.726 / 0.729 / 0.783 / 0.746 / 0.027
CNN + RL | 0.821 / 0.829 / 0.869 / 0.840 / 0.021 | 0.762 / 0.798 / 0.835 / 0.801 / 0.028 | 0.745 / 0.772 / 0.819 / 0.779 / 0.029
RLMD-PA (CNN + ABC + RL) | 0.862 / 0.884 / 0.912 / 0.886 / 0.020 | 0.837 / 0.869 / 0.879 / 0.863 / 0.017 | 0.804 / 0.837 / 0.886 / 0.840 / 0.034
Table 3

5-CV classification performances (F-measure, specificity, and G-means) obtained for automated myocarditis detection using various combinations of methods with the Z-Alizadeh Sani myocarditis dataset.

Method | F-measure (Min / Median / Max / Mean / Std. dev.) | Specificity (Min / Median / Max / Mean / Std. dev.) | G-means (Min / Median / Max / Mean / Std. dev.)
CNN-KCL [3] | 0.718 / 0.746 / 0.798 / 0.748 / 0.031 | 0.814 / 0.852 / 0.870 / 0.845 / 0.022 | 0.772 / 0.795 / 0.838 / 0.797 / 0.025
CNN + random weight | 0.681 / 0.702 / 0.746 / 0.704 / 0.026 | 0.788 / 0.800 / 0.838 / 0.806 / 0.020 | 0.742 / 0.759 / 0.795 / 0.760 / 0.021
CNN + ABC | 0.735 / 0.745 / 0.798 / 0.758 / 0.026 | 0.826 / 0.835 / 0.864 / 0.842 / 0.018 | 0.787 / 0.795 / 0.839 / 0.806 / 0.021
CNN + RL | 0.767 / 0.777 / 0.827 / 0.790 / 0.026 | 0.836 / 0.864 / 0.889 / 0.863 / 0.020 | 0.811 / 0.821 / 0.862 / 0.831 / 0.022
RLMD-PA (CNN + ABC + RL) | 0.820 / 0.847 / 0.882 / 0.851 / 0.024 | 0.877 / 0.900 / 0.932 / 0.901 / 0.024 | 0.857 / 0.879 / 0.905 / 0.882 / 0.019
Figure 4

Mean performance of deep learning models.

Standard machine learning classifiers have not been successful in classifying medical images because they typically treat an image as a one-dimensional vector, which separates neighboring pixels from one another. To compare with our deep model, we used five algorithms, support vector machine (SVM) [61], k-nearest neighbor [62], naïve Bayes [63], logistic regression [64], and random forests [65], to classify the CMR images of the study dataset. SVM performed best among these methods but is still inferior to the deep models. The results are summarized in Tables 4 and 5, and the mean performance metrics are shown in Figure 5.
Table 4

5-CV classification performances (accuracy, recall, and precision) obtained for automated myocarditis detection using various machine learning algorithms with the Z-Alizadeh Sani myocarditis dataset.

Method | Accuracy (Min, Median, Max, Mean, Std.dev.) | Recall (Min, Median, Max, Mean, Std.dev.) | Precision (Min, Median, Max, Mean, Std.dev.)
SVM | 0.568, 0.691, 0.754, 0.683, 0.070 | 0.674, 0.745, 0.778, 0.737, 0.042 | 0.450, 0.565, 0.651, 0.565, 0.074
KNN | 0.480, 0.614, 0.635, 0.588, 0.064 | 0.399, 0.637, 0.683, 0.589, 0.111 | 0.337, 0.490, 0.511, 0.460, 0.072
Naïve Bayes | 0.547, 0.632, 0.676, 0.615, 0.051 | 0.388, 0.534, 0.713, 0.565, 0.134 | 0.395, 0.510, 0.553, 0.484, 0.062
Logistic regression | 0.627, 0.662, 0.720, 0.661, 0.038 | 0.583, 0.658, 0.741, 0.657, 0.057 | 0.503, 0.542, 0.603, 0.541, 0.041
Random forests | 0.415, 0.550, 0.590, 0.530, 0.070 | 0.537, 0.683, 0.711, 0.648, 0.071 | 0.329, 0.437, 0.469, 0.420, 0.056
Table 5

5-CV classification performance (F-measure, specificity, and G-means) obtained for automated myocarditis detection using various machine learning algorithms with the Z-Alizadeh Sani myocarditis dataset.

Method | F-measure (Min, Median, Max, Mean, Std.dev.) | Specificity (Min, Median, Max, Mean, Std.dev.) | G-means (Min, Median, Max, Mean, Std.dev.)
SVM | 0.540, 0.652, 0.695, 0.639, 0.060 | 0.505, 0.662, 0.760, 0.651, 0.093 | 0.583, 0.704, 0.752, 0.692, 0.065
KNN | 0.365, 0.554, 0.585, 0.516, 0.089 | 0.528, 0.601, 0.629, 0.587, 0.039 | 0.459, 0.619, 0.643, 0.587, 0.075
Naïve Bayes | 0.391, 0.522, 0.623, 0.520, 0.092 | 0.610, 0.642, 0.692, 0.645, 0.031 | 0.499, 0.608, 0.682, 0.600, 0.072
Logistic regression | 0.565, 0.571, 0.665, 0.593, 0.042 | 0.606, 0.665, 0.716, 0.663, 0.049 | 0.631, 0.646, 0.724, 0.659, 0.038
Random forests | 0.408, 0.533, 0.559, 0.509, 0.063 | 0.342, 0.471, 0.529, 0.459, 0.071 | 0.429, 0.567, 0.605, 0.545, 0.071
Figure 5

Mean performance of the traditional machine learning methods.

5.5. Investigation of Other Metaheuristic Algorithms on the Proposed Model

The proposed model employs the ABC algorithm in conjunction with backpropagation to obtain the initial weight values. To compare the performance of ABC against alternative trainers, we replaced ABC in our model with five conventional algorithms, namely, gradient descent with momentum backpropagation (GDM) [66], gradient descent with adaptive learning rate backpropagation (GDA) [67], gradient descent with momentum and adaptive learning rate backpropagation (GDMA) [68], one-step secant backpropagation (OSS) [69], and Bayesian regularization backpropagation (BR) [70], and with four metaheuristic algorithms, namely, gray wolf optimization (GWO) [71], the bat algorithm (BA) [72], the cuckoo optimization algorithm (COA) [73], and the whale optimization algorithm (WOA) [74]. The population size and the number of function evaluations are 100 and 25,000, respectively, for all metaheuristic algorithms. Other parameter settings are listed in Table 6. The performance metrics of these comparisons are summarized in Tables 7 and 8 and illustrated in Figure 6. In general, the metaheuristic algorithms outperform the conventional algorithms, with the exception of GDMA, in terms of accuracy, recall, and F-measure scores. Importantly, the ABC algorithm outperformed all conventional and metaheuristic algorithms, improving the error in the recall and F-measure criteria by more than 25% and 22%, respectively.
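As a rough illustration of how ABC can pre-train a weight vector, the sketch below implements the employed-, onlooker-, and scout-bee phases over candidate weight vectors. The sphere loss, colony size, bounds, and evaluation budget are stand-ins for brevity, not the paper's settings (which use the CNN policy's loss, a population of 100, and 25,000 evaluations):

```python
import numpy as np

def abc_minimize(loss, dim, colony=20, limit=None, max_evals=5000, seed=None):
    """Minimal ABC sketch: each food source is a candidate weight vector."""
    rng = np.random.default_rng(seed)
    n_sources = colony // 2                     # employed bees = 50% of colony
    limit = limit or n_sources * dim            # abandonment limit (cf. Table 6)
    X = rng.uniform(-1, 1, (n_sources, dim))    # food sources = weight vectors
    f = np.array([loss(x) for x in X])
    trials = np.zeros(n_sources)
    evals = n_sources

    def try_neighbor(i):
        nonlocal evals
        k = rng.choice([j for j in range(n_sources) if j != i])
        d = rng.integers(dim)
        v = X[i].copy()
        v[d] += rng.uniform(-1, 1) * (X[i, d] - X[k, d])
        fv = loss(v)
        evals += 1
        if fv < f[i]:                           # greedy selection
            X[i], f[i], trials[i] = v, fv, 0
        else:
            trials[i] += 1

    while evals < max_evals:
        for i in range(n_sources):              # employed-bee phase
            try_neighbor(i)
        fit = 1.0 / (1.0 + f)                   # onlooker-bee phase
        for i in rng.choice(n_sources, n_sources, p=fit / fit.sum()):
            try_neighbor(i)
        for i in range(n_sources):              # scout-bee phase
            if trials[i] > limit:
                X[i] = rng.uniform(-1, 1, dim)
                f[i] = loss(X[i])
                trials[i] = 0
                evals += 1
    best = f.argmin()
    return X[best], f[best]

# Stand-in objective: a 5-d sphere function instead of a CNN training loss.
w, fw = abc_minimize(lambda x: float((x ** 2).sum()), dim=5, seed=0)
print(fw)
```

In RLMD-PA the best food source found this way would seed the CNN policy's weights before reinforcement learning begins.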
Table 6

Parameter setting for the experiments.

Algorithm | Parameter | Value
ABC | Limit | n_e × dimensionality of problem
ABC | n_o | 50% of the colony
ABC | n_e | 50% of the colony
ABC | n_s | 1
GWO | No parameters | —
BAT | Constant for loudness update | 0.50
BAT | Constant for emission rate update | 0.50
BAT | Initial pulse emission rate | 0.001
COA | Discovery rate of alien solutions | 0.25
WOA | b | 1
Table 7

Results of 5-CV classification performances (accuracy, recall, and precision) obtained for automated myocarditis detection using various conventional and metaheuristic algorithms with the Z-Alizadeh Sani myocarditis dataset.

Method | Accuracy (Min, Median, Max, Mean, Std.dev.) | Recall (Min, Median, Max, Mean, Std.dev.) | Precision (Min, Median, Max, Mean, Std.dev.)
CNN + GDM + RL | 0.811, 0.857, 0.868, 0.849, 0.022 | 0.784, 0.801, 0.830, 0.806, 0.018 | 0.732, 0.806, 0.825, 0.796, 0.038
CNN + GDA + RL | 0.817, 0.846, 0.857, 0.840, 0.017 | 0.784, 0.812, 0.837, 0.808, 0.022 | 0.742, 0.786, 0.828, 0.778, 0.035
CNN + GDMA + RL | 0.829, 0.855, 0.887, 0.854, 0.025 | 0.764, 0.816, 0.855, 0.817, 0.037 | 0.752, 0.809, 0.849, 0.800, 0.037
CNN + OSS + RL | 0.823, 0.849, 0.867, 0.846, 0.016 | 0.741, 0.814, 0.837, 0.804, 0.037 | 0.778, 0.787, 0.814, 0.791, 0.015
CNN + BR + RL | 0.826, 0.833, 0.855, 0.837, 0.012 | 0.745, 0.796, 0.812, 0.785, 0.027 | 0.752, 0.761, 0.850, 0.784, 0.041
CNN + GWO + RL | 0.833, 0.848, 0.869, 0.850, 0.016 | 0.771, 0.796, 0.842, 0.804, 0.027 | 0.769, 0.800, 0.816, 0.797, 0.020
CNN + BAT + RL | 0.837, 0.847, 0.865, 0.851, 0.013 | 0.778, 0.782, 0.833, 0.796, 0.024 | 0.787, 0.805, 0.830, 0.807, 0.016
CNN + COA + RL | 0.815, 0.843, 0.882, 0.844, 0.028 | 0.750, 0.826, 0.856, 0.813, 0.046 | 0.748, 0.757, 0.838, 0.781, 0.039
CNN + WOA + RL | 0.820, 0.845, 0.847, 0.837, 0.012 | 0.750, 0.826, 0.814, 0.789, 0.021 | 0.742, 0.783, 0.807, 0.781, 0.024
Table 8

Results of 5-CV classification performances (F-measure, specificity, and G-means) obtained for automated myocarditis detection using various conventional and metaheuristic algorithms with the Z-Alizadeh Sani myocarditis dataset.

Method | F-measure (Min, Median, Max, Mean, Std.dev.) | Specificity (Min, Median, Max, Mean, Std.dev.) | G-means (Min, Median, Max, Mean, Std.dev.)
CNN + GDM + RL | 0.757, 0.811, 0.825, 0.801, 0.026 | 0.827, 0.882, 0.898, 0.875, 0.028 | 0.805, 0.848, 0.860, 0.840, 0.021
CNN + GDA + RL | 0.765, 0.799, 0.811, 0.792, 0.019 | 0.834, 0.863, 0.902, 0.860, 0.028 | 0.812, 0.839, 0.850, 0.834, 0.015
CNN + GDMA + RL | 0.771, 0.806, 0.849, 0.808, 0.033 | 0.838, 0.880, 0.909, 0.877, 0.026 | 0.815, 0.843, 0.878, 0.846, 0.026
CNN + OSS + RL | 0.759, 0.799, 0.825, 0.797, 0.024 | 0.859, 0.873, 0.885, 0.872, 0.010 | 0.804, 0.839, 0.861, 0.837, 0.021
CNN + BR + RL | 0.776, 0.784, 0.794, 0.784, 0.007 | 0.841, 0.850, 0.921, 0.868, 0.034 | 0.821, 0.825, 0.829, 0.825, 0.003
CNN + GWO + RL | 0.779, 0.797, 0.828, 0.801, 0.021 | 0.856, 0.880, 0.889, 0.877, 0.013 | 0.821, 0.836, 0.863, 0.840, 0.018
CNN + BAT + RL | 0.782, 0.793, 0.823, 0.801, 0.018 | 0.873, 0.885, 0.901, 0.885, 0.010 | 0.824, 0.832, 0.859, 0.839, 0.016
CNN + COA + RL | 0.752, 0.803, 0.844, 0.796, 0.038 | 0.835, 0.854, 0.901, 0.862, 0.028 | 0.800, 0.845, 0.876, 0.837, 0.031
CNN + WOA + RL | 0.768, 0.793, 0.798, 0.785, 0.014 | 0.832, 0.869, 0.888, 0.866, 0.021 | 0.812, 0.832, 0.839, 0.827, 0.012
Figure 6

Mean performance of the conventional and metaheuristic models.

5.6. Exploring the Reward Function

The reward function is a practical device that helps the agent achieve its goal. In this work, the reward for the minority class is +1/−1, while that for the majority class is +λ/−λ. To examine the effect of λ on the classification model, we tested 11 values, λ ∈ {0, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1}. Detailed results for all criteria in these experiments are given in Table 9; for better visualization, the trends are plotted in Figure 7. For the accuracy criterion, the chart trends upward as λ increases over [0, 0.3] and downward over [0.3, 1], and the same pattern holds for all criteria. If λ = 0, the importance of the majority class is disregarded, and if λ = 1, both classes are weighted equally. Although the minority class is more important to us, the majority class cannot be ignored.
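The class-weighted reward described above can be written compactly. The sketch below is illustrative: the label encoding is an assumption, and the default λ = 0.3 reflects the best-performing value in these experiments rather than a fixed part of the method:

```python
# Minority-class decisions earn +/-1; majority-class decisions earn +/-lam,
# so mistakes on the rare class dominate the return.
MINORITY, MAJORITY = 1, 0  # assumed label encoding, for illustration

def reward(true_label, predicted_label, lam=0.3):
    """lam in [0, 1]: lam=0 ignores the majority class, lam=1 weighs both equally."""
    correct = (true_label == predicted_label)
    if true_label == MINORITY:
        return 1.0 if correct else -1.0
    return lam if correct else -lam

print(reward(MINORITY, MAJORITY))  # missing a minority sample costs the full -1.0
print(reward(MAJORITY, MAJORITY))  # a correct majority prediction earns only +lam
```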
Table 9

Performance evaluation obtained for various values of λ as the reward of the majority class.

λ | Accuracy | Recall | Precision | F-measure | Specificity | G-means
0 | 0.807 | 0.778 | 0.727 | 0.752 | 0.824 | 0.801
0.1 | 0.838 | 0.814 | 0.769 | 0.791 | 0.853 | 0.833
0.2 | 0.867 | 0.844 | 0.810 | 0.827 | 0.880 | 0.862
0.3 | 0.884 | 0.858 | 0.837 | 0.847 | 0.900 | 0.879
0.4 | 0.877 | 0.848 | 0.830 | 0.839 | 0.895 | 0.871
0.5 | 0.857 | 0.814 | 0.807 | 0.810 | 0.883 | 0.848
0.6 | 0.845 | 0.798 | 0.792 | 0.795 | 0.874 | 0.835
0.7 | 0.825 | 0.764 | 0.768 | 0.766 | 0.861 | 0.811
0.8 | 0.807 | 0.738 | 0.746 | 0.742 | 0.848 | 0.791
0.9 | 0.792 | 0.709 | 0.730 | 0.719 | 0.842 | 0.773
1 | 0.779 | 0.695 | 0.710 | 0.702 | 0.829 | 0.759
Figure 7

Graphical view of change in the performance parameters due to variation in λ.

6. Conclusion and Future Directions

This article presents a new model for classifying myocarditis images. The proposed model consists of two steps. First, the model weights are initialized using the ABC algorithm. Next, the classification task is treated as an ICMDP problem: the environment assigns a high reward to the minority class and a low reward to the majority class, and the algorithm terminates when the agent misclassifies a minority-class sample or the number of episodes runs out. We performed several experiments to examine the various factors that affect the performance of the proposed model; they confirmed that the RLMD-PA model, combining ABC and RL, is an effective classifier for myocarditis images. In the future, we plan to employ an ensemble convolutional neural network (ECNN), connecting a set of CNNs to yield higher performance. We may also incorporate the generative adversarial network (GAN), which is widely used in many applications. It may also be worth applying the developed model to other medical applications such as stroke detection, cancer detection, and plaque detection.
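The episode structure summarized above can be sketched as a simple loop. Everything here is a stand-in for illustration: `policy` replaces the trained CNN policy, labels use an assumed encoding (1 = minority), and λ = 0.3 is the best value found in Section 5.6:

```python
# One episode: the agent classifies one sample per step; the minority class
# pays +/-1, the majority +/-lam, and the episode ends early on a
# minority-class mistake, as in the termination rule described above.
def run_episode(samples, policy, lam=0.3):
    """samples: iterable of (features, label) pairs, label 1 = minority class."""
    total = 0.0
    for x, label in samples:
        pred = policy(x)
        if label == 1:                       # minority class: reward +/-1
            if pred == label:
                total += 1.0
            else:
                total -= 1.0
                break                        # terminal condition
        else:                                # majority class: reward +/-lam
            total += lam if pred == label else -lam
    return total

# A policy that always predicts the majority class stops at the first
# minority sample, so it cannot accumulate a high return.
data = [(i, 1 if i % 4 == 3 else 0) for i in range(20)]  # imbalanced toy data
print(run_episode(data, policy=lambda x: 0))
```

This early-termination rule is what forces the learned policy to take the rare class seriously rather than defaulting to the majority label.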