Literature DB >> 34943549

Hierarchical Boosting Dual-Stage Feature Reduction Ensemble Model for Parkinson's Disease Speech Data.

Mingyao Yang1, Jie Ma1, Pin Wang1, Zhiyong Huang1, Yongming Li1, He Liu2, Zeeshan Hameed1.   

Abstract

As a neurodegenerative disease, Parkinson's disease (PD) is hard to identify at the early stage, while using speech data to build a machine learning diagnosis model has proved effective in its early diagnosis. However, speech data show high degrees of redundancy, repetition, and unnecessary noise, which influence the accuracy of diagnosis results. Although feature reduction (FR) could alleviate this issue, the traditional FR is one-sided (traditional feature extraction could construct high-quality features without feature preference, while traditional feature selection could achieve feature preference but could not construct high-quality features). To address this issue, the Hierarchical Boosting Dual-Stage Feature Reduction Ensemble Model (HBD-SFREM) is proposed in this paper. The major contributions of HBD-SFREM are as follows: (1) The instance space of the deep hierarchy is built by an iterative deep extraction mechanism. (2) The manifold feature extraction method embeds the nearest neighbor feature preference method to form the dual-stage feature reduction pair. (3) The dual-stage feature reduction pair is iteratively performed by the AdaBoost mechanism to obtain instance features with higher quality, thus achieving a substantial improvement in model recognition accuracy. (4) The deep hierarchy instance space is integrated into the original instance space to improve the generalization of the algorithm. Three PD speech datasets and a self-collected dataset are used to test HBD-SFREM in this paper. Compared with other FR algorithms and deep learning algorithms, HBD-SFREM improves the accuracy of PD speech recognition significantly and is not affected by a small sample dataset. Thus, HBD-SFREM could serve as a reference for other related studies.

Keywords:  Parkinson’s disease; dual-stage feature reduction pair; ensemble learning; hierarchy space instance learning mechanism

Year:  2021        PMID: 34943549      PMCID: PMC8700329          DOI: 10.3390/diagnostics11122312

Source DB:  PubMed          Journal:  Diagnostics (Basel)        ISSN: 2075-4418


1. Introduction

Parkinson’s disease (PD) is a neurodegenerative disease characterized by motor stiffness, movement retardation, tremor, and some non-motor symptoms (NMS, such as voice disorder, sleep disorder, depression, constipation, pain, and dysarthria). Numerous studies have shown that PD patients develop NMS as the disease progresses, which seriously affect their quality of life [1]. NMS can be detected at the early stage of the disease, which allows a sound treatment plan to be designed. Dysarthria is the primary NMS and plays a guiding role in the study of PD pathogenesis. In addition, the ease of speech data collection has made speech analysis gradually become the main analysis method for PD recognition as well as a key research area for early PD recognition [2]. However, speech data exhibit a high rate of redundancy and repetition and contain much unnecessary noise. Feature reduction (FR) could help alleviate this issue. Currently, this topic has attracted extensive attention from researchers and has great research significance [3]. Early FR research on PD speech recognition primarily focused on feature selection, which can be simply considered as selecting the optimal feature subset from the original feature space. Representative feature selection algorithms include Relief [4,5,6], minimal-redundancy-maximal-relevance (mRMR) [3], SBS [7], PSO [8], SFS [9], LASSO [2,4], and Pvalue [10]. Erika R. et al. used the Pvalue algorithm to select the optimal subset of features from the original features [10]. Sakar and Kursun [11] proposed a new feature selection algorithm based on mutual information and trained the model using support vector machines (SVM), achieving an accuracy of 92.75%. Musa Peker [12] used mRMR to identify valid features and then submitted the obtained features to a complex-valued artificial neural network. Benba et al. [13] selected features based on pathology thresholds through the Multi-Dimensional Voice Program (MDVP) and then submitted the obtained features to K-nearest neighbors (KNN) and SVM classifiers, achieving an accuracy of 95%. Shirvan RA et al. [14] used genetic algorithms and KNN to determine the optimal features that affected the recognition result. Feature extraction is another type of FR algorithm; its idea is to map high-dimensional features to a low-dimensional space while keeping as much of the original instance information as possible [15]. Linear approaches were primarily used at first, of which PCA [16,17] and LDA [18,19,20,21] are representative methods. Chen et al. [21] developed a PD detection system that used PCA to extract features and trained the model with a fuzzy KNN classifier, which achieved an accuracy of 96.7%. Hariharan M et al. extracted features of PD using PCA and LDA and obtained a high accuracy rate [10]. Linear feature extraction methods generally assume that the data lie in a high-dimensional linear space, which contradicts the non-linear characteristics of real-world PD speech datasets [22,23,24]. Thus, linear feature extraction cannot be applied well to non-linear data spaces, which limits the accuracy of PD recognition [25]. Currently, non-linear feature extraction has been developed and applied to PD recognition [19,26,27]. Kernel mapping and deep neural network mapping are two representative types of non-linear feature extraction methods. Yang achieved good results by feature extraction of PD speech data through SFS and PCA with kernels [19]. Derya A proposed the Genetic Algorithm-Wavelet Kernel-Extreme Learning Machine (GA-WK-ELM), in which wavelet kernels were used to map non-linear features from PD speech data [25]. Grover used deep neural networks to process Parkinson’s disease speech data features and predict the severity of PD [26].
Camilo considered multimodal information, including not only speech data of PD patients but also writing and handwriting data as well as gait and posture data, and trained the recognition model with deep learning methods [27]. Manifold learning is another type of feature extraction method that can be applied to small sample datasets. Locality preserving projection (LPP) is a representative manifold learning algorithm, which preserves the nearest neighbor structure between data samples after feature extraction while minimizing the dimension of the features [28]. However, since LPP is a nearest neighbor retention algorithm, most of the improved algorithms based on LPP only focus on the differences between classes and do not consider the large differences within classes [29,30,31]. Liu et al. considered both interclass and intraclass data aliasing, which effectively solves these problems [16]. In recent studies, some scholars have attempted to integrate the advantages of feature selection and feature extraction to create hybrid feature processing methods. M. Hariharan et al. [9] proposed a hybrid system using SFS and PCA to process the data features and feed the processed features into a least squares support vector machine classifier to learn the prediction model. H. Almayyan et al. [32] proposed a hybrid recognition system that uses PCA and Relief for feature processing and SVM combined with recursive feature elimination (SVMRFE) as a classifier to train the model. In addition, that study used the SMOTE technique to balance and diversify the dataset. Based on the above analysis, we know that FR methods can address the high redundancy, high repetition, and noise of speech data.
However, traditional feature extraction could construct high-quality features but could not achieve feature preference, while traditional feature selection could achieve feature preference but could not construct high-quality features. The two types of FR methods differ in principle but can complement each other. Thus, it is necessary to propose a feature reduction method that could simultaneously achieve feature preference and high-quality feature construction. Although some related studies have made progress in this field [21,32], critical problems remain to be solved: (1) the integration of feature extraction and feature selection always occurs only once, and the absence of multiple iterations to find the optimal fusion makes it impossible to obtain higher quality merged features; (2) existing methods only consider feature information of the samples in the original space and ignore structural feature information of the deeper instances. In order to address these issues, the Hierarchical Boosting Dual-Stage Feature Reduction Ensemble Model for Parkinson’s disease speech data (HBD-SFREM) is proposed in this study. The major contributions and innovations of this model are listed below. The instance space of the hierarchy is built by an iterative deep extraction mechanism (IDEM). The manifold feature extraction method embeds the nearest neighbor feature preference method to form a dual-stage feature reduction pair module. The dual-stage feature reduction pair (D-Spair) module is iteratively performed by the AdaBoost mechanism to obtain higher quality features, thus achieving a substantial improvement in model diagnosis accuracy. The deep hierarchy instance space is integrated into the original instance space to enhance the generalization ability of the model. The structure of this paper is as follows.
Section 2 introduces the principles related to the proposed model; Section 3 describes the experiments designed in this paper as well as the presentation and analysis of the results; Section 4 analyzes the limitations and contributions of this study.

2. Materials and Methods

2.1. Symbol Description

In order to facilitate the presentation of the HBD-SFREM, some symbols need to be defined first. The datasets used in this study are numerical matrices, described as X ∈ R^(N×D). By default, each row of X represents an instance; N indicates the number of instances in X, and D denotes the dimension of X. C is the number of categories of a dataset, and the label of an instance x_i is expressed as y_i ∈ {1, 2, ..., C}. The number of instances in each hierarchy is determined by the number of instances in the upper hierarchy and the proportion P, where P is the proportion of instances retained when IDEM is performed. The mapping matrix M generated by the D-Spair maps X to Z, where X represents the high-dimensional dataset and Z ∈ R^(N×d) represents the low-dimensional dataset (d < D).

2.2. The Proposed Algorithm

2.2.1. Construction of the Different Hierarchy Instance Space

In this part, the number of layers of hierarchical instance spaces H and the number of independent instance subspaces n are used. One of the primary innovations in this paper is that the deep hierarchy instance space is constructed based on IDEM. The relationship between the different hierarchies of instance spaces is analyzed by learning instances of different hierarchy spaces, and the generalization ability of the final model is thereby improved. In the IDEM mechanism, K is used to define the number of clusters and the clustered partition of the data points is denoted by C = {C_1, C_2, ..., C_K}, while the radial basis function φ(·) is used to map the data to a high-dimensional space; thus the objective function is defined as:

J = Σ_{j=1}^{K} Σ_{x_i ∈ C_j} ||φ(x_i) − μ_j||²,   (1)

where μ_j is the center of cluster C_j and x_i is an instance of X. Assuming that each cluster has the same weight, the Euclidean distance of each sample to its cluster center is denoted as:

d(x_i, μ_j) = ||φ(x_i) − μ_j||.   (2)

Figure 1 describes the detailed process of the IDEM. The IDEM is based on the K-means clustering method with a radial basis kernel [33,34,35]. The original dataset is defined as the first hierarchy instance, and the IDEM mechanism is used to cluster this hierarchy instance to generate the second hierarchy instance. Then, the second hierarchy instance is clustered to generate the third hierarchy instance, and so on, until H hierarchy instances are generated, where H ∈ Z+ (Z+ represents the set of positive integers). The number of newly generated instances is P·N, generated from the N instances of the upper hierarchy.
Figure 1

Flow chart of IDEM.
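As a rough illustration of one IDEM step, the sketch below generates the next hierarchy's instances as per-class cluster centers using plain k-means (the paper additionally applies a radial basis kernel before clustering); the function name, the Lloyd-iteration count, and the use of unkernelized k-means are our illustrative assumptions.

```python
import numpy as np

def idem_hierarchy(X, y, P=0.8, n_iter=20, seed=0):
    """One IDEM step (hedged sketch): cluster the current hierarchy's
    instances per class with plain k-means and emit the cluster centers
    as the next hierarchy's instances. The retained proportion P controls
    how many new instances are generated from the upper level's N."""
    rng = np.random.default_rng(seed)
    new_X, new_y = [], []
    for c in np.unique(y):
        Xc = X[y == c]
        k = max(1, int(round(P * len(Xc))))            # P*N instances survive
        centers = Xc[rng.choice(len(Xc), k, replace=False)]
        for _ in range(n_iter):                        # plain Lloyd iterations
            d = ((Xc[:, None, :] - centers[None]) ** 2).sum(-1)
            assign = d.argmin(1)                       # nearest-center labels
            for j in range(k):
                if (assign == j).any():
                    centers[j] = Xc[assign == j].mean(0)
        new_X.append(centers)
        new_y.append(np.full(k, c))
    return np.vstack(new_X), np.concatenate(new_y)
```

Applying this function H − 1 times yields the H hierarchy instance spaces described above.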

2.2.2. Boosting Dual-Stage Feature Reduction Pair Ensemble Module

The typical characteristics of PD speech datasets are small sample size, high repetition, high redundancy, and a certain amount of noise. According to these characteristics, the boosting dual-stage feature reduction pair ensemble module (BD-SFREM) is designed, which includes the dual-stage feature reduction pair (D-Spair) module and the boosting ensemble module. D-Spair module; Suppose the number of instances of class c in X is N_c; then the total number of instances is N = Σ_{c=1}^{C} N_c. In the first step, D-Spair makes instances belonging to the same category closer together after mapping, that is, the within-class variance matrix of similar samples is reduced; the specific mathematical formula is expressed as follows:

S_w = Σ_{c=1}^{C} Σ_{x_i ∈ class c} (x_i − m_c)(x_i − m_c)^T,   (3)

where S_w stands for the intraclass variance matrix, m_c denotes the center of class c, and x_i are the samples belonging to that class. Similarly, instances with different class labels are mapped as far apart as possible, that is, the variance matrix between different classes should be increased as much as possible; the specific mathematical formula is expressed as follows:

S_b = Σ_{c=1}^{C} N_c (m_c − m)(m_c − m)^T,   (4)

where S_b represents the scatter matrix between different classes, m stands for the center of the local part, and N_c is the number of instances of class c in the local part. In addition, the nearest neighbor structure between samples is preserved during the mapping process (i.e., locality preservation); the specific mathematical formula could be described as follows:

Σ_{i,j} ||M^T x_i − M^T x_j||² A_{ij} = 2 tr(M^T X L X^T M),   (5)

where L = S_D − A represents a Laplacian matrix, S_D is a diagonal matrix whose elements are the row sums of A, and A stands for an affinity matrix with A_{ij} = exp(−||x_i − x_j||²/t). Thus, the objective function of the feature extraction part of the D-Spair is designed to minimize the local variance matrix within the same category and maximize the variance matrix between different categories, while preserving the nearest neighbor structure of each instance.
Based on the description of Equations (3)–(5), the mathematical expression of the feature extraction part is expressed as follows:

max_M tr(M^T S_b M),  s.t.  tr(M^T (S_w + γ X L X^T) M) = 1.   (6)

Equation (6) could be transformed by the Lagrange multiplier method into Equation (7):

L(M, η) = tr(M^T S_b M) − η (tr(M^T (S_w + γ X L X^T) M) − 1).   (7)

Take the derivative with respect to M to obtain the optimal solution:

S_b m = η (S_w + γ X L X^T) m,   (8)

where η is the Lagrange multiplier and γ is the penalty factor. Equation (8) could be solved and the projection matrix M obtained: the vectors m_i are the generalized eigenvectors of Equation (8) corresponding to the first k largest eigenvalues, and M is composed of these first k eigenvectors. Next, M is used to map X, resulting in high-quality feature extraction; the mapped data are named Z. Define the sample set as Z and divide Z into two subsets according to the class labels of the instances. An instance R is randomly selected from Z without replacement. According to the nearest neighbor criterion, an instance is also selected from each subset, noted as the nearest hit H (same class) and the nearest miss Q (different class). Assume that Z has p features, i.e., each instance consists of a p-dimensional vector, where f_j is the j-th feature. Similarly, w denotes the feature weight vector, which is also a p-dimensional vector. Firstly, the weights w are initialized to zero. Second, w is updated according to the distances of R from H and Q:

w_j = w_j − |R_j − H_j|/m + |R_j − Q_j|/m,

where m is the number of sampled instances. The feature weights for a single instance are obtained by iterating this update, and the feature weights of all instances are obtained by repeating the above process. Finally, the features with the highest weights, which are the most useful to the training model, are selected. Then, these optimal features are used to train the classifier. Boosting ensemble module; In the boosting ensemble module, the AdaBoost mechanism is used to combine the various D-Spairs, thereby constructing the boosting ensemble module. Finally, the pseudocode of BD-SFREM is shown as follows.
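To fix ideas, one D-Spair pass can be sketched numerically. The sketch below is an illustration under assumptions (a heat-kernel affinity with an automatic bandwidth, a single penalty weighting, and a Relief-style pass over the mapped data), not the paper's exact objective; all function and variable names are ours.

```python
import numpy as np

def dspair(X, y, k_dim=2, k_feat=2, n_relief=30, seed=0):
    """Hedged sketch of one D-Spair pass: (1) a manifold-style extraction
    step that shrinks within-class scatter, grows between-class scatter,
    and preserves a heat-kernel neighbourhood graph; (2) a Relief-style
    nearest-neighbour feature preference on the mapped data."""
    rng = np.random.default_rng(seed)
    N, D = X.shape
    m = X.mean(0)
    Sw = np.zeros((D, D))
    Sb = np.zeros((D, D))
    for c in np.unique(y):
        Xc = X[y == c]
        mc = Xc.mean(0)
        Sw += (Xc - mc).T @ (Xc - mc)                 # within-class scatter
        Sb += len(Xc) * np.outer(mc - m, mc - m)      # between-class scatter
    d2 = ((X[:, None] - X[None]) ** 2).sum(-1)
    W = np.exp(-d2 / (d2.mean() + 1e-12))             # heat-kernel affinity
    Lap = np.diag(W.sum(1)) - W                       # graph Laplacian
    A = Sw + X.T @ Lap @ X + 1e-6 * np.eye(D)         # regularized denominator
    vals, vecs = np.linalg.eig(np.linalg.solve(A, Sb))
    M = np.real(vecs[:, np.argsort(-np.real(vals))[:k_dim]])
    Z = X @ M                                         # extracted features
    w = np.zeros(k_dim)                               # Relief-style weights
    for i in rng.choice(N, n_relief):
        dz = np.abs(Z - Z[i]).sum(1)
        dz[i] = np.inf
        hit = np.argmin(np.where(y == y[i], dz, np.inf))
        miss = np.argmin(np.where(y != y[i], dz, np.inf))
        w += np.abs(Z[i] - Z[miss]) - np.abs(Z[i] - Z[hit])
    keep = np.argsort(-w)[:min(k_feat, k_dim)]
    return Z[:, keep], w
```

The two stages mirror the module's design: the eigendecomposition performs the feature extraction of Equations (6)–(8), and the weight update performs the nearest-neighbour feature preference.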

2.2.3. Hierarchical Space Instance Learning Mechanism

The implementation of the hierarchical space instance learning mechanism is based on the construction of the different hierarchy spaces and BD-SFREM. First, the IDEM mechanism is used to construct the deep hierarchy space. Then, the BD-SFREM is applied to different hierarchy spaces to perform the hierarchy space instance learning mechanism, and the results of the deep hierarchy spaces are integrated with the results of the original hierarchy spaces in order to improve the generalization ability of the model. The pseudocode of the hierarchical space instance learning mechanism is shown as follows:
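The paper's pseudocode is not reproduced in this excerpt. As a hedged, self-contained illustration of the AdaBoost mechanism that combines the base learners, the sketch below uses decision stumps in place of the D-Spair + classifier pairs; labels are assumed to be in {-1, +1}.

```python
import numpy as np

def boost_ensemble(X, y, n_rounds=10):
    """Hedged sketch of the boosting ensemble module: AdaBoost re-weights
    instances so each successive base learner focuses on the instances
    the previous ones misclassified. A threshold stump stands in for the
    paper's D-Spair + classifier base learner."""
    N = len(y)
    w = np.full(N, 1.0 / N)                     # instance weights
    learners = []
    for _ in range(n_rounds):
        best = None                             # weighted best stump search
        for f in range(X.shape[1]):
            for t in np.unique(X[:, f]):
                for s in (1, -1):
                    pred = s * np.where(X[:, f] > t, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, f, t, s)
        err, f, t, s = best
        err = min(max(err, 1e-10), 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)   # learner weight
        pred = s * np.where(X[:, f] > t, 1, -1)
        w *= np.exp(-alpha * y * pred)          # boost misclassified instances
        w /= w.sum()
        learners.append((alpha, f, t, s))
    def predict(Xq):
        score = sum(a * s * np.where(Xq[:, f] > t, 1, -1)
                    for a, f, t, s in learners)
        return np.sign(score)
    return predict
```

In the full model this loop would run once per hierarchy space, and the per-space outputs would then be integrated with the original space's output.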

2.2.4. Overall Description of the Proposed Model

This part gives an overall description of the proposed model (HBD-SFREM). First, the different hierarchy spaces are constructed by IDEM. Second, a boosting dual-stage feature reduction process (the boosting dual-stage feature reduction pair ensemble module) is established based on the proposed objective function. Finally, the above methods are applied to the different hierarchy spaces to perform hierarchy space instance learning, and the results of the deep hierarchy spaces are integrated with the results of the original instance space in order to improve the generalization ability of the algorithm. Figure 2 depicts the algorithm of this paper.
Figure 2

Graphical description of the proposed model.

3. Results

3.1. Datasets

Three representative PD speech datasets and a self-collected PD speech dataset were utilized to validate the innovations of the HBD-SFREM. LSVT: The LSVT dataset was created by Professor Athanasios Tsanas of the University of Oxford (tsanasthanasis@gmail.com). The role of this dataset was to assess effectiveness after rehabilitation treatment. In total, 14 subjects with PD (eight male and six female) participated in the entire data collection process. For more details, see [36]. PSDMTSR: The dataset consisted of a total sample of 40 subjects, of which 20 samples were from people with PD and 20 from healthy people. For more details, see [37]. Parkinson: A total of 31 subjects’ speech data were collected in this dataset, 23 of whom were people with PD and eight of whom were healthy. For more details, see [38]. SelfData: The dataset was collected from a total of 31 subjects, 10 of whom suffered from PD and 21 of whom were healthy. Specifically, five of the 10 subjects with PD were male and five were female; 12 of the 21 healthy subjects were male and nine were female. Thirteen voice segments (samples) were collected for each subject, and each voice segment consisted of 26 features. The SONY ICD-SX2000 recorder was used for voice acquisition and was kept at a distance of 15 cm from the subject’s lips during the acquisition. Each subject was asked to read a specific piece of pronunciation material, and the pronunciation made by each subject was recorded. The sampling frequency was set to 44.1 kHz and the resolution to 16 bits. Three of the four datasets (LSVT, PSDMTSR, and Parkinson) are publicly available and can be downloaded from the UCI dataset repository created by the University of California, Irvine (www.archive.ics.uci.edu/ml/index.php (accessed on 24 November 2021)). The Chinese Army Medical University provided the SelfData dataset. Brief information about the datasets is shown in Table 1.
Table 1

Basic information about datasets.

Database  | Patients | Healthy People | Instances | Features | Classes | Reference
LSVT      | 14       | 0              | 126       | 309      | 2       | [36]
PSDMTSR   | 20       | 20             | 1040      | 26       | 2       | [37]
Parkinson | 24       | 8              | 195       | 23       | 2       | [38]
SelfData  | 10       | 21             | 403       | 26       | 2       | --

For the LSVT dataset, ‘healthy people’ means the number of patients whose clinicians allowed ongoing rehabilitation, and ‘patients’ means the number of patients whose clinicians did not allow rehabilitation. For the SelfData dataset, ‘healthy people’ denotes the number of patients after treatment with the relevant medication and ‘patients’ means the number of patients before treatment with the relevant medication.

3.2. Experimental Environment

All experiments were conducted in MATLAB version 2017b, running on a 64-bit Windows 10 PC with an Intel(R) Core i5-2300 CPU (2.80 GHz) and 8 GB of RAM. Praat, a computer speech processing program, was used to analyze and extract the speech features in this paper. The basic classifier used in this study was the SVM. For optimal performance of the D-Spair, the affinity matrix was constructed with the adjustable regularization coefficients λ and γ as well as the adjustable kernel parameter t, each tuned over the set {10^−4, 10^−3, …, 10^4}. The dimension of the subspace stack network was adjusted over the set {5, 10, 15, …}. The local ratios were empirically chosen as 0.9 for this study. The parameter descriptions and settings of the HBD-SFREM are shown in Table 2. In this study, all experiments were repeated ten times and the statistical results are reported.
Table 2

Parameter description and setting.

Parameter | Meaning                                   | Parameter Setting
H         | Layers of the deep instance space         | 2
n         | Number of independent instance spaces     | 3
λ         | Penalty factor for M^T(γXAX^T − S_DC)M    | 10^−4, 10^−3, …, 10^4
γ         | Penalty factor for M^T XAX^T M            | 10^−4, 10^−3, …, 10^4
t         | Kernel parameter for the affinity matrix  | 10^−4, 10^−3, …, 10^4
k         | Number of nearest neighbor instances in Z | 5
d         | Dimension after FR                        | 5, 10, 15, …
P         | Instance output rate of each hierarchy    | 0.8

3.3. Evaluation Criteria

A newly proposed algorithm needs to be evaluated using a series of criteria. This study selected five evaluation metrics to comprehensively evaluate the HBD-SFREM: accuracy (Acc), precision (Pre), recall (Rec), and the comprehensive evaluation metrics F-score and G-mean. All the above evaluation metrics are constructed from a confusion matrix, a table that visualizes the model predictions [39]. The PD speech diagnosis studied in this paper is a binary classification problem, thus the confusion matrix is constructed as shown in Table 3.
Table 3

Confusion matrix for PD speech recognition problem.

                          Prediction labels
                          Positive (P)   Negative (N)
Real labels  Positive (P)      TP             FN
             Negative (N)      FP             TN

Based on the above definition of the confusion matrix, the evaluation metrics (EM) of the algorithmic model studied in this paper could be defined as:

Acc = (TP + TN)/(TP + TN + FP + FN);
Pre = TP/(TP + FP);
Rec = TP/(TP + FN);
F-score = (2 × Pre × Rec)/(Pre + Rec);
G-mean = sqrt((TP/(TP + FN)) × (TN/(TN + FP))).
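The five metrics above can be transcribed directly into code from the confusion-matrix counts; the function name is ours.

```python
import math

def pd_metrics(tp, fn, fp, tn):
    """Compute the five evaluation metrics from confusion-matrix counts
    (TP, FN, FP, TN), a plain transcription of the formulas above."""
    acc = (tp + tn) / (tp + tn + fp + fn)
    pre = tp / (tp + fp)
    rec = tp / (tp + fn)
    f_score = 2 * pre * rec / (pre + rec)
    g_mean = math.sqrt(rec * (tn / (tn + fp)))
    return acc, pre, rec, f_score, g_mean
```

For example, a run with TP = 8, FN = 2, FP = 1, TN = 9 gives Acc = 0.85.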

3.4. Results and Analysis

In this part, the ablation method was used to verify the major innovative parts of the HBD-SFREM, and then representative feature extraction and feature selection algorithms were selected for comparison. Furthermore, existing feature reduction algorithms for PD speech recognition and two deep learning methods were also compared with the proposed model. In the experiments, the hold-out method was used to divide each PD speech dataset: the dataset was randomly partitioned into three disjoint sets, namely the training, validation, and test sets. As multiple speech segments (instances) were collected for each subject in the used datasets, instances from the same subject were placed in the same set to avoid crossover of instances from the same subjects, which ensures the authenticity of the results.
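The subject-wise hold-out described above can be sketched as follows; the 60/20/20 ratio and all names are illustrative assumptions, since the paper's exact split proportions are not given in this excerpt.

```python
import numpy as np

def subject_holdout(subject_ids, ratios=(0.6, 0.2, 0.2), seed=0):
    """Hedged sketch of a subject-wise hold-out: subjects (not individual
    speech segments) are shuffled and partitioned, so all instances from
    one subject land in exactly one of train/validation/test."""
    subject_ids = np.asarray(subject_ids)
    subjects = np.unique(subject_ids)
    rng = np.random.default_rng(seed)
    rng.shuffle(subjects)
    n = len(subjects)
    cut1 = int(ratios[0] * n)
    cut2 = cut1 + int(ratios[1] * n)
    groups = (subjects[:cut1], subjects[cut1:cut2], subjects[cut2:])
    # return instance indices for each of the three sets
    return [np.flatnonzero(np.isin(subject_ids, g)) for g in groups]
```

Splitting at the subject level is what prevents segments of one speaker from appearing in both the training and the test set.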

3.4.1. Verification of the Effectiveness of HBD-SFREM

This section introduces the verification results of the innovations of HBD-SFREM, including the results of the BD-SFREM and those of the hierarchical space instance learning mechanism. It is worth noting that since the construction of the different hierarchy spaces is the basis of this learning mechanism, the validity of the hierarchical space instance learning mechanism further proves the effectiveness of the construction of the different hierarchy instance spaces. Verification of the BD-SFREM; This part gives the results of both D-Spair and BD-SFREM. Two feature processing methods were chosen to construct the D-Spair: local discriminant preservation projection (LDPP) and Relief. To give a clearer presentation of the results, some symbols are defined below. Only-FE represents using only LDPP to process the features, and Only-FS using only Relief. D-Spair stands for the results of the D-Spair module, while BD-SFREM represents the results of the boosting dual-stage feature reduction pair ensemble module. (B) represents the affinity matrix of the binary mode in the feature extraction and (H) the heat kernel mode. The experiments in this section were performed in the original instance space. As shown in Table 4, for LSVT, Parkinson, and PSDMTSR, the BD-SFREM had the best results in Acc, Pre, Rec, G-mean, and F-score regardless of the classifier, while for SelfData, the BD-SFREM had the best results in Acc and Pre. In addition, the results of D-Spair and BD-SFREM were much more accurate than those of Only-FS and Only-FE. Thus, the D-Spair module and BD-SFREM are effective. Three of the four datasets used in this paper are imbalanced datasets. From the experimental results in the above table, the BD-SFREM module is helpful in handling imbalanced instance datasets; for the LSVT, PSDMTSR, and Parkinson datasets in particular, the advantages of the BD-SFREM are more obvious. Since the quality of the self-collected dataset was lower than that of the public datasets, its model effectiveness was accordingly reduced. However, it can be improved by the IDEM mechanism, which is illustrated in the next section.
Table 4

Results of the validation of the algorithm using the ablation method (%).

Dataset   | EM      | Classifier   | Only-FS | Only-FE (B) | Only-FE (H) | D-Spair (B) | D-Spair (H) | BD-SFREM (B) | BD-SFREM (H)
LSVT      | Acc     | SVM (linear) | 78.57   | 78.57       | 78.57       | 83.33       | 83.33       | 85.71        | 92.86
LSVT      | Acc     | SVM (RBF)    | 76.19   | 73.81       | 71.43       | 83.33       | 85.71       | 83.33        | 90.48
LSVT      | Pre     | SVM (linear) | 95.24   | 82.76       | 91.30       | 88.89       | 88.89       | 96.00        | 100.00
LSVT      | Pre     | SVM (RBF)    | 95.00   | 90.48       | 78.57       | 92.00       | 92.31       | 96.00        | 96.15
LSVT      | Rec     | SVM (linear) | 71.43   | 85.71       | 75.00       | 85.71       | 85.71       | 85.71        | 89.29
LSVT      | Rec     | SVM (RBF)    | 67.86   | 67.86       | 78.57       | 82.14       | 85.71       | 85.71        | 89.29
LSVT      | G-mean  | SVM (linear) | 81.44   | 74.23       | 80.18       | 82.07       | 82.07       | 89.21        | 94.49
LSVT      | G-mean  | SVM (RBF)    | 79.38   | 76.26       | 67.01       | 83.91       | 85.71       | 89.21        | 91.05
LSVT      | F-score | SVM (linear) | 81.63   | 84.21       | 82.35       | 87.27       | 87.27       | 90.57        | 94.34
LSVT      | F-score | SVM (RBF)    | 79.17   | 77.55       | 78.57       | 86.79       | 88.89       | 90.57        | 92.59
PSDMTSR   | Acc     | SVM (linear) | 45.19   | 54.81       | 52.56       | 55.77       | 56.41       | 58.07        | 58.33
PSDMTSR   | Acc     | SVM (RBF)    | 46.79   | 55.77       | 55.77       | 55.77       | 56.73       | 57.37        | 58.97
PSDMTSR   | Pre     | SVM (linear) | 42.11   | 57.89       | 54.88       | 60.98       | 60.42       | 65.43        | 61.61
PSDMTSR   | Pre     | SVM (RBF)    | 46.21   | 59.18       | 59.78       | 59.18       | 61.29       | 60.18        | 60.45
PSDMTSR   | Rec     | SVM (linear) | 45.19   | 35.26       | 28.85       | 32.05       | 37.18       | 33.97        | 44.23
PSDMTSR   | Rec     | SVM (RBF)    | 47.44   | 37.18       | 35.26       | 37.18       | 36.54       | 43.59        | 51.92
PSDMTSR   | G-mean  | SVM (linear) | 40.74   | 51.20       | 46.19       | 50.47       | 53.03       | 52.80        | 56.60
PSDMTSR   | G-mean  | SVM (RBF)    | 46.16   | 52.58       | 51.86       | 52.58       | 53.02       | 55.69        | 58.55
PSDMTSR   | F-score | SVM (linear) | 31.87   | 43.82       | 37.82       | 42.02       | 46.03       | 44.73        | 51.49
PSDMTSR   | F-score | SVM (RBF)    | 42.36   | 45.67       | 44.35       | 45.67       | 45.78       | 50.56        | 55.86
Parkinson | Acc     | SVM (linear) | 59.68   | 66.13       | 66.13       | 67.74       | 79.03       | 96.77        | 95.16
Parkinson | Acc     | SVM (RBF)    | 61.29   | 59.68       | 61.29       | 67.74       | 62.90       | 83.87        | 79.03
Parkinson | Pre     | SVM (linear) | 90.32   | 100.00      | 100.00      | 100.00      | 100.00      | 100.00       | 100.00
Parkinson | Pre     | SVM (RBF)    | 84.21   | 90.32       | 84.21       | 82.61       | 80.00       | 97.62        | 93.02
Parkinson | Rec     | SVM (linear) | 56.00   | 58.00       | 58.00       | 60.00       | 74.00       | 96.00        | 94.00
Parkinson | Rec     | SVM (RBF)    | 64.00   | 56.00       | 64.00       | 76.00       | 72.00       | 82.00        | 80.00
Parkinson | G-mean  | SVM (linear) | 64.81   | 76.16       | 76.16       | 77.46       | 86.02       | 97.98        | 96.95
Parkinson | G-mean  | SVM (RBF)    | 56.57   | 64.81       | 56.57       | 50.33       | 42.43       | 86.70        | 77.46
Parkinson | F-score | SVM (linear) | 69.14   | 73.42       | 73.42       | 75.00       | 85.06       | 97.96        | 96.91
Parkinson | F-score | SVM (RBF)    | 72.73   | 69.14       | 72.73       | 79.17       | 75.79       | 89.13        | 86.02
SelfData  | Acc     | SVM (linear) | 47.55   | 44.76       | 45.45       | 58.04       | 55.24       | 58.74        | 58.74
SelfData  | Acc     | SVM (RBF)    | 45.45   | 43.36       | 45.45       | 46.85       | 46.15       | 49.65        | 58.04
SelfData  | Pre     | SVM (linear) | 35.06   | 33.33       | 34.15       | 38.89       | 36.36       | 40.54        | 40.00
SelfData  | Pre     | SVM (RBF)    | 33.75   | 32.53       | 34.52       | 35.00       | 34.57       | 34.38        | 33.33
SelfData  | Rec     | SVM (linear) | 51.92   | 51.92       | 53.85       | 26.92       | 30.77       | 28.85        | 26.29
SelfData  | Rec     | SVM (RBF)    | 51.92   | 51.92       | 55.77       | 53.85       | 53.85       | 42.31        | 15.38
SelfData  | G-mean  | SVM (linear) | 48.37   | 45.95       | 46.79       | 45.18       | 46.15       | 46.77        | 45.50
SelfData  | G-mean  | SVM (RBF)    | 46.56   | 44.69       | 46.97       | 48.04       | 50.39       | 47.73        | 35.60
SelfData  | F-score | SVM (linear) | 41.86   | 40.60       | 41.79       | 31.82       | 33.33       | 33.71        | 32.18
SelfData  | F-score | SVM (RBF)    | 40.91   | 40.00       | 42.65       | 42.42       | 42.11       | 37.93        | 24.05
Verification of the hierarchical space instance learning mechanism; This section compares the results of the deep hierarchy instance space with those of the original instance space, and illustrates the effectiveness of the hierarchical space instance learning mechanism. (O) represents the results in the original instance space and (H) the results in the deep hierarchy instance space. Specifically speaking, Only-FS (O) stands for the results of the original instance space, and Only-FS (H) the results of the deep hierarchy instance space. As shown in Table 5, the results of the deep hierarchy space instance (H) were improved for all PD speech datasets in diverse methods compared with the results of the original instance space (O). For LSVT, PSDMTSR, and SelfData, the results of (H) were obviously better than those of (O). For Parkinson, the results of (H) were also improved, though insignificantly. The last two columns of the table are the results of BD-SFREM, from which the results of (H) were obviously better than those of (O) in all datasets, with a maximum improvement rate of 9.53% on the LSVT dataset. Therefore, the hierarchical space instance learning mechanism in this paper is effective.
Table 5

Verification of hierarchy space instance learning mechanism (%).

Dataset   | EM      | Classifier   | Only-FS (O) | Only-FS (H) | Only-FE (B) (O) | Only-FE (B) (H) | D-Spair (B) (O) | D-Spair (B) (H) | BD-SFREM (B) (O) | BD-SFREM (B) (H)
LSVT      | Acc     | SVM (linear) | 78.57       | 80.95       | 78.57           | 85.71           | 83.33           | 85.71           | 85.71            | 85.71
LSVT      | Acc     | SVM (RBF)    | 76.19       | 83.33       | 73.81           | 83.33           | 83.33           | 85.71           | 83.33            | 92.86
LSVT      | Pre     | SVM (linear) | 95.24       | 95.45       | 82.76           | 95.83           | 88.89           | 95.83           | 96.00            | 100.00
LSVT      | Pre     | SVM (RBF)    | 95.00       | 92.00       | 90.48           | 88.89           | 92.00           | 92.31           | 96.00            | 96.30
LSVT      | Rec     | SVM (linear) | 71.43       | 75.00       | 85.71           | 82.14           | 85.71           | 82.14           | 85.71            | 85.71
LSVT      | Rec     | SVM (RBF)    | 67.86       | 82.14       | 67.86           | 85.71           | 82.14           | 85.71           | 85.71            | 92.86
LSVT      | G-mean  | SVM (linear) | 81.44       | 83.45       | 74.23           | 87.34           | 82.07           | 87.34           | 88.10            | 85.71
LSVT      | G-mean  | SVM (RBF)    | 79.38       | 83.91       | 76.26           | 82.07           | 83.91           | 85.71           | 88.10            | 92.86
LSVT      | F-score | SVM (linear) | 81.63       | 84.00       | 84.21           | 88.46           | 87.27           | 88.46           | 89.21            | 88.89
LSVT      | F-score | SVM (RBF)    | 79.17       | 86.79       | 77.55           | 87.27           | 86.79           | 88.89           | 89.21            | 94.55
PSDMTSR   | Acc     | SVM (linear) | 45.19       | 48.08       | 54.81           | 58.01           | 55.77           | 57.69           | 58.01            | 57.05
PSDMTSR   | Acc     | SVM (RBF)    | 47.44       | 52.88       | 55.77           | 57.37           | 55.77           | 57.37           | 57.37            | 60.26
PSDMTSR   | Pre     | SVM (linear) | 42.11       | 47.86       | 57.89           | 62.89           | 60.98           | 65.79           | 65.43            | 60.19
PSDMTSR   | Pre     | SVM (RBF)    | 64.22       | 56.92       | 59.18           | 60.36           | 59.18           | 60.36           | 60.18            | 61.11
PSDMTSR   | Rec     | SVM (linear) | 25.64       | 42.95       | 35.26           | 39.10           | 32.05           | 32.05           | 33.97            | 41.67
PSDMTSR   | Rec     | SVM (RBF)    | 44.87       | 23.72       | 37.18           | 42.95           | 37.18           | 42.95           | 43.59            | 56.41
PSDMTSR   | G-mean  | SVM (linear) | 40.74       | 47.80       | 51.20           | 54.84           | 50.47           | 51.68           | 52.80            | 54.94
PSDMTSR   | G-mean  | SVM (RBF)    | 58.01       | 44.11       | 52.58           | 55.53           | 52.58           | 55.53           | 55.69            | 60.13
PSDMTSR   | F-score | SVM (linear) | 31.87       | 45.27       | 43.82           | 48.22           | 42.02           | 43.10           | 44.73            | 49.24
PSDMTSR   | F-score | SVM (RBF)    | 52.83       | 33.48       | 45.67           | 50.19           | 45.67           | 50.19           | 50.56            | 58.67
Parkinson | Acc     | SVM (linear) | 59.68       | 72.58       | 66.13           | 74.19           | 67.74           | 82.26           | 96.77            | 85.48
Parkinson | Acc     | SVM (RBF)    | 61.29       | 67.74       | 59.68           | 70.97           | 67.74           | 67.74           | 83.87            | 85.48
Parkinson | Pre     | SVM (linear) | 90.32       | 86.67       | 100.00          | 100.00          | 100.00          | 100.00          | 100.00           | 100.00
Parkinson | Pre     | SVM (RBF)    | 84.21       | 85.71       | 90.32           | 79.63           | 82.61           | 82.61           | 97.62            | 100.00
Parkinson | Rec     | SVM (linear) | 56.00       | 78.00       | 58.00           | 68.00           | 60.00           | 78.00           | 96.00            | 82.00
Parkinson | Rec     | SVM (RBF)    | 64.00       | 72.00       | 56.00           | 86.00           | 76.00           | 76.00           | 82.00            | 82.00
Parkinson | G-mean  | SVM (linear) | 64.81       | 62.45       | 76.16           | 82.46           | 77.46           | 88.32           | 97.98            | 90.55
Parkinson | G-mean  | SVM (RBF)    | 56.57       | 60.00       | 64.81           | 26.77           | 50.33           | 50.33           | 86.70            | 90.55
Parkinson | F-score | SVM (linear) | 69.14       | 82.11       | 73.42           | 80.95           | 75.00           | 87.64           | 97.96            | 90.11
Parkinson | F-score | SVM (RBF)    | 72.73       | 78.26       | 69.14           | 82.69           | 79.17           | 79.17           | 89.13            | 90.11
SelfData  | Acc     | SVM (linear) | 47.55       | 48.25       | 44.76           | 45.45           | 58.04           | 47.55           | 58.74            | 62.94
SelfData  | Acc     | SVM (RBF)    | 45.45       | 46.15       | 43.36           | 45.45           | 46.85           | 50.35           | 49.65            | 49.65
SelfData  | Pre     | SVM (linear) | 35.06       | 35.53       | 33.33           | 33.75           | 38.89           | 35.05           | 40.54            | 47.06
SelfData  | Pre     | SVM (RBF)    | 33.75       | 33.77       | 32.53           | 34.15           | 35.00           | 35.82           | 34.38            | 32.14
SelfData  | Rec     | SVM (linear) | 51.92       | 51.92       | 51.92           | 51.92           | 26.92           | 51.92           | 28.85            | 15.38
SelfData  | Rec     | SVM (RBF)    | 51.92       | 50.00       | 51.92           | 53.85           | 53.85           | 46.15           | 42.31            | 34.62
SelfData  | G-mean  | SVM (linear) | 48.37       | 48.95       | 45.95           | 46.56           | 45.18           | 48.36           | 46.77            | 37.23
SelfData  | G-mean  | SVM (RBF)    | 46.56       | 46.88       | 44.69           | 46.79           | 48.04           | 49.34           | 47.73            | 44.90
SelfData  | F-score | SVM (linear) | 41.86       | 42.19       | 40.60           | 40.91           | 31.82           | 41.86           | 33.71            | 23.19
SelfData  | F-score | SVM (RBF)    | 40.91       | 40.31       | 40.00           | 41.79           | 42.42           | 40.34           | 37.93            | 33.33
Table 6 shows the results of HBD-SFREM in the different spaces (using the SVM (RBF) classifier). From the results in Table 6, we can see that the integrated output is always optimal, which shows that the integration further improves the generalization performance of the whole model.
Table 6

Verification of the integration output (%).

Datasets | EM | Original Space | Deep Space 1 | Deep Space 2 | Final
LSVT | Acc | 83.33 | 92.86 | 85.71 | 92.86
 | Pre | 92.00 | 96.30 | 95.83 | 96.30
 | Rec | 82.14 | 92.86 | 82.14 | 92.86
 | G-mean | 83.91 | 92.86 | 87.34 | 92.86
 | F-score | 86.79 | 94.55 | 88.46 | 94.55
PSDMTSR | Acc | 57.37 | 54.49 | 60.26 | 60.26
 | Pre | 60.18 | 56.36 | 61.11 | 61.11
 | Rec | 43.59 | 39.74 | 56.41 | 56.41
 | G-mean | 55.69 | 52.45 | 60.13 | 60.13
 | F-score | 50.56 | 46.62 | 58.67 | 58.67
Parkinson | Acc | 83.87 | 82.26 | 85.48 | 93.55
 | Pre | 97.62 | 67.90 | 100.00 | 97.62
 | Rec | 82.00 | 92.00 | 82.00 | 94.00
 | G-mean | 86.70 | 61.91 | 90.55 | 92.83
 | F-score | 89.13 | 89.32 | 90.11 | 95.92
SelfData | Acc | 49.65 | 49.65 | 56.64 | 56.64
 | Pre | 34.38 | 32.14 | 40.74 | 40.74
 | Rec | 42.31 | 34.62 | 42.31 | 42.31
 | G-mean | 47.73 | 44.90 | 52.37 | 52.37
 | F-score | 37.93 | 33.33 | 41.51 | 41.51
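For clarity, the evaluation metrics (EM) reported in the tables follow their standard binary-classification definitions; G-mean is taken here as the geometric mean of sensitivity and specificity, the usual convention, since the paper does not spell out its formula. A minimal sketch with toy labels, not the paper's data:

```python
import math

def binary_metrics(y_true, y_pred):
    """Compute Acc, Pre, Rec, G-mean and F-score for binary labels (1 = PD)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    acc = (tp + tn) / len(y_true)
    pre = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0       # sensitivity
    spec = tn / (tn + fp) if tn + fp else 0.0      # specificity
    gmean = math.sqrt(rec * spec)                  # balances both classes
    f1 = 2 * pre * rec / (pre + rec) if pre + rec else 0.0
    return acc, pre, rec, gmean, f1

# Toy example: 4 PD and 4 healthy samples
acc, pre, rec, gmean, f1 = binary_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                                          [1, 1, 1, 0, 0, 0, 1, 0])
```

As a sanity check against Table 6, with Pre = 100.00 and Rec = 82.00 the F-score formula gives 2·100·82/182 ≈ 90.11, matching the Parkinson deep space entry.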

3.4.2. Comparison with the Representative Feature Processing Model

In this section, several representative feature processing methods, namely mRMR, Pvalue, SVMRFE, PCA, and LDA, were selected for comparison with the proposed model (HBD-SFREM). Because deep learning is also a major family of feature processing methods, two of its representative methods, the deep belief network (DBN) and the stacked encoder (SE), were compared with HBD-SFREM as well. To simplify the presentation of results, some notation is defined first: HBD-SFREM (B) denotes the results in mode B, and HBD-SFREM (H) the results in mode H. As shown in Table 7, HBD-SFREM outperformed the reference algorithms on Acc and Pre across all datasets and classifiers. For the LSVT dataset, HBD-SFREM also outperformed the reference groups on Rec, G-mean, and F-score. For the PSDMTSR and Parkinson datasets, HBD-SFREM was more accurate than the reference groups on G-mean and F-score. For SelfData, HBD-SFREM was better than the reference groups on Acc and Pre. To demonstrate the advantages of HBD-SFREM more clearly, the results obtained with the SVM (RBF) classifier on the different datasets are shown in Figure 3, where HBD-SFREM achieves the best accuracy. In summary, HBD-SFREM outperformed the reference groups in most cases, which further verifies its effectiveness.
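For reference, one of the single-stage baselines of the kind compared in Table 7 (PCA followed by an SVM with RBF kernel) can be sketched with scikit-learn; the synthetic dataset and hyperparameters below are illustrative placeholders, not the paper's settings:

```python
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for a small PD speech feature matrix
X, y = make_classification(n_samples=120, n_features=40, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                          random_state=0, stratify=y)

# One-stage feature reduction baseline: scale -> PCA -> SVM (RBF)
baseline = Pipeline([
    ("scale", StandardScaler()),
    ("pca", PCA(n_components=10)),
    ("svm", SVC(kernel="rbf", C=1.0, gamma="scale")),
])
baseline.fit(X_tr, y_tr)
acc = baseline.score(X_te, y_te)
```

Such a pipeline performs only feature extraction (PCA) with no feature preference step, which is exactly the one-sidedness of traditional FR that HBD-SFREM's dual-stage pair is designed to overcome.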
Table 7

Comparison with representative feature processing algorithms (%).

Datasets | EM | Classifier | mRMR | Pvalue | SVMRFE | PCA | LDA | DBN | SE | HBD-SFREM (B) | HBD-SFREM (H)
LSVT | Acc | SVM (linear) | 76.19 | 83.33 | 73.81 | 83.33 | 78.57 | 78.57 | 71.43 | 88.10 | 92.86
 | | SVM (RBF) | 83.33 | 80.95 | 83.33 | 69.05 | 80.95 | | | 92.86 | 90.48
 | Pre | SVM (linear) | 100.00 | 100.00 | 94.74 | 95.65 | 91.30 | 95.24 | 94.44 | 100.00 | 100.00
 | | SVM (RBF) | 100.00 | 91.67 | 83.87 | 100.00 | 95.45 | | | 96.30 | 96.15
 | Rec | SVM (linear) | 64.29 | 75.00 | 64.29 | 78.57 | 75.00 | 71.43 | 60.71 | 89.29 | 89.29
 | | SVM (RBF) | 75.00 | 78.57 | 92.86 | 53.57 | 75.00 | | | 92.86 | 89.29
 | G-mean | SVM (linear) | 80.18 | 86.60 | 77.26 | 85.42 | 80.18 | 81.44 | 75.08 | 94.48 | 94.49
 | | SVM (RBF) | 86.60 | 82.07 | 77.26 | 73.19 | 83.45 | | | 92.86 | 91.05
 | F-score | SVM (linear) | 78.26 | 85.71 | 76.60 | 86.27 | 82.35 | 81.63 | 73.91 | 94.34 | 94.34
 | | SVM (RBF) | 85.71 | 84.62 | 88.14 | 69.77 | 84.00 | | | 94.55 | 92.59
PSDMTSR | Acc | SVM (linear) | 48.08 | 46.47 | 52.56 | 57.05 | 48.40 | 47.60 | 60.26 | 61.22 | 66.35
 | | SVM (RBF) | 56.41 | 56.41 | 55.77 | 56.73 | 53.85 | | | 60.26 | 61.22
 | Pre | SVM (linear) | 47.86 | 45.99 | 56.25 | 63.10 | 47.13 | 46.27 | 64.29 | 69.23 | 72.57
 | | SVM (RBF) | 62.82 | 59.09 | 57.38 | 58.27 | 54.76 | | | 61.11 | 64.00
 | Rec | SVM (linear) | 42.95 | 40.38 | 23.08 | 33.97 | 26.28 | 29.81 | 64.15 | 40.38 | 52.56
 | | SVM (RBF) | 31.41 | 41.67 | 44.87 | 47.44 | 44.23 | | | 56.41 | 51.28
 | G-mean | SVM (linear) | 47.80 | 46.07 | 43.51 | 52.18 | 43.05 | 44.15 | 58.58 | 57.56 | 64.90
 | | SVM (RBF) | 50.57 | 54.45 | 54.69 | 55.96 | 52.98 | | | 60.13 | 60.41
 | F-score | SVM (linear) | 45.27 | 43.00 | 32.73 | 44.17 | 33.74 | 36.26 | 53.73 | 51.01 | 60.97
 | | SVM (RBF) | 41.88 | 48.87 | 50.36 | 52.30 | 48.94 | | | 58.67 | 56.94
Parkinson | Acc | SVM (linear) | 72.58 | 82.26 | 80.65 | 64.52 | 69.35 | 64.52 | 67.74 | 96.77 | 95.16
 | | SVM (RBF) | 72.58 | 79.03 | 72.58 | 61.29 | 75.81 | | | 93.55 | 98.39
 | Pre | SVM (linear) | 100.00 | 100.00 | 80.65 | 100.00 | 96.97 | 100.00 | 87.50 | 100.00 | 100.00
 | | SVM (RBF) | 100.00 | 93.02 | 78.95 | 76.00 | 100.00 | | | 97.92 | 100.00
 | Rec | SVM (linear) | 74.00 | 78.00 | 100.00 | 56.00 | 64.00 | 56.00 | 70.00 | 96.00 | 94.00
 | | SVM (RBF) | 66.00 | 80.00 | 90.00 | 76.00 | 70.00 | | | 94.00 | 98.00
 | G-mean | SVM (linear) | 86.02 | 88.32 | 0.00 | 74.83 | 76.59 | 74.83 | 63.90 | 97.98 | 96.95
 | | SVM (RBF) | 81.24 | 77.46 | 0.00 | 0.00 | 83.67 | | | 92.83 | 98.99
 | F-score | SVM (linear) | 85.06 | 87.64 | 89.29 | 71.79 | 77.11 | 71.79 | 77.78 | 97.96 | 96.91
 | | SVM (RBF) | 79.52 | 86.02 | 84.11 | 76.00 | 82.35 | | | 95.92 | 98.99
SelfData | Acc | SVM (linear) | 48.25 | 44.76 | 60.14 | 48.25 | 45.45 | 41.26 | 61.54 | 64.34 | 61.54
 | | SVM (RBF) | 47.55 | 45.45 | 51.75 | 45.45 | 45.45 | | | 56.64 | 66.43
 | Pre | SVM (linear) | 35.90 | 34.12 | 36.84 | 35.90 | 35.87 | 34.00 | 42.86 | 53.85 | 42.86
 | | SVM (RBF) | 35.80 | 34.52 | 33.33 | 34.88 | 35.87 | | | 40.74 | 70.00
 | Rec | SVM (linear) | 53.85 | 55.77 | 13.46 | 53.85 | 63.46 | 65.38 | 17.31 | 13.46 | 17.31
 | | SVM (RBF) | 55.77 | 55.77 | 32.69 | 57.69 | 63.46 | | | 42.31 | 13.46
 | G-mean | SVM (linear) | 48.25 | 44.76 | 60.14 | 48.25 | 45.45 | 42.38 | 38.76 | 35.46 | 38.76
 | | SVM (RBF) | 49.25 | 46.31 | 34.18 | 49.25 | 47.24 | | | 52.37 | 36.08
 | F-score | SVM (linear) | 45.32 | 44.69 | 41.83 | 45.04 | 46.43 | 44.74 | 24.66 | 21.54 | 24.66
 | | SVM (RBF) | 43.08 | 42.34 | 19.72 | 43.08 | 45.83 | | | 41.51 | 22.58
Figure 3

Comparison Results Using Different Datasets.

In addition, the ROC curves of all models on the different datasets are shown in Figure 4. The area under the curve (AUC) of HBD-SFREM is higher than that of the comparison models. It is worth noting that SelfData was designed to simulate a real clinical diagnosis environment and is therefore of lower quality than the three public datasets; even under these conditions, the AUC results in Figure 4 show that HBD-SFREM remains better than the comparative methods.
Figure 4

(a) Description of the ROC curves on LSVT; (b) description of the ROC curves on Parkinson; (c) description of the ROC curves on PSDMTSR; (d) description of the ROC curves on SelfData.
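The AUC comparison in Figure 4 follows the standard ROC computation for any model that outputs decision scores; a minimal sketch with hypothetical scores (illustrative, not the paper's curves):

```python
from sklearn.metrics import roc_auc_score, roc_curve

# Hypothetical decision scores for 8 test samples (1 = PD)
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.7, 0.3, 0.6, 0.2, 0.1, 0.05]

# ROC curve points (for plotting) and the area under it
fpr, tpr, thresholds = roc_curve(y_true, scores)
auc = roc_auc_score(y_true, scores)
```

Here the AUC equals the probability that a randomly chosen PD sample receives a higher score than a randomly chosen healthy one, which is why it is a threshold-free summary of the curves in Figure 4.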

3.4.3. Comparison with Relevant PD Speech Recognition Methods

HBD-SFREM primarily improves the accuracy of PD speech recognition. This section demonstrates the effectiveness of HBD-SFREM by comparing it with other PD speech FR algorithms. The reference groups are as follows:

Relief-SVM [4]: a method used by Little in 2012. Four feature processing methods are first selected to process the features of the dataset, and then Relief with an SVM classifier using a linear kernel (Relief-SVM) is trained to obtain the model.

mRMR [3]: a method used by Sakar in 2018. In [3], feature selection is first performed with mRMR, and then the predictions of seven classifiers are integrated by voting or stacking strategies.

LDA-NN-GA [20]: an algorithm proposed by L. Ali and C. Zhu in 2019. In [20], the dataset is partitioned into a training set and a test set using the leave-one-subject-out method (LOSO); since each subject in the dataset contributes multiple samples, each fold leaves out all samples of one subject. The feature dimension is then reduced with LDA, and a BP neural network optimized with a genetic algorithm is trained to obtain the optimal prediction model (LDA-NN-GA).

FC-SVM [6]: an algorithm proposed by Cigdem O. in 2018. In [6], a Fisher criterion (FC)-based feature selection method ranks the feature weights, and the first K features above a threshold are input to the classifier (SVM with RBF kernel) for training.

SFFS-RF [40]: an algorithm proposed by Galaz Z. in 2016. The sequential floating feature selection algorithm (SFFS) processes the data features, and the processed results are input to an RF classifier to learn the prediction model.

Table 8 shows that HBD-SFREM consistently performed better than the other algorithms.
For LSVT and Parkinson, the results were higher than those of the other algorithms, with the largest improvements in accuracy being 16.67% and 38.71%, respectively, demonstrating the advantages of HBD-SFREM. For SelfData and PSDMTSR, the results of HBD-SFREM were higher than those of the other algorithms in most cases, with the largest improvements in accuracy being 22.37% and 22.27%, respectively. In addition, the comparison algorithms selected in this section did not perform as well as reported in the corresponding studies, probably because the experimental conditions here differed slightly from those of the reference groups. For instance, the data partitioning method differed from that used by the authors in [20], and the amount of training data used in this study was smaller than in [20]. In general, the more training instances available, the higher the prediction accuracy of the trained model.
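The LOSO protocol described for LDA-NN-GA [20], in which every sample of one subject is held out per fold, corresponds to scikit-learn's LeaveOneGroupOut when samples carry subject IDs; a small sketch with made-up subjects:

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut

# 6 samples from 3 subjects (2 recordings each); features and labels are toy values
X = np.arange(12).reshape(6, 2)
y = np.array([0, 0, 1, 1, 0, 1])
subjects = np.array([1, 1, 2, 2, 3, 3])   # subject ID per sample

logo = LeaveOneGroupOut()
folds = list(logo.split(X, y, groups=subjects))
# Each fold holds out every sample of exactly one subject
n_folds = len(folds)
```

Grouping by subject prevents recordings of the same speaker from appearing in both training and test sets, which would otherwise inflate accuracy estimates.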
Table 8

Comparison of PD speech dataset processing algorithms (%).

Methods | Classifier | LSVT | PSDMTSR | Parkinson | SelfData
HBD-SFREM (B) | SVM (linear) | 92.86 | 61.22 | 96.77 | 64.34
HBD-SFREM (B) | SVM (RBF) | 92.86 | 60.26 | 93.55 | 56.64
HBD-SFREM (H) | SVM (linear) | 92.86 | 66.35 | 95.16 | 61.54
HBD-SFREM (H) | SVM (RBF) | 90.48 | 61.22 | 98.39 | 66.43
Relief [4] | SVM (linear) | 78.57 | 45.19 | 59.68 | 47.55
Relief [4] | SVM (RBF) | 76.19 | 47.44 | 61.29 | 45.45
mRMR [3] | SVM (linear) | 76.19 | 48.08 | 72.58 | 48.25
mRMR [3] | SVM (RBF) | 83.33 | 56.41 | 72.58 | 47.55
LDA-NN-GA [20] | | 81.42 | 61.38 | 80.83 | 63.00
ReliefF-FC-SVM (RBF) [6] | | 82.54 | 61.38 | 81.67 | 62.67
SFFS-RF [40] | | 81.64 | 60.63 | 80.83 | 60.00

4. Discussion and Conclusions

HBD-SFREM introduces a dual-stage feature processing method that integrates the advantages of traditional feature extraction and feature selection algorithms. It can generate high-quality features that are most useful for model learning and thus supports an early and accurate diagnosis of PD, improving both identification accuracy and stability. In addition, HBD-SFREM can be applied to small-sample PD speech datasets, including unbalanced ones. Experimental results demonstrate that HBD-SFREM outperforms existing algorithms for PD speech diagnosis.

Currently, publicly available PD speech datasets are relatively scarce. Three public PD speech datasets from UCI were used to validate the effectiveness and novelty of HBD-SFREM, along with a Chinese PD speech dataset collected by the authors. The experimental results indicate that HBD-SFREM achieves significantly better performance on all datasets studied and largely improves diagnosis accuracy, especially on the Parkinson dataset, where accuracy is improved by at least 19.36% compared with the other representative feature processing algorithms.

At present, there are still few fusion methods that combine feature selection and feature extraction for PD speech recognition, so this paper lays a foundation for future research. In future work, more types of feature extraction and selection methods should be introduced to develop and evaluate further effective algorithms, and the improvement brought by the hierarchical space instance learning mechanism should be verified. As a framework algorithm, HBD-SFREM differs from other feature extraction and selection algorithms and is therefore a valuable reference for further study in this field.
References (24 in total)

1.  Selection of vocal features for Parkinson's Disease diagnosis.

Authors:  Olcay Kursun; Ergun Gumus; Ahmet Sertbas; Oleg V Favorov
Journal:  Int J Data Min Bioinform       Date:  2012       Impact factor: 0.667

2.  The global kernel k-means algorithm for clustering in feature space.

Authors:  Grigorios F Tzortzis; Aristidis C Likas
Journal:  IEEE Trans Neural Netw       Date:  2009-05-29

3.  Taste Recognition in E-Tongue Using Local Discriminant Preservation Projection.

Authors:  Lei Zhang; Xuehan Wang; Guang-Bin Huang; Tao Liu; Xiaoheng Tan
Journal:  IEEE Trans Cybern       Date:  2018-01-17       Impact factor: 11.448

4.  Kernel K-Means Sampling for Nyström Approximation.

Authors:  Li He; Hong Zhang
Journal:  IEEE Trans Image Process       Date:  2018-05       Impact factor: 10.856

5.  Telediagnosis of Parkinson's disease using measurements of dysphonia.

Authors:  C Okan Sakar; Olcay Kursun
Journal:  J Med Syst       Date:  2009-03-14       Impact factor: 4.460

6.  Suitability of dysphonia measurements for telemonitoring of Parkinson's disease.

Authors:  Max A Little; Patrick E McSharry; Eric J Hunter; Jennifer Spielman; Lorraine O Ramig
Journal:  IEEE Trans Biomed Eng       Date:  2009-04       Impact factor: 4.538

7.  Comparative Motor Pre-clinical Assessment in Parkinson's Disease Using Supervised Machine Learning Approaches.

Authors:  Erika Rovini; Carlo Maremmani; Alessandra Moschetti; Dario Esposito; Filippo Cavallo
Journal:  Ann Biomed Eng       Date:  2018-07-20       Impact factor: 3.934

8.  Collection and analysis of a Parkinson speech dataset with multiple types of sound recordings.

Authors:  Betul Erdogdu Sakar; M Erdem Isenkul; C Okan Sakar; Ahmet Sertbas; Fikret Gurgen; Sakir Delil; Hulya Apaydin; Olcay Kursun
Journal:  IEEE J Biomed Health Inform       Date:  2013-07       Impact factor: 5.772

9.  Prosodic analysis of neutral, stress-modified and rhymed speech in patients with Parkinson's disease.

Authors:  Zoltan Galaz; Jiri Mekyska; Zdenek Mzourek; Zdenek Smekal; Irena Rektorova; Ilona Eliasova; Milena Kostalova; Martina Mrackova; Dagmar Berankova
Journal:  Comput Methods Programs Biomed       Date:  2016-01-08       Impact factor: 5.428

10.  Effective dysphonia detection using feature dimension reduction and kernel density estimation for patients with Parkinson's disease.

Authors:  Shanshan Yang; Fang Zheng; Xin Luo; Suxian Cai; Yunfeng Wu; Kaizhi Liu; Meihong Wu; Jian Chen; Sridhar Krishnan
Journal:  PLoS One       Date:  2014-02-20       Impact factor: 3.240

