Body fat prediction through feature extraction based on anthropometric and laboratory measurements.

Zongwen Fan1,2, Raymond Chiong1, Zhongyi Hu3, Farshid Keivanian1, Fabian Chiong4.   

Abstract

Obesity, associated with having excess body fat, is a critical public health problem that can cause serious diseases. Although a range of techniques for body fat estimation have been developed to assess obesity, these typically involve high-cost tests requiring special equipment. Thus, the accurate prediction of body fat percentage based on easily accessed body measurements is important for assessing obesity and its related diseases. By considering the characteristics of different features (e.g. body measurements), this study investigates the effectiveness of feature extraction for body fat prediction. It evaluates the performance of three feature extraction approaches by comparing four well-known prediction models. Experimental results based on two real-world body fat datasets show that the prediction models perform better when incorporating feature extraction for body fat prediction, in terms of the mean absolute error, standard deviation, root mean square error and robustness. These results confirm that feature extraction is an effective pre-processing step for predicting body fat. In addition, statistical analysis confirms that feature extraction significantly improves the performance of prediction methods. Moreover, the increase in the number of extracted features results in further, albeit slight, improvements to the prediction models. The findings of this study provide a baseline for future research in related areas.


Year:  2022        PMID: 35192644      PMCID: PMC8863283          DOI: 10.1371/journal.pone.0263333

Source DB:  PubMed          Journal:  PLoS One        ISSN: 1932-6203            Impact factor:   3.240


1 Introduction

Obesity, characterised by excess body fat, is a medical problem that increases one's risk of other diseases and health issues, such as cardiovascular diseases, diabetes, musculoskeletal disorders, depression and certain cancers [1-3]. These diseases escalate the spiralling economic and social costs borne by nations [4]. Conversely, having extremely low body fat is also a significant risk factor for infection in children and adolescents [5], and it may cause pubertal delay [6], osteoporosis [7] and surgical complications [8]. Thus, the accurate prediction of both excess and low body fat is critical to identifying possible treatments, which would prevent serious health problems.

Although a huge volume of medical data is available from sensors, electronic medical health records, smartphone applications and insurance records, analysing the data is difficult [9]. There are often too many measurements (features), leading to the curse of dimensionality [10] from a data analytics viewpoint. With a relatively small number of patient samples but a large number of disease measurements, it is very challenging to train a highly accurate prediction model [11]. In addition, redundant, irrelevant or noisy features may further hinder the prediction model's performance [12]. Feature extraction, an important data pre-processing tool in data mining, has been applied to reduce the number of input features by creating new, more representative combinations of features [13]. This process reduces the number of features without significant information loss [14].

In this study, three widely used feature extraction methods are utilised to reduce features. Specifically, by analysing large sets of interrelated features, Factor Analysis (FA) can be used to extract the underlying factors (latent features) [15]; it is able to identify latent factors that adequately predict a dataset of interest. Unlike FA, which assumes there is an underlying model, Principal Component Analysis (PCA) is a descriptive feature reduction method that applies an optimal set of derived features, extracted from the original features, for model training [16]; PCA's data projection concerns only the variances between samples and their distribution. Independent Component Analysis (ICA), a technique that assumes the data to be linear mixtures of non-Gaussian independent sources [17], is widely used in blind source separation applications [18].

Feature extraction has been widely used in the medical area to map redundant, relevant and irrelevant features into a smaller set of features derived from the original data [19, 20]. For example, Das et al. [21] applied feature extraction methods to extract significant features from raw data before using an Artificial Neural Network (ANN) model for medical disease classification; their results showed that feature extraction could increase diagnostic accuracy. Tran et al. [22] proposed an improved FA method for cancer subtyping and risk prediction with good results. Sudharsan and Thailambal [23] applied PCA to pre-process the experimental datasets used for predicting Alzheimer's disease, showing that PCA pre-processing could improve the precision of the prediction model. In the work of Franzmeier et al. [24], ICA was utilised to extract features from cross-sectional data for connectivity-based prediction of tau spreading in Alzheimer's disease, with impressive results.

Machine learning methods have also been increasingly applied to body fat prediction problems [25]. Shukla and Raghuvanshi [26] showed that the ANN model is effective for estimating the body fat percentage using anthropometric data in a non-diseased group. Kupusinac et al. [27] also employed ANNs for body fat prediction and achieved high prediction accuracy. Keivanian et al. [28, 29] considered a weighted sum of body fat prediction errors and the ratio of features, and optimised the prediction using a metaheuristic search-based feature selection-Multi-Layer Perceptron (MLP) model (MLP is a type of ANN). Chiong et al. [30] proposed an improved relative-error Support Vector Machine (SVM) for body fat prediction with promising results. Fan et al. hybridised a fuzzy-weighted operation and Gaussian kernel-based machine learning models to predict the body fat percentage, while Uçar et al. [31] combined several machine learning methods (e.g. ANN and SVM) for the same purpose; their models achieved satisfactory predictions.

In this study, we apply FA, PCA and ICA to extract critical features from the available features, using four well-known machine learning methods, namely MLP, SVM, Random Forest (RF) [32] and eXtreme Gradient Boosting (XGBoost) [33], to predict the body fat percentage. We consider five metrics, that is, the mean absolute error (MAE), standard deviation (SD), root mean square error (RMSE), robustness (MAC) and efficiency, in the evaluation process. We use experimental results based on real-world body fat datasets to validate the effectiveness of feature extraction for body fat prediction. One of the datasets is from StatLib and is based on body circumference measurements [34]; the other is from the National Health and Nutrition Examination Survey (NHANES) and is based on physical examinations [35]. In addition, we employ the Wilcoxon rank-sum test [36] to validate whether the prediction accuracy based on feature extraction improves significantly or not. The motivation of this study is to assess and compare different feature extraction methods for body fat prediction, as well as to provide a baseline for future research in related areas. It is worth pointing out that the results presented here are new in the context of body fat prediction. We also explore the optimal number of features used for each of the feature extraction methods while balancing accuracy and efficiency.

The rest of this paper is organised as follows: Section 2 briefly introduces the feature extraction methods and prediction models. In Section 3, experimental results based on the real-world body fat datasets are provided; specifically, performance measures are first described, and then experimental results based on feature extraction for the prediction of body fat percentage are discussed. Lastly, Section 4 concludes this study and highlights some future research directions.

2 Methods

In this section, we first discuss three widely used feature extraction methods: FA, PCA and ICA. Then, we present four well-known machine learning algorithms—MLP, SVM, RF and XGBoost.

2.1 Feature extraction methods

Feature extraction methods are widely used in data mining for data pre-processing [37]. They can reduce the number of input features without incurring much information loss [38]. In this way, they can alleviate the overfitting of prediction models by removing redundant, irrelevant or noisy measurements/features. In addition, with fewer misleading features, the model accuracy and computation time can be further improved.

2.1.1 Factor analysis

This widely used statistical method for feature extraction is an exploratory data analysis method. FA can be used to replace a number of observable features with a smaller set of latent features (factors) without losing much information [39]. Each latent feature describes the relationships between its corresponding observed features. Since a factor cannot be directly measured with a single feature, it is measured through the relationships in a set of common features, subject to these requirements: (a) the minimum number of factors is used to capture the maximum variability in the data; and (b) the information overlap among the factors is minimised. The extraction proceeds as follows: (1) the most common variance between features is extracted by the first latent factor; (2) with the factor extracted in (1) eliminated, a second factor capturing the most variance among the remaining features is extracted; and (3) steps (1) and (2) are repeated until the rest of the features are tested. FA is very helpful for reducing features in a dataset where a large number of features can be represented by a smaller number of latent features. An example of the relationship between a factor and its observed features is given in Fig 1, in which p denotes the number of observed features. If the model has k latent features, then the assumption in FA is given in Eq 1:

x_i = w_i1·f_1 + w_i2·f_2 + … + w_ik·f_k + e_i,    (Eq 1)

Generally, FA calculates a correlation matrix based on the correlation coefficient to determine the relationship for each pair of features. Then, the factor loadings are analysed to check which features are loaded onto which factors, where factor loadings can be estimated using maximum likelihood [40]. Here, the w_ir are the factor loadings; that is, w_ir is the loading of the ith variable on the rth factor (similar to a weight, or the strength of the correlation between the feature and the factor) [41], and e_i is the error term, which denotes the variance in each feature that is unexplained by the factors.
Fig 1

An example of the relationship between a factor and its observed features.
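As a minimal sketch of the procedure above, FA can be run via scikit-learn's FactorAnalysis. The synthetic data below is a stand-in for the 13 StatLib measurements, and the choice of six factors mirrors the number extracted later in Section 3.3.2; both are assumptions for illustration only.

```python
# Hedged sketch: extracting latent factors with scikit-learn's FactorAnalysis.
# The data is random and purely illustrative (not the StatLib dataset).
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
X = rng.normal(size=(252, 13))          # stand-in for 13 anthropometric features

fa = FactorAnalysis(n_components=6, random_state=0)
Z = fa.fit_transform(X)                 # latent factor scores, shape (252, 6)

# fa.components_ holds the factor loadings w_ir (factors x observed features)
print(Z.shape, fa.components_.shape)    # (252, 6) (6, 13)
```

The loadings matrix `fa.components_` plays the role of the w_ir in Eq 1: each row links one latent factor to all 13 observed features.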

2.1.2 Principal component analysis

PCA is a very useful tool for reducing the dimensionality of a dataset, especially when the features are interrelated [42]. This non-parametric method uses an orthogonal transformation to convert a set of features into a smaller set of features termed principal components. Using a covariance matrix, we are able to measure the association of each feature with the other features. To decompose the covariance matrix, singular value decomposition [43] can be applied for linear dimensionality reduction by projecting the data into a lower-dimensional space, which yields the eigenvectors and eigenvalues of the principal components. In this way, we can obtain the directions of the data distribution and the relative importance of these directions. A positive covariance between two features indicates that the features increase or decrease together, whereas a negative covariance indicates that the features vary in opposite directions. The first principal component preserves as much of the information (variance) in the data as possible, the second retains as much of the remaining variability as possible, and so on. In other words, the extracted principal components are ordered in terms of their importance (variance). Considering that PCA is sensitive to the relative scaling of the original features, in practice, it is better to normalise the data before using PCA. An example of using a component to represent its corresponding features is given in Fig 2. As this figure shows, each component is a linear function of its corresponding features, whereas a feature in FA is a function of given factors plus an error term.
Fig 2

An example of using a component to represent its corresponding features.
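A minimal sketch of the PCA workflow described above, including the standardisation step recommended before PCA; the data and the number of components are illustrative assumptions.

```python
# Hedged sketch: standardise, then apply PCA. Synthetic data with mixed
# feature scales shows why normalisation matters before PCA.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
X = rng.normal(size=(252, 13)) * rng.uniform(1, 50, size=13)  # mixed scales

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=6)
X_pca = pca.fit_transform(X_std)

# Components come out ordered by explained variance (importance)
ratios = pca.explained_variance_ratio_
print(X_pca.shape)  # (252, 6)
```

Checking `pca.explained_variance_ratio_` confirms the components are sorted from most to least informative, as the text describes.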

2.1.3 Independent component analysis

ICA is a blind source separation technique [44]. It is very useful for finding factors hidden behind random signals, measurements or features based on higher-order statistics. The purpose of ICA is to minimise the statistical dependence of the components of the representation. By doing so, the dependency among the extracted signals is eliminated. To achieve good performance, some assumptions should be met before using ICA [45]: (1) the source signals (features) should be statistically independent; (2) the mixture signals should be linearly independent of each other; (3) the data should be centred (a zero-mean operation for every signal); and (4) the source signals should have a non-Gaussian distribution. One widely used application of ICA is the cocktail party problem [46]. As Fig 3 illustrates, there are two people speaking, and each has a voice signal. These signals are received by the microphones, which then output mixture signals. Since the distances between the microphones and the people differ, the mixture signals from the microphones differ as well. Using ICA for signal extraction, the original signals can be recovered. Notably, it is difficult for FA and PCA to extract source signals (original components).
Fig 3

An example of the process of extracting signals from the cocktail party problem with two speaking people (source signals) and two microphones (mixture signals).
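The cocktail-party setup in Fig 3 can be sketched with scikit-learn's FastICA: two non-Gaussian source signals are linearly mixed (two "microphones"), and ICA recovers the sources from the mixtures. The signals and mixing matrix below are illustrative assumptions.

```python
# Hedged sketch of the cocktail-party example with FastICA.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(2)
t = np.linspace(0, 8, 2000)
s1 = np.sign(np.sin(3 * t))             # square wave (non-Gaussian source)
s2 = np.sin(2 * t)                      # sine wave source
S = np.c_[s1, s2]                       # two "speakers"

A = np.array([[1.0, 0.5], [0.7, 1.0]])  # mixing matrix (distances differ)
X = S @ A.T                             # two mixture signals ("microphones")

ica = FastICA(n_components=2, random_state=0)
S_est = ica.fit_transform(X)            # recovered sources, shape (2000, 2)
print(S_est.shape)
```

As the text notes, the recovered components are the independent sources (up to sign and scale), something PCA or FA would not separate.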

2.2 Prediction models

In this section, four widely used machine learning models—MLP, SVM, RF and XGBoost—are introduced.

2.2.1 MLP

The MLP is a type of ANN that generally has three different kinds of layers: the input, hidden and output layers [47]. Each layer is connected to its adjacent layers. Similarly, each neuron in the hidden and output layers is connected to all the neurons in the previous layer with a weight vector. The values from the weighted sum of the inputs and a bias term are fed into a non-linear activation function, whose outputs are passed to the next layer. Fig 4 shows an example of an MLP with three, two and one input, hidden and output neurons, respectively. We can see from the figure that the input layer has three input neurons (x1, x2, x3) and one bias term with a value of b1. Their values, based on the inner product with the weight matrix, are fed into the hidden layer. In this step, the input is first transformed using a learned non-linear transformation (an activation function g(⋅)) that projects the input data into a new space where it becomes linearly separable. The outputs of the two neurons in the hidden layer depend on the outputs of the input neurons and a bias neuron in the same layer with a value of b2. The output layer has one neuron that takes inputs from the hidden layer through the activation function, producing the feed-forward prediction value f(x) for an input vector x.
Fig 4

An example of MLP with three input neurons, two hidden neurons, and one output neuron.
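The Fig 4 architecture (three inputs, two hidden neurons, one output) can be sketched with scikit-learn's MLPRegressor; the layer sizes here mirror the figure, not the tuned settings in Table 1, and the data is synthetic.

```python
# Hedged sketch: a tiny MLPRegressor mirroring the 3-2-1 structure of Fig 4.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))                 # three input neurons
y = X @ np.array([0.5, -0.2, 0.1]) + 0.3      # illustrative target with a bias

mlp = MLPRegressor(hidden_layer_sizes=(2,),   # two hidden neurons
                   activation="relu", max_iter=2000, random_state=0)
mlp.fit(X, y)
pred = mlp.predict(X[:5])                     # feed-forward prediction f(x)
print(pred.shape)  # (5,)
```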

2.2.2 SVM

SVMs, founded on the structural risk minimisation principle and statistical learning theory [48], have been widely used in many real-world applications and have displayed satisfactory performance (e.g., see [49-51]). Given n training samples (x_i, y_i), i = 1, …, n, the standard form of ε-SVM regression can be expressed as Eq (2):

minimise   (1/2)·‖w‖² + C·Σ_{i=1..n} (ξ_i + ξ_i*)
subject to y_i − (wᵀ·ϕ(x_i) + b) ≤ ε + ξ_i,
           (wᵀ·ϕ(x_i) + b) − y_i ≤ ε + ξ_i*,
           ξ_i ≥ 0, ξ_i* ≥ 0,    (Eq 2)

We can see from Fig 5 that, unlike the SVM for classification problems, which assigns a sample to a binary class, SVM regression fits the best line within a threshold value ε, tolerating errors (ξ and ξ*). Here, w is a weight vector, wᵀ is the transpose of w, b is a bias term, ξ_i and ξ_i* are the slack variables of the ith sample, C is a penalty parameter, ε is the tolerance error, x_i and y_i are the ith input vector and output value, respectively, and ϕ(x) is a function that maps a sample from a low-dimensional space to a higher-dimensional space.
Fig 5

ε-SVM regression with the ε-insensitive hinge loss, meaning there is no penalty for errors within the ε margin.

After solving the objective function in Eq (2) using the Lagrangian function [52] and the Karush–Kuhn–Tucker conditions [53], we can obtain the best parameters (the Lagrange multipliers α_i and α_i*) for the SVM. The final prediction model, g(x), can be expressed as follows:

g(x) = Σ_{i=1..n} (α_i − α_i*)·Kernel(x_i, x) + b,

where Kernel(x_i, x_j) = ϕ(x_i)ᵀ·ϕ(x_j) is a kernel function [54].
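A minimal sketch of ε-SVR with an RBF kernel via scikit-learn: `C` is the penalty parameter and `epsilon` the tolerance from the formulation above; the data and parameter values are illustrative (the `gamma` value happens to echo the 1/σ² candidates in Table 1, but this is not the paper's tuned model).

```python
# Hedged sketch: epsilon-SVR with an RBF kernel on synthetic 1-D data.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X).ravel() + rng.normal(scale=0.1, size=200)

# C = penalty parameter, epsilon = tolerance, gamma plays the role of 1/sigma^2
svr = SVR(kernel="rbf", C=10.0, epsilon=0.1, gamma=0.001)
svr.fit(X, y)
print(svr.predict([[0.0]]).shape)  # (1,)
```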

2.2.3 RF

The RF, proposed by Ho [55], is a decision tree-based ensemble model. For body fat prediction, the RF regression model uses an ensemble learning method for regression. It creates many decision trees based on the training set [56]. By combining multiple decision trees into one model, the RF improves the prediction accuracy and stability. It is also able to avoid overfitting by utilising resampling and feature selection techniques. The training procedure of the RF is given in Fig 6. As the figure illustrates, the RF generates many sub-datasets of the same sample size from the given training samples based on a re-sampling strategy. Then, for each new training set, a decision tree is trained with the selected features based on recursive partitioning, where a decision tree search is applied to find the best split from the selected features. The final output is the average of the predictions from all the decision trees.
Fig 6

An example of the RF model.
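The averaging step in Fig 6 can be verified directly with scikit-learn's RandomForestRegressor: the ensemble prediction equals the mean over the bootstrap-trained trees. Data and hyperparameters below are illustrative assumptions.

```python
# Hedged sketch: RF regression as an average over bootstrap-trained trees.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 6))
y = X[:, 0] * 2 + X[:, 1] ** 2 + rng.normal(scale=0.1, size=300)

rf = RandomForestRegressor(n_estimators=100, max_depth=5,
                           bootstrap=True, random_state=0)
rf.fit(X, y)

# The ensemble prediction is the average of the individual trees' predictions
tree_preds = np.mean([t.predict(X[:3]) for t in rf.estimators_], axis=0)
print(np.allclose(tree_preds, rf.predict(X[:3])))  # True
```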

2.2.4 XGBoost

XGBoost is also an ensemble model [57]. It employs gradient boosting [58] to aggregate the results of multiple decision tree-based models into the final result. In addition, it uses shrinkage and feature sub-sampling to further reduce the impact of overfitting [59]. XGBoost supports parallelisation, distributed computing, out-of-core computing and cache optimisation, making it suitable for real-world applications with high demands on computation time and storage memory [60]. The training procedure of XGBoost is depicted in Fig 7. It can be seen from the figure that XGBoost is based on gradient boosting. More specifically, new models (decision trees) are built to predict the errors (residuals) of the prior models (from f1 to the current model). Once all the models are obtained, they are integrated together to make the final prediction.
Fig 7

An example of the XGBoost model.
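The residual-fitting idea behind Fig 7 can be sketched with scikit-learn's GradientBoostingRegressor, used here only as a stand-in for the xgboost library the paper employs; it illustrates the same boosting principle (trees fitted to residuals, with shrinkage and sub-sampling), not XGBoost's specific implementation.

```python
# Hedged sketch of gradient boosting: each new tree fits the residuals of the
# ensemble so far; learning_rate (shrinkage) and subsample curb overfitting.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(6)
X = rng.normal(size=(300, 6))
y = X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.1, size=300)

gbr = GradientBoostingRegressor(n_estimators=100, max_depth=3,
                                learning_rate=0.1, subsample=0.8,
                                random_state=0)
gbr.fit(X, y)
print(gbr.predict(X[:2]).shape)  # (2,)
```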

3 Experimental results and discussions

In this section, we present the results of the computational experiments conducted based on two body fat datasets (Cases 1 and 2) to validate the effectiveness of feature extraction methods for body fat prediction. Case 1 is based on anthropometric measurements, while Case 2 is based on physical examination and laboratory measurements. We compare four well-known machine learning algorithms (the MLP, SVM, RF and XGBoost) with and without the feature extraction methods. Specifically, MLP_FA, MLP_PCA and MLP_ICA denote the MLP based on FA, PCA and ICA; SVM_FA, SVM_PCA and SVM_ICA denote the SVM based on FA, PCA and ICA; RF_FA, RF_PCA and RF_ICA denote the RF based on FA, PCA and ICA; and XGBoost_FA, XGBoost_PCA and XGBoost_ICA denote XGBoost based on FA, PCA and ICA. The programming/development environment was based on Python using scikit-learn, and the experiments were executed on a computer with a 2.30 GHz i5-6300HQ CPU and 16.0 GB RAM.

3.1 Performance measures

In this study, we considered five performance measures. Specifically, the MAE and RMSE were used to evaluate the model's approximation ability, the SD was used to measure the variability of the errors between the predicted and target values, the MAC [61] was used to evaluate model robustness, and the computation time was used to measure efficiency. To better evaluate the performance, we randomly shuffled the data, ran the experiments of five-fold cross validation 20 times, and averaged the results. The computation time included the time for feature extraction and 20 runs of five-fold cross validation. Our objective was to minimise the MAE, SD, RMSE and computation time while maximising the MAC:

MAE = (1/n)·Σ_{i=1..n} |ŷ_i − y_i|
SD = sqrt( (1/n)·Σ_{i=1..n} (e_i − ē)² )
RMSE = sqrt( (1/n)·Σ_{i=1..n} (ŷ_i − y_i)² )
MAC = (ŷᵀ·y) / (‖ŷ‖·‖y‖)

where n is the number of samples, ŷ_i and y_i are the prediction and target values of the ith sample, respectively, e_i = |ŷ_i − y_i| is the ith sample's absolute error, ē is the average of the absolute errors, ŷᵀ·y is the inner product of the prediction vector ŷ and the target vector y, and ŷᵀ is the transpose of ŷ.
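A small sketch of the four accuracy/robustness metrics. MAE, SD and RMSE follow their standard definitions; MAC is written here as the normalised inner product of the prediction and target vectors, consistent with the inner-product form described above, though this exact normalisation is an assumption of the sketch.

```python
# Hedged sketch: MAE, SD (of absolute errors), RMSE, and an inner-product
# based MAC (cosine similarity of predictions and targets).
import numpy as np

def metrics(y_pred, y_true):
    e = np.abs(y_pred - y_true)                      # absolute errors
    mae = e.mean()
    sd = np.sqrt(((e - e.mean()) ** 2).mean())       # variability of errors
    rmse = np.sqrt(((y_pred - y_true) ** 2).mean())
    mac = (y_pred @ y_true) / (np.linalg.norm(y_pred) * np.linalg.norm(y_true))
    return mae, sd, rmse, mac

y_true = np.array([19.2, 10.1, 25.4, 28.8])          # illustrative targets
mae, sd, rmse, mac = metrics(y_true + 0.5, y_true)   # constant 0.5 error
print(round(mae, 3), round(sd, 3), round(rmse, 3))   # 0.5 0.0 0.5
```

With a constant error, MAE and RMSE coincide and the SD of the errors is zero, which is a quick sanity check of the definitions.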

3.2 Parameter settings

We used the grid search approach with cross validation for parameter selection [62]. The settings used in our experiments, obtained after some tuning process, are listed in Table 1.
Table 1

Parameter settings for the prediction models, where #neurons is the number of neurons, #iterations is the maximum number of iterations, regularisation is the regularisation parameter, σ2 is the variance within the RBF kernel, #trees is the number of trees, and depth is the maximum depth of the tree.

Model   | Grid search                      | Optimal parameters
MLP     | #neurons = [100, 500, 1000]      | #neurons = 500
        | #iterations = [100, 500, 1000]   | #iterations = 500
SVM     | regularisation = [10, 100, 1000] | regularisation = 10
        | 1/σ² = [0.001, 0.01, 0.1]        | 1/σ² = 0.001
RF      | #trees = [10, 100, 1000]         | #trees = 1000
        | depth = [3, 4, 5]                | depth = 5
XGBoost | #trees = [10, 100, 1000]         | #trees = 100
        | depth = [3, 4, 5]                | depth = 3
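The parameter selection in Table 1 can be sketched with scikit-learn's GridSearchCV; shown here for the SVM only, on synthetic data, with `gamma` standing in for 1/σ².

```python
# Hedged sketch: grid search with cross validation over Table 1's SVM
# candidate values (data is synthetic and illustrative).
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVR

rng = np.random.default_rng(7)
X = rng.normal(size=(120, 5))
y = X[:, 0] + rng.normal(scale=0.1, size=120)

grid = {"C": [10, 100, 1000],            # regularisation candidates
        "gamma": [0.001, 0.01, 0.1]}     # gamma = 1/sigma^2 candidates
search = GridSearchCV(SVR(kernel="rbf"), grid, cv=5)
search.fit(X, y)
print(sorted(search.best_params_))  # ['C', 'gamma']
```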
A flowchart of different feature extraction methods used for body fat prediction based on K-fold cross validation with N repeated experiments is given in Fig 8 to further clarify the procedure of our experiments. In the figure, K = 5 and N = 20; i.e., the experiments were repeated 20 times and each experiment was conducted based on 5-fold cross validation.
Fig 8

A flowchart of different feature extraction methods used for body fat prediction based on K-fold cross validation with N repeated experiments.
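The N × K-fold protocol of Fig 8 can be sketched with scikit-learn's RepeatedKFold; one reasonable realisation (an assumption, not necessarily the paper's exact pipeline) fits the feature extraction inside a Pipeline so each fold's extraction is learned on training data only.

```python
# Hedged sketch: 20 repetitions of shuffled 5-fold cross validation, with PCA
# feature extraction and an SVM chained in a pipeline. Data is synthetic.
import numpy as np
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.decomposition import PCA
from sklearn.svm import SVR

rng = np.random.default_rng(8)
X = rng.normal(size=(150, 13))
y = X[:, 0] + rng.normal(scale=0.1, size=150)

model = make_pipeline(PCA(n_components=6), SVR(kernel="rbf"))
cv = RepeatedKFold(n_splits=5, n_repeats=20, random_state=0)   # K=5, N=20
scores = cross_val_score(model, X, y, cv=cv,
                         scoring="neg_mean_absolute_error")
print(len(scores))  # 100 = 5 folds x 20 repeats
```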

3.3 Case 1: Body fat percentage prediction based on anthropometric measurements

3.3.1 Data description

The body fat dataset used in Case 1 contained 252 samples with 13 input features and one output feature. It was downloaded from the StatLib (see http://lib.stat.cmu.edu/datasets/bodyfat). The statistical descriptions of this dataset are provided in Table 2. The input features included age, weight and various body circumference measurements, and the output feature was the body fat percentage.
Table 2

Statistical properties of Case 1’s body fat dataset.

Variable                        | Unit   | Symbol   | Minimum | Maximum | Mean     | Standard deviation
Age                             | years  | Age      | 22      | 81      | 44.8849  | 12.6020
Weight                          | lbs    | Weight   | 118.5   | 363.15  | 178.9244 | 29.3892
Height                          | inches | Height   | 29.5    | 77.75   | 70.1488  | 3.6629
Neck circumference              | cm     | Neck     | 31.1    | 51.2    | 37.9921  | 2.4309
Chest circumference             | cm     | Chest    | 79.3    | 136.2   | 100.8242 | 8.4305
Abdomen 2 circumference         | cm     | Abdomen  | 69.4    | 148.1   | 92.5560  | 10.7831
Hip circumference               | cm     | Hip      | 85      | 147.7   | 99.9048  | 7.1641
Thigh circumference             | cm     | Thigh    | 47.2    | 87.3    | 59.4060  | 5.2500
Knee circumference              | cm     | Knee     | 33      | 49.1    | 38.5905  | 2.4118
Ankle circumference             | cm     | Ankle    | 19.1    | 33.9    | 23.1024  | 1.6949
Biceps (extended) circumference | cm     | Biceps   | 24.8    | 45      | 32.2734  | 3.0213
Forearm circumference           | cm     | Forearm  | 21      | 34.9    | 28.6639  | 2.0207
Wrist circumference             | cm     | Wrist    | 15.8    | 21.4    | 18.2298  | 0.9336
Body fat percentage             | %      | Bodyfat% | 0       | 47.5    | 19.1508  | 8.3687

3.3.2 Determination of the number of extracted features

To determine the number of extracted features, we calculated the explained variance for each feature by using scikit-learn [63]. We only selected the principal components with the largest eigenvalues based on a given threshold (i.e. how much information they contain). The four steps to determine the number of extracted features were as follows: (1) constructing the covariance matrix; (2) decomposing the covariance matrix into its eigenvectors and eigenvalues; (3) sorting the eigenvalues in decreasing order to rank the corresponding eigenvectors; and (4) selecting the k largest eigenvalues such that their cumulative explained variance reached the given threshold. The explained variance ratio for the StatLib dataset is given in Fig 9. Here, the threshold was set to 0.99, which means 99% of the information remained. In this case, six features were extracted from the 13 input features.
Fig 9

Explained variance ratio for the StatLib dataset.
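The four-step procedure above can be sketched with scikit-learn, which already returns the explained variance ratios sorted in decreasing order; the synthetic data (13 correlated features driven by a few underlying directions) is an illustrative assumption, not the StatLib dataset.

```python
# Hedged sketch: pick the smallest k whose cumulative explained variance
# reaches the 0.99 threshold.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
# 13 correlated features built from 4 underlying directions plus small noise
base = rng.normal(size=(252, 4))
X = base @ rng.normal(size=(4, 13)) + 0.01 * rng.normal(size=(252, 13))

pca = PCA().fit(X)                                   # steps (1)-(3)
cumvar = np.cumsum(pca.explained_variance_ratio_)    # sorted, cumulative
k = int(np.searchsorted(cumvar, 0.99) + 1)           # step (4): threshold 0.99
print(k, bool(cumvar[k - 1] >= 0.99))
```

Because the synthetic data has only a few underlying directions, k comes out far below 13, mirroring how 6 components sufficed for the 13 StatLib features.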

3.3.3 Experiments and results

Table 3 presents the results obtained by the MLP, SVM, RF and XGBoost for body fat prediction with and without feature extraction. As shown in the table, the SVM, RF and XGBoost perform better than the MLP. The performance of the SVM and XGBoost is similar, whereas that of the RF is the best in terms of accuracy. However, it is clear that, by incorporating feature extraction, the learning models can achieve higher prediction accuracy, stability and robustness in most cases. The XGBoost model with FA feature extraction generated the most precise and stable results, albeit with a longer computation time than the standalone XGBoost. Using a feature extraction method increases the computation time because the feature extraction pre-processing itself takes time, even though it is more efficient to train the prediction model with fewer input features. Among all the prediction models, XGBoost with FA shows the best prediction accuracy (MAE = 3.433, SD = 4.188 and RMSE = 4.248), and the SVM with PCA obtained results in the shortest computation time (close to that of the standalone SVM).
Table 3

Experimental results based on the StatLib dataset (best results are highlighted in bold).

Algorithm    | MAE   | SD    | RMSE  | MAC   | Computation time (s)
MLP          | 6.872 | 8.177 | 8.336 | 0.845 | 150
MLP_FA       | 3.764 | 4.547 | 4.606 | 0.952 | 884
MLP_PCA      | 3.716 | 4.500 | 4.564 | 0.953 | 378
MLP_ICA      | 6.372 | 7.644 | 7.746 | 0.862 | 442
SVM          | 3.941 | 4.764 | 4.824 | 0.947 | 9
SVM_FA       | 3.740 | 4.529 | 4.603 | 0.952 | 67
SVM_PCA      | 3.796 | 4.660 | 4.724 | 0.949 | 10
SVM_ICA      | 3.678 | 4.449 | 4.511 | 0.954 | 20
RF           | 3.837 | 4.612 | 4.676 | 0.951 | 411
RF_FA        | 3.891 | 4.734 | 4.788 | 0.947 | 437
RF_PCA       | 3.866 | 4.670 | 4.739 | 0.948 | 374
RF_ICA       | 4.007 | 4.814 | 4.891 | 0.945 | 368
XGBoost      | 3.945 | 4.758 | 4.829 | 0.947 | 84
XGBoost_FA   | 3.433 | 4.188 | 4.248 | 0.949 | 116
XGBoost_PCA  | 3.538 | 4.289 | 4.337 | 0.947 | 56
XGBoost_ICA  | 3.558 | 4.296 | 4.362 | 0.946 | 74

3.3.4 Statistical analysis based on the Wilcoxon rank-sum test

Although the results of MLP, SVM and XGBoost presented thus far have shown that the use of feature extraction can improve their performance, statistical analysis is needed to validate whether the differences between the results obtained are statistically significant. In this section, we report the results of statistical tests conducted based on the Wilcoxon rank-sum test [64]. Table 4 shows the statistical test results based on the 20-run experimental results. As shown in the table, the MLP, SVM and XGBoost and their versions with the feature extraction methods incorporated are significantly different (the p-value is less than 0.05). However, the difference between the RF and RF_PCA is not significant. This means the use of feature extraction is effective in improving the performance of MLP, SVM and XGBoost.
Table 4

Wilcoxon rank-sum tests for the MLP, SVM, RF, XGBoost, and the use of feature extraction, based on the StatLib dataset in terms of RMSE (p-values less than 0.05 are highlighted in bold).

Comparison                 | p-value
MLP vs MLP_FA              | 6.302×10−8
MLP vs MLP_PCA             | 6.302×10−8
MLP vs MLP_ICA             | 6.302×10−8
SVM vs SVM_FA              | 3.180×10−7
SVM vs SVM_PCA             | 2.756×10−5
SVM vs SVM_ICA             | 6.302×10−8
RF vs RF_FA                | 0.030
RF vs RF_PCA               | 0.256
RF vs RF_ICA               | 6.302×10−8
XGBoost vs XGBoost_FA      | 6.302×10−8
XGBoost vs XGBoost_PCA     | 6.302×10−8
XGBoost vs XGBoost_ICA     | 1.329×10−7
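The significance test used above is available as `scipy.stats.ranksums`; the sketch below compares two sets of 20-run RMSE results, with synthetic numbers standing in for the actual experimental results.

```python
# Hedged sketch: Wilcoxon rank-sum test on two sets of 20 RMSE values.
# The values are synthetic, chosen only to illustrate the test's usage.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(10)
rmse_base = rng.normal(loc=4.8, scale=0.1, size=20)   # e.g. a standalone model
rmse_fa = rng.normal(loc=4.2, scale=0.1, size=20)     # e.g. the same model + FA

stat, p = ranksums(rmse_base, rmse_fa)
print(p < 0.05)  # True: the difference is significant at the 5% level
```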

3.3.5 Prediction performance with more extracted features

To investigate the impact of having a different number of anthropometric features on the prediction performance, we increased the number of extracted features from 6 (as calculated in Section 3.3.2) to 13 (the total number of input features) in this series of experiments. Tables 5–7 show the results obtained by the MLP, SVM, RF and XGBoost using FA, PCA and ICA, respectively. As shown in Tables 5–7, in most cases, the accuracy (MAE and RMSE) and stability (SD and MAC) were not necessarily enhanced by extracting more features as the inputs of the learning models. Among the models being compared, XGBoost_FA performs the best for predicting the body fat percentage in terms of MAE, RMSE, SD and MAC, which means it is able to predict the body fat percentage with the highest accuracy and stability on the StatLib dataset.
Table 5

Experimental results for the MLP, SVM, RF, and XGBoost, based on the StatLib dataset, with FA feature extraction (best results are highlighted in bold; # means the number of features).

#  | MLP: MAE / SD / RMSE / MAC    | SVM: MAE / SD / RMSE / MAC    | RF: MAE / SD / RMSE / MAC     | XGBoost: MAE / SD / RMSE / MAC
6  | 3.764 / 4.547 / 4.606 / 0.952 | 3.740 / 4.529 / 4.603 / 0.952 | 3.891 / 4.734 / 4.788 / 0.947 | 3.433 / 4.188 / 4.248 / 0.949
7  | 3.719 / 4.474 / 4.539 / 0.953 | 3.728 / 4.483 / 4.542 / 0.953 | 3.939 / 4.783 / 4.833 / 0.946 | 3.511 / 4.252 / 4.296 / 0.948
8  | 3.718 / 4.449 / 4.504 / 0.954 | 3.665 / 4.424 / 4.501 / 0.954 | 3.905 / 4.725 / 4.777 / 0.947 | 3.463 / 4.179 / 4.231 / 0.949
9  | 3.674 / 4.396 / 4.447 / 0.955 | 3.627 / 4.403 / 4.466 / 0.955 | 3.946 / 4.769 / 4.832 / 0.946 | 3.463 / 4.163 / 4.218 / 0.950
10 | 3.672 / 4.381 / 4.433 / 0.955 | 3.542 / 4.344 / 4.408 / 0.956 | 3.915 / 4.748 / 4.791 / 0.947 | 3.460 / 4.160 / 4.217 / 0.950
11 | 3.653 / 4.356 / 4.436 / 0.956 | 3.556 / 4.399 / 4.458 / 0.955 | 3.926 / 4.772 / 4.827 / 0.946 | 3.445 / 4.143 / 4.210 / 0.951
12 | 3.634 / 4.347 / 4.414 / 0.956 | 3.517 / 4.356 / 4.428 / 0.956 | 3.979 / 4.803 / 4.890 / 0.945 | 3.464 / 4.196 / 4.247 / 0.949
13 | 3.671 / 4.404 / 4.471 / 0.955 | 3.647 / 4.419 / 4.484 / 0.955 | 3.934 / 4.771 / 4.819 / 0.946 | 3.462 / 4.152 / 4.202 / 0.950
Table 7

Experimental results for the MLP, SVM, RF, and XGBoost, based on the StatLib dataset, with ICA feature extraction (best results are highlighted in bold; # means the number of features).

#  | MLP: MAE / SD / RMSE / MAC    | SVM: MAE / SD / RMSE / MAC    | RF: MAE / SD / RMSE / MAC     | XGBoost: MAE / SD / RMSE / MAC
6  | 6.372 / 7.644 / 7.746 / 0.862 | 3.678 / 4.449 / 4.511 / 0.954 | 4.007 / 4.814 / 4.891 / 0.945 | 3.558 / 4.296 / 4.362 / 0.946
7  | 6.349 / 7.533 / 7.687 / 0.868 | 3.730 / 4.536 / 4.592 / 0.952 | 4.168 / 5.032 / 5.105 / 0.940 | 3.588 / 4.366 / 4.430 / 0.944
8  | 6.305 / 7.507 / 7.630 / 0.868 | 3.791 / 4.648 / 4.708 / 0.949 | 4.335 / 5.220 / 5.286 / 0.935 | 3.676 / 4.492 / 4.546 / 0.941
9  | 6.271 / 7.515 / 7.596 / 0.865 | 3.792 / 4.621 / 4.689 / 0.950 | 4.399 / 5.293 / 5.345 / 0.933 | 3.693 / 4.503 / 4.558 / 0.941
10 | 6.286 / 7.493 / 7.624 / 0.866 | 3.799 / 4.606 / 4.670 / 0.950 | 4.410 / 5.325 / 5.393 / 0.932 | 3.701 / 4.492 / 4.574 / 0.941
11 | 6.275 / 7.500 / 7.611 / 0.866 | 3.730 / 4.508 / 4.573 / 0.953 | 4.418 / 5.325 / 5.389 / 0.933 | 3.626 / 4.408 / 4.463 / 0.944
12 | 6.205 / 7.448 / 7.541 / 0.867 | 3.754 / 4.556 / 4.618 / 0.952 | 4.586 / 5.491 / 5.575 / 0.929 | 3.657 / 4.499 / 4.556 / 0.942
13 | 6.234 / 7.407 / 7.543 / 0.871 | 3.714 / 4.496 / 4.563 / 0.953 | 4.563 / 5.496 / 5.548 / 0.928 | 3.612 / 4.438 / 4.494 / 0.942
It is critical to reduce the number of dimensions when the data size or the number of dimensions is large (big data scenarios). In addition, the prediction models with PCA outperform the corresponding versions with ICA in terms of all the metrics used. This might be because the body fat data are approximately Gaussian distributed: PCA can handle Gaussian-distributed data, whereas ICA assumes non-Gaussian sources. Fig 10 depicts the comparative experimental results of the computation time for the MLP, SVM, RF and XGBoost using FA, PCA and ICA. The results show that XGBoost with FA is the fastest among the compared methods. Fig 10 also reveals that, in some cases, the computation time increases with more features, which further highlights the importance of feature extraction in improving efficiency. The computation time includes the time for feature extraction and 20 runs of five-fold cross validation, which means that when a different number of features is extracted, the time for feature extraction may also differ.
Fig 10

Comparison results in terms of computation time based on FA, PCA and ICA feature extraction for the StatLib dataset.

3.4 Case 2: Body fat percentage prediction based on physical examination and laboratory measurements

3.4.1 Data description

The body fat dataset used in Case 2 was downloaded from the NHANES (see https://www.cdc.gov/nchs/nhanes/index.htm). The data were pre-processed as in [65] by (1) combining the DEMO, LAB11, LAB18, LAB25, BMX and BIX files into one dataset; (2) keeping only data on male adults (age > 18); and (3) removing samples with missing values. After pre-processing, 862 samples with 39 features were obtained. These features and their statistical descriptions are provided in Table 8.
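The three pre-processing steps can be sketched with pandas. The tiny frames below are illustrative stand-ins for the NHANES DEMO/LAB files; the column names (SEQN, RIAGENDR, RIDAGEYR) follow NHANES conventions but are assumptions of this sketch, not taken from the paper.

```python
# Hedged sketch: merge NHANES-style files on the respondent ID, keep adult
# males, and drop rows with missing values.
import pandas as pd

demo = pd.DataFrame({"SEQN": [1, 2, 3, 4],
                     "RIAGENDR": [1, 1, 2, 1],   # 1 = male, 2 = female
                     "RIDAGEYR": [25, 17, 40, 52]})
lab = pd.DataFrame({"SEQN": [1, 2, 3, 4],
                    "HGB": [150.0, 140.0, 135.0, None]})

merged = demo.merge(lab, on="SEQN")                               # step (1)
adults = merged[(merged.RIAGENDR == 1) & (merged.RIDAGEYR > 18)]  # step (2)
clean = adults.dropna()                                           # step (3)
print(len(clean))  # 1: only SEQN 1 passes all three filters
```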
Table 8

Statistical properties of Case 2’s body fat dataset.

More details can be found at https://www.cdc.gov/nchs/nhanes/index.htm.

Variable | Unit | Symbol | Minimum | Maximum | Mean | Standard deviation
Segmented neutrophils | number | ANC | 1.2 | 9.9 | 4.011 | 1.5443
Basophils | number | ABC | 0 | 0.2 | 0.0311 | 0.0468
Lymphocyte | number | ALC | 0.3 | 5.3 | 2.0561 | 0.6086
Monocyte | number | AMC | 0.1 | 1.6 | 0.5812 | 0.1828
Eosinophils | number | AEC | 0 | 2.1 | 0.206 | 0.1736
Red cell count | SI | RBC | 3.43 | 6.78 | 5.1373 | 0.389
Hemoglobin | (g/dL)*10 | HGB | 113 | 183 | 155.3677 | 9.953
Hematocrit | % / 100 | HCT | 0.355 | 0.547 | 0.461 | 0.028
Mean cell volume | fL | MCV | 65.1 | 108.6 | 89.9342 | 4.4912
Mean cell hemoglobin | pg | MCH | 20.9 | 37.4 | 30.3227 | 1.7578
Mean cell hemoglobin concentration | (g/dL)*10 | MCHC | 310 | 360 | 337.0534 | 7.49
Red cell distribution width | % | RDW | 11 | 18.8 | 12.4017 | 0.7036
Platelet count | SI | PLT | 11 | 491 | 251.3631 | 55.2816
Mean platelet volume | fL | MPV | 6.1 | 11.8 | 8.3609 | 0.8788
Sodium | mmol/L | SNA | 129.9 | 146.4 | 139.7056 | 2.3272
Potassium | mmol/L | SK | 3.11 | 5.36 | 4.1586 | 0.3065
Chloride | mmol/L | SCL | 92.4 | 112.3 | 102.0905 | 2.8116
Calcium, total | mmol/L | SCA | 2.125 | 2.7 | 2.3791 | 0.0912
Phosphorus | mmol/L | SP | 0.549 | 2.357 | 1.1111 | 0.164
Bilirubin, total | umol/L | STB | 3.4 | 63.3 | 11.6911 | 5.9454
Bicarbonate | mmol/L | BIC | 17 | 32 | 24.1717 | 2.2766
Glucose | mmol/L | GLU | 3.22 | 31.141 | 5.1061 | 1.5484
Iron | umol/L | IRN | 3.94 | 46.39 | 17.9708 | 6.5508
LDH | U/L | LDH | 45 | 578 | 151.9362 | 34.5774
Protein, total | g/L | STP | 64 | 96 | 77.0151 | 4.3629
Uric acid | umol/L | SUA | 172.5 | 642.4 | 354.5239 | 71.593
Albumin | g/L | SAL | 34 | 57 | 46.8329 | 2.8720
Triglycerides | mmol/L | TRI | 0.282 | 11.595 | 1.5610 | 1.2548
Blood urea nitrogen | mmol/L | BUN | 1.4 | 15 | 4.9233 | 1.2775
Creatinine | umol/L | SCR | 35.4 | 901.7 | 72.4034 | 31.6012
Cholesterol, total | mmol/L | STC | 1.68 | 9.72 | 4.9024 | 1.0921
AST | U/L | AST | 9 | 827 | 29.0325 | 37.1348
ALT | U/L | ALT | 7 | 1163 | 34.7738 | 47.1822
GGT | U/L | GGT | 7 | 698 | 37.4849 | 47.624
Alkaline phosphatase | U/L | ALP | 30 | 271 | 84.2541 | 25.5154
Weight | kg | WT | 42.7 | 138.1 | 81.9086 | 17.2013
Standing height | cm | HT | 152.3 | 201.3 | 174.3531 | 7.8856
Waist circumference | cm | WC | 62.4 | 147.7 | 93.3414 | 13.6034
Estimated percent body fat | % | BFP | 4 | 61.8 | 24.1874 | 7.5771

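The three pre-processing steps above can be sketched roughly as follows. The SEQN merge key and the NHANES gender/age column names (RIAGENDR, RIDAGEYR) are assumptions based on standard NHANES file conventions rather than details given in the paper, and tiny stand-in frames replace the real DEMO/LAB/BMX/BIX files.

```python
# Sketch of the described pre-processing: merge, filter male adults, drop NaNs.
from functools import reduce
import pandas as pd

def preprocess(tables):
    """Merge NHANES component tables on SEQN, keep male adults, drop missing values."""
    data = reduce(lambda a, b: a.merge(b, on="SEQN", how="inner"), tables)
    data = data[(data["RIAGENDR"] == 1) & (data["RIDAGEYR"] > 18)]  # 1 codes male
    return data.dropna()

# tiny illustrative stand-ins for the real component files
demo = pd.DataFrame({"SEQN": [1, 2, 3, 4],
                     "RIAGENDR": [1, 2, 1, 1],
                     "RIDAGEYR": [35, 40, 17, 52]})
lab = pd.DataFrame({"SEQN": [1, 2, 3, 4],
                    "HGB": [15.1, 13.2, None, 14.8]})

clean = preprocess([demo, lab])
print(clean)
```

Only respondents 1 and 4 survive here: respondent 2 is female, and respondent 3 is under 18 (and has a missing laboratory value).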

3.4.2 Determination of the number of extracted features

We ran the same experiment as in Section 3.3.2 to determine the number of extracted features. The explained variance ratio for the NHANES dataset is given in Fig 11. With the threshold set to 0.99, 12 features were extracted from the 38 input features.
Fig 11

Explained variance ratio for the NHANES dataset.
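The threshold rule used above can be sketched with plain NumPy: keep the smallest number of principal components whose cumulative explained variance ratio reaches 0.99. The synthetic low-rank data below is purely illustrative, not the NHANES values.

```python
# Sketch: choose the number of PCA components via the explained variance ratio.
import numpy as np

def n_components_for(X, threshold=0.99):
    Xc = X - X.mean(axis=0)
    # eigenvalues of the covariance matrix = variance along each principal axis
    eigvals = np.linalg.eigvalsh(np.cov(Xc, rowvar=False))[::-1]  # descending
    ratio = np.cumsum(eigvals) / eigvals.sum()
    return int(np.searchsorted(ratio, threshold) + 1)

rng = np.random.default_rng(0)
# low-rank stand-in data: 38 observed features driven by 12 latent factors
latent = rng.normal(size=(862, 12))
X = latent @ rng.normal(size=(12, 38)) + 0.01 * rng.normal(size=(862, 38))

k = n_components_for(X)
print(k)  # recovers roughly the number of underlying factors
```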

3.4.3 Experiment results

Table 9 presents results obtained through the MLP, SVM, RF and XGBoost for body fat prediction with and without feature extraction. These results are consistent with those shown in Table 3, and show that ensemble models such as XGBoost perform better than the MLP and SVM. Similarly, the results show that incorporating feature extraction into the prediction models improves body fat prediction accuracy. The XGBoost model with PCA feature extraction produced the most accurate and stable results, with a shorter computation time than standalone XGBoost.
Table 9

Experimental results based on the NHANES dataset (best results are highlighted in bold).

Algorithm | MAE | SD | RMSE | MAC | Computation time (s)
MLP | 5.088 | 6.394 | 6.434 | 0.936 | 689
MLP_FA | 4.727 | 6.015 | 6.038 | 0.943 | 3360
MLP_PCA | 4.160 | 5.230 | 5.250 | 0.948 | 2684
MLP_ICA | 4.919 | 6.265 | 6.288 | 0.939 | 1206
SVM | 6.022 | 7.542 | 7.572 | 0.911 | 70
SVM_FA | 6.210 | 8.030 | 8.060 | 0.887 | 563
SVM_PCA | 4.837 | 6.058 | 6.081 | 0.929 | 63
SVM_ICA | 4.705 | 6.203 | 6.225 | 0.939 | 28
RF | 4.554 | 5.730 | 5.746 | 0.949 | 1479
RF_FA | 4.822 | 6.044 | 6.064 | 0.943 | 1068
RF_PCA | 4.696 | 5.856 | 5.877 | 0.946 | 542
RF_ICA | 4.706 | 5.889 | 5.905 | 0.946 | 549
XGBoost | 4.592 | 5.780 | 5.802 | 0.948 | 584
XGBoost_FA | 4.169 | 5.255 | 5.276 | 0.946 | 656
XGBoost_PCA | 4.021 | 5.07 | 5.089 | 0.950 | 178
XGBoost_ICA | 4.039 | 5.081 | 5.096 | 0.950 | 183
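The evaluation protocol behind Table 9, the same model with and without a feature extraction step, scored by cross-validated MAE, can be sketched as below. scikit-learn's RandomForestRegressor and synthetic regression data are stand-ins for illustration; the paper's XGBoost model would slot into the same pipeline.

```python
# Sketch: compare a model with and without a PCA pre-processing step.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=300, n_features=38, noise=5.0, random_state=0)

plain = RandomForestRegressor(n_estimators=50, random_state=0)
with_pca = make_pipeline(PCA(n_components=12),
                         RandomForestRegressor(n_estimators=50, random_state=0))

for name, model in [("RF", plain), ("RF_PCA", with_pca)]:
    # negate because scikit-learn reports errors as negative scores
    scores = -cross_val_score(model, X, y, cv=5,
                              scoring="neg_mean_absolute_error")
    print(f"{name}: MAE = {scores.mean():.3f} +/- {scores.std():.3f}")
```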

3.4.4 Statistical analysis based on the Wilcoxon rank-sum test

Table 10 presents statistical tests comparing the experimental results with and without feature extraction pre-processing. As shown in the table, the MLP, SVM, RF and XGBoost differ significantly from their versions that use feature extraction (all p-values are less than 0.05). This means the use of feature extraction is effective in improving the performance of the MLP, SVM and XGBoost, but not that of RF (the performance of RF_FA, RF_PCA and RF_ICA is worse than that of RF in Table 9).
Table 10

Wilcoxon rank-sum tests for the MLP, SVM, RF, XGBoost, and the use of feature extraction, based on the NHANES dataset in terms of RMSE (p-values less than 0.05 are highlighted in bold).

Comparison | p-value
MLP vs MLP_FA | 6.302×10−8
MLP vs MLP_PCA | 6.302×10−8
MLP vs MLP_ICA | 4.229×10−7
SVM vs SVM_FA | 6.302×10−8
SVM vs SVM_PCA | 6.302×10−8
SVM vs SVM_ICA | 6.302×10−8
RF vs RF_FA | 6.302×10−8
RF vs RF_PCA | 1.473×10−6
RF vs RF_ICA | 5.509×10−6
XGBoost vs XGBoost_FA | 6.302×10−8
XGBoost vs XGBoost_PCA | 6.302×10−8
XGBoost vs XGBoost_ICA | 6.302×10−8
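A minimal sketch of this significance test, assuming SciPy's `ranksums` implementation and made-up per-run RMSE values for a model with and without feature extraction:

```python
# Sketch: two-sided Wilcoxon rank-sum test on per-run RMSE values.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
rmse_plain = rng.normal(loc=5.80, scale=0.10, size=20)  # e.g. 20 runs without extraction
rmse_pca = rng.normal(loc=5.09, scale=0.10, size=20)    # e.g. 20 runs with PCA

stat, p = ranksums(rmse_plain, rmse_pca)
print(f"p = {p:.3e}")
if p < 0.05:
    print("difference is statistically significant at the 0.05 level")
```

With 20 runs per side and no overlap between the two samples, the rank-sum test bottoms out at its smallest attainable two-sided p-value, which is why the same p-value recurs across many rows of Table 10.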

3.4.5 Prediction performance with more extracted features

To evaluate the prediction performance as the number of extracted features increases, we conducted experiments in which the number of features used ranged from 12 (as determined in Section 3.4.2) to 38 (the total number of input features). Tables 11–13 show the results obtained from the MLP, SVM, RF and XGBoost using FA, PCA and ICA for feature extraction, respectively. The results with 38 features are used as the baseline.

From the tables, we can observe that with more features extracted, the prediction models can be further improved. Table 11 shows that XGBoost based on FA feature extraction has the best prediction accuracy (3.713, 4.707 and 4.728 in terms of MAE, SD and RMSE) using 38 features; however, it already performs satisfactorily using 24 features (3.772, 4.783, 4.803), which is more feasible in real applications. As shown in Table 12, the MLP has the best performance using 35 features, improving from 4.160, 5.230, 5.250 and 0.948 to 3.621, 4.618, 4.647 and 0.960 in terms of MAE, SD, RMSE and MAC. As Table 13 shows, XGBoost outperforms the other models across different numbers of features; its best result is 3.805, 4.818, 4.840 and 0.955 in terms of MAE, SD, RMSE and MAC, based on 24 extracted features.

Analysing the results in Tables 11–13 reveals that the MLP, SVM, RF and XGBoost with feature extraction performed similarly to or better than their corresponding baselines in terms of all metrics with only half the features (19 features). This shows the potential of greatly improving efficiency in real-world applications. In addition, the analysis reveals that PCA is more suitable than ICA for extracting features from this body fat dataset. The reason could be that the dataset has a Gaussian distribution, and PCA is better suited to Gaussian-distributed data whereas ICA is better suited to non-Gaussian-distributed data.
Table 11

Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with FA feature extraction (best results are highlighted in bold; # means the number of features).

# | MLP: MAE SD RMSE MAC | SVM: MAE SD RMSE MAC | RF: MAE SD RMSE MAC | XGBoost: MAE SD RMSE MAC
12 | 4.727 6.015 6.038 0.943 | 6.210 8.030 8.060 0.887 | 4.822 6.044 6.064 0.943 | 4.169 5.255 5.276 0.946
13 | 4.590 5.810 5.827 0.947 | 5.525 7.057 7.086 0.909 | 4.711 5.884 5.907 0.946 | 4.021 5.062 5.074 0.950
14 | 4.599 5.810 5.832 0.947 | 5.248 6.619 6.638 0.917 | 4.717 5.884 5.901 0.946 | 4.034 5.073 5.092 0.950
15 | 4.585 5.796 5.814 0.948 | 5.043 6.336 6.358 0.923 | 4.739 5.955 5.981 0.944 | 4.031 5.078 5.103 0.950
16 | 4.545 5.730 5.748 0.949 | 4.697 5.862 5.882 0.934 | 4.711 5.905 5.924 0.945 | 4.021 5.070 5.088 0.950
17 | 4.539 5.745 5.762 0.948 | 4.542 5.662 5.679 0.938 | 4.748 5.937 5.964 0.945 | 4.017 5.045 5.069 0.950
18 | 4.304 5.436 5.454 0.954 | 4.198 5.270 5.292 0.947 | 4.573 5.721 5.742 0.948 | 3.855 4.860 4.879 0.954
19 | 4.262 5.396 5.410 0.954 | 4.130 5.194 5.210 0.948 | 4.556 5.701 5.723 0.949 | 3.837 4.844 4.859 0.955
20 | 4.250 5.401 5.419 0.954 | 4.076 5.107 5.127 0.950 | 4.558 5.722 5.747 0.948 | 3.802 4.802 4.820 0.955
21 | 4.189 5.345 5.360 0.955 | 4.039 5.106 5.128 0.950 | 4.592 5.736 5.749 0.948 | 3.809 4.824 4.840 0.955
22 | 4.150 5.279 5.295 0.956 | 3.955 5.011 5.027 0.951 | 4.597 5.739 5.763 0.948 | 3.784 4.788 4.809 0.955
23 | 4.161 5.299 5.319 0.956 | 3.924 4.982 5.009 0.952 | 4.600 5.737 5.756 0.948 | 3.792 4.767 4.785 0.956
24 | 4.155 5.297 5.315 0.956 | 3.923 4.999 5.014 0.952 | 4.606 5.760 5.781 0.948 | 3.772 4.783 4.803 0.956
25 | 4.148 5.294 5.311 0.956 | 3.897 4.954 4.980 0.952 | 4.601 5.762 5.781 0.948 | 3.809 4.790 4.807 0.955
26 | 4.145 5.291 5.307 0.956 | 3.897 4.973 4.991 0.952 | 4.608 5.753 5.772 0.948 | 3.800 4.782 4.802 0.955
27 | 4.136 5.274 5.291 0.956 | 3.890 4.951 4.975 0.952 | 4.625 5.768 5.796 0.948 | 3.798 4.787 4.810 0.955
28 | 4.141 5.291 5.302 0.956 | 3.897 4.963 4.980 0.952 | 4.629 5.775 5.790 0.948 | 3.787 4.782 4.798 0.955
29 | 4.136 5.263 5.287 0.956 | 3.898 4.956 4.973 0.952 | 4.615 5.753 5.772 0.948 | 3.832 4.822 4.841 0.955
30 | 4.118 5.259 5.276 0.957 | 3.913 4.994 5.015 0.952 | 4.614 5.764 5.781 0.948 | 3.838 4.824 4.843 0.955
31 | 4.101 5.233 5.254 0.957 | 3.890 4.963 4.982 0.952 | 4.605 5.761 5.779 0.948 | 3.814 4.817 4.834 0.955
32 | 4.098 5.225 5.240 0.957 | 3.902 4.978 5.002 0.952 | 4.607 5.755 5.774 0.948 | 3.799 4.791 4.809 0.955
33 | 4.112 5.243 5.260 0.957 | 3.884 4.968 4.982 0.952 | 4.633 5.793 5.821 0.947 | 3.797 4.803 4.822 0.955
34 | 4.092 5.226 5.246 0.957 | 3.901 4.985 5.000 0.952 | 4.626 5.780 5.804 0.947 | 3.793 4.809 4.825 0.955
35 | 4.109 5.233 5.252 0.957 | 3.898 4.984 5.002 0.952 | 4.631 5.792 5.811 0.947 | 3.801 4.801 4.821 0.955
36 | 4.105 5.231 5.251 0.957 | 3.887 4.966 4.981 0.952 | 4.629 5.776 5.798 0.948 | 3.813 4.809 4.831 0.955
37 | 4.110 5.236 5.255 0.957 | 3.880 4.953 4.970 0.953 | 4.632 5.782 5.801 0.947 | 3.777 4.774 4.791 0.955
38 | 4.120 5.233 5.256 0.957 | 3.909 4.965 4.986 0.952 | 4.526 5.649 5.665 0.950 | 3.713 4.707 4.728 0.957
Table 13

Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with ICA feature extraction (best results are highlighted in bold; # means the number of features).

# | MLP: MAE SD RMSE MAC | SVM: MAE SD RMSE MAC | RF: MAE SD RMSE MAC | XGBoost: MAE SD RMSE MAC
12 | 4.919 6.265 6.288 0.939 | 4.705 6.203 6.225 0.939 | 4.706 5.889 5.905 0.946 | 4.039 5.081 5.096 0.950
13 | 4.872 6.213 6.233 0.940 | 4.725 6.207 6.229 0.939 | 4.743 5.935 5.957 0.945 | 4.006 5.039 5.056 0.950
14 | 4.865 6.180 6.206 0.940 | 4.701 6.165 6.182 0.940 | 4.758 5.951 5.970 0.944 | 3.982 5.024 5.045 0.951
15 | 4.887 6.222 6.248 0.939 | 4.702 6.157 6.179 0.941 | 4.759 5.951 5.973 0.944 | 3.994 5.031 5.047 0.951
16 | 4.863 6.184 6.204 0.940 | 4.695 6.129 6.147 0.941 | 4.784 5.992 6.011 0.944 | 4.007 5.054 5.074 0.950
17 | 4.613 5.853 5.872 0.946 | 4.460 5.759 5.780 0.948 | 4.628 5.780 5.800 0.947 | 3.828 4.835 4.851 0.955
18 | 4.552 5.775 5.801 0.948 | 4.423 5.630 5.648 0.950 | 4.619 5.793 5.815 0.947 | 3.830 4.820 4.836 0.955
19 | 4.552 5.790 5.809 0.948 | 4.402 5.609 5.620 0.951 | 4.635 5.816 5.835 0.947 | 3.811 4.795 4.811 0.955
20 | 4.503 5.731 5.751 0.949 | 4.394 5.689 5.710 0.949 | 4.673 5.879 5.902 0.946 | 3.832 4.814 4.829 0.955
21 | 4.507 5.730 5.744 0.949 | 4.396 5.668 5.689 0.950 | 4.711 5.910 5.935 0.945 | 3.819 4.819 4.836 0.954
22 | 4.471 5.697 5.722 0.949 | 4.382 5.638 5.661 0.950 | 4.692 5.897 5.912 0.945 | 3.817 4.826 4.840 0.955
23 | 4.458 5.707 5.723 0.949 | 4.354 5.660 5.681 0.950 | 4.729 5.926 5.948 0.945 | 3.824 4.837 4.855 0.955
24 | 4.394 5.681 5.694 0.949 | 4.319 5.634 5.654 0.950 | 4.774 6.001 6.022 0.943 | 3.805 4.818 4.840 0.955
25 | 4.446 5.661 5.687 0.950 | 4.329 5.633 5.653 0.950 | 4.781 6.022 6.038 0.943 | 3.819 4.835 4.851 0.954
26 | 4.430 5.670 5.692 0.950 | 4.325 5.612 5.627 0.951 | 4.813 6.050 6.072 0.943 | 3.850 4.873 4.888 0.954
27 | 4.437 5.673 5.687 0.950 | 4.330 5.632 5.660 0.950 | 4.823 6.081 6.101 0.942 | 3.871 4.874 4.894 0.954
28 | 4.426 5.666 5.686 0.950 | 4.306 5.589 5.608 0.951 | 4.841 6.083 6.112 0.942 | 3.853 4.875 4.896 0.954
29 | 4.432 5.671 5.694 0.950 | 4.320 5.578 5.605 0.951 | 4.883 6.158 6.179 0.940 | 3.863 4.885 4.901 0.954
30 | 4.410 5.649 5.668 0.950 | 4.324 5.629 5.649 0.950 | 4.911 6.173 6.201 0.940 | 3.883 4.911 4.930 0.953
31 | 4.405 5.653 5.673 0.950 | 4.302 5.566 5.589 0.952 | 4.876 6.127 6.148 0.941 | 3.895 4.915 4.932 0.953
32 | 4.399 5.630 5.653 0.950 | 4.333 5.682 5.702 0.949 | 4.915 6.163 6.190 0.940 | 3.890 4.901 4.918 0.953
33 | 4.393 5.629 5.646 0.950 | 4.314 5.632 5.651 0.950 | 4.942 6.199 6.223 0.940 | 3.898 4.919 4.947 0.953
34 | 4.389 5.610 5.631 0.951 | 4.313 5.555 5.575 0.952 | 4.949 6.224 6.244 0.939 | 3.903 4.922 4.943 0.953
35 | 4.384 5.623 5.644 0.951 | 4.354 5.666 5.693 0.950 | 4.966 6.230 6.257 0.939 | 3.904 4.928 4.949 0.953
36 | 4.386 5.625 5.647 0.950 | 4.345 5.659 5.678 0.950 | 4.986 6.268 6.291 0.938 | 3.925 4.952 4.970 0.952
37 | 4.415 5.651 5.672 0.950 | 4.329 5.602 5.625 0.951 | 5.003 6.271 6.294 0.938 | 3.911 4.949 4.970 0.952
38 | 4.401 5.634 5.653 0.950 | 4.344 5.629 5.646 0.950 | 5.022 6.290 6.316 0.938 | 3.931 4.970 4.986 0.952
Table 12

Experimental results for the MLP, SVM, RF, and XGBoost, based on the NHANES dataset, with PCA feature extraction (best results are highlighted in bold; # means the number of features).

# | MLP: MAE SD RMSE MAC | SVM: MAE SD RMSE MAC | RF: MAE SD RMSE MAC | XGBoost: MAE SD RMSE MAC
12 | 4.160 5.230 5.250 0.948 | 4.837 6.058 6.081 0.929 | 4.696 5.856 5.877 0.946 | 4.021 5.07 5.089 0.950
13 | 4.056 5.119 5.139 0.950 | 4.831 6.046 6.065 0.929 | 4.668 5.826 5.848 0.947 | 4.010 5.06 5.077 0.950
14 | 4.057 5.108 5.126 0.950 | 4.841 6.070 6.093 0.929 | 4.661 5.815 5.837 0.947 | 4.026 5.06 5.086 0.950
15 | 4.000 5.052 5.073 0.951 | 4.852 6.077 6.099 0.929 | 4.694 5.863 5.883 0.946 | 4.028 5.069 5.088 0.950
16 | 3.982 5.018 5.033 0.952 | 4.831 6.049 6.071 0.929 | 4.677 5.810 5.833 0.947 | 4.023 5.062 5.078 0.950
17 | 3.816 4.807 4.829 0.955 | 4.827 6.057 6.074 0.929 | 4.545 5.678 5.694 0.949 | 3.835 4.809 4.828 0.955
18 | 3.743 4.736 4.755 0.957 | 4.830 6.050 6.068 0.929 | 4.514 5.634 5.656 0.950 | 3.850 4.819 4.842 0.955
19 | 3.716 4.688 4.707 0.958 | 4.828 6.035 6.061 0.929 | 4.524 5.657 5.673 0.950 | 3.804 4.797 4.816 0.955
20 | 3.697 4.693 4.714 0.958 | 4.849 6.072 6.093 0.928 | 4.520 5.653 5.670 0.950 | 3.786 4.773 4.795 0.956
21 | 3.677 4.669 4.689 0.958 | 4.831 6.036 6.059 0.929 | 4.530 5.649 5.666 0.950 | 3.769 4.759 4.775 0.956
22 | 3.663 4.655 4.675 0.958 | 4.833 6.054 6.079 0.929 | 4.524 5.645 5.663 0.950 | 3.765 4.768 4.783 0.956
23 | 3.645 4.662 4.688 0.960 | 4.845 6.077 6.099 0.928 | 4.539 5.673 5.689 0.949 | 3.747 4.731 4.751 0.957
24 | 3.665 4.642 4.672 0.959 | 4.828 6.048 6.066 0.929 | 4.524 5.659 5.671 0.950 | 3.722 4.704 4.721 0.957
25 | 3.661 4.658 4.687 0.960 | 4.856 6.091 6.116 0.928 | 4.515 5.671 5.688 0.949 | 3.719 4.710 4.728 0.957
26 | 3.652 4.653 4.679 0.960 | 4.812 6.023 6.049 0.930 | 4.532 5.652 5.675 0.950 | 3.744 4.730 4.746 0.956
27 | 3.648 4.667 4.688 0.959 | 4.838 6.075 6.098 0.928 | 4.502 5.626 5.646 0.950 | 3.751 4.729 4.749 0.956
28 | 3.650 4.644 4.667 0.960 | 4.833 6.063 6.083 0.929 | 4.527 5.662 5.685 0.950 | 3.736 4.709 4.725 0.957
29 | 3.653 4.641 4.662 0.958 | 4.825 6.036 6.059 0.929 | 4.518 5.651 5.669 0.950 | 3.722 4.692 4.707 0.957
30 | 3.651 4.655 4.681 0.959 | 4.822 6.029 6.056 0.929 | 4.524 5.654 5.673 0.950 | 3.745 4.729 4.751 0.956
31 | 3.665 4.674 4.701 0.959 | 4.835 6.065 6.092 0.929 | 4.507 5.635 5.652 0.950 | 3.741 4.723 4.740 0.956
32 | 3.622 4.631 4.651 0.960 | 4.816 6.036 6.054 0.929 | 4.493 5.622 5.642 0.950 | 3.735 4.695 4.713 0.957
33 | 3.641 4.642 4.669 0.960 | 4.831 6.040 6.063 0.929 | 4.511 5.625 5.652 0.950 | 3.720 4.697 4.713 0.957
34 | 3.638 4.637 4.656 0.960 | 4.870 6.089 6.113 0.928 | 4.516 5.642 5.657 0.950 | 3.745 4.729 4.742 0.956
35 | 3.621 4.618 4.647 0.960 | 4.836 6.053 6.075 0.929 | 4.524 5.646 5.665 0.950 | 3.731 4.712 4.735 0.957
36 | 3.628 4.638 4.662 0.960 | 4.839 6.053 6.071 0.929 | 4.506 5.617 5.634 0.950 | 3.706 4.684 4.701 0.957
37 | 3.635 4.614 4.638 0.959 | 4.842 6.054 6.074 0.929 | 4.510 5.628 5.648 0.950 | 3.745 4.754 4.768 0.956
38 | 3.626 4.634 4.660 0.960 | 4.841 6.066 6.091 0.928 | 4.533 5.666 5.684 0.949 | 3.738 4.726 4.743 0.956
Among the three feature extraction algorithms, PCA is the most effective for this dataset, greatly improving the performance of the compared prediction models. In addition, Fig 12 depicts the comparative computation times for the MLP, SVM, RF and XGBoost with different numbers of features extracted by FA, PCA and ICA. As shown in the figure, for each prediction model, more time is needed as more features are used. Ordered by computation time, from the most time-consuming to the most efficient, the prediction models are the MLP, RF, XGBoost and SVM.
Fig 12

Comparison results in terms of computation time based on FA, PCA, and ICA feature extraction for the NHANES dataset.
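The sweep described above (k = 12 to 38 extracted features) can be sketched as follows. The linear model and synthetic data are stand-ins chosen to keep the example fast; the paper instead repeats 20 runs of five-fold cross-validation for each of the four models and three extractors.

```python
# Sketch: sweep the number of PCA components and track cross-validated MAE.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X, y = make_regression(n_samples=300, n_features=38, noise=5.0, random_state=0)

results = {}
for k in range(12, 39):                      # 12 .. 38 extracted features
    model = make_pipeline(PCA(n_components=k), LinearRegression())
    mae = -cross_val_score(model, X, y, cv=5,
                           scoring="neg_mean_absolute_error").mean()
    results[k] = mae

best_k = min(results, key=results.get)       # feature count with the lowest MAE
print(f"best k = {best_k}, MAE = {results[best_k]:.3f}")
```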

4 Conclusion

The accurate prediction of body fat is important for assessing obesity and its related diseases. However, researchers find it challenging to analyse the large volumes of medical data generated. The main purpose of this study was to analyse and compare the prediction effectiveness of four well-known machine learning models (MLP, SVM, RF and XGBoost) combined with three widely used feature extraction approaches (FA, PCA and ICA) for body fat prediction. The results presented in this paper are new in the context of body fat prediction; they could, therefore, provide a baseline for future research in this domain.

Experimental results showed that feature extraction can reduce the number of features without incurring significant loss of information for body fat prediction. In Case 1, with only six extracted features, the prediction models performed better than the models without feature extraction, confirming the effectiveness of feature extraction. Among the compared models, XGBoost with FA had the best approximation ability and high efficiency. With an increase in the number of extracted features, model performance can be improved further. For Case 2, PCA was the most effective in improving model performance. Although the MLP with PCA had the best prediction accuracy, it required significantly more computation time; XGBoost is therefore more appropriate for real-world applications, given its similar prediction accuracy and greater efficiency.

Statistical analysis based on the Wilcoxon rank-sum test confirmed that feature extraction significantly improved the performance of the MLP, SVM and XGBoost. Although the prediction models can be improved slightly further by increasing the number of extracted features, the number of features determined by the explained variance ratio was sufficient in both cases considered.
The feature extraction results themselves are a novel contribution of this work. The results provided by XGBoost with PCA feature extraction could be used as a baseline for future research in related areas. In future studies, we plan to investigate ways to improve feature extraction methods tailored to body fat datasets. Methods of improving the prediction model (e.g. an improved MLP [66]), using XGBoost with PCA as a baseline for body fat prediction, also need to be investigated. It is also worth noting that the findings of this work could be applied to other prediction problems with a large number of features, e.g. in finance, engineering and healthcare. Finally, we will explore other applications of body fat percentage analysis, for example, applying domain knowledge to group body fat percentages into different disease classes in order to confirm the relationship between body fat percentage and specific disease(s).
Table 6

Experimental results for MLP, SVM, RF and XGBoost on the StatLib dataset with PCA feature extraction (# denotes the number of extracted features). The best MAE, SD and RMSE are achieved by XGBoost with 13 extracted features, and the best robustness (Rob.) by MLP with 13 extracted features.

 #    MLP                          SVM                          RF                           XGBoost
      MAE    SD     RMSE   Rob.   MAE    SD     RMSE   Rob.   MAE    SD     RMSE   Rob.   MAE    SD     RMSE   Rob.
 6    3.716  4.500  4.564  0.953  3.796  4.660  4.724  0.949  3.866  4.670  4.739  0.948  3.538  4.289  4.337  0.947
 7    3.727  4.504  4.557  0.953  3.820  4.671  4.728  0.949  3.901  4.722  4.782  0.947  3.511  4.237  4.287  0.948
 8    3.729  4.503  4.556  0.953  3.850  4.693  4.764  0.948  3.914  4.774  4.823  0.946  3.558  4.277  4.336  0.947
 9    3.754  4.515  4.580  0.952  3.822  4.663  4.722  0.949  3.916  4.744  4.805  0.947  3.530  4.286  4.324  0.947
10    3.737  4.501  4.543  0.953  3.807  4.662  4.729  0.949  3.946  4.778  4.835  0.946  3.520  4.279  4.320  0.947
11    3.703  4.445  4.518  0.954  3.789  4.623  4.687  0.950  3.919  4.748  4.817  0.946  3.493  4.222  4.273  0.948
12    3.716  4.475  4.546  0.953  3.781  4.629  4.693  0.950  3.958  4.825  4.864  0.944  3.485  4.229  4.290  0.948
13    3.635  4.401  4.456  0.955  3.777  4.618  4.683  0.950  3.981  4.818  4.893  0.945  3.438  4.149  4.205  0.950
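The experiments summarised in Table 6 follow a PCA-then-predict pipeline: body measurements are reduced to a small number of extracted features, a regressor is fitted on those features, and MAE/SD/RMSE are computed on held-out data. As an illustration only (this is not the authors' code), the following is a minimal numpy-only sketch of that pipeline, with PCA implemented via SVD, ordinary least squares standing in for the paper's MLP/SVM/RF/XGBoost models, and synthetic data standing in for the StatLib body fat dataset:

```python
import numpy as np

def pca_fit_transform(X, n_components):
    # Centre the data and project onto the top principal components (via SVD).
    mu = X.mean(axis=0)
    Xc = X - mu
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    components = Vt[:n_components]
    return Xc @ components.T, mu, components

def evaluate(X_train, y_train, X_test, y_test, n_components):
    # Extract features with PCA, fit a least-squares regressor, report MAE/RMSE.
    Z_train, mu, components = pca_fit_transform(X_train, n_components)
    Z_test = (X_test - mu) @ components.T          # reuse training projection
    A = np.column_stack([Z_train, np.ones(len(Z_train))])  # add bias column
    w, *_ = np.linalg.lstsq(A, y_train, rcond=None)
    pred = np.column_stack([Z_test, np.ones(len(Z_test))]) @ w
    err = y_test - pred
    mae = np.abs(err).mean()
    rmse = np.sqrt((err ** 2).mean())
    return mae, rmse

# Synthetic stand-in for the body fat data: 252 subjects, 14 measurements.
rng = np.random.default_rng(0)
X = rng.normal(size=(252, 14))
y = X[:, :3].sum(axis=1) + rng.normal(scale=0.5, size=252)

mae, rmse = evaluate(X[:200], y[:200], X[200:], y[200:], n_components=6)
print(f"MAE={mae:.3f}  RMSE={rmse:.3f}")
```

Swapping the least-squares step for an actual MLP, SVM, RF or XGBoost regressor (e.g. from scikit-learn or the xgboost package) would reproduce the structure of the experiments; the numbers produced here are purely illustrative.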
References (25 in total; first 10 shown):

1.  Relationship between underweight, bone mineral density and skeletal muscle index in premenopausal Korean women.

Authors:  J Lim; H S Park
Journal:  Int J Clin Pract       Date:  2016-05-10       Impact factor: 2.503

2.  A Multilayer Perceptron Based Smart Pathological Brain Detection System by Fractional Fourier Entropy.

Authors:  Yudong Zhang; Yi Sun; Preetha Phillips; Ge Liu; Xingxing Zhou; Shuihua Wang
Journal:  J Med Syst       Date:  2016-06-02       Impact factor: 4.460

3.  Body mass index and the risk of infection - from underweight to obesity. (Review)

Authors:  J Dobner; S Kaser
Journal:  Clin Microbiol Infect       Date:  2017-02-20       Impact factor: 8.067

4.  Wilcoxon-Mann-Whitney or t-test? On assumptions for hypothesis tests and multiple interpretations of decision rules.

Authors:  Michael P Fay; Michael A Proschan
Journal:  Stat Surv       Date:  2010

5.  Relief-based feature selection: Introduction and review. (Review)

Authors:  Ryan J Urbanowicz; Melissa Meeker; William La Cava; Randal S Olson; Jason H Moore
Journal:  J Biomed Inform       Date:  2018-07-18       Impact factor: 6.317

6.  Survival analysis for high-dimensional, heterogeneous medical data: Exploring feature extraction as an alternative to feature selection.

Authors:  Sebastian Pölsterl; Sailesh Conjeti; Nassir Navab; Amin Katouzian
Journal:  Artif Intell Med       Date:  2016-07-29       Impact factor: 5.326

7.  Predicting body fat percentage based on gender, age and BMI by using artificial neural networks.

Authors:  Aleksandar Kupusinac; Edita Stokić; Rade Doroslovački
Journal:  Comput Methods Programs Biomed       Date:  2013-10-29       Impact factor: 5.428

8.  From obesity to diabetes and cancer: epidemiological links and role of therapies. (Review)

Authors:  Custodia García-Jiménez; María Gutiérrez-Salmerón; Ana Chocarro-Calvo; Jose Manuel García-Martinez; Angel Castaño; Antonio De la Vieja
Journal:  Br J Cancer       Date:  2016-02-23       Impact factor: 7.640

9.  Health Effects of Overweight and Obesity in 195 Countries over 25 Years.

Authors:  Ashkan Afshin; Mohammad H Forouzanfar; Marissa B Reitsma; Patrick Sur; Kara Estep; Alex Lee; Laurie Marczak; Ali H Mokdad; Maziar Moradi-Lakeh; Mohsen Naghavi; Joseph S Salama; Theo Vos; Kalkidan H Abate; Cristiana Abbafati; Muktar B Ahmed; Ziyad Al-Aly; Ala’a Alkerwi; Rajaa Al-Raddadi; Azmeraw T Amare; Alemayehu Amberbir; Adeladza K Amegah; Erfan Amini; Stephen M Amrock; Ranjit M Anjana; Johan Ärnlöv; Hamid Asayesh; Amitava Banerjee; Aleksandra Barac; Estifanos Baye; Derrick A Bennett; Addisu S Beyene; Sibhatu Biadgilign; Stan Biryukov; Espen Bjertness; Dube J Boneya; Ismael Campos-Nonato; Juan J Carrero; Pedro Cecilio; Kelly Cercy; Liliana G Ciobanu; Leslie Cornaby; Solomon A Damtew; Lalit Dandona; Rakhi Dandona; Samath D Dharmaratne; Bruce B Duncan; Babak Eshrati; Alireza Esteghamati; Valery L Feigin; João C Fernandes; Thomas Fürst; Tsegaye T Gebrehiwot; Audra Gold; Philimon N Gona; Atsushi Goto; Tesfa D Habtewold; Kokeb T Hadush; Nima Hafezi-Nejad; Simon I Hay; Masako Horino; Farhad Islami; Ritul Kamal; Amir Kasaeian; Srinivasa V Katikireddi; Andre P Kengne; Chandrasekharan N Kesavachandran; Yousef S Khader; Young-Ho Khang; Jagdish Khubchandani; Daniel Kim; Yun J Kim; Yohannes Kinfu; Soewarta Kosen; Tiffany Ku; Barthelemy Kuate Defo; G Anil Kumar; Heidi J Larson; Mall Leinsalu; Xiaofeng Liang; Stephen S Lim; Patrick Liu; Alan D Lopez; Rafael Lozano; Azeem Majeed; Reza Malekzadeh; Deborah C Malta; Mohsen Mazidi; Colm McAlinden; Stephen T McGarvey; Desalegn T Mengistu; George A Mensah; Gert B M Mensink; Haftay B Mezgebe; Erkin M Mirrakhimov; Ulrich O Mueller; Jean J Noubiap; Carla M Obermeyer; Felix A Ogbo; Mayowa O Owolabi; George C Patton; Farshad Pourmalek; Mostafa Qorbani; Anwar Rafay; Rajesh K Rai; Chhabi L Ranabhat; Nikolas Reinig; Saeid Safiri; Joshua A Salomon; Juan R Sanabria; Itamar S Santos; Benn Sartorius; Monika Sawhney; Josef Schmidhuber; Aletta E Schutte; Maria I Schmidt; Sadaf G Sepanlou; Moretza Shamsizadeh; Sara Sheikhbahaei; Min-Jeong 
Shin; Rahman Shiri; Ivy Shiue; Hirbo S Roba; Diego A S Silva; Jonathan I Silverberg; Jasvinder A Singh; Saverio Stranges; Soumya Swaminathan; Rafael Tabarés-Seisdedos; Fentaw Tadese; Bemnet A Tedla; Balewgizie S Tegegne; Abdullah S Terkawi; J S Thakur; Marcello Tonelli; Roman Topor-Madry; Stefanos Tyrovolas; Kingsley N Ukwaja; Olalekan A Uthman; Masoud Vaezghasemi; Tommi Vasankari; Vasiliy V Vlassov; Stein E Vollset; Elisabete Weiderpass; Andrea Werdecker; Joshua Wesana; Ronny Westerman; Yuichiro Yano; Naohiro Yonemoto; Gerald Yonga; Zoubida Zaidi; Zerihun M Zenebe; Ben Zipkin; Christopher J L Murray
Journal:  N Engl J Med       Date:  2017-06-12       Impact factor: 91.245

10.  A Novel Method for Cancer Subtyping and Risk Prediction Using Consensus Factor Analysis.

Authors:  Duc Tran; Hung Nguyen; Uyen Le; George Bebis; Hung N Luu; Tin Nguyen
Journal:  Front Oncol       Date:  2020-06-24       Impact factor: 6.244

