Farideh Jalali-Najafabadi1, Michael Stadler2, Nick Dand3, Deepak Jadon4, Mehreen Soomro2, Pauline Ho2,5, Helen Marzo-Ortega6, Philip Helliwell6, Eleanor Korendowych7, Michael A Simpson3, Jonathan Packham8, Catherine H Smith9, Jonathan N Barker10, Neil McHugh7, Richard B Warren11, Anne Barton2,5, John Bowes2,5.
Abstract
In view of the growth of clinical risk prediction models using genetic data, there is an increasing need for studies that use appropriate methods to select the optimum number of features from a large number of genetic variants with a high degree of redundancy between features due to linkage disequilibrium (LD). Filter feature selection methods based on information theoretic criteria are well suited to this challenge and will identify a subset of the original variables that should result in more accurate prediction. However, data collected from cohort studies are often high-dimensional genetic data with potential confounders, presenting challenges to feature selection and risk prediction machine learning models. Patients with psoriasis are at high risk of developing a chronic arthritis known as psoriatic arthritis (PsA). The prevalence of PsA in this patient group can be up to 30%, and the identification of high-risk patients represents an important clinical research goal that would allow early intervention and a reduction in disability. This also provides an ideal scenario for the development of clinical risk prediction models and an opportunity to explore the application of information theoretic criteria methods. In this study, we developed feature selection and PsA risk prediction models that were applied to a cross-sectional genetic dataset of 1462 PsA cases and 1132 cutaneous-only psoriasis (PsC) cases, using 2-digit HLA alleles imputed with the SNP2HLA algorithm. We also developed a stratification method to mitigate the impact of potential confounding features and illustrate that confounding features affect feature selection. The mitigated dataset was used to train seven supervised machine learning methods: 80% of the data was randomly selected for training using stratified nested cross-validation, and the remaining 20% was randomly held out for internal validation.
The risk prediction models were then further validated in a UK Biobank dataset containing data on 1187 participants and a set of features overlapping with the training dataset. The performance of these methods was evaluated using the area under the curve (AUC), accuracy, precision, recall, F1 score and decision curve analysis (net benefit). The best model was selected based on three criteria: the smallest feature subset, the maximal average AUC over the nested cross-validation, and good generalisability to the UK Biobank dataset. In the original dataset, over 100 different bootstraps and seven feature selection (FS) methods, HLA_C_*06 was selected as the most informative genetic variant. When the dataset was mitigated, the single most important genetic feature based on rank was identified as HLA_B_*27 by the seven different feature selection methods, consistent with previous analyses of this data using regression-based methods. However, the predictive accuracy of this single feature post-mitigation was found to be moderate (AUC = 0.54 (internal cross-validation), AUC = 0.53 (internal hold-out set), AUC = 0.55 (external dataset)). Sequentially adding additional HLA features based on rank improved the performance of the Random Forest classification model, where 20 2-digit features selected by Interaction Capping (ICAP) achieved AUC = 0.61 (internal cross-validation), AUC = 0.57 (internal hold-out set) and AUC = 0.58 (external dataset). The stratification method for mitigation of confounding features and filter information theoretic feature selection can be applied to high-dimensional datasets with potential confounders.
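The training setup described in the abstract (an 80/20 stratified split followed by an information-theoretic filter ranking of HLA features) can be sketched in Python with scikit-learn. The synthetic data and variable names below are illustrative stand-ins, not from the study, and `mutual_info_classif` implements a simple MIM-style criterion rather than the full set of seven filter methods used by the authors:

```python
# Minimal sketch of the 80/20 stratified split and a mutual-information
# filter ranking. All data here are synthetic and illustrative.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(200, 30)).astype(float)  # stand-in for HLA allele dosages
y = rng.integers(0, 2, size=200)                      # PsA (1) vs PsC (0)

# 80% training / 20% internal hold-out, stratified on the outcome
X_train, X_hold, y_train, y_hold = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42)

# Rank features by estimated mutual information with the outcome (MIM-style)
mi = mutual_info_classif(X_train, y_train, discrete_features=True, random_state=42)
ranking = np.argsort(mi)[::-1]
top10 = ranking[:10]  # top-ranked feature indices
```

In the study the ranking step would be replaced by each of the seven information-theoretic criteria in turn, and model selection would run inside the nested cross-validation loop.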
Year: 2021 PMID: 34857774 PMCID: PMC8640070 DOI: 10.1038/s41598-021-00854-x
Source DB: PubMed Journal: Sci Rep ISSN: 2045-2322 Impact factor: 4.996
Figure 1 Research pipeline.
Figure 2 The potential confounders were mitigated by stratification.
Figure 3 Methodology for feature selection.
Figure 4 Risk prediction model development and external validation.
Figure 5 Heatmaps of feature ranking in (a) the original dataset and (b) the post-mitigation dataset, depicting the majority vote over 100 bootstraps. The top 10 selected features are shown in rows and the seven feature selection techniques in columns.
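The majority vote over bootstrap resamples behind Figure 5 can be sketched as follows: each bootstrap produces a top-k feature subset, and features are ranked by how often they are selected across resamples. The helper name is hypothetical, and a single mutual-information filter stands in for the seven selection methods used in the study:

```python
# Illustrative sketch of majority voting over bootstrap resamples.
# bootstrap_majority_vote is a hypothetical helper, not from the paper.
import numpy as np
from collections import Counter
from sklearn.feature_selection import mutual_info_classif

def bootstrap_majority_vote(X, y, n_boot=100, k=10, seed=0):
    rng = np.random.default_rng(seed)
    votes = Counter()
    n = len(y)
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample with replacement
        mi = mutual_info_classif(X[idx], y[idx],
                                 discrete_features=True, random_state=0)
        votes.update(np.argsort(mi)[::-1][:k])  # each top-k feature gets one vote
    # Features ordered by how often they reached the top k
    return [f for f, _ in votes.most_common(k)]
```

A heatmap like Figure 5 would then tabulate these vote-based rankings across the different selection criteria.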
Figure 6 The ICAP feature selection pre-mitigation and post-mitigation for seven classification methods. Heatmap depicting the predictive performance (AUC on the hold-out set) for different numbers of HLA features (in rows) and the different classification methods (in columns).
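The sequential addition of ranked features evaluated in Figure 6 can be sketched as below: for increasing feature counts, train a classifier on the top-k ranked features and record the cross-validated AUC. The function name and grid are illustrative; the ranking is assumed to come from a filter method such as ICAP:

```python
# Sketch of evaluating a Random Forest on growing prefixes of a
# ranked feature list (hypothetical helper, illustrative settings).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def auc_by_feature_count(X, y, ranking, counts=(1, 5, 10, 20)):
    scores = {}
    for k in counts:
        clf = RandomForestClassifier(n_estimators=100, max_depth=2,
                                     random_state=0)
        # Restrict to the top-k ranked features and score by AUC
        auc = cross_val_score(clf, X[:, ranking[:k]], y,
                              cv=5, scoring="roc_auc")
        scores[k] = auc.mean()
    return scores
```

Plotting these scores per feature count and per classifier gives a heatmap of the kind shown in Figure 6.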
The best models out of the 448 generated models.
| Model Num | Model name | Feature selection | Top features | AUC (CV) | AUC (hold-out) | AUC (external) | Precision (CV) | Precision (hold-out) | Precision (external) | Recall (CV) | Recall (hold-out) | Recall (external) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 402 | LG | disr | 40 | 0.62 | 0.58 | 0.57 | 0.60 | 0.53 | 0.55 | 0.57 | 0.54 | 0.57 |
| 303 | Adaboost | jmi | 60 | 0.66 | 0.64 | 0.54 | 0.62 | 0.59 | 0.52 | 0.61 | 0.64 | 0.60 |
| 416 | DT | disr | 10 | 0.54 | 0.53 | 0.51 | 0.67 | 0.63 | 0.53 | 0.16 | 0.16 | 0.20 |
| 398 | XGBoost | disr | 40 | 0.63 | 0.60 | 0.55 | 0.60 | 0.56 | 0.53 | 0.57 | 0.53 | 0.56 |
| 232 | KNNC | disr | 60 | 0.73 | 0.76 | 0.53 | 0.73 | 0.74 | 0.53 | 0.74 | 0.76 | 0.52 |
| 39 | NB Gaussian | mim | 10 | 0.61 | 0.58 | 0.59 | 0.63 | 0.54 | 0.57 | 0.35 | 0.34 | 0.42 |
| 184 | RF | icap | 20 | 0.61 | 0.58 | 0.58 | 0.59 | 0.54 | 0.58 | 0.54 | 0.45 | 0.59 |
Figure 7 Comparison of AUC between cross-validation, hold-out and external sets for the 448 generated ML models. The 448 models were trained using combinations of the number of features, the feature selection method and the seven ML model types.
Machine Learning Algorithms and their Corresponding Hyperparameters.
| Model | Scikit-Learn package | Parameter name in Scikit-Learn package | Test range |
|---|---|---|---|
| DT | tree.DecisionTreeClassifier | max_features | [1, 10, 20, 30, 40, 50, 60, 70] |
| | | max_depth | [1, 2] |
| | | min_samples_split | [2, 5, 10] |
| | | min_samples_leaf | [2, 3, 4, 5] |
| XGBoost | xgboost.XGBClassifier | n_estimators | [100, 200, 300, 400] |
| | | learning_rate | [0.1, 0.5, 1.0] |
| | | max_depth | [1, 2] |
| | | min_child_weight | [1, 3] |
| | | eta | [0.8] |
| | | gamma | [2] |
| | | lambda | [0.5] |
| | | alpha | [0.5] |
| RF | ensemble.RandomForestClassifier | n_estimators | [100, 200, 300, 400] |
| | | max_depth | [1, 2] |
| | | max_features | [1, 10, 20, 30, 40, 50, 60, 70] |
| | | min_samples_leaf | [2, 3, 4, 5] |
| | | min_samples_split | [2, 5, 10] |
| AdaBoost | ensemble.AdaBoostClassifier | n_estimators | [100, 200, 300, 400] |
| | | learning_rate | [0.1, 0.5, 1.0] |
| LR | linear_model.LogisticRegression | C | [0.01, 0.1, 1, 10] |
| KNN | neighbors.KNeighborsClassifier | n_neighbors | [1, 3, 5] |
| NB Gaussian | naive_bayes.GaussianNB | — | — |
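The grids in the table above plug directly into scikit-learn's `GridSearchCV`. The sketch below shows this for the decision tree, with a truncated `max_features` grid for brevity; the cross-validation settings are illustrative, not necessarily those of the study:

```python
# Sketch of hyperparameter tuning with (part of) the decision-tree grid
# from the table, using scikit-learn's actual parameter names.
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import GridSearchCV

param_grid = {
    "max_features": [1, 10, 20, 30],   # truncated grid for illustration
    "max_depth": [1, 2],
    "min_samples_split": [2, 5, 10],
    "min_samples_leaf": [2, 3, 4, 5],
}
search = GridSearchCV(
    DecisionTreeClassifier(random_state=0),
    param_grid,
    scoring="roc_auc",
    cv=5,
)
# search.fit(X_train, y_train) explores every grid combination;
# search.best_params_ and search.best_score_ then hold the winner.
```

In a nested cross-validation, this inner search would itself be wrapped in an outer cross-validation loop to estimate generalisation performance without optimistic bias.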
Figure 8 (a) Receiver operating characteristic and (b) precision-recall curves of the best models, showing internal (hold-out and cross-validation) and external model performance for RF.
Figure 9 Decision curve analysis for seven machine learning models for prediction of psoriatic arthritis (PsA).
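Decision curve analysis, used in Figure 9, scores a model by its net benefit at a given threshold probability p_t: net benefit = TP/N − (FP/N) × p_t/(1 − p_t). A minimal sketch, with a hypothetical helper name and illustrative inputs:

```python
# Minimal sketch of the net-benefit calculation behind decision
# curve analysis. net_benefit is a hypothetical helper.
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    y_true = np.asarray(y_true)
    pred = np.asarray(y_prob) >= threshold  # treat as positive above p_t
    n = len(y_true)
    tp = np.sum(pred & (y_true == 1))
    fp = np.sum(pred & (y_true == 0))
    # Weight false positives by the odds implied by the threshold
    return tp / n - (fp / n) * threshold / (1 - threshold)
```

Sweeping `threshold` over a range of clinically plausible values and plotting the result per model, alongside the "treat all" and "treat none" baselines, yields a decision curve of the kind shown in Figure 9.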