Janek Thomas, Tobias Hepp, Andreas Mayr, Bernd Bischl.
Abstract
We present a new variable selection method based on model-based gradient boosting and randomly permuted variables. Model-based boosting is a tool to fit a statistical model while performing variable selection at the same time. A drawback of the fitting process is the need for multiple model fits on slightly altered data (e.g., cross-validation or bootstrap) to determine the optimal number of boosting iterations and prevent overfitting. In our proposed approach, we augment the data set with randomly permuted versions of the true variables, so-called shadow variables, and stop the stepwise fitting as soon as such a variable would be added to the model. This allows variable selection in a single fit of the model without requiring further parameter tuning. We show that our probing approach can compete with state-of-the-art selection methods like stability selection in a high-dimensional classification benchmark, and we apply it to three gene expression data sets.
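The shadow-variable idea described in the abstract can be sketched in a few lines. The following is a minimal illustration in Python/NumPy of componentwise L2 boosting with probing, not the authors' mboost implementation; the function name `probing_selection` and parameters such as `learning_rate` and `max_iter` are our own assumptions.

```python
import numpy as np

def probing_selection(X, y, learning_rate=0.1, max_iter=500, seed=0):
    """Sketch of the probing approach: augment X with shadow variables
    (independently permuted copies of each column), run componentwise
    L2 boosting on the augmented design, and stop as soon as a shadow
    variable would be selected. Returns indices of selected original
    variables."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    # Shadow variables: each column permuted independently, destroying
    # any association with y while keeping the marginal distribution.
    shadows = np.column_stack([rng.permutation(X[:, j]) for j in range(p)])
    Z = np.hstack([X, shadows])          # n x 2p augmented design
    Z = (Z - Z.mean(0)) / Z.std(0)       # standardize the base learners
    resid = y - y.mean()                 # boosting starts at the offset
    selected = set()
    for _ in range(max_iter):
        # Componentwise base learner: the best column is the one most
        # correlated with the current residual.
        scores = Z.T @ resid
        j = int(np.argmax(np.abs(scores)))
        if j >= p:                       # a shadow variable won: stop
            break
        coef = scores[j] / n             # OLS slope on standardized column
        resid -= learning_rate * coef * Z[:, j]
        selected.add(j)
    return sorted(selected)
```

On data with a few strong signal variables, the loop typically picks those first and terminates once only noise is left, at which point a shadow variable is as good a candidate as any true one.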
Year: 2017 PMID: 28831289 PMCID: PMC5555005 DOI: 10.1155/2017/1421409
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
Figure 1. True positive rate (y-axis) versus false discovery rate (x-axis) for three boosting-based variable selection algorithms, probing (black), stability selection (green), and cross-validation (blue), across simulation settings n ∈ {100, 500}, p ∈ {100, 500, 1000}, and p_inf ∈ {5, 20}. All stability selection settings are combined. Shaded areas are smooth hulls around all observed values.
Figure 2. Boxplots of the true positive rate (top) and false discovery rate (bottom) for different simulation settings and the three boosting-based variable selection algorithms. Stability selection settings are denoted by SS(π_thr, PFER).
Total number of selected variables and intersection size for four variable selection techniques (boosting with 25-fold bootstrap, probing, stability selection, and the lasso with 10-fold cross-validation) on three gene expression data sets. The last column compares algorithm runtime in seconds.
**Data set 1**

| | Cross-validation | Probing | Stability selection | Lasso (10-fold CV) | Runtime (sec.) |
|---|---|---|---|---|---|
| Cross-validation | | | | | 10.52 |
| Probing | 5 | | | | 1.78 |
| Stability selection | 3 | 3 | | | 49.4 |
| Lasso (10-fold CV) | 7 | 5 | 3 | | 0.4 |

**Data set 2**

| | Cross-validation | Probing | Stability selection | Lasso (10-fold CV) | Runtime (sec.) |
|---|---|---|---|---|---|
| Cross-validation | | | | | 24 |
| Probing | 14 | | | | 4.39 |
| Stability selection | 1 | 1 | | | 102.28 |
| Lasso (10-fold CV) | 14 | 14 | 1 | | 1.13 |

**Data set 3**

| | Cross-validation | Probing | Stability selection | Lasso (10-fold CV) | Runtime (sec.) |
|---|---|---|---|---|---|
| Cross-validation | | | | | 14.2 |
| Probing | 10 | | | | 6.89 |
| Stability selection | 5 | 5 | | | 66.46 |
| Lasso (10-fold CV) | 23 | 7 | 4 | | 0.68 |
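The pairwise intersection sizes reported in the table above are plain set intersections between the variable sets each method selects. A small sketch, using made-up index sets (the values below are illustrative, not taken from the paper):

```python
# Hypothetical selected-variable index sets for four methods on one
# data set (illustrative values only, not the paper's results).
selections = {
    "cross-validation": {0, 1, 4, 7, 9},
    "probing": {0, 1, 4},
    "stability selection": {0, 1, 7},
    "lasso": {0, 1, 4, 7, 12},
}

# Lower-triangular layout, as in the table: each method against
# every method listed before it.
methods = list(selections)
for i, a in enumerate(methods):
    for b in methods[:i]:
        common = len(selections[a] & selections[b])
        print(f"{a} / {b}: {common} variables in common")
```

A large intersection between two methods suggests they agree on which variables carry signal; a method selecting many variables outside every intersection is likely to have a higher false discovery rate.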