Andreas Mayr, Benjamin Hofner, Elisabeth Waldmann, Tobias Hepp, Sebastian Meyer, Olaf Gefeller.
Abstract
Statistical boosting algorithms have triggered a lot of research during the last decade. They combine a powerful machine learning approach with classical statistical modelling, offering various practical advantages like automated variable selection and implicit regularization of effect estimates. They are extremely flexible, as the underlying base-learners (regression functions defining the type of effect for the explanatory variables) can be combined with any kind of loss function (target function to be optimized, defining the type of regression setting). In this review article, we highlight the most recent methodological developments on statistical boosting regarding variable selection, functional regression, and advanced time-to-event modelling. Additionally, we provide a short overview on relevant applications of statistical boosting in biomedicine.
Year: 2017 PMID: 28831290 PMCID: PMC5558647 DOI: 10.1155/2017/6083072
Source DB: PubMed Journal: Comput Math Methods Med ISSN: 1748-670X Impact factor: 2.238
Box 1: The structure of statistical boosting algorithms.
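The box itself is not reproduced in this record. As an illustration of the generic structure it describes, here is a minimal sketch of component-wise gradient boosting, assuming the L2 loss and simple univariate linear base-learners; the function name and interface are hypothetical, not the authors' mboost implementation:

```python
import numpy as np

def componentwise_l2_boost(X, y, mstop=100, nu=0.1):
    """Hypothetical sketch of component-wise L2-boosting.

    In each of mstop iterations, every univariate least-squares
    base-learner is fitted to the current negative gradient (for L2
    loss: the residuals), and only the best-fitting component is
    updated, damped by the step length nu.  Variables whose
    coefficient never gets updated are effectively deselected, which
    is the implicit variable selection mentioned in the abstract.
    """
    n, p = X.shape
    coef = np.zeros(p)
    offset = y.mean()                     # initialise at the loss optimum
    fit = np.full(n, offset)
    for _ in range(mstop):
        resid = y - fit                   # negative gradient of L2 loss
        best_j, best_b, best_rss = 0, 0.0, np.inf
        for j in range(p):                # fit each base-learner separately
            xj = X[:, j]
            b = xj @ resid / (xj @ xj)    # univariate LS coefficient
            rss = np.sum((resid - b * xj) ** 2)
            if rss < best_rss:
                best_j, best_b, best_rss = j, b, rss
        coef[best_j] += nu * best_b       # weak update of one component only
        fit += nu * best_b * X[:, best_j]
    return offset, coef
```

The stopping iteration mstop is the main tuning parameter of such algorithms; stopping early yields the shrunken, regularized effect estimates the abstract refers to.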
Number of variables considered to be informative in different scenarios of stability selection and the default 25-fold bootstrap tuning of mboost without stability selection for comparison.
| | Colon cancer | Breast carcinoma | Riboflavin production |
|---|---|---|---|
| PFER = 1, | 2 | 1 | 4 |
| PFER = 3, | 3 | 1 | 5 |
| 25-fold bootstrap | 11 | 28 | 39 |
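To make the table's PFER column concrete: stability selection declares a variable informative if its selection frequency over many subsamples reaches a threshold chosen so that the per-family error rate (PFER) bound of Meinshausen and Bühlmann, PFER &le; q² / ((2&pi;_thr - 1) p), holds for q variables selected per run among p candidates. A small sketch, with hypothetical helper names (the R packages stabs and mboost provide refined versions via stabsel()):

```python
def stability_threshold(q, p, pfer):
    """Selection-frequency threshold pi_thr implied by solving the
    Meinshausen-Buehlmann bound PFER <= q**2 / ((2*pi_thr - 1) * p)
    for pi_thr, capped at 1 (sketch only)."""
    return min(1.0, q ** 2 / (2.0 * pfer * p) + 0.5)

def stable_variables(freq, q, p, pfer):
    """Indices whose selection frequency over the subsamples reaches
    the threshold; these are the counts reported in the table."""
    thr = stability_threshold(q, p, pfer)
    return [j for j, f in enumerate(freq) if f >= thr]
```

A tighter PFER (1 instead of 3) raises the threshold and hence shrinks the set of variables deemed informative, matching the pattern in the table, while plain bootstrap-tuned boosting without stability selection keeps far more variables.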