
Clarifying questions about "risk factors": predictors versus explanation.

C Mary Schooling1,2, Heidi E Jones1.   

Abstract

BACKGROUND: Much effort in biomedical research is thought to be wasted. Recommendations for improvement have largely focused on processes and procedures. Here, we additionally suggest reducing ambiguity concerning the questions addressed.
METHODS: We clarify the distinction between two conflated concepts, prediction and explanation, both encompassed by the term "risk factor", and give methods and presentation appropriate for each.
RESULTS: Risk prediction studies use statistical techniques to generate contextually specific, data-driven models, requiring a representative sample, that identify people at risk of health conditions efficiently (target populations for interventions). Risk prediction studies do not necessarily include causes (targets of intervention), but may include cheap, easy-to-measure surrogates or biomarkers of causes. Explanatory studies, ideally embedded within an informative model of reality, assess the role of causal factors which, if targeted by interventions, are likely to improve outcomes. Predictive models allow identification of people or populations at elevated disease risk, enabling proven interventions acting on causal factors to be targeted. Explanatory models allow identification of causal factors to target across populations to prevent disease.
CONCLUSION: Ensuring a clear match of question to methods and interpretation will reduce research waste due to misinterpretation.


Keywords:  Cause; Confounding; Predictor; Risk factor; Scientific inference; Selection bias; Statistical inference

Year:  2018        PMID: 30116285      PMCID: PMC6083579          DOI: 10.1186/s12982-018-0080-z

Source DB:  PubMed          Journal:  Emerg Themes Epidemiol        ISSN: 1742-7622


Introduction

Biomedical research has reached a crisis where much research effort is thought to be wasted [1]. Recommendations to improve the situation have largely focused on processes and procedures, such as addressing high impact questions, starting from what is already known, registering protocols, and making data available [1]. Here we additionally suggest that ensuring the conceptual approach matches the question would avoid conflating different questions and mistaking the answer to one question for the answer to another. Much observational biomedical research concerns the role of “risk factors” in disease, and is conducted for two main reasons: (1) risk stratification or prediction, and (2) assessing causality. These are two fundamentally different questions, concerning two different concepts, i.e., prediction versus explanation, which require different approaches and have substantively different interpretations. However, the use of the term “risk factor” for something which may predict and/or explain means that these two concepts may be conflated, so that a study may fulfill neither objective, i.e., neither predicts nor explains. For example, major predictors of cardiovascular disease have long been assumed to be targets of intervention [2]. After extensive and expensive research investment over more than 35 years, including the development, testing and failure of an entire new class of drugs (CETP inhibitors) [3], high density lipoprotein cholesterol has recently been identified as a non-causal risk factor (i.e., predictor) for cardiovascular disease [4, 5]. Equally, factors that do not predict risk are very rarely identified as causal factors (such as factors that are ubiquitous in a given community and thus are not detected as increasing risk) [6], which suggests interventions are being missed.
Given the importance of avoiding ‘low priority questions’ [1], here we clarify the difference between these two concepts and their use in observational studies.

Prediction models

Risk stratification, prediction models or ‘weather forecasting’ models identify people or groups at high or elevated risk of a particular health condition, ideally so that they can be offered proven interventions, or other mitigation can be implemented. A very successful example of a risk stratification model is the Framingham score, which predicts 10-year risk of heart disease in healthy people [7] to inform prevention, such as use of lipid modulators. Other examples include prognostic models for identifying the best cancer treatment [8], or models for predicting disease trends, such as Google Flu Trends [9]. These predictive models typically rely on statistical projections of previous patterns, and to be feasible usually rely on easily captured information. For example, the Framingham score can be applied in daily clinical practice, even in resource-poor settings, because it only requires assessment of age, sex, smoking, blood pressure, lipids and diabetes, which are relatively cheap and quick to measure. Google Flu Trends was based on internet search terms for particular symptoms [9]. Prediction models are usually developed based on statistical criteria to fit the distribution of the data well, using techniques such as stepwise selection or, more recently, machine learning techniques. Prediction models often include several “risk factors” to obtain a model that fits the data well and explains the greatest amount of variance in the outcome health condition. The contribution of each “risk factor” is presented so that the reader can see its independent contribution to the overall prediction, as well as measures of model fit. Prediction models are usually validated in similar populations. As with all statistical models, they cannot be expected to predict well in novel circumstances [9], and are best developed using a representative sample of the population in which they will be applied.
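The logic of such a score can be sketched as a toy example: a handful of cheap, easily measured inputs combine into points, and a cut-off flags people at elevated predicted risk. The point values and cut-off below are entirely hypothetical, not the actual Framingham coefficients.

```python
# Toy risk stratification sketch (hypothetical point values, NOT the
# actual Framingham score): prediction only needs cheap inputs plus a
# cut-off to flag a target population for proven interventions.

def toy_risk_points(age, smoker, systolic_bp):
    """Hypothetical point score; more points = higher predicted risk."""
    points = 0
    points += 2 if age >= 60 else 0          # age is a strong predictor
    points += 3 if smoker else 0             # smoking: predictor AND cause
    points += 2 if systolic_bp >= 140 else 0 # elevated blood pressure
    return points

def stratify(points, cutoff=4):
    """Flag a person as part of a target *population* for intervention."""
    return "high risk" if points >= cutoff else "low risk"

print(stratify(toy_risk_points(age=65, smoker=True, systolic_bp=150)))   # high risk
print(stratify(toy_risk_points(age=45, smoker=False, systolic_bp=120)))  # low risk
```

Note that the score identifies who to treat, not what to treat: whether any of its inputs are themselves targets of intervention is a separate, causal question.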
Consistently poor measurement will impair precision, because it adds noise. Inconsistently poor measurement will impair predictive power, because it may change the relation between risk factor and outcome. Prediction models may not be generalizable to populations that differ from the one in which they were developed, because in a new population the correlation between predictive and true causal factors may be different. For example, the Framingham model often has to be calibrated to predict absolute risk of heart disease correctly in new populations [10]. Google Flu Trends is no longer providing estimates; it became inaccurate, possibly because the model needed dynamic recalibration of the relation between search terms and influenza to stay on track [9]. Tried and tested prediction models are immensely valuable for identifying target populations, i.e., people or groups in need of prevention or treatment, but the “risk factors” that predict a health condition are not necessarily targets of intervention. For example, flu symptoms do not cause influenza, and are not targets of intervention to prevent the occurrence of flu. Whether the “risk factors” that predict health conditions in risk stratification models are also targets of intervention has to be established from different studies designed to assess effects of interventions. As such, it is not appropriate to calculate a population attributable risk or proportion for “risk factors” from a risk prediction model, because removal of these “risk factors” might or might not affect population health. Similarly, since the purpose of predictive models is to explain the greatest amount of the variance in the outcome, only factors that contribute to explaining the variance need to be included. The concepts of confounding, mediation and effect measure modification are not applicable to predictive models.
Interaction terms can be added to predictive models to improve model fit, but these interactions should not be interpreted as indicating different effects by subgroup.
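The recalibration of absolute risk mentioned above can be sketched in a minimal form. One common simple approach (hypothetical numbers; it assumes the slopes transport and only baseline risk differs) shifts the logistic model's intercept so that the mean predicted risk matches the new population's observed event rate:

```python
import math

def logit(p):
    """Log-odds of a probability."""
    return math.log(p / (1 - p))

def predicted_risk(intercept, slope, x):
    """Logistic model: risk given a single predictor x."""
    return 1 / (1 + math.exp(-(intercept + slope * x)))

def recalibrated_intercept(intercept, observed_rate, mean_predicted_rate):
    """Shift the intercept by the gap, on the log-odds scale, between
    the new population's observed event rate and the model's mean
    predicted rate (a simple intercept-only recalibration sketch)."""
    return intercept + logit(observed_rate) - logit(mean_predicted_rate)

# Hypothetical scenario: the original model over-predicts in the new
# population (mean predicted risk 20% vs observed 10%), so the
# recalibrated intercept moves down.
new_intercept = recalibrated_intercept(-2.0, observed_rate=0.10,
                                       mean_predicted_rate=0.20)
```

After recalibration, relative risk ordering is unchanged; only absolute risks shift, which is exactly what a risk stratification model needs to remain useful in a new setting.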

Explanatory models

Explanatory models can be thought of as simplified, abstract, propositional models for some particular aspect of how the world works, which provide a guide as to how to manipulate items of interest. As such, explanatory models are based on potentially causal factors i.e., factors whose manipulation changes the outcome [11]. Studies assessing causality are explanatory rather than predictive. Explanatory models are designed to assess whether a particular “risk factor” explains the occurrence, or course, of disease and as such is a valid target of intervention. “Risk factors” selected as potential causal factors might be based on “risk factors” from predictive models, might be theoretically based or might be hypothesized from other sources. For example, the Framingham score includes factors, such as smoking and blood pressure, which undoubtedly cause heart disease, but also other “risk factors”, such as age, sex and high-density lipoprotein, whose causal role in heart disease is less clear [12]. In contrast, factors that do not predict disease might be identified as possible causal factors based on physiology or well-established theories. For example, observationally telomere length does not appear to predict renal cell cancer, but people with genetically longer telomeres are at greater risk [13], suggesting a causal role in renal cell cancer, as in other cancers [14]. Studies assessing the role of causal factors need to avoid the major sources of bias in observational studies designed to assess causality, which can be most simply thought of as confounding and selection bias [15, 16]. In addition, measurement error is often thought of as an additional source of bias, although non-differential measurement error usually biases towards the null and differential measurement error can be thought of as a form of selection bias. Confounding occurs when extraneous common causes of the putative cause and health condition are omitted so that a spurious relation is observed. 
For example, smoking causes both yellow fingers and lung cancer, so any assessment of the causal effect of yellow fingers on lung cancer would need to take smoking into account. Confounding is difficult to avoid unless all the common causes of the putative cause and disease are known. One of the simplest options, when experimental studies (randomized controlled trials) are not possible, is to use methods, such as Mendelian randomization, that are less open to confounding [17]. No method is assumption free, and Mendelian randomization has stringent assumptions; nevertheless, it has clarified some controversies over the causes of cardiovascular disease, such as the role of high density lipoprotein-cholesterol [18]. A sufficient set of confounders needs to be identified from external knowledge of causality, measured accurately in the study and included in the analytic model, so that any residual confounding does not cause incorrect causal inference. Given the difficulty of assessing both known and unknown confounders, demonstrating that estimates for other associations subject to the same confounding are coherent with known causal effects gives greater credence to any new estimates from the same study [19, 20]. For example, observational studies of hormone replacement therapy (HRT) in women found apparent benefits for accidents as well as for cardiovascular disease [21], which suggests residual confounding for cardiovascular disease, because HRT would not be expected physiologically to protect against accidents. The apparently protective findings were also not coherent with estrogen having no benefit for men in the Coronary Drug Project trial [22] and possibly causing myocardial infarction in young women [23].
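The simplest Mendelian randomization estimator, the Wald ratio, divides the genetic instrument's association with the outcome by its association with the exposure. The sketch below uses made-up data for illustration; its validity rests on the stringent assumptions noted above (the variant affects the outcome only through the exposure and is itself unconfounded):

```python
# Hedged sketch of the Wald ratio estimator used in Mendelian
# randomization; the data below are invented purely for illustration.

def mean(values):
    return sum(values) / len(values)

def wald_ratio(g, x, y):
    """g: genetic variant (0/1); x: exposure; y: outcome.
    Returns (effect of g on y) / (effect of g on x), i.e. the
    instrumented effect of x on y, assuming g is a valid instrument."""
    x1 = mean([xi for gi, xi in zip(g, x) if gi == 1])
    x0 = mean([xi for gi, xi in zip(g, x) if gi == 0])
    y1 = mean([yi for gi, yi in zip(g, y) if gi == 1])
    y0 = mean([yi for gi, yi in zip(g, y) if gi == 0])
    return (y1 - y0) / (x1 - x0)

# Invented example: carriers (g=1) have exposure 2 units higher on
# average and outcome 6 units higher, giving an instrumented effect of 3.
estimate = wald_ratio(g=[0, 0, 1, 1],
                      x=[1.0, 2.0, 3.0, 4.0],
                      y=[2.0, 4.0, 8.0, 10.0])
```

Because genetic variants are allocated at assortment, independently of the lifestyle and social factors that usually confound observational associations, this ratio is less open to the confounding that distorted the observational HRT estimates.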
Nevertheless, confounding can potentially be addressed in an observational study by collecting sufficient relevant information about the study participants, so that all confounding is accounted for by adjustment, inverse probability of treatment weighting or standardization. Confounding is a causal concept and not relevant for predictive models [24]. Confounding cannot reliably be assessed from observational data, meaning that testing confounders for inclusion in an analytic model based on statistical correlations or changes in estimates is not valid. Conversely, factors that are not confounders should not be included in the analytic model, because they may prevent assessment of the full effect of the hypothesized cause in question. For example, an observational study designed to assess the effect of alcohol on stroke should not include blood pressure in the model as a confounder. Blood pressure may cause stroke, but is more likely a consequence of alcohol use than a cause of it, making blood pressure likely a mediator, not a confounder. As such, adjusting the model for blood pressure would not give the full effect of alcohol on stroke. Studies designed to assess the role of potential causal factors should only present the effect estimate for the hypothesized cause in question, because estimates for other factors in the model are unlikely to be correctly controlled for confounding (sometimes referred to as the “Table 2 fallacy”) [25]. However, it may be helpful to present models adjusting for different sets of confounders, because of the difficulty of unambiguously identifying confounders. Further, presenting both the crude and adjusted estimates for the hypothesized cause may elucidate the extent to which the effect estimate is influenced by the hypothesized confounders.
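The standardization mentioned above can be shown with a small numerical sketch (all numbers hypothetical). Within each stratum of the confounder the exposure has no effect, yet the crude contrast is biased because the confounder is unevenly distributed across exposure groups; standardizing the stratum-specific risks to the total population's confounder distribution removes the distortion:

```python
# Hypothetical counts: confounder level -> (risk if exposed,
# risk if unexposed, n exposed, n unexposed). Within strata the
# exposure does nothing; the confounder is associated with exposure.
strata = {
    "C=1": (0.4, 0.4, 80, 20),
    "C=0": (0.1, 0.1, 20, 80),
}

def crude_risk_difference(strata):
    """Compare overall risk in exposed vs unexposed, ignoring C."""
    n1 = sum(ne for _, _, ne, _ in strata.values())
    n0 = sum(nu for _, _, _, nu in strata.values())
    r1 = sum(re * ne for re, _, ne, _ in strata.values()) / n1
    r0 = sum(ru * nu for _, ru, _, nu in strata.values()) / n0
    return r1 - r0

def standardized_risk_difference(strata):
    """Weight stratum-specific risks by the total population's
    confounder distribution (direct standardization)."""
    n = sum(ne + nu for _, _, ne, nu in strata.values())
    r1 = sum(re * (ne + nu) for re, _, ne, nu in strata.values()) / n
    r0 = sum(ru * (ne + nu) for _, ru, ne, nu in strata.values()) / n
    return r1 - r0

crude = crude_risk_difference(strata)              # spuriously non-zero
adjusted = standardized_risk_difference(strata)    # correctly zero
```

The crude risk difference here is 0.18 purely because the exposed group is drawn mostly from the high-risk stratum; standardization recovers the true null effect, but only because the confounder was identified, measured and included.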
Selection bias occurs when the sample is inadvertently constructed in such a way as to generate a spurious relation, most often through inadvertent selection on common effects of the hypothesized cause and outcome [24], hence sometimes described as “collider bias”. For example, a study assessing the relation of smoking with lung cancer in very old people might find no relation, because the sample is by definition only those who have survived their smoking habit, i.e., is dependent on smoking and not getting lung cancer [26]. Selection bias is difficult to detect, because it may require conceptualizing the relation of the hypothesized causal exposure with disease in the sample absent from the study. For example, a study assessing the relation of obesity with death in people with diabetes [27] will not give a valid causal estimate unless it takes into account the relation of obesity with death in the people with diabetes who are absent from the study because of illness or previous death. Similarly, a spurious link between potential cause and disease may arise from measurement dependent on potential cause and disease. Recovering from selection bias is only possible in certain circumstances, for example when external data are available, but cannot be guaranteed [16]. Biomedical “risk factors” in explanatory models are potentially causal, i.e., manipulating the “risk factor” changes the outcome, and like all causal factors, in everyday experience, would be expected to be consistent within their particular area of application, and hence generalizable (or more precisely transportable [11, 24]) to other situations. However, this consistency may not always be apparent or relevant, because not all parts of an explanatory model may be applicable in all situations. For example, an explanatory model for lung cancer could include smoking and asbestos, among other factors, but attempting to reduce lung cancer by manipulating smoking would not be effective in a non-smoking population.
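The collider bias described at the start of this section can be demonstrated with a small simulation (variables invented for illustration): two completely independent causes of being selected into a study become negatively associated once the analysis is restricted to those selected.

```python
import random

# Simulated collider bias: x and y are independent, but both cause
# selection into the study; conditioning on selection induces a
# spurious negative association between them.
random.seed(0)
n = 100_000
x = [random.random() < 0.5 for _ in range(n)]   # e.g., one cause of selection
y = [random.random() < 0.5 for _ in range(n)]   # an independent second cause
selected = [xi or yi for xi, yi in zip(x, y)]   # sampling conditions on the collider

def phi(a, b):
    """Correlation (phi coefficient) between two binary variables."""
    m = len(a)
    pa = sum(a) / m
    pb = sum(b) / m
    pab = sum(1 for u, v in zip(a, b) if u and v) / m
    return (pab - pa * pb) / (pa * (1 - pa) * pb * (1 - pb)) ** 0.5

full_sample = phi(x, y)  # close to zero: truly independent
within_selected = phi([a for a, s in zip(x, selected) if s],
                      [b for b, s in zip(y, selected) if s])  # clearly negative
```

Nothing causal connects x and y; the association appears only because membership in the analysed sample depends on both, which is exactly the structure behind the smoking-in-the-very-old example above.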
Consideration therefore needs to be given as to how to apply the explanatory model so as to act on relevant causal factors in any given population [24]. It is appropriate to calculate population attributable risks or proportions for explanatory factors, because these are causal factors whose manipulation could impact population health. However, attributable risks or proportions tell us what proportion of the outcome would not have occurred had the exposure been absent, but do not guarantee that this will be the effect of removing the exposure.
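The population attributable fraction referred to here is commonly computed with Levin's formula, sketched below; as the text stresses, it is only meaningful when the relative risk reflects a causal, unconfounded effect, never for mere predictors:

```python
# Levin's population attributable fraction: the proportion of cases
# that would not have occurred had the exposure been absent, assuming
# the relative risk is causal and unconfounded.

def population_attributable_fraction(exposure_prevalence, relative_risk):
    excess = exposure_prevalence * (relative_risk - 1)
    return excess / (1 + excess)

# Illustrative values: an exposure present in half the population that
# doubles risk accounts for one third of cases.
paf = population_attributable_fraction(0.5, 2.0)  # 1/3
```

Even a correct attributable fraction remains a counterfactual statement about the exposure's history, not a promise about the effect of an intervention that removes it now.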

Summary

Predicting and explaining risk of health conditions answer two fundamentally different questions with completely different approaches and implications. In this context, researchers need to identify the intent or purpose of their study, as identifying who is at risk (risk stratification) or what would be an effective intervention (explanation), so as to ensure research questions are addressed appropriately and effectively. Some “risk factors” can be both predictors and explanatory factors at the same time, which may lead to conflation of these terms in the research community. For example, blood pressure is both a predictor and a cause of cardiovascular disease. However, studies where blood pressure was considered as a risk predictor would have a different purpose, research question and approach from studies where blood pressure was considered as an explanatory factor. Prediction and explanation typically require different approaches in terms of conceptualization, modelling, analysis, validation, presentation, interpretation, generalizability and risk attribution, as summarized in Table 1. Risk prediction studies use statistical techniques to generate contextually specific data-driven models, requiring a representative sample, that identify people at risk of disease efficiently, but do not necessarily identify targets of intervention. Explanatory studies, ideally embedded within an explanatory model of reality, test causal factors that might be targets of intervention. Predictive models allow public health practitioners to identify populations at elevated risk of disease to enable targeting of proven interventions on causal factors. Explanatory models allow public health professionals to identify causal factors to target across populations to prevent disease.
Table 1. Attributes of predictive versus causal models

Purpose
  Predictive: Risk stratification, risk prediction or “weather forecasting”
  Causal: To test whether a factor or set of factors is causal

Type of model
  Predictive: Data-driven
  Causal: Explanatory

Type of analysis
  Predictive: Data-driven selection procedure
  Causal: Test of a specific causal model

Role of risk factors
  Predictive: Jointly fit the distribution of the data
  Causal: Potential targets of intervention

Attributes of typical risk factors
  Predictive: Cheap and easy to measure
  Causal: Part of a causal model

Role of confounding
  Predictive: Confounding is a causal concept [24], so it is not relevant
  Causal: Confounders, typically common causes of “risk factor” and health condition [24], need to be identified from external knowledge and their effect on the estimate removed, via adjustment or other means

Type of sample
  Predictive: Representative of the population in which the model will be applied
  Causal: Free from selection bias for the association(s) of interest

Role of measurement error
  Predictive: Consistently poor measurement will impair precision; inconsistently poor measurement will impair predictive power
  Causal: Non-differential misclassification of the exposure or the outcome usually biases towards the null; differential misclassification will bias the estimates; measurement error for confounders impairs the ability to adjust appropriately; measurement error for predictors of missingness impairs the ability to adjust appropriately for selection bias caused by loss to follow-up

Validation technique
  Predictive: Replication in a similar sample
  Causal: Use of control exposures and outcomes [19]; coherence with high-quality estimates [20]

Presentation
  Predictive: Show the association of each “risk factor” with the outcome health condition; measures of model fit
  Causal: Only show the association of the “risk factor(s)” tested for causality with the outcome [25]; effect estimates and 95% confidence intervals

Interpretation
  Predictive: Identification of those at risk of a specific outcome, ideally for preventive action; predictors of risk
  Causal: Identification of a potential cause of the outcome, whose modification might change the risk of the outcome; effects on risk

Risk attribution
  Predictive: Risk predictors should not be used to calculate population attributable fractions or risks, because attribution implies causality and predictive models are not necessarily based on causal factors
  Causal: Causal factors can be used to calculate population attributable fractions or risks, because explanatory models are based on causal factors

Generalizability/transportability
  Predictive: Model may need to be recalibrated for use in a new population
  Causal: May need to consider the distribution of causal factors in a new population to estimate the effect of the causal risk factors on that population

Conclusion

Explicitly distinguishing between the different purposes of observational biomedical studies, and explicitly matching the approach, interpretation and wording to the researcher’s intent, will enable more focused and productive use of research resources. Avoiding the imprecise term “risk factor”, and using a word such as ‘predictor’ in risk stratification studies and ‘explanatory factor’ in causal studies, might bring clarity of thought and thereby reduce unwarranted assumptions in biomedical research.
References (first 10 of 24 shown)

1.  Negative controls: a tool for detecting confounding and bias in observational studies.

Authors:  Marc Lipsitch; Eric Tchetgen Tchetgen; Ted Cohen
Journal:  Epidemiology       Date:  2010-05       Impact factor: 4.822

Review 2.  Cigarette smoking and dementia: potential selection bias in the elderly.

Authors:  Miguel A Hernán; Alvaro Alonso; Giancarlo Logroscino
Journal:  Epidemiology       Date:  2008-05       Impact factor: 4.822

3.  Causal inference and the data-fusion problem.

Authors:  Elias Bareinboim; Judea Pearl
Journal:  Proc Natl Acad Sci U S A       Date:  2016-07-05       Impact factor: 11.205

4.  The table 2 fallacy: presenting and interpreting confounder and modifier coefficients.

Authors:  Daniel Westreich; Sander Greenland
Journal:  Am J Epidemiol       Date:  2013-01-30       Impact factor: 4.897

5.  Prediction of coronary heart disease using risk factor categories.

Authors:  P W Wilson; R B D'Agostino; D Levy; A M Belanger; H Silbershatz; W B Kannel
Journal:  Circulation       Date:  1998-05-12       Impact factor: 29.690

6.  The Coronary Drug Project. Findings leading to discontinuation of the 2.5-mg day estrogen group. The coronary Drug Project Research Group.

Authors: 
Journal:  JAMA       Date:  1973-11-05       Impact factor: 56.272

Review 7.  Mendelian randomization in cardiometabolic disease: challenges in evaluating causality.

Authors:  Michael V Holmes; Mika Ala-Korpela; George Davey Smith
Journal:  Nat Rev Cardiol       Date:  2017-06-01       Impact factor: 32.419

Review 8.  CETP-Inhibition and HDL-Cholesterol: A Story of CV Risk or CV Benefit, or Both.

Authors:  Stephen J Nicholls
Journal:  Clin Pharmacol Ther       Date:  2018-06-27       Impact factor: 6.875

9.  Association Between Telomere Length and Risk of Cancer and Non-Neoplastic Diseases: A Mendelian Randomization Study.

Authors:  Philip C Haycock; Stephen Burgess; Aayah Nounu; Jie Zheng; George N Okoli; Jack Bowden; Kaitlin Hazel Wade; Nicholas J Timpson; David M Evans; Peter Willeit; Abraham Aviv; Tom R Gaunt; Gibran Hemani; Massimo Mangino; Hayley Patricia Ellis; Kathreena M Kurian; Karen A Pooley; Rosalind A Eeles; Jeffrey E Lee; Shenying Fang; Wei V Chen; Matthew H Law; Lisa M Bowdler; Mark M Iles; Qiong Yang; Bradford B Worrall; Hugh Stephen Markus; Rayjean J Hung; Chris I Amos; Amanda B Spurdle; Deborah J Thompson; Tracy A O'Mara; Brian Wolpin; Laufey Amundadottir; Rachael Stolzenberg-Solomon; Antonia Trichopoulou; N Charlotte Onland-Moret; Eiliv Lund; Eric J Duell; Federico Canzian; Gianluca Severi; Kim Overvad; Marc J Gunter; Rosario Tumino; Ulrika Svenson; Andre van Rij; Annette F Baas; Matthew J Bown; Nilesh J Samani; Femke N G van t'Hof; Gerard Tromp; Gregory T Jones; Helena Kuivaniemi; James R Elmore; Mattias Johansson; James Mckay; Ghislaine Scelo; Robert Carreras-Torres; Valerie Gaborieau; Paul Brennan; Paige M Bracci; Rachel E Neale; Sara H Olson; Steven Gallinger; Donghui Li; Gloria M Petersen; Harvey A Risch; Alison P Klein; Jiali Han; Christian C Abnet; Neal D Freedman; Philip R Taylor; John M Maris; Katja K Aben; Lambertus A Kiemeney; Sita H Vermeulen; John K Wiencke; Kyle M Walsh; Margaret Wrensch; Terri Rice; Clare Turnbull; Kevin Litchfield; Lavinia Paternoster; Marie Standl; Gonçalo R Abecasis; John Paul SanGiovanni; Yong Li; Vladan Mijatovic; Yadav Sapkota; Siew-Kee Low; Krina T Zondervan; Grant W Montgomery; Dale R Nyholt; David A van Heel; Karen Hunt; Dan E Arking; Foram N Ashar; Nona Sotoodehnia; Daniel Woo; Jonathan Rosand; Mary E Comeau; W Mark Brown; Edwin K Silverman; John E Hokanson; Michael H Cho; Jennie Hui; Manuel A Ferreira; Philip J Thompson; Alanna C Morrison; Janine F Felix; Nicholas L Smith; Angela M Christiano; Lynn Petukhova; Regina C Betz; Xing Fan; Xuejun Zhang; Caihong Zhu; Carl D Langefeld; Susan D Thompson; Feijie Wang; Xu Lin; David 
A Schwartz; Tasha Fingerlin; Jerome I Rotter; Mary Frances Cotch; Richard A Jensen; Matthias Munz; Henrik Dommisch; Arne S Schaefer; Fang Han; Hanna M Ollila; Ryan P Hillary; Omar Albagha; Stuart H Ralston; Chenjie Zeng; Wei Zheng; Xiao-Ou Shu; Andre Reis; Steffen Uebe; Ulrike Hüffmeier; Yoshiya Kawamura; Takeshi Otowa; Tsukasa Sasaki; Martin Lloyd Hibberd; Sonia Davila; Gang Xie; Katherine Siminovitch; Jin-Xin Bei; Yi-Xin Zeng; Asta Försti; Bowang Chen; Stefano Landi; Andre Franke; Annegret Fischer; David Ellinghaus; Carlos Flores; Imre Noth; Shwu-Fan Ma; Jia Nee Foo; Jianjun Liu; Jong-Won Kim; David G Cox; Olivier Delattre; Olivier Mirabeau; Christine F Skibola; Clara S Tang; Merce Garcia-Barcelo; Kai-Ping Chang; Wen-Hui Su; Yu-Sun Chang; Nicholas G Martin; Scott Gordon; Tracey D Wade; Chaeyoung Lee; Michiaki Kubo; Pei-Chieng Cha; Yusuke Nakamura; Daniel Levy; Masayuki Kimura; Shih-Jen Hwang; Steven Hunt; Tim Spector; Nicole Soranzo; Ani W Manichaikul; R Graham Barr; Bratati Kahali; Elizabeth Speliotes; Laura M Yerges-Armstrong; Ching-Yu Cheng; Jost B Jonas; Tien Yin Wong; Isabella Fogh; Kuang Lin; John F Powell; Kenneth Rice; Caroline L Relton; Richard M Martin; George Davey Smith
Journal:  JAMA Oncol       Date:  2017-05-01       Impact factor: 31.777

10.  Genetic Variants Related to Longer Telomere Length are Associated with Increased Risk of Renal Cell Carcinoma.

Authors:  Mitchell J Machiela; Jonathan N Hofmann; Robert Carreras-Torres; Kevin M Brown; Mattias Johansson; Zhaoming Wang; Matthieu Foll; Peng Li; Nathaniel Rothman; Sharon A Savage; Valerie Gaborieau; James D McKay; Yuanqing Ye; Marc Henrion; Fiona Bruinsma; Susan Jordan; Gianluca Severi; Kristian Hveem; Lars J Vatten; Tony Fletcher; Kvetoslava Koppova; Susanna C Larsson; Alicja Wolk; Rosamonde E Banks; Peter J Selby; Douglas F Easton; Paul Pharoah; Gabriella Andreotti; Laura E Beane Freeman; Stella Koutros; Demetrius Albanes; Satu Mannisto; Stephanie Weinstein; Peter E Clark; Todd E Edwards; Loren Lipworth; Susan M Gapstur; Victoria L Stevens; Hallie Carol; Matthew L Freedman; Mark M Pomerantz; Eunyoung Cho; Peter Kraft; Mark A Preston; Kathryn M Wilson; J Michael Gaziano; Howard S Sesso; Amanda Black; Neal D Freedman; Wen-Yi Huang; John G Anema; Richard J Kahnoski; Brian R Lane; Sabrina L Noyes; David Petillo; Leandro M Colli; Joshua N Sampson; Celine Besse; Helene Blanche; Anne Boland; Laurie Burdette; Egor Prokhortchouk; Konstantin G Skryabin; Meredith Yeager; Mirjana Mijuskovic; Miodrag Ognjanovic; Lenka Foretova; Ivana Holcatova; Vladimir Janout; Dana Mates; Anush Mukeriya; Stefan Rascu; David Zaridze; Vladimir Bencko; Cezary Cybulski; Eleonora Fabianova; Viorel Jinga; Jolanta Lissowska; Jan Lubinski; Marie Navratilova; Peter Rudnai; Neonila Szeszenia-Dabrowska; Simone Benhamou; Geraldine Cancel-Tassin; Olivier Cussenot; H Bas Bueno-de-Mesquita; Federico Canzian; Eric J Duell; Börje Ljungberg; Raviprakash T Sitaram; Ulrike Peters; Emily White; Garnet L Anderson; Lisa Johnson; Juhua Luo; Julie Buring; I-Min Lee; Wong-Ho Chow; Lee E Moore; Christopher Wood; Timothy Eisen; James Larkin; Toni K Choueiri; G Mark Lathrop; Bin Tean Teh; Jean-Francois Deleuze; Xifeng Wu; Richard S Houlston; Paul Brennan; Stephen J Chanock; Ghislaine Scelo; Mark P Purdue
Journal:  Eur Urol       Date:  2017-08-07       Impact factor: 20.096

