Sirs,

The article by Ian McDowell[1] is valuable for its insights regarding explanations in public health. I wish to add some observations on recent methodologies for evaluating such explanations.

Let us start with the observation that ‘In epidemiology, many of the hypotheses being evaluated in the interpretation of studies can be seen as auxiliary hypotheses in the sense that they are independent of the presence, absence or direction of any causal connection between the study exposure and the disease. Much of the interpretation of epidemiological studies amounts to the testing of such auxiliary explanations for observed associations.’[2] Thus, it is important to understand that epidemiological studies test only parts of the observed association, given that a whole set of factors (sociological, economic, environmental) act in the actual causal mechanism.

The process of conceptualizing and arriving at logical conclusions involves intuition and prior information. It is evident, however, that individuals are inaccurate at intuitively formulating uncertainties when predicting events.[3,4,5] Hence public-health explanations require scrutiny by experts from different fields (sociologists, economists, environmental scientists, epidemiologists, statisticians and historians, to name a few) who not only understand the public-health question in its relevant context but also use such information to provide logical explanations.

Nonetheless, to feed this process, public-health professionals must propose specific causal mechanisms, given that epidemiological observations provide crucial tests of competing explanations. To aid in evaluating these mechanisms, causal diagrams[6] can be used to depict how hypothesized causal networks translate into testable associations.
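As a minimal illustration of how a causal diagram yields a testable association (a hypothetical simulation, not an analysis from any study cited here): the diagram Z → X, Z → Y with no arrow from X to Y predicts that the crude X–Y association is non-null but vanishes after conditioning on the common cause Z.

```python
import numpy as np

# Hypothetical example: Z is a common cause of exposure X and disease Y,
# and there is NO causal effect of X on Y. The diagram predicts a
# non-null crude X-Y association that disappears on adjustment for Z.
rng = np.random.default_rng(0)
n = 100_000
z = rng.normal(size=n)            # unmeasured-in-the-crude-analysis confounder
x = 0.8 * z + rng.normal(size=n)  # exposure: caused by Z only
y = 0.8 * z + rng.normal(size=n)  # disease: caused by Z only

# Crude regression of Y on X (confounded association)
crude = np.polyfit(x, y, 1)[0]

# Regression of Y on X adjusting for Z (conditioning on the common cause)
design = np.column_stack([x, z, np.ones(n)])
adjusted = np.linalg.lstsq(design, y, rcond=None)[0][0]

print(f"crude slope {crude:.3f}, adjusted slope {adjusted:.3f}")
```

The crude slope is clearly non-zero while the adjusted slope is approximately zero, matching the diagram's testable implication.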
And while Hume and later Popper successfully argued that we cannot inductively ‘prove’ hypotheses, others have argued that deduction has limited scientific utility because we cannot ensure the truth of all the premises even if the logical argument is valid; hence theory formation and enumerative induction remain an essential part of scientific explanation.[7,8] As aids to these processes, we have the deductive methodology of Bayesian probability logic, which translates personal probabilities about the premises of valid arguments into personal probabilities about their deductive conclusions,[9,10] and bias analysis,[11] which combines Bayesian and sensitivity-analysis concepts to evaluate the plausibility of alternative explanations. I agree with those who argue for incorporating these methodologies into standard epidemiological training.
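A brief sketch of how such a bias analysis proceeds in practice, using entirely illustrative numbers (the observed risk ratio and the prior distributions below are assumptions for this example, not results from any cited study): personal probability distributions are placed on the bias parameters for an unmeasured binary confounder, and the standard external-adjustment formula for a risk ratio is applied over Monte Carlo draws.

```python
import numpy as np

# Probabilistic (Monte Carlo) bias analysis for an unmeasured binary
# confounder. All inputs are illustrative assumptions.
rng = np.random.default_rng(1)
k = 50_000
rr_obs = 1.8                      # assumed observed (crude) risk ratio

# Personal (prior) distributions for the bias parameters
p1 = rng.uniform(0.4, 0.7, k)     # confounder prevalence among exposed
p0 = rng.uniform(0.1, 0.3, k)     # confounder prevalence among unexposed
rr_cd = rng.uniform(1.5, 3.0, k)  # confounder-disease risk ratio

# External-adjustment bias factor and bias-corrected risk ratios
bias = (p1 * (rr_cd - 1) + 1) / (p0 * (rr_cd - 1) + 1)
rr_adj = rr_obs / bias

lo, med, hi = np.percentile(rr_adj, [2.5, 50, 97.5])
print(f"bias-adjusted RR: median {med:.2f}, 95% interval ({lo:.2f}, {hi:.2f})")
```

The resulting distribution of adjusted risk ratios summarizes how plausible the confounding explanation is under the stated priors, which is precisely the kind of quantitative scrutiny of an auxiliary hypothesis argued for above.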