The latest issue of The Reasoner features two contributions from the PhilPharm team: one by Jürgen Landes on Jeffrey's vs. Leitgeb and Pettigrew's updating rules, and one by myself, introducing the subsection devoted to the Formal Epistemology of Medicine within the “What’s Hot in Mathematical Philosophy?” column. You can download the pdf version of the gazette here: TheReasoner-122.
Formal Epistemology of Medicine
This report inaugurates a subsection within the “What’s Hot in Mathematical Philosophy?” column, which will be devoted to the “Formal Epistemology of Medicine”. This new strand of research analyses issues arising in medical epistemology by examining the interaction of methodological, social and regulatory dimensions in medicine. The motivation for adopting a formal approach stems from its greater capacity to describe the “rules of the game” and to provide an analytic, explanatory account of the phenomena under investigation. The idea grew out of the ERC project “Philosophy of Pharmacology: Safety, Statistical Standards, and Evidence Amalgamation”, hosted by the MCMP until June 2017 and now by the Univpm (Ancona, Italy), with the MCMP remaining involved as an additional beneficiary. The project consists of two main research strands: 1) developing a justificatory framework for the probabilistic confirmation of causal hypotheses; 2) a game-theoretic approach to epistemic issues around (medical) evidence.
1. Formalisation of scientific inference within the Bayesian epistemology tradition has generally aimed at providing mathematical explanations of various inferential phenomena in the sciences: the confirmatory support of coherent evidence, the confirmatory role of explanatory power, the role of replication in assessing the reliability of evidence, and the no-alternatives and no-miracles arguments (see e.g. Crupi V., Chater N., & Tentori K. New axioms for probability and likelihood ratio measures. British Journal for the Philosophy of Science, 2013, 64(1), 189–204; Dawid R., Hartmann S., & Sprenger J. The No Alternatives Argument. British Journal for the Philosophy of Science, 2015, 66, 213–234; Fitelson B. A probabilistic theory of coherence. Analysis, 2003, 63(279), 194–199). We draw on this tradition in order to exploit the confirmatory support of heterogeneous sources of evidence, and to expand the justificatory toolset in such domains as drug risk management and policy-making (Landes J., Osimani B., Poellinger R. (2017) Epistemology of causal inference in pharmacology. Towards a framework for the assessment of harms. European Journal for Philosophy of Science). This also goes in the direction advocated by Gelman (Gelman A. Working through some issues. Significance, 2015, 12(3), 33–35) and Marsman et al. (A Bayesian bird’s eye view of ‘Replications of important results in social psychology’. Royal Society Open Science, 2017, 4(1): 160426), who invoke a more comprehensive approach to evidence in the aftermath of the “reproducibility crisis”. In analogy with Bogen and Woodward’s distinction between data and phenomena (Bogen J., Woodward J. Saving the Phenomena. The Philosophical Review, 1988, 97(3): 303–352), our framework breaks down the inferential path from data to hypotheses into two steps: one from data to abstract causal indicators; the other from such indicators to the causal hypothesis itself.
This also helps dispel some crosstalk in the philosophical literature generated by conflating ontological, epistemological, and methodological issues around causal inference.
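Under simple conditional independence assumptions, the two-step inferential path can be sketched as a marginalisation over the intermediate indicator. The following is a minimal illustrative reading on our part, not the authors' formal model, and all numerical values are invented for the example:

```python
# Toy sketch of a two-step inferential path: data -> causal indicator -> hypothesis.
# Assumed structure: a chain in which the hypothesis H is independent of the
# data D given the indicator I. All probabilities are illustrative assumptions.

def p_hypothesis_given_data(p_i_given_d, p_h_given_i, p_h_given_not_i):
    """P(H | D), obtained by marginalising over the intermediate indicator I."""
    return p_h_given_i * p_i_given_d + p_h_given_not_i * (1 - p_i_given_d)

# Step 1: the data support the causal indicator (e.g. a dose-response pattern)
# to degree 0.8; Step 2: the indicator in turn supports the causal hypothesis.
print(p_hypothesis_given_data(p_i_given_d=0.8, p_h_given_i=0.9, p_h_given_not_i=0.2))
# -> 0.76
```

The point of the decomposition is that evidence bears on the hypothesis only via the abstract indicator, so the two inferential steps can be assessed (and can fail) separately.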
2. The scientific ecosystem in which the above epistemic dynamics are embedded is characterised by the joint interaction of several features: 1) medical products are so-called “credence products”, that is, products whose quality the consumer (the medical community, patients, and the public health system) cannot evaluate prior to (and often not even after) consumption; 2) information asymmetry affects epistemic exchange at various levels (patient vs. doctor, policy makers vs. scientific community, state-of-the-art knowledge vs. Nature), and may obviously be exploited, leading to phenomena such as supplier-induced demand or disease mongering; 3) producers of medical knowledge often have vested interests in the research outputs and their dissemination, leading them to engage in strategic behaviour regarding the exhibition of evidence (whose features may also evolve in time: see Bennett Holman, The Fundamental Antagonism: Science and Commerce in Medical Epistemology. 2015, PhD Dissertation, University of California, Irvine). This strongly impacts the processes and norms regarding the production and evaluation of evidence and its use for decisions (see also Teira D. On the normative foundations of pharmaceutical regulation. In: La Caze A., Osimani B. (eds.) (2018) Uncertainty in Pharmacology: Epistemology, Methods and Decisions. Boston Studies in the Philosophy and History of Science, Springer).
Various institutional instruments have been developed in order to address these issues: evidential standards (e.g. evidence hierarchies proposed within the EBM paradigm), decision-rules (e.g. the precautionary principle), and deontological norms.
We have started to investigate the joint interaction of these dimensions by developing a Bayesian model of hypothesis confirmation which takes into account both random and systematic error (Landes J., Osimani B. (2018) Varieties of Error and Varieties of Evidence in Scientific Inference, under review). In particular, we examined the interplay of coherence and consistency of evidence with source reliability. Our results partly confirm those of Bovens and Hartmann (Bovens L., & Hartmann S. (2003) Bayesian Epistemology. OUP) and Claveau (Claveau F. The Independence Condition in the Variety-of-Evidence Thesis. Philosophy of Science, 2013, 80, 94–118), who investigate similar epistemic dynamics; however, we find that Bovens and Hartmann’s results concerning the failure of the variety of evidence thesis (VET) mainly rely on their randomizing instrument behaving in a specific way: when its probability of delivering positive reports (no matter what the truth is) is higher than .5, the instrument tends to be a “yes-man”, whereas it is a “nay-sayer” if this probability drops below .5. In the former case, consistency of positive reports from the same instrument speaks in favour of its being a randomizer (and therefore weakens their confirmatory strength), whereas the opposite holds in the latter case, which explains the failure of the VET there. In our model the VET fails too, but the area of failure is considerably smaller and depends on the ratio of false to true positives of the biased vs. the reliable instrument affected by random error; the take-home message is that replication with the same instrument is favoured when the noise of the reliable instrument exceeds the systematic error of the biased one. We plan to explore these results further by modelling different sorts of replication and features of reliability in various scientific settings, and to embed them in an extended framework in which more agents/groups are involved in strategic behaviour.
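The yes-man/nay-sayer mechanism can be reproduced in a toy calculation. The following is our own simplified sketch with a fixed reliability prior, not Bovens and Hartmann's full model (which treats reliability as a variable updated alongside the hypothesis), and all parameter values are illustrative:

```python
# Toy Bovens-Hartmann-style comparison behind the variety-of-evidence thesis (VET).
# An instrument is reliable with probability rho (it then reports the truth) or a
# randomizer (it then reports "positive" with probability a, whatever the truth).
# We compare the posterior of hypothesis H after two positive reports from the
# SAME instrument vs. one positive report each from two INDEPENDENT instruments.

def posterior_same(h, rho, a):
    """P(H | two positive reports from one instrument), prior P(H) = h."""
    like_h = rho + (1 - rho) * a**2    # reliable, or a randomizer positive twice
    like_not_h = (1 - rho) * a**2      # if H is false, only a randomizer reports positives
    return h * like_h / (h * like_h + (1 - h) * like_not_h)

def posterior_varied(h, rho, a):
    """P(H | one positive report from each of two independent instruments)."""
    p_pos_h = rho + (1 - rho) * a      # per-instrument chance of a positive, given H
    p_pos_not_h = (1 - rho) * a        # per-instrument chance of a positive, given not-H
    return (h * p_pos_h**2 /
            (h * p_pos_h**2 + (1 - h) * p_pos_not_h**2))

h, rho = 0.5, 0.2
# "Yes-man" randomizer (a > .5): varied evidence confirms more, so the VET holds.
print(posterior_same(h, rho, 0.9), posterior_varied(h, rho, 0.9))
# "Nay-sayer" randomizer (a < .5): a run of positives speaks against the
# randomizer, so repetition with the same instrument wins and the VET fails.
print(posterior_same(h, rho, 0.1), posterior_varied(h, rho, 0.1))
```

With these numbers the varied pair confirms more for a = 0.9, while the repeated single instrument confirms more for a = 0.1, matching the qualitative diagnosis above.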
The February 2018 issue of The Reasoner contains two contributions by Barbara Osimani and Jürgen Landes. Barbara Osimani published a column on current trends in mathematical philosophy, and Jürgen Landes wrote about a problem for updating beliefs in the light of new evidence.
We want to call for contributions addressing the following topics (broadly construed):
– reliability, its uses and models in the sciences and methodologies (e.g., empirical, theoretical, social)
– reliability, its uses and models in the philosophy of these sciences to include, among other approaches, formal epistemology
– traditional main-stream approaches to reliability
– reliability and the replication crisis
– reliable inference.
Submissions will be due some time in autumn/winter this year.
Barbara Osimani and Jürgen Landes gave talks at Issues in Medical Epistemology, Cologne (14–16 Dec 2017): Barbara Osimani delivered the opening keynote lecture on “Nature, noise and evidence in medicine”. Jürgen Landes presented on “Variety, Reliability, Confirmation and Drug Safety”.
Barbara’s talk can be viewed here: https://youtu.be/zPcu4-QR5TM
Abstract of Osimani:
With respect to other scientific ecosystems, medicine is characterised by the joint interaction of the following factors: 1) medicine is an intrinsically interdisciplinary science: it does not investigate a specific level of reality, like physics or biology, but rather works across levels; 2) this leads to stronger error propagation with regard to causal inference and extrapolation, as well as to explanation and intervention, and therefore to stronger epistemic uncertainty; 3) with therapeutic interventions being associated with an ineliminable amount of risk (“residual risk”), medicine is affected by ethical dilemmas regarding not only conflicting goods, but also the same kind of good: health; 4) high stakes are involved in most decisions at various levels (not only regarding health and well-being, but also implicating existential, psychological, and financial dimensions, as well as policy-making at the societal level); 5) vested interests held by (at least some of) the producers of medical knowledge strongly impact the processes and norms regarding the production, interpretation and evaluation of evidence (strategic behaviour).
Various institutional instruments have been developed in order to address these issues: evidential standards (such as the evidence hierarchies proposed within the evidence-based medicine paradigm), decision rules (e.g. the precautionary principle), and deontological norms such as reproducibility requirements and bias-detection tools. For their part, scientists have advocated a more comprehensive view of evidence, which may incorporate these issues explicitly and keep track of them in the inferential process (Gelman 2015; Marsman et al. 2017).
This talk presents a research program in which these different issues are investigated in their joint interaction, by distinguishing different dimensions of first- and second-order evidence (Landes, Osimani, Poellinger 2017; Osimani 2018). In particular, I will show how our theoretical framework can address problems of causal assessment of harm in pharmacology (Osimani 2013; Poellinger 2018, forthcoming), and other meta-evidential questions such as the analysis of random error vs. bias, and the reproducibility crisis (Landes and Osimani, forthcoming).
1. Gelman A. (2015) Working through some issues. Significance 12(3): 33–35.
2. Landes J., Osimani B., Poellinger R. (2017) Epistemology of causal inference in pharmacology. Towards a framework for the assessment of harms. European Journal for Philosophy of Science.
3. Landes J., Osimani B. (forthcoming) Varieties of Error and Varieties of Evidence in Scientific Inference.
4. Marsman M., Schönbrodt F.D., Morey R.D., Yao Y., Gelman A., Wagenmakers E.-J. (2017) A Bayesian bird’s eye view of ‘Replications of important results in social psychology’. Royal Society Open Science 4(1): 160426.
5. Osimani B. (2018) Epistemic games and epistemic gains. A multilayer approach to causal inference in medicine. In: Osimani B., La Caze A. (eds.) Uncertainty in Pharmacology: Epistemology, Methods and Decisions. Boston Studies in the Philosophy and History of Science, Springer.
6. Osimani B. (2013) Hunting side effects and explaining them: should we reverse evidence hierarchies upside down? Topoi 33(2): 295–312 (Special Issue: Evidence and Causality in the Sciences).
7. Poellinger R. (2018) Analogy-Based Inference Patterns in Pharmacological Research. In: La Caze A., Osimani B. (eds.) Uncertainty in Pharmacology: Epistemology, Methods and Decisions. Boston Studies in the Philosophy and History of Science, Springer.
8. Poellinger R. (forthcoming) On the Ramifications of Theory Choice in Causal Assessment: Indicators of Causation and Their Conceptual Relationships.
Abstract of Landes:
In this talk, I investigate the notions of varied evidence and reliability, their interplay, and their contributions towards hypothesis confirmation within the framework of Landes et al. (2017). In particular, I shall show how one can explicate the notion of varied evidence, how too much positive evidence leads to a sharp drop in assessed reliability (too-good-to-be-true evidence), and whether the hypothesis of interest or biases are more likely given the available evidence.
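The too-good-to-be-true effect can be illustrated with a toy calculation (our own simplified sketch, not the model of Landes et al. 2017; all parameter values are invented): suppose an instrument is either reliable but noisy, reporting the truth with probability r, or a “yes-man” randomizer that reports positives with probability a > r regardless of the truth. As uniformly positive reports accumulate, the assessed reliability of the instrument drops:

```python
# Toy sketch of "too-good-to-be-true" evidence (illustrative assumptions only).
# The instrument is reliable with prior probability rho; a reliable instrument
# reports correctly with probability r, while a "yes-man" randomizer reports
# "positive" with probability a, no matter what the truth is.

def p_reliable(n, rho=0.7, r=0.8, a=0.9, h=0.5):
    """Posterior probability that the instrument is reliable, given n
    uniformly positive reports, with prior probability h for the hypothesis."""
    like_rel = h * r**n + (1 - h) * (1 - r)**n   # reliable: positives track the truth
    like_rand = a**n                             # randomizer: positives regardless of truth
    return rho * like_rel / (rho * like_rel + (1 - rho) * like_rand)

# Because a > r, long unbroken runs of positives are better explained by the
# randomizer, so assessed reliability falls as positive reports pile up:
for n in (1, 5, 10):
    print(n, round(p_reliable(n), 3))
```

With these numbers the posterior reliability falls from the 0.7 prior to roughly 0.26 after ten consecutive positives, which is the qualitative drop the abstract refers to.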
Jürgen Landes gave an introductory lecture at the Salzburg Conference for Young Analytic Philosophy 2017 on 13 September 2017. The lecture was entitled “Philosophy of Pharmacology and Causal Assessment”.
Roland Poellinger, Postdoctoral Researcher at the Munich Center for Mathematical Philosophy (MCMP) visited Professor Jan-Willem Romeijn, Head of the Department of Theoretical Philosophy at the University of Groningen, The Netherlands, from April to May, 2017.
Image caption: Similarity structures everywhere
When I was making plans for an intensive research visit abroad in Spring 2017, there could have been no better place for me than Groningen: what unites philosophers in Munich and Groningen is a strong focus on formal methods and Bayesian reasoning. In much of my work, Bayesian networks are the tool of choice. Using such nets, I have looked at causal decision theory and paradoxes in causal reasoning. I developed ideas for integrating causal and non-causal knowledge in an extension of the Bayes net framework, and more recently I have become highly interested in using Bayes nets for reconstructing analogical arguments in science and for making explicit…