What’s Hot in Mathematical Philosophy? Formal Epistemology of Medicine


The latest issue of The Reasoner hosts two contributions from the PhilPharm team: one by Jürgen Landes on Jeffrey vs. Leitgeb & Pettigrew’s updating rules, and one by myself, introducing the subsection devoted to the Formal Epistemology of Medicine within the “What’s Hot in Mathematical Philosophy?” column. You can download the pdf version of the gazette here: TheReasoner-122.

Formal Epistemology of Medicine

This report inaugurates a subsection within the “What’s Hot in Mathematical Philosophy?” column devoted to the “Formal Epistemology of Medicine”. This new strand of research analyses issues arising in medical epistemology by examining the interaction of methodological, social and regulatory dimensions in medicine. The motivation for adopting a formal approach stems from its greater capacity to describe the “rules of the game” and to provide an analytic explanatory account of the investigated phenomena. The idea emerges out of the ERC project “Philosophy of Pharmacology: Safety, Statistical Standards, and Evidence Amalgamation”, hosted by the MCMP until June 2017 and now by the Univpm (Ancona, Italy), with the MCMP remaining involved as an additional beneficiary. The project consists of two main research strands: 1) developing a justificatory framework for the probabilistic confirmation of causal hypotheses; 2) developing a game-theoretic approach to epistemic issues around (medical) evidence.

1. Formalisation of scientific inference within the Bayesian epistemology tradition has generally aimed at providing mathematical explanations of various inferential phenomena in the sciences: the confirmatory support of coherent evidence, the confirmatory role of explanatory power, the role of replication in assessing the reliability of evidence, and the no-alternatives and no-miracles arguments (see e.g. Crupi V., Chater N., & Tentori K. New axioms for probability and likelihood ratio measures. British Journal for the Philosophy of Science, 2013, 64(1): 189–204; Dawid R., Hartmann S., & Sprenger J. The No Alternatives Argument. British Journal for the Philosophy of Science, 2015, 66: 213–234; Fitelson B. A probabilistic theory of coherence. Analysis, 2003, 63(279): 194–199). We drew on this tradition in order to exploit the confirmatory support of heterogeneous sources of evidence, and to expand the justificatory toolset in such domains as drug risk management and policy-making (Landes J., Osimani B., & Poellinger R. (2017). Epistemology of causal inference in pharmacology: Towards a framework for the assessment of harms. European Journal for Philosophy of Science). This also goes in the direction advocated by Gelman (Gelman A. Working through some issues. Significance, 2015, 12(3): 33–35) and Marsman et al. (A Bayesian bird’s eye view of ‘Replications of important results in social psychology’. R. Soc. Open Sci., 2017, 4(1): 160426), who invoke a more comprehensive approach to evidence in the aftermath of the “reproducibility crisis”. In analogy with Bogen and Woodward’s distinction between data and phenomena (Bogen J., & Woodward J. Saving the Phenomena. The Philosophical Review, 1988, 97(3): 303–352), our framework breaks down the inferential path from data to hypotheses into two steps: one from data to abstract causal indicators, the other from such indicators to the causal hypothesis itself.
This also helps dispel some crosstalk in the philosophical literature generated by conflating ontological, epistemological, and methodological issues around causal inference.
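The two-step path admits a schematic Bayesian reading. As a minimal sketch (all numbers are purely illustrative, and the screening-off assumption is mine, not a claim about the framework itself): if the causal indicator I screens off the data D from the hypothesis H, confirmation factors through the indicator by simple marginalisation.

```python
# Schematic two-step inference: data -> causal indicator -> hypothesis.
# Hypothetical numbers; assumes the indicator I screens off data D from H,
# so that P(H|D) = P(H|I)P(I|D) + P(H|~I)P(~I|D).

p_I_given_D = 0.8       # step 1: how strongly the data support the indicator
p_H_given_I = 0.9       # step 2: how strongly the indicator supports H
p_H_given_notI = 0.2    # H remains possible even without the indicator

# Marginalise over the indicator to chain the two steps together
p_H_given_D = p_H_given_I * p_I_given_D + p_H_given_notI * (1 - p_I_given_D)
print(p_H_given_D)      # ~0.76: confirmation of H mediated by the indicator
```

The point of the decomposition is visible in the arithmetic: how much D confirms H depends jointly on the data-to-indicator step and the indicator-to-hypothesis step, which can be assessed separately.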

2. The scientific ecosystem in which the above epistemic dynamics are embedded is characterised by the joint interaction of several features: 1) medical products are so-called “credence products”, that is, products whose quality the consumer (the medical community, patients, and the public health system) cannot evaluate prior to (and often not even after) consumption; 2) information asymmetry affects epistemic interchange at various levels (patient vs. doctor, policy makers vs. scientific community, state-of-the-art knowledge vs. Nature), and may obviously be exploited, leading to phenomena such as supplier-induced demand or disease mongering; 3) producers of medical knowledge often have vested interests in research outputs and their dissemination, leading them to engage in strategic behaviour regarding the exhibition of evidence (whose features may also evolve over time: see Bennett Holman, The Fundamental Antagonism: Science and Commerce in Medical Epistemology. 2015, PhD Dissertation, University of California, Irvine). This strongly impacts the processes and norms governing the production and evaluation of evidence and its use for decisions (see also Teira D. On the normative foundations of pharmaceutical regulation. In: La Caze A., Osimani B. (2018) Uncertainty in Pharmacology: Epistemology, Methods and Decisions. Boston Studies in the Philosophy and History of Science, Springer).

Various institutional instruments have been developed in order to address these issues: evidential standards (e.g. evidence hierarchies proposed within the EBM paradigm), decision-rules (e.g. the precautionary principle), and deontological norms.

We started to investigate the joint interaction of such dimensions by developing a Bayesian model of hypothesis confirmation which takes into account both random and systematic error (Landes J., Osimani B. (2018). Varieties of Error and Varieties of Evidence in Scientific Inference, under review). In particular, we examined the interplay of coherence and consistency of evidence with source reliability. Our results partly confirm Bovens and Hartmann (Bovens L., & Hartmann S. (2003). Bayesian Epistemology. OUP) and Claveau (Claveau F. The Independence Condition in the Variety-of-Evidence Thesis. Philosophy of Science, 2013, 80: 94–118), who investigate similar epistemic dynamics; however, we found that Bovens and Hartmann’s results concerning the failure of the variety of evidence thesis (VET) mainly rely on their randomizing instrument randomizing in a specific way: when its probability of delivering positive reports (no matter what the truth is) exceeds .5, the instrument tends to be a “yes-man”, whereas it is a “nay-sayer” if this probability drops below .5. In the former case, consistency of positive reports from the same instrument speaks in favour of it being a randomizer (and therefore weakens their confirmatory strength), whereas the opposite holds in the latter case, which explains the VET failure there. In our model the VET fails too, but the area of failure is considerably smaller and depends on the ratio of false to true positives of the biased instrument vs. the reliable instrument affected by random error; the take-home message is that replication with the same instrument is favoured when the noise of the reliable instrument exceeds the systematic error of the biased one. We plan to further explore these results by modelling different sorts of replication and features of reliability in various scientific settings, and to embed them in an extended framework where more agents/groups are involved in strategic behaviour.
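The yes-man/nay-sayer dynamics can be made vivid with a minimal Bovens–Hartmann-style sketch (a deliberate simplification for illustration only, not the model of the paper under review; all parameter values are hypothetical). An instrument is reliable with prior probability rho, in which case it reports the truth; otherwise it is a randomizer that reports “positive” with probability a regardless of the truth. Comparing the likelihood ratio of n consistent positive reports from one instrument against one positive report from each of n independent instruments shows where variety beats replication and where it does not.

```python
# Illustrative comparison of "same instrument" vs. "varied instruments".
# Hypothetical parameters:
#   rho : prior probability that an instrument is reliable (reports the truth)
#   a   : probability that a randomizing instrument reports "positive",
#         whatever the truth (a > .5: "yes-man"; a < .5: "nay-sayer")
#   n   : number of consistent positive reports

def lr_same(rho: float, a: float, n: int) -> float:
    """Likelihood ratio P(E|H)/P(E|not-H) for n positive reports
    from ONE instrument of unknown reliability."""
    # P(E|H) = rho + (1-rho)*a**n ;  P(E|not-H) = (1-rho)*a**n
    return (rho + (1 - rho) * a**n) / ((1 - rho) * a**n)

def lr_varied(rho: float, a: float, n: int) -> float:
    """Likelihood ratio for one positive report from each of n
    INDEPENDENT instruments (reliability drawn independently)."""
    return ((rho + (1 - rho) * a) / ((1 - rho) * a)) ** n

def posterior(h: float, lr: float) -> float:
    """Posterior P(H|E) from prior h and likelihood ratio lr."""
    return h * lr / (h * lr + (1 - h))

# Nay-sayer randomizer (a < .5): consistent positives from the same
# instrument speak against its being a randomizer, and the VET fails:
print(lr_same(0.2, 0.3, 2) > lr_varied(0.2, 0.3, 2))   # True: VET fails
# Yes-man randomizer (a > .5): variety wins, as the VET predicts:
print(lr_same(0.2, 0.7, 2) < lr_varied(0.2, 0.7, 2))   # True: VET holds
```

With rho = 0.2 and n = 2, the crossover sits exactly where the text locates it: replication beats variety only in the nay-sayer region, because there a run of positives is evidence that the instrument is not a randomizer.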

Barbara Osimani


