- Philosophy of Pharmacology
- Drug Safety
- Risk Assessment and Epistemic Asymmetry
- Evidence Synthesis
- Foundations of Statistics
- Drug Regulation
- Social Epistemology of Pharmacology
Philosophy of Pharmacology
Pharmacology blends science and technology in a peculiar fashion. It works across levels of reality while intervening directly only at the biochemical level: the direct domain of action of drug molecules is limited to protein receptors, whereas the desired end-effects are clinically observable results. However, because the proteins with which drug molecules interact are embedded in various, possibly interacting, biological pathways, it is no mean feat to determine whether a drug caused an observed (side-)effect or not.
Drug Safety
The problem of collecting, analyzing and evaluating evidence on pharmaceutical safety is a central problem of health-care practice. Drug-induced hospitalizations account for between 5% and 10% of all hospitalizations in Europe and other Western countries (Stausberg and Hasford 2011; Wu et al. 2010; Brvar et al. 2009; van der Hooft et al. 2008; Kongkaew et al. 2008; Aktionsbündnis Patientensicherheit 2007). According to the European Medicines Agency, some 197,000 people in the European Union die each year as a result of adverse drug reactions (EMA 2012). Drug-induced toxicities in the U.S., which rank among the top 10 causes of death, result in health-care costs of $30 billion annually (Mokdad et al. 2000; Wysowski et al. 2005). Pirmohamed et al. (2004) estimated annual costs of approximately £500 million for the U.K. as a result of adverse drug events, and the annual direct costs for Germany have been estimated at €400 million (Schneeweiss et al. 2002). ADRs are a concern for the industry too, in that the attrition rate (the proportion of would-be drugs whose development is interrupted before reaching the market, over the total R&D portfolio) is continuously increasing (Hay et al. 2014). This concern is also one of the reasons for the development of the Innovative Medicines Initiative (IMI) within the 7th Framework Programme.
ADRs are thus responsible for a heavy economic and social burden (Lundkvist and Jönsson 2004); they also constitute an extremely vulnerable point for the health system and a key ethical problem for decisions concerning pharmaceutical products. Partly in view of this, the European Parliament and the European Council have recently changed the regulation of pharmacovigilance practice (Directive 2010/84/EU; Regulation (EU) No 1235/2010, entered into force in July 2012), putting special emphasis on joint efforts towards what can be considered an information-based (rather than power-based) approach to pharmaceutical risk assessment. The related guidelines encourage the integration of information coming from different sources of safety signals (spontaneous case reports, literature, data-mining, pharmacoepidemiological studies, post-marketing trials, drug utilization studies, non-clinical studies, late-breaking information; see the EMA-HMA guidelines on pharmacosurveillance, Modules VII-X, and Herxheimer 2012). This means that causal inference should rely not only on sample representativeness, but also on a mixture of methodological tools and available knowledge (coming from basic science and theory in general, as well as study-specific knowledge).
Yet the methodological basis for implementing such a policy is shaky, in that causal assessment of adverse drug reactions (ADRs) is still parasitic on the (statistical) methods developed to test drug efficacy (see also Senn 2007). It is doubtful, however, whether orthodox techniques for assessing treatment efficacy are equally effective in assessing causal relationships between pharmaceuticals and suspected adverse reactions, especially considering that one is dealing with unintended (and undesired) consequences of interventions (what economists would call "externalities").
Furthermore, a considerable proportion of drug withdrawals are based on individual case studies or case series reporting dramatic or fatal effects (Olivier and Montastruc 2006; Arnaiz et al. 2001); for less dramatic outcomes, no clear guidance is available. A recent methodological review (Price 2014), co-authored by FDA statisticians, points to the unique challenges faced by safety evaluation in drug development and surveillance. The review advocates Bayesian methods for the design and analysis of safety trials because of their ability to incorporate historical (heterogeneous) knowledge in the prior, to adapt sample size on the basis of accruing knowledge, and generally to uncover problems at an earlier stage. Yet the legitimate emphasis on avoiding bias and confounding may obscure the different challenges posed by efficacy vs. safety evaluation (see the section on Risk Assessment and Epistemic Asymmetry). The time has come for a paradigm change in safety assessment and decision making.
Philosophy, especially philosophy of science, can make a difference in this matter, since it possesses a sophisticated toolkit for addressing methodological issues in scientific inference and their epistemological underpinnings. More specifically, the extensive debate on causality and probability has produced many relevant contributions to the epistemology of statistical inference (Hartmann and Sprenger 2011; Pearl 2000; Papineau 2001, 1989; Spirtes, Glymour and Scheines 2000; Woodward 2003; Cartwright 2007b), and an intensive philosophical reflection on causal modelling has helped to clarify statistical and scientific inference. The philosophical debate has also insisted on a plurality of methods for diagnosing causality and of ways of conceptualizing it; in particular, the focus on processes and mechanisms (Craver 2005; Mitchell 2009; Bechtel 2011) helps to balance the strong emphasis on statistical hypothesis testing adopted by regulators. Furthermore, the analysis of causal mechanisms and processes has raised awareness that both traditional analyses of causation and standard methods of causal inference are limited by the fact that they rely preponderantly on linear, sequential models of causality. In light of this, both methodologists and philosophers are increasingly proposing new approaches to causality which take into account system-level causality (Casini et al. 2011), causal circularity (Clarke et al. forthcoming), and causal interaction leading to nonlinear models (VanderWeele and Robins 2007).
Risk Assessment and Epistemic Asymmetry
"It may be unfair to invoke bias and confounding to discredit observational studies as a source of evidence on harms": Papanikolaou PN, Christidi GD, Ioannidis JPA (2006) Comparison of evidence on harms of medical interventions in randomized and nonrandomized studies. CMAJ 174(5): 635-641.
Much of the evidence for harms comes from anecdotal reports, case series, or survey data, which standard guidelines of evidence evaluation regard as of poorer quality than controlled (randomized) experiments. Although this "lower level" evidence is increasingly acknowledged as a valid source of information contributing to the assessment of the risk profile of medications, on theoretical (Aronson and Hauben 2006; Hauben and Aronson 2007; Howick et al. 2009) or empirical grounds (Benson and Hartz 2000; Golder et al. 2011), current practices have difficulty assigning a precise epistemic status to this kind of evidence and integrating it with more standard methods of hypothesis testing. The philosophical debate has already addressed similar questions in relation to the assessment of treatment efficacy (Worrall 2010, 2007; Teira 2011; Papineau 2001, 1993, 1989; Cartwright 2007a). In particular, criticisms have been levelled against the Evidence-Based Medicine paradigm and its ranking of evidence by essentially relying on randomization (Worrall 2010; Cartwright 2007, 2011; Teira 2011; see Osimani 2013). However, none of these contributions expressly addresses the specific issues arising in causal assessment for harm. Epidemiologists such as Vandenbroucke, as well as Ioannidis and colleagues, have recognized the distinctive virtues and drawbacks of randomization in efficacy vs. safety assessment. In a paper comparing randomized and non-randomized studies in the assessment of adverse outcomes, Papanikolaou et al. (2006) argue that "it may be unfair to invoke bias and confounding to discredit observational studies as a source of evidence on harms" (emphasis added). These suggestions have noteworthy implications for the current emphasis on evidence hierarchies, since they imply an asymmetry in the way evidence of benefits and risks of health technologies should be evaluated.
However, such suggestions are not grounded on a sound epistemic basis and seem rather ad hoc, although intuitively correct.
In an effort to explain this asymmetry, Osimani (2013b) identified the following reasons for developing new methods for safety assessment, as a complement to current standards of evidence evaluation for efficacy assessment. Jointly, these motivate the project:
- Distinctive loss functions in safety vs. efficacy assessment: As Rudén and Hansson (2008) point out, the focus of research in risk detection is on false negatives rather than on false positives; in safety matters, failing to detect genuine causation is more dangerous than mistaking spurious for authentic causation. This can also be seen as a problem of "reversed" external validity: in the case of unintended/unexpected effects, the information sought is not whether the target population will experience the same outcomes observed in the study population, but whether it will experience additional outcomes which were not detected during the study.
- As a consequence, the issue of impartiality also "changes sign" and assumes opposite characteristics in efficacy vs. safety assessment. Efficacy must be tested against fraud (which explains the success of randomized trials in pharmaceutical regulation: Teira 2011). For negative outcomes, fraud is instead linked to holding back safety information (see the Vioxx and Cronassial cases, not to mention the Contergan case: Osimani 2007), hence the point of contention is reversed. Teira (2011) conceptualizes impartiality as the quality of a procedure that prevents uncertainty from being exploited by any of the parties involved.
- Risk-benefit balance and the precautionary principle: Pharmaceutical risk management and decision making follow criteria analogous to those developed for standard health technology assessment, i.e. the evaluation of costs and benefits lato sensu. This means that pharmaceutical products are kept on the market insofar as the expected benefit outweighs the expected harm. A problem arises, however, when a harmful effect is only suspected to be associated with the drug but a causal connection between them has not yet been established. Following the precautionary principle, hypotheses of causal relationships need not be rejected or accepted: it is sufficient that they are strong enough with respect to the risk associated with the technology under examination (Räpple 1991; Scheu 2003). This change of paradigm in administrative and tort rule, where in principle causation needs to be established for assigning culpability, has mainly been fostered by environmental law (Di Fabio 1994), but was anticipated by German legislation on pharmaceuticals owing to the pressure of the Contergan case and the related sentence (Landesgericht Aachen, 18.12.1970 – 4 KMs 1/68, 15 – 115/67, Juristische Zeitung p. 515). Still, adequate responses in methodological practice are lacking.
- Cumulative learning and the virtues of probabilistic vs. categorical causal assessment: Probabilistic causal assessment is also indispensable in situations of cumulative, progressive learning. Evidence about adverse events accumulates over time, and there comes a point where the signal strongly suggests causation without demonstrating it.
- Integration of prior knowledge (theory, historical data, knowledge of same-class molecules): The rationale behind the introduction of the precautionary principle in the pharmaceutical domain – also mirrored in the notion of "development risk" (or "potential risk") and in the very existence of a pharmacosurveillance system – reflects a high default prior for an undefined latent risk (Osimani 2013a). Frequentist statistics does not allow priors to be incorporated in hypothesis evaluation. This is particularly detrimental in the case of harm assessment, considering that much knowledge of a drug's behavior may be inferred analogically from same-class molecules. Furthermore, most compounds are characterized by promiscuity, meaning that they bear some affinity to off-target proteins: this is at the origin of most side-effects at the clinical level. Hence, integrating information about biochemical constraints, molecular mechanisms and biological pathways at the systems level considerably enhances the predictability of drug-organism interactions (Xie et al. 2009).
- RCTs deliver limited and purposely decontextualized information; they were originally developed to test the effect of fertilizers on plant growth. The causal structure there is much closer to physical causality (Thompson 2011) – plots of land do not react to fertilizers in the way human beings absorb and metabolize drugs (pharmacokinetics, pharmacodynamics) – and is not as rich in feedback loops, threshold effects and interactive causality as complex biological systems, where such phenomena are far more frequent and entrenched. Hence another important reason for developing an alternative to current standards of causal assessment in pharmacology is the ontological complexity of the biological mechanisms upon which one intervenes.
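The first point above, the asymmetry of loss functions, can be made concrete with a minimal decision-theoretic sketch. The costs below are purely illustrative (not taken from the literature): when a missed harm is costed much more heavily than a false alarm, the probability at which flagging a suspected ADR becomes optimal drops well below the symmetric 0.5 threshold.

```python
def flag_threshold(cost_fn: float, cost_fp: float) -> float:
    """Probability of causation above which flagging the suspected ADR
    minimises expected loss, for simple 0/cost losses:
    expected loss of flagging   = (1 - p) * cost_fp
    expected loss of not flagging = p * cost_fn
    Flag when p > cost_fp / (cost_fp + cost_fn)."""
    return cost_fp / (cost_fp + cost_fn)

# Safety setting: missing a real harm (false negative) is costed ten
# times worse than raising a spurious alarm (false positive).
t_safety = flag_threshold(cost_fn=10.0, cost_fp=1.0)

# Efficacy-style symmetric costs recover the familiar 0.5 threshold.
t_efficacy = flag_threshold(cost_fn=1.0, cost_fp=1.0)
```

With these hypothetical costs the safety threshold is 1/11, i.e. far weaker evidence suffices to act on a suspected harm than to accept an efficacy claim.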
Philosophers have long investigated the notion of "causality" and how to draw causal inferences. We currently favor a Bayesian approach to causal inference. However, we are also interested in an approach using the belief-function formalism of Dempster and Shafer; in particular, we like the constructive interpretation of probabilities within this approach. This research interest was piqued by Professor Shafer's visit to the MCMP.
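As a toy illustration of the Dempster-Shafer formalism, the sketch below combines two hypothetical mass functions about a suspected harm using Dempster's rule of combination. The mass assignments are invented for illustration only; they do not come from any actual pharmacovigilance source.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for mass functions whose focal
    elements are frozensets. Conflicting mass (empty intersections)
    is discarded and the remainder renormalised."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    if conflict >= 1.0:
        raise ValueError("total conflict: sources cannot be combined")
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

H = frozenset({"harm"})
N = frozenset({"no_harm"})
theta = H | N  # the whole frame of discernment: mass left undecided

# Two hypothetical evidence sources about a suspected ADR.
reports = {H: 0.6, theta: 0.4}            # spontaneous reports
mechanism = {H: 0.3, N: 0.2, theta: 0.5}  # mechanistic knowledge
combined = dempster_combine(reports, mechanism)
```

Unlike a Bayesian prior, the mass left on the whole frame `theta` represents suspended judgment rather than a probability split between the two hypotheses; combination concentrates it as evidence accrues.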
Theories of causality have proposed reductivist (Lewis 1986, Spirtes et al. 2000) as well as non-reductivist definitions of causes (Woodward 2003, Williamson 2005); definitions based on truth conditions (Lewis 1986, but also Suppes 1980, Woodward 2003, Reutlinger, 2013) or on physical features (causation as a mark-transmitting process: Dowe, 2000). Furthermore, causes have been distinguished from “powers” (Cartwright, 2007c) or “dispositions” (Mumford and Anjum, 2011) in analogy to the Aristotelian distinction between act and potency.
This debate is extremely relevant to the issue of safety assessment.
In fact, adverse drug reactions fundamentally originate through three kinds of phenomena:
1) the same mechanism leading to the intended effect also produces harmful effects (e.g. the inhibition of prostaglandin production induced by anti-rheumatic drugs underwrites their anti-inflammatory action, but at the same time damages the stomach lining, where prostaglandins protect the mucosa from self-digestion);
2) the drug chemically binds to off-target receptors, thereby affecting different organs and apparatuses;
3) the drug triggers integrated responses across separate levels of the organ system (interaction of different organ levels). These phenomena are connected to the way causality is characterized in the biological realm: by back-up mechanisms, parastasis and feedback causality (see Joffe 2011).
In this framework, drugs may have paradoxical or bidirectional effects, i.e. effects which are opposite to the intended drug effect or produce both the desired effect and its opposite (Smith et al. 2012, Aronson and Hauben 2006); as well as effects which are indistinguishable from the disease symptoms and thus may be confused with them.
Hence, reflection on causal mechanisms and causal interaction is an essential prerequisite of (statistical) causal models. However, their role in evidence standards is strongly debated. Philosophers closer to the Evidence-Based Medicine approach, even while recognizing some value in knowledge about mechanisms, doubt that mechanisms can bridge the gaps left by black-box statistical evidence, because of the limited and fragmentary knowledge of the "causal web" in which they are embedded (see for instance Howick 2011). Other philosophers instead recognize that knowledge about mechanisms plays a plurality of roles, both in combination with statistical information and in a stand-alone fashion:
1. Following the philosophical analysis of causal explanation, mechanisms are called upon to provide the ontological rationale for observed regularities (Salmon 1984);
2. Knowledge about mechanisms can constitute a double check for causality (Salmon 1997; Russo and Williamson 2007; Clarke et al. forthcoming);
3. Mechanisms also have methodological relevance in that they are supposed to provide the basis for extrapolation (Cartwright 2007a), and are important for supporting the reliability of model assumptions as well as for interpreting experimental results (for instance, a two-way curvilinear causal interaction cannot be detected, or may be misinterpreted, by linear regression models);
4. Mechanisms are held to have epistemological/theoretical relevance, in that they can provide the hypothesis that ties together disparate data (through abduction, also known as retroductive or inverse induction). In this sense they provide the basis for the accumulation of knowledge and scientific progress (see also Craver 2005).
These roles are all relevant in evaluating evidence of pharmaceutical harm, as they provide answers to different questions.
In particular, three distinctive dimensions of causal structures in pharmacology are noteworthy:
1) intervention on malfunctioning: the drug generally intervenes on a state of malfunctioning (see Nervi 2010) by reinstating an equilibrium state in the organ system which has been disrupted by the disease. This presents specific challenges for the analysis of the causal structure involved, also from a philosophical point of view, since causality has mainly been analyzed with natural laws in mind rather than natural phenomena "artificially" induced by technologies;
2) causal cycles: the state of malfunctioning is generally characterized by a vicious cycle. The drug interrupts a positive feedback that drives the organism's functioning progressively further from equilibrium, by favouring the reinstatement of the normal negative feedback (e.g. homeostatic pathways in metabolism);
3) system-level effects: the drug molecule is supposed to be as specific as possible and to bind to target receptors only; however, besides problems with promiscuity (i.e. the drug binding to off-target receptors and thereby triggering other unintended/unexpected biological processes), there is the issue of system-level reactions to the drug. The system can be seen as a causal web characterized by different kinds of relationships: positive and negative feedback, attrition or threshold effects, back-up mechanisms (net effect: no result), overcompensation mechanisms (net effect: opposite result), multiple realizability, moderating and mediating factors, and low/high integration among subsystems.
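The causal-cycle picture in point 2 can be visualised with a deliberately stylised difference-equation model. All parameters are hypothetical: a positive feedback drives a physiological variable away from its equilibrium until the drug reinstates a negative feedback pulling it back.

```python
def simulate(steps, x0=1.0, x_eq=0.0, drift=0.2, drug_on=None, gain=0.5):
    """Stylised vicious cycle: positive feedback amplifies the deviation
    of x from the equilibrium x_eq; from step `drug_on` onward the drug
    reinstates a negative feedback that damps the deviation."""
    x = x0
    path = [x]
    for t in range(steps):
        if drug_on is not None and t >= drug_on:
            x = x - gain * (x - x_eq)   # restored homeostatic feedback
        else:
            x = x + drift * (x - x_eq)  # vicious cycle: deviation grows
        path.append(x)
    return path

untreated = simulate(20)            # deviation grows without bound
treated = simulate(20, drug_on=10)  # intervention at step 10
```

The point of the toy model is structural, not quantitative: the drug's "effect" here is not a one-shot push on an outcome variable but a change in the sign of a feedback relation, which is exactly the kind of causal structure linear, sequential models sit uneasily with.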
Evidence Synthesis
Current standards of evidence synthesis in medical research offer two fundamental methods:
- (systematic) meta-analyses;
- narrative reviews.
The former does not allow evidence at different levels to be amalgamated; quite the contrary, for the results to be at all meaningful, the studies in a meta-analysis need to be as homogeneous as possible: they must have been carried out with the same inclusion-exclusion criteria, the same kind of control, and the same context of treatment administration. As a consequence, the result of a meta-analysis delivers information that is at once very local and very abstract.
This follows from the general fact that the criteria underlying evidence standards focus on the quality of the causal signal precisely in the sense of eliminating noise and abstracting cause from context.
Yet, because different kinds of populations may experience different effects from the same drug, conducting ever larger RCTs or pooling data in meta-analyses would not achieve this purpose, especially where (rare) side-effects are at issue.
The epistemic asymmetry between efficacy and safety assessment is reflected in the different use of meta-analyses in the two settings.
In efficacy assessment, meta-analyses are used as a kind of "robustness analysis": they should confirm the (possibly conflicting) results of individual RCTs. Concerning adverse events, by contrast, they are explicitly used to detect risks for which individual RCTs are underpowered.
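The point about underpowered individual RCTs can be illustrated with a standard inverse-variance fixed-effect pooling sketch. The study estimates below are invented: each hypothetical trial is individually non-significant at the conventional 5% level, while the pooled estimate crosses the threshold.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance fixed-effect pooling (e.g. of log odds ratios):
    weight each study by 1/SE^2, pool, and return the pooled SE."""
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical log odds ratios for a rare adverse event from three
# trials; each satisfies |estimate| < 1.96 * SE, i.e. non-significant.
log_or = [0.35, 0.42, 0.30]
se = [0.30, 0.35, 0.28]
pooled, pooled_se = fixed_effect_pool(log_or, se)
z = pooled / pooled_se  # pooled z-statistic exceeds 1.96
```

Note that this gain comes exactly at the price discussed above: the fixed-effect model presupposes that the three trials estimate one and the same effect, i.e. maximal homogeneity.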
Recent contributions to the methodology of systematic reviews also go in the direction of emphasizing internal validity (Waddington et al. 2012) by selecting studies accordingly, while obscuring the importance of other issues in evidence amalgamation, such as the combination of heterogeneous evidence for the purpose of "connecting the dots" between different constituents of a phenomenon.
However, key to any successful inference in pharmacology, causal or not, is the proper use of the available information. Relevant information comes in different forms, ranging from double- or even triple-blind randomised controlled trials, through observational studies, to expert opinion and knowledge about interactions at the molecular level. The challenge is to make sense of all this information, which is more likely than not conflicting.
We are currently exploring subjective and objective Bayesian approaches to this challenge of evidence synthesis, which is also known as "evidence amalgamation".
Foundations of Statistics
The data collected in clinical trials are normally analysed using statistical tools that were designed for applications in which the observed sample is representative of the population of interest. Yet the patients taking part in many medical studies are not representative of the population of interest. This can happen for a number of reasons: the study was carried out at a geographical location far away from where the population of interest lives, strict exclusion criteria were applied, or the study only followed a small number of patients.
It is hence important to assess the applicability of statistical tools to the questions at hand. Furthermore, it is important, where possible, to choose a statistical model suited to the inferences to be drawn; see further one of our published research papers.
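A small simulation (with invented event rates and group sizes) shows how strict exclusion criteria can make a study sample unrepresentative: when high-risk patients are excluded, the adverse-event rate observed in the study understates the rate in the population the drug will actually be used in.

```python
import random

random.seed(1)

# Hypothetical population: 30% "high-risk" patients (e.g. older, with
# comorbidities) with a 20% adverse-event rate, 70% "low-risk" patients
# with a 5% rate. Population rate: 0.3 * 0.20 + 0.7 * 0.05 = 0.095.
def draw_patient():
    if random.random() < 0.3:
        return ("high_risk", random.random() < 0.20)
    return ("low_risk", random.random() < 0.05)

population = [draw_patient() for _ in range(100_000)]
pop_rate = sum(event for _, event in population) / len(population)

# Study with strict exclusion criteria: only low-risk patients enrolled.
study = [event for group, event in population if group == "low_risk"]
study_rate = sum(study) / len(study)  # roughly half the population rate
```

No amount of within-study rigour corrects this gap; it is a question of applicability of the statistical tool, not of its internal validity.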
Drug Regulation
Drug licensing bodies, such as the Food and Drug Administration (FDA) in the USA or the EMA in Europe, as well as national agencies such as the Bundesinstitut für Arzneimittel und Medizinprodukte in Germany or the Medicines and Healthcare products Regulatory Agency in the UK, regularly face the problem of whether or not to approve a drug for treatment, and the problem of whether or not to let a drug continue to circulate on the market when its safety profile is updated through the discovery of additional risks. Indeed, any given drug is only ever approved "with reservation" (Osimani 2007).
The actions taken by a drug licensing body may have wide influence on public health and public finances, as well as on the economic success of the drug's manufacturer and its competitors. Intuitively, the normatively right action is to leave the drug on the market provided that, on the basis of the available evidence, the expected utility of not withdrawing it exceeds the expected utility of withdrawing it.
The precautionary principle has been introduced into the pharmaceutical domain in order to account for the uncertainty arising in cases where a suspicion arises about a new harm possibly associated with the drug, but the evidence cannot conclusively point to a causal connection between them.
In fact, before its introduction into the legal system (through various international agreements related to environmental law; see Osimani 2013), no preventive measure was possible without scientific proof of the causal connection between the suspected source of damage and the expected harm. This is because liability and safety regulations are grounded on a clear causal connection between the agent deemed responsible for the hazard and the hazard itself.
The precautionary principle relaxes this requirement in view of the goods at stake (health and the environment) and of the radical uncertainty surrounding the possible unintended outcomes of human interventions on nature.
In such cases, a well-founded suspicion may suffice to take action (withdraw the drug or restrict its usage), and the principle of proportionality applies: the higher the expected harm (relative to the expected benefit), the lower the probability of the causal hypothesis required to justify action (see Osimani 2011).
Hence the decision whether or not to withdraw the drug will depend on some threshold, which reflects the nature of the medication, the pharmaceutical environment (i.e. the availability of alternative treatments for the same condition), policy and ethical dimensions, as well as the perceived acceptability of the risk.
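The proportionality idea can be sketched as a simple expected-utility comparison with hypothetical utilities: keeping the drug yields its benefit minus the probability-weighted harm, withdrawal is normalised to zero, so the withdrawal threshold on the causal probability falls as the suspected harm grows relative to the benefit.

```python
def withdrawal_threshold(benefit: float, harm: float) -> float:
    """Smallest probability p of the causal hypothesis at which
    withdrawal maximises expected utility. Keeping the drug is worth
    benefit - p * harm; withdrawal is normalised to 0, so withdraw
    as soon as p * harm > benefit, i.e. p > benefit / harm."""
    return min(1.0, benefit / harm)

# Hypothetical utilities: the graver the suspected harm relative to
# the benefit, the weaker the causal evidence needed to justify action.
moderate = withdrawal_threshold(benefit=10.0, harm=20.0)   # p* = 0.5
severe = withdrawal_threshold(benefit=10.0, harm=200.0)    # p* = 0.05
```

This is of course only the skeleton of the decision; the text above lists further dimensions (alternative treatments, policy, perceived risk acceptability) that a real threshold would have to reflect.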
A straightforward consequence of this state of affairs is the need for instruments which allow a probabilistic assessment of the suspected causal link between drug and side-effect, taking into account all evidence available at the time of decision.
In particular, four desiderata are essential for a framework of causal assessment of drug induced harm:
- It must allow for probabilistic hypothesis confirmation.
- It must be able to incorporate heterogeneous kinds of data.
- It must be able to integrate diverse types of inferential patterns, in order to optimise the epistemic import of the available evidence.
- The framework should be particularly focused on causal assessment in pharmacology and therefore consider the specific issues which arise in this context.
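The first two desiderata can be illustrated with a minimal sequential Bayesian update in which each heterogeneous evidence source enters through a likelihood ratio. All numbers are hypothetical, and the independence assumption is flagged in the code; relaxing it is precisely part of the modelling challenge.

```python
def update_odds(prior_prob, likelihood_ratios):
    """Sequential Bayesian updating of the hypothesis 'the drug causes
    the adverse event'. Each source contributes a likelihood ratio
    P(evidence | causation) / P(evidence | no causation); the sources
    are assumed independent given the hypothesis (a strong assumption)."""
    odds = prior_prob / (1.0 - prior_prob)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

# Hypothetical likelihood ratios: a case series (3.0), a disproportional
# spontaneous-reporting signal (2.0), a plausible mechanism (1.5).
posterior = update_odds(prior_prob=0.10, likelihood_ratios=[3.0, 2.0, 1.5])
```

Each source raises the probability of the causal hypothesis without any single one establishing it, which is exactly the cumulative, probabilistic confirmation the desiderata call for.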
The ultimate goal is to help decision makers in their highly complex task of weighing perceived benefits of a drug against the suspected harms caused by drug use.
Social Epistemology of Pharmacology
Medicine is a social enterprise. It involves human beings and their conscious and unconscious opinions, feelings, preferences, biases and interests at various levels. Hence, available information should not be taken at face value: it requires an understanding of the social circumstances in which it was gathered.
Social epistemology and decision-theoretic tools are increasingly acknowledged as useful instruments for modelling research dynamics and knowledge flow, and for evaluating funding policies. Since medical research and clinical practice are pervaded by all sorts of conflicts of interest, there is room for applying these approaches to understand the way medical knowledge works in our social world.
In particular, we propose a four-layer approach to modelling epistemic dynamics in probabilistic causal assessment:
- a basic level of evidence amalgamation, where various pieces of possibly heterogeneous evidence are combined on the basis of various inferential paths;
- a higher-order level of "meta-epistemic" dimensions concerning the structure and organisation of the entire body of evidence (such as consistency/coherence of reports and (in)dependence of observations), as well as the individual pieces of evidence (reliability, relevance);
- a further level related to knowledge or evidence concerning these meta-epistemic dimensions themselves (e.g. grounds for judging a given source of information as reliable, reasons for assuming specific (in)dependency relations);
- a proper social epistemology level investigating the incentives/deterrents for bias and accuracy of reports, the social ontology of the research domain, and possibly developing nudging instruments for the improvement of medical research.
For a detailed overview of the project's aims and goals, see the "ERC Starting Grant 2014 Research proposal" (152 Kb).