Research Article: Predicting Harms and Benefits in Translational Trials: Ethics, Evidence, and Uncertainty

Date Published: March 8, 2011

Publisher: Public Library of Science

Author(s): Jonathan Kimmelman, Alex John London

Partial Text: First-in-human clinical trials represent a critical juncture in the translation of
laboratory discoveries. However, because they involve the greatest degree of
uncertainty of any point in the drug development process, their initiation is beset
by a series of nettlesome ethical questions [1]: Has clinical promise been
sufficiently demonstrated in animals? Should trial access be restricted to patients
with refractory disease? Should trials be viewed as therapeutic? Have researchers
adequately minimized risks?

According to the core tenets of human research ethics, investigators, sponsors, and
institutional review boards (IRBs) are obligated to ensure that risks to volunteers
are minimized and balanced favorably with anticipated benefits to society and, if
applicable, to the volunteers themselves [4],[6]. Accurate prediction plays a
critical role in this process. When research teams underestimate the probability of
favorable clinical or translational outcomes, they undermine health care systems by
impeding clinical translation. When investigators overestimate the probability of
favorable outcomes, they potentially expose individuals to unjustified burdens,
which may be considerable for phase 1 studies involving unproven drugs. In both
cases, misestimation threatens the integrity of the scientific enterprise, because
it frustrates prudent allocation of research resources [7].

First, decision-makers may not be adequately responsive to problems in preclinical
research practice [15]. Systematic reviews repeatedly demonstrate that many
animal studies do not enable reliable causal inference and clinical generalization
because they do not address important threats to internal, construct, and external
validity. With respect to internal validity, one recent analysis of animal studies
found that only 12% used random allocation and only 14% used blinded outcome
assessment [16].
Construct validity concerns the relationship between clinical implementation of an
intervention and implementations evaluated in preclinical studies. A recent review
found that clinical studies of cardiac arrest interventions applied treatment
significantly sooner after cardiac events than in preclinical studies [17]. In the case
of AstraZeneca’s failed stroke drug NXY-059, use of normotensive rodents in
preclinical development may have led to spurious predictions of clinical activity
[18].
Preclinical studies do not always test the extent to which cause and effect
relationships hold up under varied conditions (external validity). In a systematic
review of neuroprotective agents in phase 2 and 3 trials, only two of ten agents
were tested in both rodents and higher order species [19]. Finally, deficiencies in
reporting and aggregation of preclinical evidence deprive decision-makers of crucial
evidence. In one recent analysis, publication bias in preclinical stroke studies led
to a 30% overestimation of treatment effect size [20]. Clearly, preclinical researchers
should endeavor to follow reporting guidelines [21] such as the recently proposed
Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines
(http://www.nc3rs.org.uk/page.asp?id=1357) [22], and clinical predictions
following from animal studies should take into account deficiencies in design and
reporting.
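
To see how publication bias can inflate a pooled effect estimate, consider the following toy simulation (the effect sizes and the positivity threshold are invented for illustration and are not drawn from the cited review): when only studies crossing a "positive" threshold reach print, the average published effect exceeds the true effect.

    # Toy simulation (illustrative assumptions only): publication bias inflates
    # the apparent treatment effect when only "positive" studies are published.
    import random
    import statistics

    random.seed(0)
    TRUE_EFFECT = 0.5                                   # assumed true standardized effect size
    observed = [random.gauss(TRUE_EFFECT, 0.4) for _ in range(200)]
    published = [e for e in observed if e > 0.3]        # crude positivity filter

    print(f"true effect:     {TRUE_EFFECT:.2f}")
    print(f"all studies:     {statistics.mean(observed):.2f}")
    print(f"published only:  {statistics.mean(published):.2f}")  # inflated estimate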

A second concern about forecasting outcomes in translational trials relates to a
tendency to base clinical inferences on a relatively narrow class of evidence: those
preclinical studies that involve the particular agent. We call this
“evidential conservatism.” Such evidential conservatism is reflected in
various policies. For example, the American Society of Clinical Oncology states that
“the decision to move an agent into phase I evaluation is based…
central[ly on]… the observation of sufficient preclinical antitumor
activity, such that a therapeutic effect in human cancer is anticipated” [24],[25].
International Conference on Harmonisation (ICH) policy requires investigators to furnish
ethics review committees with only a narrow type of preclinical evidence [26].
Similarly, some commentators argue that risk-benefit decisions in early phase trials
should be driven by mechanistic evidence about an agent [27].

How might researchers depart from evidential conservatism in a way that is open to
scrutiny and amenable to assessment, revision, and improvement? Decision-makers who
make forecasts about agent activity in early phase research must identify reference
classes that are relevant to the decision at hand. Delimiting the reference class of
relevant evidence poses a challenge in that interventions possess limitless
characteristics. A drug might be classed within neuroprotective compounds, stroke
drugs, and drugs beginning with the letter “n.” Decision-makers thus
confront the timeless problem of selecting those characteristics most salient for
prediction.
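
To make the reference-class problem concrete, the following minimal sketch (with fabricated agent characteristics and outcomes, not data from the article) shows how a base-rate forecast for the same new agent shifts with the characteristics chosen as salient:

    # Minimal sketch with fabricated records: the forecast for a new agent
    # depends on which reference class of past agents it is compared against.
    PAST_AGENTS = [
        ({"neuroprotective", "stroke drug"}, False),
        ({"neuroprotective", "stroke drug"}, False),
        ({"neuroprotective", "parkinson drug"}, True),
        ({"stroke drug", "thrombolytic"}, True),
        ({"stroke drug", "thrombolytic"}, False),
    ]

    def base_rate(reference_class):
        """Fraction of past agents sharing the class traits that showed clinical activity."""
        matches = [outcome for traits, outcome in PAST_AGENTS if reference_class <= traits]
        return sum(matches) / len(matches) if matches else float("nan")

    print(base_rate({"neuroprotective"}))                 # 1 of 3 succeeded (~0.33)
    print(base_rate({"neuroprotective", "stroke drug"}))  # 0 of 2 succeeded (0.0)

Which forecast is more credible turns on a judgment about which characteristics carry predictive weight, and it is precisely this judgment that should be made explicit and open to scrutiny.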

To illustrate how our suggestions interface with ethical decision-making, consider
recent proposals to reinitiate trials of fetal-derived tissues for Parkinson’s
disease [31].
Previous trials involved treatment-refractory patients, but investigators are now
proposing trials involving patients with recent-onset disease. The rationale is that
fetal-derived tissues can only protect dopaminergic neurons to the extent that the
latter remain intact. However, the risk-benefit balance is contentious, because the
trial will expose patients who can manage symptoms with standard treatments to the
risks of neurosurgery, immunosuppression, and cell transplantation.

To date, systematic study of preclinical research has centered on stroke and on
practices bearing on internal validity. Our proposal makes clear the need to broaden the scope of this
research agenda to cover a wider range of preclinical research, and to expand its
focus to include issues of construct and external validity. A key component of this
process will involve creating databases for aggregating translational outcomes
according to relevant reference classes.
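
As a purely illustrative sketch of what such aggregation could look like (the schema, class labels, and rows below are assumptions, not part of the article's proposal), even a small relational table keyed by reference class supports the base-rate summaries decision-makers would need:

    # Illustrative sketch only: aggregate translational outcomes by reference class.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE translation_outcomes (
            agent           TEXT,
            reference_class TEXT,    -- e.g. 'neuroprotective', 'cell therapy'
            indication      TEXT,
            phase           TEXT,
            success         INTEGER  -- 1 = prespecified clinical endpoint met
        )
    """)
    conn.executemany(
        "INSERT INTO translation_outcomes VALUES (?, ?, ?, ?, ?)",
        [
            ("agent-A", "neuroprotective", "stroke", "phase 2", 0),
            ("agent-B", "neuroprotective", "stroke", "phase 2", 0),
            ("agent-C", "cell therapy", "parkinson disease", "phase 1", 1),
            ("agent-D", "cell therapy", "parkinson disease", "phase 2", 0),
        ],
    )

    # Base rate of success per reference class: the kind of summary a review
    # committee could consult when forecasting outcomes of a proposed trial.
    for row in conn.execute("""
        SELECT reference_class, COUNT(*) AS n_trials, AVG(success) AS success_rate
        FROM translation_outcomes
        GROUP BY reference_class
    """):
        print(row)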

Source:

http://doi.org/10.1371/journal.pmed.1001010

 
