Research Article: Observational Research, Randomised Trials, and Two Views of Medical Science

Date Published: March 11, 2008

Publisher: Public Library of Science

Author(s): Jan P Vandenbroucke

Abstract: Two views of medical science exist, says the author: one emphasizing discovery and explanation, the other emphasizing the evaluation of interventions.

Partial Text: Two views of medical science seem to have drifted ever further apart over the past decades. One view is that of medical researchers who rejoice in discoveries and explanations of causes of disease. Discoveries happen when things are suddenly seen in another light: the odd course of a disease in a patient, the strange results of a lab experiment, a peculiar subgroup in the analysis of data, or some juxtaposition of papers in the literature. Researchers become enthusiastic about an idea and try to find data—preferably existing data—to see whether there is “something in it”. As soon as there is a hint of confirmation, a paper is submitted. The next wave of researchers immediately tries to check the idea, using their own existing data or their trusted lab experiments. They will look at different subgroups of diseased persons, vary the definition of exposures, take potential bias and confounding into account, or vary the lab conditions, in attempts to explain why the new idea holds—or why it is patently wrong. In turn, they swiftly submit their results for publication. These early exchanges may lead to strong confirmation or strong negation. If not, new studies are needed to bring the controversy to resolution.

Underlying these differences in views are differences in the hierarchy of research designs that apply to different problems. A hierarchy of “strength” of research designs with the randomised trial on top and the anecdotal case report at a suspect bottom has been well known since the 1980s in various guises [6] and under various names. A typical rendering is shown in Box 1. I have qualified this hierarchy by naming it the hierarchy of study designs for “intended effects of therapy”, i.e., the beneficial effects of treatments that are hoped for at the start of a study.

The argument for why randomisation is usually not needed in observational research on the causes of disease [9] can be briefly recapitulated by contrasting the investigation of beneficial effects with the investigation of adverse effects of treatments. Beneficial effects are “intended effects” of treatment. In daily medical practice, prescribing is guided by the prognosis of the patient: the worse the prognosis, the more therapy is given. This leads to intractable “confounding by indication”. Hence, to measure the effect of treatment, we need “concealed randomisation” to break the link between prognosis and prescription [10]. In contrast, adverse effects are “unintended effects” of treatment, and are mostly unexpected and unpredictable, which means that they usually are not associated with the indications for treatment [11]. Thus, there is no possibility of “confounding by indication”, and observational studies of adverse effects can provide data that are as valid as data from randomised trials [12,13]. A straightforward example of an unexpected and unpredictable adverse effect is the development of a rash after prescription of ampicillin in a patient who has never used a penicillin derivative or analogue before. The prescribing physician cannot predict this occurrence. Hence, data from routine care in daily practice can be used to study the frequency of such rashes.
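To make “confounding by indication” concrete, consider the toy simulation below (not from the paper; all parameters are invented for illustration). Each patient receives a prognosis score; sicker patients are both likelier to die and, in the observational design, likelier to be treated, while treatment has a genuine protective effect. The naive observational comparison then makes the beneficial treatment look harmful, whereas randomisation breaks the prognosis–prescription link and recovers the benefit.

```python
import random

random.seed(0)

def simulate(randomised, n=100_000, benefit=0.10):
    """Toy model with invented numbers (not from the article).

    Each patient gets a prognosis score in [0, 1]; worse prognosis
    raises the death probability and, in the observational design,
    also the probability of being treated. Treatment truly lowers
    the death probability by `benefit`.
    """
    stats = {True: [0, 0], False: [0, 0]}  # treated -> [deaths, patients]
    for _ in range(n):
        prognosis = random.random()                # 0 = healthy, 1 = very sick
        if randomised:
            treated = random.random() < 0.5        # coin flip, independent of prognosis
        else:
            treated = random.random() < prognosis  # sicker patients get more therapy
        p_death = max(0.4 * prognosis - (benefit if treated else 0.0), 0.0)
        stats[treated][0] += random.random() < p_death
        stats[treated][1] += 1
    return {t: deaths / patients for t, (deaths, patients) in stats.items()}

for randomised in (False, True):
    rates = simulate(randomised)
    label = "randomised" if randomised else "observational"
    print(f"{label}: treated death rate {rates[True]:.3f}, untreated {rates[False]:.3f}")
```

Running this sketch typically shows a higher death rate among the treated under the observational design and a lower one under randomisation, even though the true treatment effect is identical in both arms.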

Many scientists believe that results from observational research are less credible because of the problem of subgroups and multiplicity of analysis: multiple looks at data for associations that were not the original aims of the data collection.
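A small worked calculation (with assumed numbers, not taken from the article) shows why multiple looks inflate the chance of spurious findings: if k independent subgroup analyses are run on pure noise, each tested at significance level alpha, the probability of at least one “significant” association is 1 − (1 − alpha)^k.

```python
# Chance of at least one false-positive "finding" among k independent
# looks at pure-noise data, each tested at significance level alpha.
# (Illustrative numbers; the article does not give this calculation.)
alpha = 0.05
for k in (1, 5, 20, 100):
    p_any = 1 - (1 - alpha) ** k
    print(f"{k:3d} looks -> P(at least one false positive) = {p_any:.2f}")
```

With twenty exploratory looks, the chance of at least one spurious “discovery” is already about 64%, which is why associations that were not the original aim of the data collection deserve extra scepticism.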

The ideas about subgroups and prior odds of hypotheses lead to further insight into the usual hierarchy of strength of study designs, with the randomised trial on top and the case report at a suspect bottom (Box 1). Perhaps this hierarchy is a hierarchy of prior odds. Intuitively, we may feel that randomised trials are the most robust type of study because positive findings from such trials stand the test of time better than findings from other designs. However, that might be because they start with higher prior odds.
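The “hierarchy of prior odds” can be made explicit with a Bayesian back-of-the-envelope calculation (the specific numbers below are assumptions for illustration, not the author’s). For a study reporting a positive finding at significance level alpha with power 1 − beta, the posterior odds that the hypothesis is true are roughly the prior odds multiplied by a Bayes factor of (1 − beta)/alpha; a well-motivated hypothesis entering a randomised trial starts with far better prior odds than a data-dredged subgroup association.

```python
# Posterior probability that a positive finding is true, given the
# prior odds of the hypothesis. The Bayes factor for a "significant"
# result is approximated as power / alpha. All numbers are assumed.
def posterior_prob(prior_odds: float, power: float = 0.80, alpha: float = 0.05) -> float:
    posterior_odds = prior_odds * (power / alpha)
    return posterior_odds / (1 + posterior_odds)

print(f"hypothesis worth a trial (prior odds 1:10):  {posterior_prob(1 / 10):.2f}")
print(f"data-dredged subgroup (prior odds 1:1000):   {posterior_prob(1 / 1000):.3f}")
```

Under these assumed numbers, the same “significant” result leaves the trial-grade hypothesis about 62% likely to be true but the dredged subgroup association under 2% — one way to read why positive trial findings stand the test of time.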

We need both hierarchies: the hierarchy of discovery and explanation as well as that of evaluation. Without new discoveries leading to potentially better diagnosis, prevention, or therapy, what would we do randomised trials on? Conversely, how could we know that a discovery is useful if it is not rigorously evaluated?

Source:

http://doi.org/10.1371/journal.pmed.0050067