Research Article: Seventy-Five Trials and Eleven Systematic Reviews a Day: How Will We Ever Keep Up?

Date Published: September 21, 2010

Publisher: Public Library of Science

Author(s): Hilda Bastian, Paul Glasziou, Iain Chalmers

Abstract: Hilda Bastian and colleagues examine the extent to which critical summaries of clinical trials can be used by health professionals and the public.

Partial Text: Thirty years ago, and a quarter of a century after randomised trials had become widely accepted, Archie Cochrane reproached the medical profession for not having managed to organise a “critical summary, by speciality or subspeciality, adapted periodically, of all relevant randomised controlled trials” [1]. Thirty years after Cochrane’s reproach we feel it is timely to consider the extent to which health professionals, the public and policymakers could now use “critical summaries” of trials for their decision-making.

Keeping up with information in health care has never been easy. Even in 1753, when James Lind published his landmark review of what was then known about scurvy, he needed to point out that “… before the subject could be set in a clear and proper light, it was necessary to remove a great deal of rubbish” [2]. And 20 years later, Andrew Duncan launched a publication summarising research for clinicians, lamenting that critical information “…is scattered through a great number of volumes, many of which are so expensive, that they can be purchased for the libraries of public societies only, or of very wealthy individuals” [3]. We continue to live with these two problems—an overload of unfiltered information and lack of open access to information relevant to the well-being of patients.

Despite this progress, the task keeps increasing in size and complexity. We still do not know exactly how many trials have been done. For a variety of reasons, a large proportion of trials have remained unpublished [18],[19]. Furthermore, many trials have been published in journals without being electronically indexed as trials, which makes them difficult to find. An essential first step towards being able to review the literature adequately is that scientific contributions predating digitalised information systems and trial indexing be "rediscovered and inserted into the memory system" [20]. Through the 1990s, to identify possible reports of controlled trials, the Cochrane Collaboration mobilised thousands of volunteers around the globe to comb the major databases and to hand-search nondigitalised health literature, unpublished conference proceedings, and books. The result of this collaborative effort is the Cochrane Controlled Trials Register (CCTR), now called the Cochrane Central Register of Controlled Trials.

In 1986 and 1987, Goldschmidt and Mulrow showed how great the potential for error is in reviews of the health literature that are not conducted systematically [9],[10]. Looking at data such as those in Figure 3 could provide the comforting illusion that systematic reviews have displaced other, less reliable forms of information. However, as Figure 4 shows, this is far from the case. The growth has been even more remarkable in non-systematic ("narrative") reviews and case reports. Journal publishing of non-systematic reviews, and the emergence of many journals whose sole product is non-systematic reviews, have far outstripped the growth of systematic reviews and HTAs, impressive as the latter has been. And the number of case reports, which can also provide important new information such as adverse effects, is far higher than the number of trials or systematic reviews. Trials, systematic reviews, and HTAs have undoubtedly had major impacts, including on clinical guidelines: they are more likely to be cited and read than other study types [25]. However, the staple of medical literature synthesis remains the non-systematic narrative review.

First, we need to prioritise effectively and reduce avoidable waste in the production and reporting of research evidence [29]. This has implications for trials as well as systematic reviews. Some funders and others now will not consider supporting a trial unless a systematic review has shown the trial to be necessary [30]. It is essential that this requirement be adopted more widely, and that reviews address questions that are relevant to patients, clinicians, and policymakers.

Source:

http://doi.org/10.1371/journal.pmed.1000326
