Current Issue - November/December 2008 - Vol 11 Issue 6


  1. Evidence-Based Medicine, Systematic Reviews, and Guidelines in Interventional Pain Management: Part 2: Randomized Controlled Trials. 2008;11:717-773
    Health Policy Review
    Joshua A. Hirsch, MD, Laxmaiah Manchikanti, MD, and Howard S. Smith, MD.

Evidence-based medicine (EBM) represents a paradigm shift in how clinical problems are solved, acknowledging that intuition, unsystematic clinical experience, and pathophysiologic rationale alone are insufficient grounds for clinical decision-making. The concept of a hierarchy of evidence to guide therapy has elevated the importance of randomized trials. Even though the hierarchy of evidence is not absolute, in modern medicine researchers synthesizing the evidence may or may not follow the principles of EBM, which require that a formal set of rules complement medical training and common sense when clinicians interpret the results of clinical research. N of 1 randomized controlled trials (RCTs) have been positioned at the top of the hierarchy, followed by systematic reviews of randomized trials, single randomized trials, systematic reviews of observational studies, single observational studies, physiologic studies, and unsystematic clinical observations. However, some have criticized that the hierarchy of evidence has done nothing more than glorify the results of imperfect experimental designs, conducted on unrepresentative populations in controlled research environments, above all other sources of evidence that may be equally valid or far more applicable in given clinical circumstances.

The design, implementation, and reporting of randomized trials are crucial. Biased interpretation of results from randomized trials, whether in favor of or opposed to a treatment, together with a lack of proper understanding of randomized trials, leads to poor appraisal of their quality.

Controlled trials take multiple forms, including placebo-controlled and pragmatic trials. Placebo-controlled RCTs have multiple shortcomings, such as cost and length, which limit their availability for studying certain outcomes; they may also suffer from faulty implementation or poor generalizability, and study design ultimately may not be the prime consideration when weighing evidence for treatment alternatives. In pragmatic (practical) clinical trials, by contrast, the interventions compared are clinically relevant alternatives, participants reflect the underlying population affected by the disease, participants are drawn from a heterogeneous group of practice settings and geographic locations, and the trial endpoints reflect a broad range of meaningful clinical outcomes.