
evidence-based medicine

Source:
The Oxford Companion to Medicine
Author(s):
Patricia Huston

Medicine has long been characterized as an art as well as a science, but there has been a surprising lack of insight into the nature of the link between the two: How is science translated into the art of medicine? How do physicians keep up with new advances? How do physicians assess the validity and applicability of new evidence? The evidence-based medicine movement arose from the observation that new scientific data are often applied to patient care in an uneven and haphazard manner. Physicians may read medical journals, but not immediately apply what they have learned to their practice, in part because numerous studies that address a clinical question may arrive at conflicting conclusions. Physicians may reach their own conclusions based on what they happen to read, or learn about at continuing medical education courses, or they may rely on ‘expert opinion’ by assuming that experts have a more thorough knowledge of the evidence and the judgement to determine how best it should be applied.

With the rise of evidence-based medicine in the 1980s, expert opinion began to be questioned. Several key studies showed that experts were not always abreast of the most recent advances in medical research and were often selective in the evidence they relied upon. Proponents of evidence-based medicine believed that clinicians could evaluate the evidence for themselves and developed a systematic way for clinicians to deal with the information explosion in medical research.

Evidence-based medicine focuses on how science should be integrated into the art of patient care. It exposed the unscientific way that information transfer often occurred and offered what was claimed to be an ‘evidence-based’ alternative. This alternative was a framework to assess and critique medical research. Designed to empower the practicing clinician, evidence-based medicine has offered sobering insights into the quality of much of the published scientific literature, provoked higher standards in clinical research, and begun to illuminate the nature of knowledge development in medicine.

Evidence-based medicine has been defined by David Sackett, an Oxford professor, and colleagues as ‘the conscientious and judicious use of current best evidence from clinical care research, in the management of individual patients’. Thus, when a question arises in clinical practice (such as ‘Should medical therapy be recommended to prevent myocardial infarction in a 45-year-old woman with a moderately raised cholesterol level, who is otherwise at low risk for heart disease?’), advocates of evidence-based medicine advise that clinicians pose the seminal question: ‘What is the evidence?’ and follow three basic steps:

  • access the best and most recent evidence

  • evaluate it, and, if the evidence is found to be valid

  • apply it to future treatment recommendations.

This three-step approach was thoroughly described for various types of clinical questions in a landmark series of articles published in the early 1990s by the Evidence-Based Medicine Working Group.

To access the best and most recent scientific evidence, advocates of evidence-based medicine have promoted the use of electronic databases, such as MEDLINE and EMBASE. They have popularized the use of search strategies and given generic suggestions for how to find one or two key articles from a literature search to address a specific clinical question. Although the initial search may identify hundreds of articles, use of evidence-based medicine protocols (including a preference for randomized controlled trials and well-specified clinical outcomes) often narrows the search down to a few choice studies.
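As a purely illustrative sketch of what such a targeted search might look like today (none of this appears in the original entry), the fragment below queries PubMed, the usual gateway to MEDLINE, through Biopython's Entrez module; the search terms, limits, and contact address are all hypothetical:

```python
from Bio import Entrez  # Biopython's interface to the NCBI databases

Entrez.email = "you@example.org"  # NCBI requests a contact address (hypothetical)

# A hypothetical strategy for the cholesterol question posed above,
# restricted to randomized controlled trials in line with the
# evidence-based medicine preference for that study design.
query = (
    'hypercholesterolemia[MeSH Terms] '
    'AND myocardial infarction[MeSH Terms] '
    'AND randomized controlled trial[Publication Type]'
)

handle = Entrez.esearch(db="pubmed", term=query, retmax=20)
record = Entrez.read(handle)
handle.close()

print(record["Count"], "matching articles; first PubMed IDs:", record["IdList"])
```

A real search would be iterative: scanning titles and abstracts, tightening terms, and keeping only the one or two studies that survive critical appraisal.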

While it is fairly straightforward to undertake this assessment with one article, what if the targeted search strategy turns up numerous randomized controlled trials that address the same question and have similar patient populations and outcome measures? Such redundancy in medical research is not unusual. This led to a natural partnership with other trends in medical research and information management at the time: the rise of systematic reviews and meta-analysis.

Authors of systematic reviews employ a methodical approach to analyzing and combining the results of numerous studies on a particular topic, so that a clinical question can be answered on the basis of the best available evidence. Results may be combined qualitatively or, where study designs are similar enough, quantitatively. When the results of numerous studies are mathematically combined, the systematic review is referred to as a meta-analysis.
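As a minimal sketch of what ‘mathematically combined’ means in the simplest case, the fragment below applies the standard fixed-effect (inverse-variance) method, weighting each study by the reciprocal of the variance of its estimate; the study figures are invented purely for illustration:

```python
import math

# Hypothetical studies: (log odds ratio, standard error) for each trial
studies = [(-0.35, 0.20), (-0.10, 0.15), (-0.25, 0.30)]

# Fixed-effect (inverse-variance) pooling: weight each study by 1 / SE^2,
# so larger, more precise trials contribute more to the pooled estimate.
weights = [1.0 / se ** 2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval, then back to the odds-ratio scale
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(low):.2f} to {math.exp(high):.2f})")
```

Real meta-analyses must also test for heterogeneity between studies and often use random-effects models when trials differ; that is precisely where the methodologic debates discussed below arise.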

There has been an exponential rise in the number of systematic reviews published in the past decade. The methodology of systematic reviews is still under development and varies among researchers. One of the leaders has been the Cochrane Collaboration, an international initiative to ‘prepare, maintain, and disseminate systematic reviews of the effects of healthcare’. It has been likened to the human genome project with respect to its implications for medicine and medical research. It espouses evidence-based medicine principles and has made important contributions to synthesizing current research information, notably in disciplines such as obstetrics and neurology.

Repercussions

The effects of evidence-based medicine have been widespread, profound, and, in some cases, disturbing. It has been increasingly incorporated into undergraduate medical education, to the extent that learning search strategies for retrieving recent evidence from electronic databases is considered a basic skill in the first year of medical school. Critical appraisal skills are being integrated into the undergraduate medical curriculum. Evidence-based health policy is increasingly embraced at all levels of government. In short, evidence-based medicine has become a new standard.

One of the more sobering effects of evidence-based medicine is that it has disclosed how much published research is of dubious value. By developing a system to ‘sift through’ the abundant scientific literature, evidence-based medicine implicitly identifies many research studies with important methodologic flaws, casting doubt on the validity of their findings. Most well-targeted search strategies will eliminate over 95% of published articles on any particular clinical question. For example, proponents of evidence-based medicine have established ‘levels’ of evidence, with randomized controlled trials ‘at the top’, and suggest that, when such trials are available, studies with other research designs be ignored. This has evoked outrage from some sectors of the research community; evidence-based medicine has certainly not been embraced by all.

Although proponents of this approach may seem imperious, the method of critical appraisal has revealed that much of the published medical literature is flawed, and that even the best evidence is often limited. As noted earlier, physicians who use the three-step evidence-based medicine approach can answer clinical questions if the identified studies pass the critical appraisal tests. More often than not, however, none of the studies passes all the tests. What then? For example, can the results of randomized controlled trials that were carried out only on men be applied to women? What if studies on a new treatment for a chronic disease were conducted for only 6–8 weeks: can a physician be confident about its long-term effectiveness on this basis? Another common dilemma arises when a study shows a very small but statistically significant difference; is this clinically significant? Physicians faced with these situations still have to use their ‘best judgement’ to decide whether to base their therapeutic recommendations on this type of limited evidence. This is disturbingly akin to ‘expert opinion’, the very thing evidence-based medicine was developed to avoid.
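The gap between statistical and clinical significance is easy to show numerically. In the invented example below, an absolute difference of half a percentage point in event rates, which few clinicians would consider important, is nonetheless highly ‘significant’ simply because the trial is enormous:

```python
import math

# Hypothetical mega-trial: event rates of 10.0% vs 9.5%
# in 100,000 patients per arm
n1 = n2 = 100_000
p1, p2 = 0.100, 0.095

# Two-proportion z-test with a pooled standard error
p_pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
se = math.sqrt(p_pooled * (1 - p_pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided p-value

print(f"z = {z:.2f}, p = {p_value:.5f}")  # p well below 0.05
```

Whether a reduction of half a percentage point justifies treating every such patient is a clinical judgement the p-value cannot make.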

The greatest irony, however, is that evidence-based medicine is itself based on surprisingly little evidence. If one poses the seminal question ‘What is the evidence?’ for the validity of the critical appraisal questions, the response is far from definitive. In the original series of articles, no justification or rationale, let alone evidence, was offered for the choice of critical appraisal questions. Certainly the questions appear to have ‘face validity’: research methodologists would probably agree that they would identify weaknesses in a study design and therefore reveal threats to the validity of the findings. But are these the most important questions? Are any other questions equally or even more relevant?

There are some threats to the validity of studies that evidence-based medicine does not emphasize. In classic epidemiology, for example, there are six criteria for inferring that an association might be causal: biologic plausibility, specificity of the association, consistency of the association, a temporally correct association, strength of the association, and a dose–response relationship. In the article on assessing non-randomized studies (often conducted to identify causation), only the last three of these six criteria are identified — why? No explanation was offered. Evidence-based medicine generally does not emphasize the importance of sample size calculations — a critical methodologic step to ensure adequate statistical power. Evidence for the validity of evidence-based medicine is sadly lacking.
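To illustrate the sample size point, the sketch below applies the standard normal-approximation formula for comparing two proportions; the event rates, significance level, and power are hypothetical choices, not figures from the entry:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to compare two event rates
    (standard normal-approximation sample-size formula)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z(power)           # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Hypothetical: detecting a fall in event rate from 10% to 7% with
# 80% power needs roughly 1,350 patients in each arm; a study that
# recruits far fewer is underpowered from the start.
print(n_per_group(0.10, 0.07))
```

A trial too small to detect a clinically important difference can report a ‘negative’ result that is really just silence.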

The Evidence-Based Medicine Working Group has acknowledged that ‘there is no correct way to assess validity’. Although the questions they pose are undoubtedly useful, there is no assurance that they are the only useful questions, nor that they are the most revealing. More important, there is no evidence that identifies the relative weight of each critical appraisal question. Thus, if a study does not meet one critical appraisal criterion, can the results still be valid? Generally, if no better study is available, then such results are accepted. This is the application of logic, not science. If a study does not meet two critical appraisal criteria, is it less valid than a study that does not meet a third? We do not know and logic cannot help. Again we fall back on expert opinion and making the best of the currently available evidence. This is an especially difficult problem in meta-analysis methodology; there is no comprehensive and valid method of giving relative weights to numerous flaws among studies with different weaknesses.

Implications

Evidence-based medicine has not only revealed the limitations of scientific research; it has done nothing less than bring into question the very nature of scientific evidence. It touches upon the epistemology of medicine — how is knowledge established? In doing so it has renewed efforts towards improving the rigor of scientific research.

There are definite signs that evidence-based medicine has already been effective in improving the quality of medical research. Systematic reviews have repeatedly identified common methodologic flaws in research design, as well as gross redundancies and gaps in addressing specific clinical questions. These are increasingly hard to ignore when considering future research directions. Systematic reviews are rapidly becoming the standard for gathering and assessing evidence on which to base clinical practice guidelines.

Improvements in the reporting of research have been stimulated by this methodologic research. In 1996, the Consolidated Standards of Reporting Trials (CONSORT) Statement provided a checklist for the reporting of randomized controlled trials. Its authors used evidence for the reporting criteria whenever possible, as well as a measure of what they called ‘common sense’ when no empirical evidence was available. The CONSORT Statement has been adopted by dozens of medical journals worldwide. Research into how to assign relative weights to different studies when their results are combined in systematic reviews should be forthcoming.

Although evidence-based medicine was designed to help physicians deal with the massive amount of research that has been published, its greater contribution may be in promoting a deeper appreciation of how limited medical knowledge actually is. This has stimulated renewed efforts to improve the quality of research and identified the need to develop more rigorous research methods. In the meantime, how physicians apply scientific evidence to caring for patients remains, to some degree, an art.

Patricia Huston

See also audit; health economics; journals; statistics

critical appraisal

Evaluation of clinical research studies, often referred to as ‘critical appraisal’, is at the core of evidence-based medicine. It consists of looking for any threats to the validity of a study (such as bias that might arise from non-randomization, incomplete follow-up of patients, or inappropriate outcome measures); evaluating the certainty of the results (for example, determining whether the effect size has both statistical and clinical significance); and considering the applicability of the results to the current clinical question (for example, by examining whether the study population had relevant clinical characteristics).

To apply the cholesterol example given earlier to the final step: if a physician finds the key article on cholesterol-lowering medication and determines that it is free of bias, that the study population included 45-year-old women with similarly raised blood cholesterol levels, and that the results showed a substantial reduction in myocardial infarction over a reasonable follow-up period, then the physician can confidently use this evidence to recommend the medication.
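As a final worked sketch of the arithmetic behind such a recommendation (the event rates below are invented, since the entry gives none), critical appraisal typically converts trial results into an absolute risk reduction and a ‘number needed to treat’:

```python
# Hypothetical 5-year rates of myocardial infarction for women like
# the patient in the example: 4% without treatment, 3% with it
control_rate, treated_rate = 0.04, 0.03

arr = control_rate - treated_rate  # absolute risk reduction
rrr = arr / control_rate           # relative risk reduction
nnt = round(1 / arr)               # number needed to treat

print(f"ARR = {arr:.1%}, RRR = {rrr:.0%}, NNT = {nnt}")
# -> ARR = 1.0%, RRR = 25%, NNT = 100: about 100 such women would need
#    treatment for 5 years to prevent one infarction
```

Framing the same result as a 25% relative reduction or as one infarction prevented per 100 women treated can lead physician and patient to quite different judgements, which is why critical appraisal insists on both statistical and clinical significance.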