Evidence-based medicine and articles on therapy:
A step-by-step analytical approach

Dr. Ajit N. Babu, FACP, MPH (USA)
Associate Professor of Medicine, Saint Louis University, Missouri, USA
Director, Doctor’s Diagnostic Centre International, M.G. Road, Kochi


Evidence-based medicine (EBM) is becoming a buzzword around the globe. Though the individual components of EBM have been part of medical science for decades, the architects of the EBM movement have succeeded in taking what was a dry and dreary set of concepts relegated to the backwaters of clinical practice and bringing them to the forefront of current medical thought. To elaborate on the general principles of EBM discussed at the last IMA meeting, this essay will take a concise look at an element of EBM at the top of many clinicians' lists: how to analyze an article dealing with therapy.

The basic approach requires asking three specific questions of the article, along with related sub-questions of primary and secondary importance. If the answer is no, particularly to the main questions, then it is best to discard the article and look for something better. My commentary is provided in parentheses. For further study, the excellent series of articles called the Users' Guides to the Medical Literature in JAMA is highly recommended, and can be found full-text on the Internet for free at http://www.cche.net/usersguides/main.asp

1.  Were the results valid? (Meaning: was the study done in a scientific way?)

A. Was the assignment of patients to treatment randomized? (The best type of study design to look at therapy is the randomized trial, where patients are assigned completely at random to either the experimental treatment or placebo/standard treatment group)

B. Were all patients accounted for at the end of the study? (Clear information must be provided about the fate of patients who were in the study. If it looks like a lot of patients mysteriously went “missing”, then the study is poorly reported and not worth reading further. The results should reflect analysis of patients in the group they started the study in, even if they later crossed over to another group or dropped out of the study; this is known as the “intention-to-treat” principle)

a.  Were patients, physicians and staff “blinded” to treatment? (If either the patients or the medical professionals involved know who got the experimental treatment, then this may prevent impartial reporting and interpretation of the results thereby compromising the integrity of the study)

b. Were the groups similar at the start of the trial? (If the groups were not similar, then the study is not comparing apples to apples, so to speak. For example, if the placebo group has a greater number of smokers, then, other things being equal, that group will likely have poorer outcomes, thus inflating by comparison the apparent superiority of the experimental treatment)

c. Aside from the experimental intervention, were the groups treated equally? (Obviously, if the experimental treatment group is getting better care overall, then their outcomes would be more favorable, even if experimental treatment by itself was of no real benefit!)

2.  What were the results? (This is where you consider the reported benefits of the treatment. Traditional outcomes include morbidity, mortality, hospitalization rate, costs of care, etc.)

A. How large was the treatment effect? (This refers to the extent of the reported benefit of the treatment. Carefully consider the difference between relative and absolute risk reduction. For example, if an experimental drug reduces mortality for a condition from 2% with standard therapy to 1%, the relative reduction in risk is 50%, as the risk has declined by half – an impressive looking number. However, the ACTUAL reduction in risk, the absolute risk reduction, is only 1 percentage point, which may not be clinically significant at all)
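The arithmetic in the example above can be sketched as a short calculation. This is a minimal illustration using the hypothetical 2% and 1% mortality figures from the text; it also computes the number needed to treat (NNT = 1/ARR), a standard companion figure in EBM that is not discussed above:

```python
# Hypothetical mortality rates from the example in the text
control_risk = 0.02    # 2% mortality with standard therapy
treatment_risk = 0.01  # 1% mortality with the experimental drug

arr = control_risk - treatment_risk  # absolute risk reduction
rrr = arr / control_risk             # relative risk reduction
nnt = 1 / arr                        # number needed to treat = 1/ARR

print(f"Absolute risk reduction: {arr:.1%}")  # 1.0%
print(f"Relative risk reduction: {rrr:.0%}")  # 50%
print(f"Number needed to treat:  {nnt:.0f}")  # 100
```

The same 50% relative reduction could equally describe a drop from 40% to 20% mortality, where the absolute benefit is twenty times larger, which is why the two figures must always be read together.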

B. How precise was the estimate of the treatment effect? (This refers to the accuracy with which the magnitude of the apparent benefit was determined. Remember, the p value DOES NOT give us this information. It only tells us how likely the findings were to have occurred by chance – thus the smaller the number, the “better” it is. 95% confidence intervals, on the other hand, do give us a range of values within which the true result is likely to lie 95% of the time. The narrower this range, the greater the power of the study, and the more convincing are its results)
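To make the precision point concrete, here is a sketch of a 95% confidence interval for the absolute risk reduction in the earlier 2% versus 1% example, using the standard normal approximation for a difference in proportions. The sample sizes (1000 patients per arm) are an assumption for illustration, not figures from the text:

```python
import math

# Assumed trial counts consistent with the 2% vs 1% example:
# 1000 patients per arm (hypothetical sample sizes)
n_control, deaths_control = 1000, 20
n_treat, deaths_treat = 1000, 10

p1 = deaths_control / n_control  # 2% mortality, standard therapy
p2 = deaths_treat / n_treat      # 1% mortality, experimental drug
diff = p1 - p2                   # absolute risk reduction

# Standard error of a difference in proportions (normal approximation)
se = math.sqrt(p1 * (1 - p1) / n_control + p2 * (1 - p2) / n_treat)
lower, upper = diff - 1.96 * se, diff + 1.96 * se

print(f"ARR = {diff:.1%}, 95% CI: {lower:.1%} to {upper:.1%}")
```

With these assumed sample sizes the interval actually crosses zero, meaning a trial of this size would be too imprecise to establish the benefit; larger samples narrow the interval, which is exactly what the narrower-range criterion above is getting at.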

3. Will the results help me in taking care of my patients? (A fundamental question!)

A. Can the results be applied to my patient care? (Consider if the patients in the study were similar to yours, and also if the treatment environment matches your own)

B. Were all clinically important outcomes considered? (Example: If the article only looked at mortality, but you and the patient are worried about morbidity prevention, then the article may be irrelevant even if the study was otherwise well conducted)

C. Are likely treatment benefits worth potential harms and costs? (Careful evaluation of clinical significance, financial cost and patient preferences is essential before embarking on any treatment plan)

Clearly, practicing EBM can be a challenge for busy clinicians. Often, referring to online EBM summaries from reputable sources might be the most practical approach. Still, it is valuable to know how to independently review and appraise the medical literature. It may be helpful to start off by writing yourself an “educational prescription” of puzzling questions that come up in your practice which warrant a search for articles. Set aside a specific time each week when you will grit your teeth and dive into the ocean of EBM to find the answers. After all, a good dip every now and then could make you a champion swimmer before you know it…!