Practical Practice

Critically Appraising Evidence

We’re all guilty of it – reading an abstract, or worse yet, only the newspaper headline, and taking the results as fact. You don’t have to spend hours critiquing an article every time results are of interest (that’s what reviewers and editors are for!), but taking a closer look at study design, bias and clinical importance can provide a clearer picture of what study results really mean in the context of actual practice.

If you are unfamiliar with appraising literature, here are some steps and tools to help you get started.

1. Define the Study Characteristics (PICO)

 P   Population being studied

  • Human? Animal study? In vitro work?
  • Is there a particular population of interest based on age, gender or health condition? If a sub-population is being studied, such as smokers or women, how generalizable are study findings to the greater population?
  • In vitro and in vivo studies often provide the basis for future clinical trials and human studies; that doesn’t mean that results are directly transferable to human populations.

I/E  Intervention(s) or exposure(s) applied to population

  • What therapeutic, diagnostic, preventive or other health care interventions were applied or observed in the study population?
  • In animal and human research alike, it is important to assess whether the intervention (the nutrient or food provided) was given at a dose that is realistically achievable in humans, whether from diet or supplements.

C   Comparison/control (if appropriate)

  • Evaluating this section becomes particularly relevant when you are comparing the results of several studies and different placebos, controls and comparisons were used.
  • Was an alternative intervention or exposure studied?
  • Was the comparison group a placebo treatment or just “typical” diet (so technically no intervention)?

O   Outcomes that are measured

  • Were the measures used to assess the effects/results of the intervention or exposure appropriate?
  • Were the measures reliable and valid? Sensitive to detect effects?

Example: How promising does this headline from Science Daily sound? Breast Cancer Risk Drops When Diet Includes Walnuts, Researchers Find. Pretty great! Let’s define the PICO and see how our perception of the headline changes.

  • P = What the headline doesn’t tell you is that the study was conducted in mice, not women, which definitely changes its direct applicability.
  • I = In our breast cancer example, mice were fed a diet containing walnuts – they were given the human-equivalent of 2 oz of walnuts per day – a completely plausible intake (though pretty calorie dense!).
  • C = In the example, mice were fed a typical diet, which serves as a control (though not placebo).
  • O = Breast cancer development, tumour number and size, breast cancer-related gene activity – with the exception maybe of gene activity, very clinically relevant outcomes (though in mice!).
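For readers who like to keep their appraisals organized, the PICO breakdown above can be captured as a simple structured record. This is an illustrative sketch only – the class and field names are my own invention, not part of any standard appraisal tool:

```python
from dataclasses import dataclass

@dataclass
class PICO:
    """A study's key characteristics, as defined in step 1."""
    population: str
    intervention: str
    comparison: str
    outcomes: list[str]

# The walnut/breast-cancer example from the Science Daily headline,
# restated as a record:
walnut_study = PICO(
    population="mice (not women - limits direct applicability)",
    intervention="diet with the human-equivalent of 2 oz walnuts/day",
    comparison="typical diet (a control, though not a placebo)",
    outcomes=["breast cancer development",
              "tumour number and size",
              "breast cancer-related gene activity"],
)
print(walnut_study.population)
```

Writing the four elements out this explicitly makes it harder for a promising headline to obscure, say, that P is a rodent population.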

2. Distinguish the Study Design

Once you have established the PICO for the study, these characteristics can be used to pinpoint the study design. This is helpful because each study design has inherent advantages and potential disadvantages, which impact the quality of evidence and study findings. It is important to establish whether the study design was appropriate for the original research questions.

Tool: Oxford Centre for Evidence-Based Medicine decision tree

Hierarchy of Evidence (strongest to weakest)

1. Systematic reviews and meta-analyses
2. Randomized Controlled Trials (RCTs) and Randomized Cross-Over Trials (RCOs)
3. Observational cohort studies
4. Case reports/series
5. Expert opinion

Adapted from: Oxford Centre for Evidence-Based Medicine 2011 Levels of Evidence table

3. Identify Limitations and Risk of Bias

Issues with study design and methodology may raise questions about the validity of study results and findings. A bias is a systematic error (deviation from the truth) in results or inferences. Biases can lead to underestimation or overestimation of the true intervention or exposure effect. Some biases are small and trivial, while others are substantial enough that apparent study findings may be entirely due to bias. There are several different types of bias that may be present.

Tool: Cochrane “Risk of Bias” assessment tool

Adapted from: Julian PT Higgins, Douglas G Altman, Jonathan AC Sterne (eds) on behalf of the Cochrane Statistical Methods Group and the Cochrane Bias Methods Group.

4. Extract Study Results

It can be helpful to again use the PICO characteristics when extracting study results.

  • P  Sample size: how large was the study population? Would this be sufficient to detect differences or make inferences?
  • P  Were the intervention/control populations comparable? How does the study population compare to clinical populations with respect to demographics, etc.?
  • I/C  How do the study intervention(s)/exposure(s) compare to others in the literature? Was the intervention carried out as intended?
  • O  Were the measures categorical or continuous?
  • O  Were the results statistically significant? How large was the variance in the population (i.e., SDs)?
  • O  How was missing data handled? What was the drop-out rate?
  • O  Were the statistics used appropriate? How were confounders controlled for (if attempted)? Do the authors make mention of statistical power and required sample size?
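The last bullet – statistical power and required sample size – can be sanity-checked with the standard two-sample normal approximation, n ≈ 2(z₁₋α/₂ + z₁₋β)²(σ/δ)² per group. A minimal sketch, assuming a two-sided α of 0.05 and 80% power (these defaults, and the function itself, are illustrative, not from the source):

```python
import math
from statistics import NormalDist

def sample_size_per_group(delta: float, sd: float,
                          alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate participants needed per group to detect a mean
    difference `delta` between two groups with common SD `sd`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    n = 2 * ((z_alpha + z_beta) ** 2) * (sd / delta) ** 2
    return math.ceil(n)

# Detecting a half-SD difference at 80% power:
print(sample_size_per_group(delta=0.5, sd=1.0))  # → 63 per group
```

If a trial enrolled far fewer participants than this kind of calculation suggests, a “no significant difference” finding may simply reflect inadequate power rather than a true absence of effect.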

5. Formulate Conclusions

This step involves translating study findings into the “big picture” of clinical practice.

  • Are the results relevant to the WHO – WHAT – WHEN – WHERE – WHY of the original research question?
  • What can you imply about causality based on study design advantages and disadvantages? Be careful in your interpretation of observational study results.
  • Were the results clinically important? Statistical significance may be irrelevant if the results do not correspond to meaningful changes in overall health measures (e.g., a 15 mg vs. a 250 mg change in calcium intake relative to bone health).
  • Are findings consistent with results of other similar studies?
  • Are the findings applicable to clinical populations? Look again at the study participant characteristics.
  • Can findings be realistically integrated into practice?
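The distinction between statistical and clinical importance in the bullets above can be made concrete by comparing an observed effect against a minimal clinically important difference (MCID) chosen before reading the results. A minimal sketch – the 250 mg calcium threshold here is an assumed MCID for illustration, not a value from the source:

```python
def clinically_important(effect: float, mcid: float) -> bool:
    """True if an observed effect meets or exceeds the minimal
    clinically important difference (MCID) set in advance."""
    return abs(effect) >= mcid

# Hypothetical calcium example: suppose ~250 mg/day is the smallest
# intake change judged meaningful for bone health (an assumed MCID).
MCID_CALCIUM_MG = 250

print(clinically_important(15, MCID_CALCIUM_MG))   # trivial even if statistically significant
print(clinically_important(250, MCID_CALCIUM_MG))  # meets the pre-specified threshold
```

A large trial can make a 15 mg difference “statistically significant” without that difference mattering to anyone’s bones, which is exactly why the threshold should be set on clinical grounds, not statistical ones.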

Adapted from: Wiens K, Couzens K, Hards L, Holmes R, Tsui V, Fenton TF. (2011). Critically appraising the literature: a dietitian’s toolbox. Poster presented at the Dietitians of Canada Annual Conference; Edmonton, AB.

For more information:

Oxford Centre for Evidence-Based Medicine

Effective Public Health Practice Project. Quality Assessment Tool and Dictionary.

Ontario Public Health Libraries Association. Critical Appraisal of Research Evidence 101.

American Journal of Nursing, Evidence-based practice step-by-step series:
The seven steps of evidence-based practice. 2010; 110(1): 51 – 53
Asking the clinical question. 2010; 110(3): 58 – 61
Searching for the evidence. 2010; 110(5): 41 – 7
Critical appraisal of the evidence I. 2010; 110(7): 54 – 60
Critical appraisal of the evidence II. 2010; 110(9): 41 – 8
Critical appraisal of the evidence III. 2010; 110(11): 43 – 51
Following the evidence. 2011; 111(1): 54 – 60
Implementing EBP change. 2011; 111(3): 54 – 60

