This is part three in a four-part series inspired by the TED Talk “Battling Bad Science”.
Part 3: Correlation does not imply causation.
The headline reads: “Beer causes lung cancer!” then explains that lung cancer rates are higher among beer drinkers; therefore, beer causes lung cancer. Is this a simple case of cause and effect or are there other factors that may explain this finding?
Observational studies are often confounded or confused by other factors or variables. Although still very useful for investigating associations between dietary factors and disease risk, they cannot prove cause and effect because other factors may also be influencing the outcome.
As we discussed in Debunking Headlines last week, the media may sometimes misrepresent findings from nutrition-related studies. Another common issue is confusing correlation with causation, or cause and effect. Just because two things happen together (are correlated or associated) does not mean that one caused the other. Correlation is a measure of the relationship between two variables, while causality is a relationship where an outcome (e.g., lung cancer) occurs as a consequence of a cause (e.g., smoking) (1). While both are used to inform evidence-based practice, it is important to understand how each can be used appropriately and what conclusions can be drawn.
1. Distinguish the study design (see our previous post on Critical Appraisal)
The study design must match study purpose or you run the risk of making false conclusions. Here’s a primer on study design (2):
Descriptive: qualitative and cross-sectional studies provide a “snapshot” of a population. They often measure or describe the frequency/incidence of variables and may examine associations between variables. Cannot establish causation.
Observational: case-control and cohort studies investigate and record exposures at several time points AND observe subsequent outcomes. Cannot establish causation.
Experimental: randomized controlled trials (RCTs) and randomized crossover trials (RCOs) manipulate exposure to a certain factor (e.g., a nutrient or diet) and monitor outcomes while comparing with a control or placebo group. May be able to establish causation.
2. Identify confounding variables
A confounding variable can confuse study results and potentially alter the outcome of the study by influencing the relationship between two variables. In our “Beer Causes Lung Cancer” example above, a very important confounding variable is smoking – beer drinking may also be associated with smoking, which in turn contributes to lung cancer. There are a multitude of confounding variables in nutrition research, including but not limited to: age, gender, income, marital status, social support, ethnicity, education, physical activity, and body composition.
A study cannot control for every confounder; however, researchers should control for as many as possible, and potential confounders should be discussed in the conclusions/study limitations.
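The beer-and-lung-cancer scenario can be made concrete with a small simulation. This is a hypothetical sketch, not real data: the probabilities below are invented so that smoking raises both the chance of drinking beer and the chance of lung cancer, while beer itself has no effect at all. The crude comparison still shows beer drinkers with more cancer – until we stratify by the confounder.

```python
import random

random.seed(42)

# Hypothetical model (all probabilities invented for illustration):
# smoking drives both beer drinking and lung cancer; beer does nothing.
people = []
for _ in range(10_000):
    smoker = random.random() < 0.30
    beer = random.random() < (0.70 if smoker else 0.30)      # smokers drink more beer
    cancer = random.random() < (0.15 if smoker else 0.01)    # only smoking causes cancer here
    people.append((smoker, beer, cancer))

def cancer_rate(group):
    return sum(c for _, _, c in group) / len(group)

drinkers = [p for p in people if p[1]]
abstainers = [p for p in people if not p[1]]

# Crude comparison: beer "looks" dangerous because drinkers include more smokers.
print(f"Cancer rate, beer drinkers: {cancer_rate(drinkers):.3f}")
print(f"Cancer rate, abstainers:    {cancer_rate(abstainers):.3f}")

# Stratify by the confounder: within smokers (or non-smokers) alone,
# beer no longer predicts cancer.
for label, is_smoker in (("smokers", True), ("non-smokers", False)):
    stratum = [p for p in people if p[0] == is_smoker]
    d = [p for p in stratum if p[1]]
    a = [p for p in stratum if not p[1]]
    print(f"{label}: drinkers {cancer_rate(d):.3f} vs abstainers {cancer_rate(a):.3f}")
```

Stratification (or statistical adjustment) is exactly what the "control for confounders" advice above means in practice.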
3. Look at the statistics
We are trained to always look for the p-value as a measure of significance, but just because something is statistically significant does not mean that it is clinically important. The magnitude of the change matters: for example, a 0.25 kg weight loss (p < 0.05) may be statistically significant but is not clinically important.
When it comes to correlation, a statistically significant p-value (p < 0.05) may exaggerate our interpretation of positive results and be confused with true clinical importance. This is particularly common in studies with a large sample size. It’s very important to determine the magnitude of effect. Take a look at the correlation coefficient (often reported as r) to see how closely two variables are associated (1):
- < 0.25 = little or no relationship
- 0.25 – 0.50 = fair degree
- 0.50 – 0.75 = moderate to good
- > 0.75 = good to excellent
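A quick numerical sketch shows why sample size can make a trivial correlation "significant". The data below are simulated, not from any real study: the true relationship is deliberately weak (r ≈ 0.1, "little or no relationship" on the scale above), yet with 10,000 participants the test statistic blows far past the p < 0.05 threshold.

```python
import math
import random

random.seed(1)

# Simulated data: a deliberately weak relationship in a large sample.
n = 10_000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.1 * xi + random.gauss(0, 1) for xi in x]   # true r is about 0.1

def pearson_r(a, b):
    """Pearson correlation coefficient, computed from scratch."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    sa = math.sqrt(sum((ai - ma) ** 2 for ai in a))
    sb = math.sqrt(sum((bi - mb) ** 2 for bi in b))
    return cov / (sa * sb)

r = pearson_r(x, y)

# t statistic for H0: r = 0; anything beyond ~1.96 means p < 0.05 at this n.
t = r * math.sqrt((n - 2) / (1 - r ** 2))
print(f"r = {r:.3f}  (little or no relationship by the scale above)")
print(f"t = {t:.1f}  (far beyond 1.96, so p < 0.05 despite the tiny effect)")
```

The p-value answers "is the association likely real?", while r answers "is it big enough to matter?" – both questions need an answer before changing practice.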
4. RCTs are not always perfect…
While experimental studies are the only design that may be able to establish causation, they are not without flaws. You should always consider: the outcome measures chosen, the methods used, the population studied, study duration, etc.
For instance, imagine a weight loss intervention study where participants are randomized to receive either diet counseling, a nutrition handout, or standard care (control). Researchers find that there is no difference in weight loss between the groups… but the study only used 3 participants per group and followed them for 2 weeks. We know this is too small a sample size and too short an intervention to measure change.
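We can put a rough number on how unlikely that tiny trial was to find anything. The sketch below is hypothetical: it assumes an invented but real 1 kg treatment difference with 1.5 kg of individual variability, then simulates many trials to estimate how often a two-sample t-test would detect it with 3 versus 30 participants per group.

```python
import math
import random

random.seed(7)

def welch_t(a, b):
    """Welch's two-sample t statistic."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return (ma - mb) / math.sqrt(va / len(a) + vb / len(b))

def power(n_per_group, t_crit, effect_kg=1.0, sd_kg=1.5, trials=2000):
    """Fraction of simulated trials where a real effect reaches p < 0.05."""
    hits = 0
    for _ in range(trials):
        treat = [random.gauss(effect_kg, sd_kg) for _ in range(n_per_group)]
        ctrl = [random.gauss(0.0, sd_kg) for _ in range(n_per_group)]
        if abs(welch_t(treat, ctrl)) > t_crit:
            hits += 1
    return hits / trials

# Approximate two-sided 5% critical values: t ~ 2.776 at df = 4, ~2.0 at df ~ 58.
p_small = power(3, t_crit=2.776)
p_large = power(30, t_crit=2.0)
print(f"Chance of detecting a real 1 kg difference with n=3/group:  {p_small:.0%}")
print(f"Chance of detecting the same difference with n=30/group: {p_large:.0%}")
```

In other words, "no difference found" in an underpowered trial is not evidence of no difference – the study was simply unable to see it.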
Some common problems with RCTs include (1):
- Inadequate blinding
- Contamination: when the groups affect one another in some way, usually because the conditions of the experiment are not adequately controlled.
- Non-compliance: when a participant does not follow the treatment protocol as the researcher has outlined.
- Placebo effect
Often a meta-analysis, a study that statistically combines the findings from a variety of independent studies, can provide clarity and establish greater confidence in the conclusions (3).
There will always be equivocal research findings in nutrition – that’s part of the fun and frustration of nutrition science! While our desire to definitively establish causation related to nutrition and disease risk is genuine, it is frequently unrealistic, logistically impossible and even unethical to perform experimental trials, particularly for the length of time necessary to see health outcomes. As we struggle with correlation vs. causation, critical thinking, clinical judgment and cautious translation of research findings into practice are imperative.
Next week: part four of our TED Talk-inspired series – Knowledge Transfer…
1. Portney LG, Watkins MP. Foundations of clinical research: applications to practice. 2nd ed. New Jersey: Prentice Hall Health; 2000.
2. Study Designs. Centre for Evidence-Based Medicine; University of Oxford.
3. Crombie IK, Davies HT. What is meta-analysis? What is…? series, 2nd ed. 2009. Available at: http://www.medicine.ox.ac.uk/bandolier/painres/download/whatis/Meta-An.pdf