Alternative explanations can be investigated to rule out other reasons for the observed outcomes.
Alternative explanations should be considered in all outcome evaluations, whether formally or informally. In practice, good evaluators, researchers and managers scrutinise outcomes for alternative explanations and initiate further investigation where needed.
Apparent outcomes (or the lack of them) might reflect methodological issues such as selection bias (participants are systematically different from non-participants) or contamination effects (non-participants also benefit from the program, reducing the difference in outcomes between the two groups). They might also reflect the influence of other factors, such as concurrent programs or population movement between areas assigned to receive a program and those without one.
More formal methods for identifying and ruling out possible alternative explanations, or for combining different elements of explanation, include:
- General elimination methodology – possible alternative explanations are identified and then investigated to see if they can be ruled out.
- Searching for disconfirming evidence/following up exceptions – evidence or cases that do not fit the theory of change are actively sought out and investigated.
An evaluation of the outcomes of legislation making bicycle helmets compulsory found a significant decline in head injuries among cyclists. While this was consistent with the theory of change, an alternative explanation was that the overall level of injuries had declined because of increased building of bicycle lanes over the same period. Examination of serious injuries showed that while head injuries had declined in this period, other types of injuries had remained stable, supporting the theory that it was the helmets that produced the change. (Walter et al. 2011, cited in Introduction To Impact Evaluation)
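The logic of this check can be sketched as a simple comparison of percentage changes across injury categories. The counts below are hypothetical illustrations, not the figures reported by Walter et al. (2011):

```python
# A minimal sketch of the disconfirming-evidence check described above.
# The injury counts are hypothetical, invented for illustration only.

def pct_change(before, after):
    """Percentage change from the pre-legislation to the post-legislation period."""
    return (after - before) / before * 100

# Hypothetical annual counts of serious cycling injuries.
injuries = {
    "head":  {"before": 400, "after": 280},  # declines sharply
    "other": {"before": 600, "after": 590},  # roughly stable
}

for category, counts in injuries.items():
    change = pct_change(counts["before"], counts["after"])
    print(f"{category} injuries: {change:+.1f}%")

# If the rival explanation (more bicycle lanes) had driven the decline,
# all injury types should fall together; a drop confined to head injuries
# is consistent with the helmet legislation producing the change.
```

A decline concentrated in one category, while the others stay flat, is what distinguishes the program effect from the rival explanation here.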
Some approaches combine ruling out possible alternative explanations with checks that the results support causal attribution.
- Contribution analysis – a systematic approach that involves developing a theory of change, mapping existing data, identifying challenges to the theory – including gaps in evidence and contested causal links – and iteratively collecting additional evidence to address these.
- Multiple lines and levels of evidence (MLLE) – a wide range of evidence from different sources is reviewed by a panel of credible experts spanning a range of relevant disciplines. The panel identifies consistency with the theory of change while also identifying and explaining exceptions. MLLE reviews the evidence for a causal relationship between a program and observed outcomes in terms of its strength, consistency, specificity, temporality, coherence with other accepted evidence, plausibility, and analogy with similar programs.