Impact Evaluation and Causal Inference
Course Summary:

The science of impact evaluation is a rigorous field that requires thorough knowledge of the area of work, of study designs from simple to complex, and of advanced statistical methods for causal inference. Many programme funders and managers want to know whether their programmes have produced measurable impact. They want to be able to measure that impact and express it in numbers.

The key focus of impact evaluation is attribution and causality – establishing that the programme is indeed responsible for the observed changes. A major challenge in achieving this is selecting an “untouched” comparison group and applying the appropriate statistical methods for inference.

Course Objectives:

  • Introduction to impact evaluation
  • The need for and culture of impact evaluation
  • Programme indicators
  • Introduction to programmes and logic models
  • What really is impact – efficiency versus effectiveness
  • Counterfactuals – looking for a suitable control
  • Experimental and quasi-experimental study designs
  • Research ethics in IE and design implications
  • Causality
  • Attribution and research validity in impact evaluation
  • Potential biases in impact evaluation
  • Design and statistical methods for controlling for confounders
  • Statistical techniques for causal inference

Course Outline

  • We will review the what, how, and why of impact evaluation, with particular emphasis on the role of control groups, pre- and post-measurement, and covariate data in defining counterfactual scenarios (including formal definitions of all terms).
  • We will review detailed examples of the main methods for evaluation - randomized experiments and quasi-experiments (including natural experiments and matching methods) - with a clear description of the pros and cons of each method.
  • We will work in groups to develop a plan for evaluating real-life projects. The emphasis will be on defining the counterfactual situation and identifying potential confounders.
  • We will carry out stylized evaluations in class and in the evenings using Stata so we can apply the concepts.
  • Finally, we will place econometric evaluations within the broader context: how can we move beyond push-button evaluations; what do we do under time, resource, and data constraints; and when and where should we rely on theory-based evaluations and mixed methods to complement and/or substitute for econometric evaluations?
  • Participants will have to think through and articulate a solution to the evaluation problem across all three aspects of the empirical work: study design, data collection, and analysis.
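To give a flavour of the counterfactual reasoning the outline describes, the following is a minimal sketch (in Python rather than Stata, and with made-up numbers purely for illustration) of a difference-in-differences estimate: the comparison group's pre-to-post change stands in for the counterfactual trend, and only the treated group's excess change is attributed to the programme.

```python
# Illustrative difference-in-differences sketch. All outcome values below
# are hypothetical and chosen only to demonstrate the arithmetic.

# Mean outcomes (pre, post) for the programme group and the comparison group
treated = {"pre": 50.0, "post": 62.0}
control = {"pre": 48.0, "post": 53.0}

# Naive before-after change in the treated group (confounded by any
# background trend that would have occurred without the programme)
naive_change = treated["post"] - treated["pre"]  # 12.0

# The comparison group's change estimates that counterfactual trend
counterfactual_trend = control["post"] - control["pre"]  # 5.0

# Difference-in-differences: attribute only the excess change to the programme
did_estimate = naive_change - counterfactual_trend  # 7.0

print(f"Naive before-after estimate: {naive_change:.1f}")
print(f"Difference-in-differences estimate: {did_estimate:.1f}")
```

Note how the naive estimate (12.0) overstates impact relative to the DiD estimate (7.0): the gap is exactly the trend the comparison group reveals, which is why the choice of an "untouched" comparison group matters so much.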