1 Introduction
2 Basics of impact evaluation
2.1 The fundamental problem of impact evaluation
2.2 Analyzing the impact: characterization and assessment
2.3 The problem of comparing apples to oranges
3 Experiments (A/B testing)
3.1 Comparing apples to apples
3.2 Behavioral assumptions and methods for analyzing experiments
3.3 Multiple interventions
3.4 Use cases in R
3.5 Use cases in Python
4 Selection on observables: aiming to compare apples to apples
4.1 Making groups comparable in observed characteristics
4.2 Behavioral assumptions
4.3 Methods for impact evaluation
4.4 Use cases in R
4.5 Use cases in Python
5 Causal machine learning
5.1 Motivating causal machine learning
5.2 Elements of causal machine learning
5.3 A brief introduction to several machine learning algorithms
5.4 Effect heterogeneity and optimal policy learning
5.5 Use cases in R
5.6 Use cases in Python
6 Instrumental variables
6.1 Instruments and complier effects
6.2 Behavioral assumptions
6.3 Use cases in R
6.4 Use cases in Python
7 Regression discontinuity designs
7.1 Sharp and fuzzy regression discontinuity designs
7.2 Behavioral assumptions and methods
7.3 Use cases in R
7.4 Use cases in Python
8 Difference-in-Differences
8.1 Difference-in-Differences and the impact in the treatment group
8.2 Behavioral assumptions and extensions
8.3 Use cases in R
8.4 Use cases in Python
9 Synthetic controls
9.1 Impact evaluation when a single unit receives the intervention
9.2 Behavioral assumptions and variants
9.3 Use cases in R
9.4 Use cases in Python
10 Conclusion