Platform: MITx / J-PAL (Abdul Latif Jameel Poverty Action Lab)
Duration: Self-paced – Lectures and materials can be accessed for 3 months
Certification: You can learn and access course materials for free, but certification requires payment of $99
To develop the best solution to a societal problem, we need to know what really works and what impact interventions actually have. The course “Evaluating Social Programs,” offered by J-PAL through the MITx platform, helps learners understand how randomised evaluations can be used to measure the true impact of policies and programs. It explains each step in designing a randomised evaluation, from developing a theory of change to maximising policy impact. Through lectures and case studies built on real-world examples, it helps learners understand both the technical aspects and the practical considerations involved in conducting a randomised evaluation to measure impact.
The course is asynchronous and self-paced, and is useful for researchers, policymakers, and students interested in impact evaluation. It is organised into nine lectures and five case studies, along with optional tutorials and exercises.
Course outline:
- Introduction
- Why Evaluate
- Theory of Change and Measurement
- Why and When to Randomise
- How to Randomise
- Sample Size and Power
- Threats and Analysis
- Ethics
- Cost-effectiveness Analysis and Scaling Up
- Generalising and Applying Evidence
- Conclusion

The course begins with the basic concepts: what a counterfactual is, how a randomised evaluation constructs a comparison group that approximates the counterfactual to estimate a program’s true impact, and why this approach is essential. It then describes the theory of change as the foundation of programme evaluation, from needs assessment to outcome measurement. The unit also explains basic concepts of measurement and indicators, as well as strategies to maximise the reliability and validity of data sources. Further, the course briefly discusses how different non-experimental impact evaluation methods attempt to measure impact, and explains why and when a randomised evaluation is appropriate.
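To make the counterfactual idea concrete, here is a minimal simulation sketch (not part of the course materials; the population, effect size, and variable names are hypothetical). It illustrates why a naive comparison of participants with non-participants can be biased by self-selection, whereas a randomly assigned control group approximates the counterfactual:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical population: 'motivation' drives both program take-up and outcomes.
motivation = rng.normal(0, 1, n)
true_effect = 2.0

# Self-selection: more motivated people enrol more often.
self_selected = rng.random(n) < 1 / (1 + np.exp(-motivation))
outcome_self = 10 + 3 * motivation + true_effect * self_selected + rng.normal(0, 1, n)

# Naive comparison of participants vs. non-participants is confounded by motivation.
naive = outcome_self[self_selected].mean() - outcome_self[~self_selected].mean()

# Random assignment: treatment is independent of motivation, so the control
# group approximates the counterfactual for the treated group.
assigned = rng.random(n) < 0.5
outcome_rct = 10 + 3 * motivation + true_effect * assigned + rng.normal(0, 1, n)
rct = outcome_rct[assigned].mean() - outcome_rct[~assigned].mean()

print(f"True effect: {true_effect:.2f}")
print(f"Naive participant vs non-participant difference: {naive:.2f}")  # biased upward
print(f"Randomised treatment-control difference: {rct:.2f}")            # close to the truth
```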

The lecture “How to Randomise” provides a deeper understanding of units of randomisation and how to select the appropriate level at which to randomise. It also covers the real-world political, resource-related, spillover/crossover, and logistical constraints involved in conducting randomised evaluations, and explains how and when different randomised designs, such as lottery, phase-in, rotation, and encouragement designs, can be used to address these constraints effectively.
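As a rough illustration of two of these choices (the numbers of villages and households below are made up, not taken from the course), this sketch assigns a hypothetical household roster using an individual-level lottery, a village-level cluster design, and a phase-in schedule:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)

# Hypothetical roster: 20 villages with 50 households each.
roster = pd.DataFrame({
    "village": np.repeat(np.arange(20), 50),
    "household": np.arange(20 * 50),
})

# Lottery at the household level: every unit has an equal chance of treatment.
roster["treat_individual"] = rng.permutation(np.repeat([1, 0], len(roster) // 2))

# Cluster randomisation at the village level: useful when spillovers within a
# village would contaminate an individual-level lottery.
treated_villages = rng.choice(20, size=10, replace=False)
roster["treat_cluster"] = roster["village"].isin(treated_villages).astype(int)

# Phase-in design: every village eventually receives the program, but the start
# wave is randomised; later waves serve as the comparison group in the meantime.
phase = rng.permutation(np.repeat([1, 2, 3, 4], 5))  # 4 waves of 5 villages
roster["phase"] = roster["village"].map(dict(zip(range(20), phase)))

print(roster.groupby("phase")["village"].nunique())  # 5 villages per wave
```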
Familiarity with basic statistics is required to understand how the statistical power of a randomised evaluation is calculated. I found this calculation a bit complex, but a tutorial and practice exercise, along with EGAP’s power calculator and J-PAL’s sample code for conducting power calculations in Stata and R, are provided to help learners. The course also discusses in detail the main threats to validity, such as attrition, spillovers and non-compliance, and how to address them in the analysis, for example by reporting Intention-to-Treat (ITT) and Local Average Treatment Effect (LATE) estimates when compliance is imperfect.
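The sketch below is not J-PAL’s Stata or R sample code; it is only an illustration of the two ideas in this part of the course: a standard closed-form sample-size formula for a two-arm trial, and ITT and LATE (Wald) estimates computed from simulated data with one-sided non-compliance. The effect size, compliance rate, and variable names are assumptions:

```python
import numpy as np
from scipy.stats import norm

def sample_size_per_arm(mde, sd, alpha=0.05, power=0.80):
    """Sample size per arm for a two-sided test of a difference in means
    with equal allocation. mde: minimum detectable effect; sd: outcome SD."""
    z_alpha = norm.ppf(1 - alpha / 2)
    z_beta = norm.ppf(power)
    return int(np.ceil(2 * ((z_alpha + z_beta) * sd / mde) ** 2))

# e.g. detecting a 0.2 standard-deviation effect with 80% power
print(sample_size_per_arm(mde=0.2, sd=1.0))  # ~393 households per arm

# ITT and LATE with one-sided non-compliance, on simulated data
rng = np.random.default_rng(1)
n = 10_000
assigned = rng.random(n) < 0.5                 # random assignment to treatment
takeup = assigned & (rng.random(n) < 0.6)      # only 60% of the treatment group complies
outcome = 1.0 * takeup + rng.normal(0, 1, n)   # true effect of take-up = 1.0

itt = outcome[assigned].mean() - outcome[~assigned].mean()
first_stage = takeup[assigned].mean() - takeup[~assigned].mean()
late = itt / first_stage                       # Wald estimator: effect on compliers

print(f"ITT ~ {itt:.2f}, LATE ~ {late:.2f}")   # ITT diluted by non-compliance; LATE ~ 1.0
```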
The course emphasises the importance of following ethical principles when conducting randomised evaluations, such as respecting people’s privacy and consent, maximising benefits and minimising potential risks, and ensuring fairness and justice for study participants. It also explains the steps involved in obtaining ethical approval from an Institutional Review Board (IRB).
Further, the course clarifies the difference between Cost-Effectiveness Analysis (CEA) and Cost-Benefit Analysis (CBA) when choosing which interventions to scale up, and explains how to use the generalizability framework to assess whether the findings of one evaluation can be applied in other contexts.
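As a toy illustration of the CEA logic (the programs, costs, and outcomes below are entirely hypothetical), each intervention’s cost is divided by the same outcome measure so that programs can be ranked by cost per unit of impact; a CBA would additionally require monetising that outcome:

```python
# Hypothetical cost-effectiveness comparison (illustrative numbers only).
programs = {
    # program: (total cost in USD, additional years of schooling produced)
    "Scholarships":        (50_000, 100),
    "Information session": (5_000,  40),
}

for name, (cost, extra_schooling) in programs.items():
    print(f"{name}: ${cost / extra_schooling:.0f} per additional year of schooling")
```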
Rather than long lectures, each lecture is broken into short 6-10 minute videos, making it easier for learners to follow. After each video, there are multiple-choice questions to assess our understanding of the concept. Each question allows one attempt, while some challenging questions allow two. Correct answers with explanations are provided after submission. These quizzes are graded and count toward the final course grade. All lecture slides and videos (with subtitles) can be downloaded, and additional J-PAL research and policy resources are available for further learning.
In addition to the lectures, the course includes case studies: short readings followed by forum discussions that explore the concepts and issues covered in the lecture sequences. The discussion topics include graded multiple-choice questions and ungraded reflection questions, and answers to the case study questions count toward the final grade. Lecture quizzes account for 40% of the final grade, while case studies and the final quiz contribute 30% each. A minimum of 65% is required to pass the course. However, only those who pay for the certificate track have access to the final quiz.
Overall, the course is comprehensive, covering all aspects of conducting a randomised evaluation to measure the true impact of programs and policies accurately. The way each lecture integrates real-world examples, and the way the instructors, including Rachel Glennerster, Benjamin Olken, and senior J-PAL staff, explain each concept through their own experience of conducting randomised impact evaluations, makes it interesting and engaging. For Agricultural Extension students, this course adds significant value: while courses such as Research Methodology and Programme Planning and Evaluation provide limited exposure to impact evaluation, this course offers an in-depth understanding of randomised impact evaluation, its practical applications, and its importance in evidence-based policymaking.
Thirumalai Nambi is a research intern at CRISP. He recently completed his Master’s in Agricultural Extension Education at the University of Agricultural Sciences, Dharwad. He is interested in agripreneurship, rural development, and qualitative research. He can be reached at tnambi2001@gmail.com.








