Design, Monitoring and Evaluation for Peacebuilding

What Evaluation Approach Should I Use?

Developmental Evaluation
  • Gamble, Jamie A. A. A Developmental Evaluation Primer. Canada: The J.W. McConnell Family Foundation, 2008.
    • Available here
    • Beginner, Intermediate
    • This ‘primer’ introduces the concept of developmental evaluation and provides tools to foster its use. The first part of the book discusses the basics of DE, as well as a series of myths around it, and highlights some of the conditions needed to assess if organizations are in an appropriate space to apply DE. The second part of the book focuses on how to apply developmental evaluation, and discusses the key features of a developmental evaluator, as well as tools, issues and challenges.
Empowerment Evaluation
  • Cox, Pamela J., Dana Keener, Tiffanee L. Woodard, and Abraham H. Wandersman. Evaluation for Improvement: A Seven Step Empowerment Evaluation Approach for Violence Prevention Organizations. Atlanta: Centers for Disease Control and Prevention, 2009.
    • Available here
    • Intermediate
    • This manual is designed to help violence prevention organizations hire an empowerment evaluator who will assist them in building their evaluation capacity through a learn-by-doing process of evaluating their own strategies.
Most Significant Change
  • Davies, Rick, and Jess Dart. The ‘Most Significant Change’ (MSC) Technique. A Guide to Its Use. 2005.
    • Available here
    • Beginner
    • This practical guide walks users through a clear step-by-step process of how to implement MSC, and also provides insights on its history, and how it compares to other approaches. In addition to reference to further reading, the guide is complemented by samples of story collection formats, sample MSC stories, and other annexes that can be useful for practitioners looking into using this approach for the first time.
Goal-Free Evaluation
  • Youker, Brandon W. and Allyssa Ingraham. "Goal-Free Evaluation: An Orientation for Foundations’ Evaluations." The Foundation Review 5, No. 4 (2013): 51-61.
    • Available here
    • Intermediate
    • This paper discusses the concept and main features of Goal-Free Evaluation, demonstrates GFE’s actual use, highlights aspects of its methodology, and details its potential benefits.
Outcome Mapping and Outcome Harvesting
  • Earl, Sarah. "Overview of Outcome Mapping." Filmed 2007 by the Pan Asia Networking project of IDRC at a workshop on Utilization Focused Evaluation (UFE) in Kuala Lumpur. 22:50.
    • Available here
    • Beginner, Intermediate
    • One of the originators of the approach, Sarah Earl, discusses the origins and fundamental features of outcome mapping and how it relates to evaluation, highlighting what can be done in M&E with outcome mapping. This resource is particularly useful for evaluators.
  • White, Jonathan. "Introduction to Outcome Mapping." DM&E for Peace.
    • Available here
    • Beginner
    • This short discussion on outcome mapping as an evaluative methodology provides an overview of OM and what it entails, and its relation to peacebuilding programming. It also provides useful “hot tips” and additional resources (e.g. webinars) for further reference.
  • Wilson-Grau, Ricardo, and Heather Britt. Outcome Harvesting. Cairo: Ford Foundation, 2012 (revised November 2013).
    • Available here
    • Intermediate
    • This brief is intended to introduce the concepts and approach used in Outcome Harvesting to grant makers, managers, and evaluators, with the hope that it may inspire them to learn more about the method and apply it to appropriate contexts. Thus, it is not a comprehensive guide to or explanation of the method, but an introduction to allow evaluators and decision makers to determine if the method is appropriate for their evaluation needs.
Participatory Approaches

Although other approaches can also be conducted in a participatory manner, participatory approaches rest on the premise that the structured participation of stakeholders throughout the stages of the evaluation, and in its decision-making, is essential to conducting the evaluation. There is little guidance on using these approaches to evaluate peacebuilding; the resources below cover participatory approaches in other fields and could be adapted and tested for peacebuilding.

  • KU Work Group for Community Health and Development. "Chapter 36, Section 6: Participatory Evaluation." In the Community Tool Box. Lawrence, KS: University of Kansas, 2015.
    • Available here
    • Beginner, Intermediate
    • This reading introduces the concept of Participatory Evaluation, and explains the reasons for using it (and not using it), and who should be involved in participatory evaluation. Additionally, it provides a series of steps for conducting participatory evaluations.
  • Guijt, Irene. Participatory Approaches. Methodological Briefs: Impact Evaluation 5. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Intermediate
    • This guide explains the use of participatory approaches in impact evaluation, discussing when it is best to use this approach, and how to make the most of it.
  • Catley, Andy, John Burns, Dawit Abebe, and Omeno Suji. Participatory Impact Assessment: A Design Guide. Medford, MA: Feinstein International Center, Tufts University, 2014.
    • Available here
    • Intermediate, Advanced
    • This document provides step-by-step guidance on participatory approaches to measure impacts of livelihoods, development and humanitarian interventions.  While not specifically designed for peacebuilding interventions, it provides helpful guidance on how to organize, prepare and conduct impact evaluations in which local people participate in defining and measuring impact.
Theory-Based Approaches to Evaluation
  • Ober, Heidi et al. Guidance for Designing, Monitoring and Evaluating Peacebuilding Projects: Using Theories of Change. London: CARE International UK, 2012.
    • Available here
    • Beginner, Intermediate
    • This guide provides useful guidance on using theories of change in the design phase as well as to monitor and evaluate peacebuilding programs.
  • Funnell, Sue C. and Patricia J. Rogers. Purposeful Program Theory: Effective Use of Theories of Change and Logic Models. San Francisco, CA: Jossey-Bass/Wiley, 2011.
    • Available here
    • Intermediate
    • This book discusses ways of developing, representing and using program theory and theories of change in different ways to suit the particular situation. It discusses how to address complicated and complex aspects of programs in terms of focus, governance, consistency, necessity, sufficiency and change trajectory. Additional information on the book is available here.
  • White, Howard. Theory-Based Impact Evaluation: Principles and Practice. Working Paper 3. New Delhi: International Initiative for Impact Evaluation, 2009.
    • Available here
    • Advanced
    • This paper identifies six principles for the successful application of Theory-Based Impact Evaluation: (1) map out the causal chain (program theory); (2) understand the context; (3) anticipate heterogeneity; (4) conduct a rigorous evaluation of impact using a credible counterfactual; (5) carry out rigorous factual analysis; and (6) use mixed methods.
  • Mayne, John. Contribution Analysis: An approach to exploring cause and effect. The Institutional Learning and Change (ILAC) Initiative, 2008.
    • Available here
    • Intermediate
    • Contribution analysis is one form of theory-based evaluation. After introducing the concept of contribution analysis, this document explains, step-by-step, how to conduct evaluations based on this approach.
Process Tracing
  • Oxfam GB. Process Tracing: Draft Protocol.
    • Available here
    • Beginner
    • This document presents the concept of process tracing and provides detailed guidance for undertaking evaluations using this approach.
  • Collier, David. "Understanding Process Tracing." Political Science and Politics 44, No. 4 (2011): 823 -830.
    • Available here
    • Intermediate
    • This article describes how process tracing works, and the essential elements of the approach.
Utilization Focused Evaluation (UFE)
  • Patton, Michael Q. Utilization-Focused Evaluation Checklist. DM&E, 2002.
    • Available here
    • Beginner, Intermediate
    • After a brief overview of UFE, this checklist explains the 12 steps of UFE and highlights the premises, primary tasks and challenges associated with each step that need to be taken into account in order to maximize use by intended users.
  • Patton, Michael Q. Essentials of Utilization-Focused Evaluation. Thousand Oaks, CA: SAGE Publications, 2012.
    • Available here
    • Intermediate
    • Based on Michael Quinn Patton's best-selling Utilization-Focused Evaluation, this briefer book provides an overall framework and essential checklist steps for designing and conducting evaluations that actually get used.
  • Ramirez, Ricardo, and Dal Broadhead. Utilization Focused Evaluation: A Primer for Evaluators. Penang: Southbound, 2013.
    • Available here
    • Beginner, Intermediate
    • This primer is designed for practitioner evaluators and project implementers who are interested in using Utilization Focused Evaluation. The primer covers each of the 12 steps of UFE, using case studies to illustrate what it is like to learn to use UFE.
Case Studies
  • Neale, Palena, Shyam Thapa, and Carolyn Boyce. Preparing a Case Study: A Guide for Designing and Conducting a Case Study for Evaluation Input. Watertown, MA: Pathfinder International, 2006.
    • Available here
    • Beginner
    • This short guide provides basic information on case studies, their purposes and uses, and the elements of a case study conducted as part of an evaluation.
  • Balbach, Edith. Using Case Studies to Do Program Evaluation. CA: California Department of Health Services, 1999.
    • Available here
    • Intermediate
    • This guide helps evaluators assess whether to use a case study evaluation approach and how to conduct a case study.
  • Goodrick, Delwyn. Comparative Case Studies. Methodological Briefs: Impact Evaluation 9. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Intermediate
    • This methodological brief provides “how to” advice on conducting comparative case studies, especially for evaluation of impacts of interventions, when there is a need to understand and explain how features within the context influence the success of program or policy initiatives.
Experimental/Quasi-Experimental Approaches

These approaches require a high level of technical expertise. The resources below provide a good overview of these approaches, broad guidance on how they are conducted, when and for what purposes they might be used, and the challenges and ethical considerations involved; they can be useful for commissioners of evaluations and for program teams engaging with evaluators who use these designs.

  • Anderson Moore, Kristin. Quasi-Experimental Evaluations. Part 6 in a Series on Practical Evaluation Methods. Washington, D.C.: Child Trends, 2008.
    • Available here
    • Beginner
    • This short paper presents the concept of quasi-experimental evaluations: what can be learned from them, under what circumstances it is appropriate to conduct one, and the types of quasi-experimental outcome evaluations. The paper closes by discussing risks and obstacles that may arise when planning or implementing this kind of evaluation.
  • White, Howard, Shagun Sabarwal, and Thomas de Hoop. Randomized Controlled Trials (RCTs). Methodological Briefs: Impact Evaluation 7. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Intermediate
    • This brief explains randomized controlled trials (experimental methods), when to use them, and the basic steps for conducting them. It references more technical guidance and is useful for commissioners of evaluations and program teams considering such methods to assess impact, especially whether results are attributable to their programs.
  • White, Howard, and Shagun Sabarwal. Quasi-Experimental Design and Methods. UNICEF Methodological Briefs, Impact Evaluation 8. Florence: UNICEF Office of Research, 2014.
    • Available here
    • Intermediate
    • Quasi-experimental designs test causal hypotheses about whether a program or intervention has produced a particular change, and are used when experimental methods (RCTs) are not feasible. This brief explains what quasi-experimental designs are and how to develop comparison groups when random assignment is not possible.
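The comparison-group logic behind these designs can be hard to picture in the abstract. The sketch below illustrates a difference-in-differences estimate, one common quasi-experimental technique: comparing the before/after change in a program group against the change in a non-randomly selected comparison group, so that a shared background trend is netted out. All group names and scores are invented purely for illustration, not drawn from any of the resources above.

```python
import statistics

# Hypothetical outcome scores (e.g. a community-cohesion index) measured
# before and after an intervention, in a program group and a comparison
# group chosen without random assignment. All numbers are invented.
program_before = [40, 42, 38, 41, 44]
program_after = [50, 53, 47, 52, 55]
comparison_before = [39, 41, 40, 38, 42]
comparison_after = [43, 45, 44, 41, 47]  # also improved, due to a shared trend

# Change over time within each group.
program_change = statistics.mean(program_after) - statistics.mean(program_before)
comparison_change = statistics.mean(comparison_after) - statistics.mean(comparison_before)

# Difference-in-differences: subtracting the comparison group's change
# removes the shared trend, leaving an estimate of the program's effect
# (valid only if both groups would otherwise have followed parallel trends).
did_estimate = program_change - comparison_change
print(round(did_estimate, 1))  # ≈ 6.4
```

The key design assumption, discussed in the brief above, is that the comparison group is similar enough that both groups would have changed in parallel without the program; the resources in this section describe matching and other techniques for constructing such groups.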
Real Time Evaluation
  • Herson, Maurice, and John Mitchell. "Real-time Evaluation: Where Does Its Value Lie?" Humanitarian Exchange Magazine, No. 32 (2005).
    • Available here
    • Intermediate
    • This paper discusses the history of RTE, and addresses some common aspects of the RTE methodology and outcomes.
  • Cosgrave, John, Ben Ramalingam, and Tony Beck. Real-time Evaluations of Humanitarian Action: An ALNAP Guide. London: ALNAP, 2009.
    • Available here
    • Intermediate, Advanced
    • The guide is intended to help both evaluation managers and team leaders in commissioning, overseeing and conducting real-time evaluations (RTEs) of humanitarian operational responses. While not directly applicable to peacebuilding, it has detailed guidance on RTEs that can be adapted for peacebuilding contexts. 
Additional Resources

This list of approaches is by no means exhaustive. The BetterEvaluation site provides a more comprehensive list of evaluation approaches, explaining their meanings and main features in varying degrees of detail and pointing to additional resources for further reference.


Evaluation approaches refer to the principles or frameworks guiding the design and implementation of an evaluation. This section provides resources on a selection of evaluation approaches that are commonly used to evaluate peacebuilding initiatives.

Situational appropriateness is increasingly seen as the best criterion for choosing approaches and methods. Commissioners of evaluations, together with evaluators, should decide which approach suits the kinds of evaluation questions being asked, the users' needs, the nature of the intervention, the context in which the evaluation will be conducted, and the availability of resources (both financial and human). In Designing for Results (Chapter 8), Church and Rogers provide guidance on how to decide which approach to use and summarize the pros and cons of each.

Resources on some key evaluation approaches are listed below.