Design, Monitoring and Evaluation for Peacebuilding


Myths in Impact Evaluation

Two recent (2012) publications by the International Initiative for Impact Evaluation (3ie) and the U.K. Department for International Development have broadened the range of approaches and methods for the use of impact evaluation in international development. These new works, if embraced by peacebuilders and peacebuilding evaluators, could broaden the applicability of impact evaluation designs in assessing causation in the complex environments in which we work.

Impact evaluation, it seems, is frequently seen as inapplicable in peacebuilding for a variety of reasons – many of these are well outlined by Reina Neufeldt in her complementary articles for the Berghof Handbook for Conflict Transformation. But the primary argument against IE seems to be that causation is difficult to attribute to a single intervention due to the intangibility of the results, their often long-term nature, and the complexity of the systems in which we work.

The assumption that IE is inapplicable may not be fully correct, however. There are a range of misperceptions about impact evaluation, probably due to our unfamiliarity, and our collective unwillingness to become familiar, with it.

First, a few key terms. The OECD defines impact as the “positive and negative, primary and secondary long-term effects produced by a development intervention, directly or indirectly, intended or unintended.”1  Impact evaluation, therefore, seeks to attribute observable changes in the environment to a particular intervention. With that in mind, let’s jump right in.

De-Bunking 3 Myths

Cause-effect is an oversimplification of the complex

Attribution is a sensitive subject in peacebuilding. On the one hand, our field is largely based on a complex worldview where there are multiple contributing factors to certain changes in the environment. At the same time, we claim to have impact at the micro, meso and macro-levels. Such claims cannot be made without evidence, which requires some form of attribution (which for these purposes includes rigorous contribution analysis). 

There are multiple ways in which to approach attribution; these are laid out well in the DFID paper mentioned above, and include some surprising contenders: theory-based, case-based and even participatory evaluation designs (see page 24).

But taking a step back, there are multiple ways in which to attribute cause and effect (the DFID paper lists four):

  1. One cause (the intervention) associated with one outcome (result)
  2. One cause associated with multiple outcomes
  3. Multiple causes associated with multiple outcomes
  4. Multiple causes associated with one main outcome

The third and fourth causal pathways are perhaps most appropriate to peacebuilding evaluation. So attribution is not necessarily an oversimplification of the complex: there is room for complexity.

Impact evaluation requires randomized-control trials, experimental and quasi-experimental designs

This myth holds that IE is infeasible for peacebuilding evaluation for a multitude of reasons, but primarily due to ethics: is it ethical to withhold treatment from a conflict community (which would be involved in the construction of a counterfactual)? This myth is based on a misunderstanding of impact evaluation designs: randomized control trials are one possible option for such designs, but they are by no means the only one. IE approaches and methods span a wide range, and within that range lie numerous approaches that are already in use in peacebuilding evaluation.

The recent 3ie paper categorizes impact evaluation approaches into two groups: Group I approaches “explicitly set out to discover the causes of observed effects with the goal of establishing beyond reasonable doubt how an outcome or set of outcomes occurred”2. Examples include realist evaluation, general elimination methodology, process tracing, and contribution analysis. With the exception of contribution analysis, these are generally viewed as inapplicable – particularly realist evaluation, which holds that while perception matters, there is still a common reality that we can all agree on – something the ‘circlers’ in Neufeldt’s article may object to. The validity of this perception of inapplicability has yet to be fully determined, and may never be. Nevertheless, it is important to challenge our implicit assumptions about certain evaluative methods and approaches – this is, after all, what much of peacebuilding is all about.

Group II approaches, on the other hand, do not place such explicit emphasis on attribution of cause and effect, favoring instead to “establish what factors are perceived to have been important in producing change; in so doing, they aim to gain an insight into how a programme is performing and the part that it is playing in driving change.”3 Examples include most significant change, success case method, and outcome mapping. While these approaches are intended for a variety of monitoring and evaluation purposes, rather than the specific purpose of causation, “they can offer a systematic method for gathering the necessary information to rigorously attribute outcomes to a programme.”4

In other words, Group II approaches, when combined with an approach that specifically seeks to attribute cause and effect (Group I), can add significant value to the data collection and utilization strategy of the evaluation.

Impact evaluation is quantitative

While most impact evaluation methods will include quantitative data, IE by no means relies exclusively on it. IE can be, and often is, a mixed-methods approach to data. As already demonstrated above, this can include outcome mapping and most significant change methodologies – both highly qualitative.

Furthermore, peacebuilding evaluations already use quantitative methods in some form: surveys are a common data collection tool – perhaps what scares peacebuilders about quantification in IE is the level of rigor implied in the data collection method.

Many of us in this field, it seems, do not like math. I certainly count myself among them. But the fact is that quantitative data can add significant insight and scale to the qualitative. And besides, a large portion of peacebuilding evaluations claim a mixed-methods approach, which by necessity must include quantitative methods. And if IE is a mixed-methods approach, what are we so afraid of?

Jonathan White is the Manager of the Learning Portal for DM&E for Peacebuilding at Search for Common Ground. Views expressed herein do not represent SFCG, the Learning Portal or its partners or affiliates.

  • 1. OECD, “Glossaries of Key Terms in Evaluation and Results based Management,” OECD-DAC Working Party on Aid Evaluation.
  • 2. Howard White and Daniel Phillips, “Addressing attribution of cause and effect in small n impact evaluations: towards an integrated framework,” International Initiative for Impact Evaluation (3ie), Working Paper 15, June 2012, p. 7.
  • 3. White and Phillips, “Addressing attribution,” p. 13.
  • 4. Ibid., p. 19.

Dear Jonathan,

Great to see this work being done. Another piece of work on this topic published last month is Measuring the Impact of Peacebuilding Interventions on Rule of Law and Security Institutions by Vincenza Scherrer at DCAF. It looks at the different types of evaluations used by UN agencies, and concludes that some level of impact assessment is possible. It distinguishes between evaluation and assessment.

Hope you and others can find it helpful.

Thanks for bringing that report to my attention, Thammy! I'll be sure to have it up on the Portal in the next few days. There's also an InterAction webinar on Sept. 6 on mixed-methods impact evaluation by Michael Bamberger. I encourage you to check it out!

This is an important post.  I'm working as a consultant with Nonviolent Peaceforce, where we are piloting different ways to assess the impact of their Unarmed Civilian Peacekeeping.  We've successfully completed quantitative community-level assessments (including with comparison communities) in the Philippines and have strong statistical evidence of a higher perceived sense of safety and security in the NP communities.  It is not perfect science, but that's not what we're going for.  We're trying to learn what are feasible, defensible ways to attribute some impact to NP's work.  One key word you mentioned above is "systematic."  Lots of data collection is going on, but both collection and analysis are not happening in ways that currently allow for good impact evaluation.  Yet, we absolutely believe it is possible and know there will be lots of valuable lessons learned along the way.