Design, Monitoring and Evaluation for Peacebuilding


Evaluating People-to-People Reconciliation Programs: Findings and Next Steps

Since 2004, the United States Agency for International Development’s (USAID) Office of Conflict Management and Mitigation (CMM) in Washington has managed an annual small grants competition known as the Annual Program Statement (APS). The APS is funded through a Congressional appropriation mandating that grants use a people-to-people reconciliation approach to guide their work. While CMM/Washington manages the overall APS, responsibility for the award and immediate oversight of the funded projects rests with USAID Missions abroad. CMM’s ability to assess the effectiveness of the APS grants is further complicated by the mercurial and complex nature of conflict-related programs, which are difficult to monitor and evaluate using standard, linear monitoring and evaluation (M&E) approaches.

As a result, CMM has explored various methodologies for evaluating complex development programs, including developmental evaluation (DE), which may add greater depth of analysis and understanding. The DE approach uses evaluative information, analysis, and processes to contribute to the organic evolution of a project, rather than simply judging its success or failure. Michael Quinn Patton calls this relationship to an evaluated program “co-evolutionary.” The evaluator is not an independent entity standing outside the project but a facilitator of a reflective learning process (action reflection) in which evaluators, project staff, and other stakeholders are part of an inclusive, ongoing project design.

The Evaluative Learning Review

In the fall of 2011, USAID/CMM awarded Social Impact (SI) a contract to conduct a two-year evaluative learning review of targeted awards and activities under the APS, inspired by developmental evaluation methodology. The objectives of this review were not only to learn about the effectiveness of the Reconciliation APS projects themselves, but also to build CMM’s technical leadership in the evaluation of complex programs. SI’s work included desk research, a meta-evaluation and meta-analysis, three field evaluations of APS programming, a final synthesis report, and ongoing systematic reflective learning on emerging lessons from the evaluation activities and the team’s developmental process.

To follow the co-evolutionary approach described above, CMM and SI established a collaborative and adaptive working relationship, with CMM program managers as active partners in the evaluation work. This was a time-consuming but critical component of implementing the action reflection model of the review. It took the form of monthly leadership meetings to reflect on the work, examine lessons learned, and plan next steps; participation on the three field evaluation teams; and participation in overall decision making for the evaluation. This continual reflection process with multiple stakeholders meant that the evaluative learning review raised more questions than answers and surfaced challenging areas for ongoing consideration. That, however, was the goal of the review. Rather than arriving at set conclusions about the definitive “best practice” or the single best way to do something, the review generated scenarios for the potential development of the people-to-people reconciliation program and examined the values, principles, and scenarios for the potential development of evaluation within CMM, USAID, or similar complex settings. This information is reflected in the final synthesis report and is based on our team’s findings related to current APS implementation and evaluation practice, as well as reflections on our team’s process of adapting a developmental approach. Here we touch on some of the challenges and lessons learned related to evaluating these reconciliation efforts and to our own evaluative learning review process.

Evaluation and Adaptability

One major opportunity that the evaluative learning review surfaced is how evaluation can both tell us what we can know about what works and encourage ongoing adaptability over time. On the one hand, there is a widespread yearning to find out “what works” so we can simply do that. On the other hand, there is also an awareness that context matters and, particularly in conflict settings, the context sometimes changes dramatically as the conflict develops. There can be no single “what works” that fits all reconciliation projects. Instead of finding success stories and mimicking those approaches, reconciliation program managers need to be guided by appropriate values and principles and adopt modes of learning what works in their particular context. With this approach, ongoing evaluation, or action reflection styles of evaluation, can support ongoing learning and ongoing development. This calls for balancing the doing of a project with regular reflection on it, and allowing space for projects to develop over time.

In addition, when the core stakeholders engage in tracking and supporting the development of their reconciliation program, that engagement itself supports reconciliation. As people participating in a reconciliation program bridge their conflict divide to further develop the program, they work together toward a shared goal, informed by evaluative insights. Working toward a shared goal is itself a reconciliation process. Evaluation can change the way people work together. Rather than seeking one right way and sticking to it, groups can find their way gradually, through trial and error, constant feedback, and continuous learning. Ultimately, this means that evaluation supports adaptive implementation, and we have found that reconciliation programs are most effective when they are adaptively implemented.

To engage effective and manageable action reflection evaluation techniques, two things need to be considered:

  • Time, and
  • Scale and levels of analysis.

First, as learning from evaluation takes place, the time frame for a project to implement new plans or change its design, informed by that learning, varies. When the learning is most significant, and the shift in plans most radical, that may be the most important time to act quickly. But that most urgent, larger shift is also the situation in which practical constraints on the project’s ability to make the changes (e.g., contractual obligations) can require many levels of careful review of such a major change. Our experiment in bringing some developmental aspects to a short-term engagement with a long-term program like the APS suggests that developmental evaluation approaches and tools can be useful for large-scale global programs, but that their utility will be seen over longer periods of time, depending on the speed with which global program managers are able to try out innovations and make changes to the program design or implementation based on evaluation findings.

Second, developmental approaches can be useful at any level of the project, or with any one of the stakeholders or participants in reconciliation processes. As demonstrated in our field evaluations, and in the unexpected learning that occurred when headquarters USAID staff met with implementers and participants in the field, there is much benefit to engaging multiple levels of the reconciliation system, from headquarters to field project managers to participants. Useful evaluation will recognize the interconnections between different parts of the system, from micro local communities to regional, national, and global levels. However, as described above, the time frame for shifting strategies and implementing project innovations based on evaluation lessons will differ, depending on the bureaucratic or cultural context in which the project or program is situated.

The thoughts offered above are shaping the final stages of our evaluative learning review. SI and CMM offer this synthesis of our two-year learning process in the spirit of ongoing learning, as a contribution to dialogue with evaluators and those engaged in reconciliation efforts.

Our evaluation team will be presenting our research at the Conflict Prevention and Resolution Forum (CPRF) hosted at SAIS on March 11th from 9-10:30am. From 10:30am-3pm we’ll be hosting interactive sessions on reconciliation programming and evaluation of complex programming. The DM&E for Peacebuilding portal, www.DMEforpeace.org, will be live streaming the event as well. 

Discussion created by Kelly Skeith, Deputy Director for Performance Evaluation at Social Impact, based on the Evaluative Learning Review team's synthesis report findings. 

 

I think your report is important and provides support for what many practitioners experience. For those of us unable to attend the meeting, it would be useful to post a link to the report, or a fuller summary. Thank you.

Hi Furnarie. Thanks for your interest in the report. 

USAID will be publishing the final synthesis report at the end of the month. You can view a recording of the event here.

Hi All,

The final synthesis report is now available on USAID's DEC, http://pdf.usaid.gov/pdf_docs/pbaaa370.pdf.

 

I found your report to be very informative, and many of my questions about implementation evaluation have been answered. Focusing on the formative evaluation of a program and accommodating the space and time needed to alter activities and goals to work toward a chosen outcome is difficult, but your research has offered more insight into how this concept can be applied.

I believe one of the main purposes of doing evaluation is to improve projects and activities over time and to test the theories of change implied or explained by those executing the projects. Michael Quinn Patton says in his book “Essentials of Utilization-Focused Evaluation” (2012) that evaluations answer three questions: (1) What happens in the program (activities and outcomes)? (2) What do the findings mean (implications)? (3) What recommendations follow from the findings? (p. 3). Too often, I believe, evaluations are something people do for the sake of doing them, rather than something people use to understand and make changes for the present and future. Patton and others mentioned on DM&E, like Church and Rogers, for example, make it easier for practitioners to make useful evaluation plans. These three questions serve as a guide to how evaluations should be planned and how they should be used: to answer the attribution question of whether, or to what extent, the project led to the results, and what to do now.