Design, Monitoring and Evaluation for Peacebuilding


Evaluative Learning Review Synthesis Report: USAID/CMM's People-to-People Reconciliation Fund, Annual Program Statement (APS)

Author, Copyright Holder: 
Social Impact/USAID/CMM

Background on APS:

Since 2004, the United States Agency for International Development’s (USAID) Office of Conflict Management and Mitigation (CMM) in Washington has managed an annual small grants competition known as the Annual Program Statement (APS). The APS is funded through a Congressional appropriation mandating that grants use a people-to-people reconciliation approach to guide their work. While CMM/Washington manages the overall APS, responsibility for awarding and directly overseeing the funded projects rests with USAID Missions abroad. This distance complicates CMM’s ability to assess the effectiveness of the APS grants, a difficulty compounded by the mercurial and complex nature of conflict-related programs, which are hard to monitor and evaluate using standard, linear monitoring and evaluation (M&E) approaches.

As a result, CMM has explored various methodologies for evaluating complex development programs, including developmental evaluation (DE), which can add greater depth of analysis and understanding. The DE approach uses evaluative information, analysis, and processes to contribute to the organic evolution of a project, rather than simply judging its success or failure. Michael Quinn Patton calls this relationship to an evaluated program “co-evolutionary”: the evaluator is not an independent entity standing outside the project but a facilitator of a reflective learning process (action reflection) in which evaluators, project staff, and other stakeholders take part in an inclusive, ongoing project design.

The Evaluative Learning Review: 

In the fall of 2011, USAID/CMM awarded Social Impact (SI) a contract to conduct a two-year evaluative learning review of targeted awards and activities under the APS, inspired by developmental evaluation methodology. The objectives of this review were not only to learn about the effectiveness of the Reconciliation APS projects themselves, but also to build CMM’s technical leadership in the evaluation of complex programs. SI’s work included desk research, a meta-evaluation and meta-analysis, three field evaluations of APS programming, a final synthesis report, and ongoing systematic reflective learning on emerging lessons from the evaluation activities and the team’s developmental process.

To follow the co-evolutionary approach described above, CMM and SI established a collaborative and adaptive working relationship, with CMM program managers as active partners in the evaluation work. This was a time-consuming but critical component of implementing the action reflection model of the review, and it took three forms: monthly leadership meetings to reflect on the work, examine lessons learned, and plan next steps; participation on the three field evaluation teams; and participation in overall decision making for the evaluation. This continual reflection with multiple stakeholders meant that the evaluative learning review raised more questions than answers and surfaced challenging areas for ongoing consideration. That, however, was the goal of the review. Rather than reach set conclusions about a definitive “best practice” or the single best way to do something, the review led to scenarios for potential development of the people-to-people reconciliation program and examined the values, principles, and scenarios for potential development of evaluation within CMM, USAID, or similar complex settings. This information is reflected in the final synthesis report and is based on our team’s findings on current APS implementation and evaluation practice, as well as reflections on our team’s process of adapting a developmental approach. Here we touch on some of the challenges and lessons learned in evaluating these reconciliation efforts and in our own evaluative learning review process.