
Reflections from the American Evaluation Association (AEA) Conference

Every year, the American Evaluation Association (AEA) Conference brings together evaluators from incredibly diverse fields. The challenges evaluators face, however, are largely shared, which allows for an honest and vibrant discussion about how we can learn from each other's mistakes and collectively address those problems.

One of the key themes to emerge from the Conference, and one with special relevance for evaluation in peacebuilding, was the focus on Real-Time Evaluation. In conflict-affected environments, there is a special need for regular data collection and monitoring that takes into account changing realities on the ground and creates space for implementers to test their theory of change in real time.[1]

Flowing from that theme was the need to make the results of monitoring and evaluation accessible, both to facilitate decision-making in real time and to engage stakeholders beyond the donor. Evaluators shared various techniques (graphic timelines, collaboration cafes, infographics) they used to engage stakeholders both during and after an evaluation. We were, however, also cautioned to be responsible when choosing what data gets seen and talked about. To quote one panelist: "What gets measured, gets talked about."[2]

Discussions surrounding the dissemination of evaluations also highlighted the tension between accountability and learning in the evaluation field. Who is the evaluation for? Is it for donor reporting purposes, or for internal learning? To reconcile the two, there was an emphasis on the need to bring the evaluator, the evaluee, and the donor to the same table for an honest discussion about the process.

The importance of the right methodology also cannot be over-emphasized. A problem frequently cited in the peacebuilding evaluation field is the difficulty of quantifying results, but sessions revealed that even complex results can be quantified; it is a matter of identifying the right tools and methodologies.[3]

Lastly, a recurring theme was the need to create organizational will around improving the practice of evaluation. As Cheyanne Church reminded us, while trainings play a role in capacity-building, long-term success requires an intentional effort from organizational leadership to invest resources, shift organizational culture, set up accountability processes, and make evaluation and learning an organizational priority. To this end, evaluators have a responsibility to do their part to legitimize the practice of evaluation and to work with an organization's leadership to ensure that evaluations are not only used but also celebrated.

Thanks for sharing your reflections from the conference, Maryam!

For people interested in following up on your final point, I wanted to let you know that my colleague, Cheyanne Church, has just posted the handout from her session on effectiveness assessments. It's available via the AEA eLibrary at http://comm.eval.org/Go.aspx?c=ViewDocument&DocumentKey=08430213-c4a7-426d-bc55-0e2542efc313.

Happy reading!


Lillie Ris

M&E Associate

Besa: Catalyzing Strategic Change