Design, Monitoring and Evaluation for Peacebuilding


Notes from the Field: Reflections from AfrEA 2014 Part II

I attended my first Africa Evaluation Association (AfrEA) Conference in Yaoundé, Cameroon this March. This was my first international conference experience, and I was overwhelmed by the sheer number of paper presentations and panel discussions. Every topic on the programme seemed interesting and inspiring, which made choosing which presentations to attend quite difficult.

Based on all the discussions and presentations I had the opportunity to participate in, a key takeaway for me was that evaluation findings should rest on a strong evidence-based approach and on innovation. A strong evidence-based approach is not only concerned with quantitative measurement but also with creating consensus on what constitutes qualitative improvements that contribute to the broader goals of the systems involved. It ensures that national counterparts are actively involved in, or at least interested in, the relevant monitoring and evaluation activities. This has unfortunately not been the pattern in Sierra Leone, where some international evaluators come in and conduct evaluations without involving locals, which often results in government officials rejecting the findings of the evaluation.

Further, evaluations should be based on day-to-day experiences and emerging themes, rather than on predetermined indicators of progress, and should integrate mixed methods for transformative research and evaluation that emphasize inclusiveness and the comfort of respondents. For example, I previously insisted that focus groups include between 8 and 12 respondents. Following the various sessions at AfrEA, I learnt that focus groups should be small, six participants or fewer, to adequately encourage meaningful interaction.

Another key takeaway was that data collection tools and methodology must include ‘work stories’ as a means of ‘making sense’ of what is happening and what effects are emerging. They should work to demystify M&E and allow even the most vulnerable stakeholders or beneficiaries to have a voice in periodic reflection. This also helps nurture capacities for critical analysis, debate and decision making.

Discussions at AfrEA also emphasized that the adaptation of potential evaluation results should be factored into every evaluation conducted at the institutional, organizational and national levels, and that while conducting an evaluation, it is also important to build capacity. Evaluation is a dynamic process, and one should use it as an opportunity to build the capacity of M&E Officers, project team members and other relevant stakeholders such as policy makers and parliamentarians, in order to improve data collection, analysis, and the dissemination and use of research findings and recommendations.

In conclusion, I want to stress that registering with and participating in networks will help shape the agenda for future evaluation work in Africa. Being part of networks provides learning and experience-sharing opportunities on critical issues emerging in our country context, especially where another network may have conducted similar research or work.

For instance, my presentation at the conference on “The use of evaluation for improvement of governance systems: A case study in Sierra Leone” linked me to another Sierra Leonean participant at the conference, with whom I have now teamed up to establish a network of M&E Officers and Specialists in the country, drawn from institutions that work in the non-profit sector. The aims of the network will be to (a) establish a database for tracking M&E specialists in country, to foster coordination and consolidation of M&E activities with other institutions; (b) collaborate with various institutions conducting evaluations to provide technical support, and create a database of in-country evaluations and document findings to ensure that relevant stakeholders are engaged on key findings; and (c) establish a unified front to engage government and policy makers on the implementation of relevant evaluation findings that may require policy change. I am excited to report that registration procedures for the network have already commenced, alongside the mapping and invitation of M&E Officers from various non-profit organizations and agencies.

It is important to note that what I learnt is not limited to the takeaways above. I am still reviewing and absorbing all the great information I received from interactions with my peers and evaluation experts from across Africa, and I look forward to continuing to engage with AfrEA and to sharing SFCG’s experience in monitoring and evaluating peacebuilding programs.
Saa Bandabla is the Design, Monitoring & Evaluation (DM&E) Coordinator for Search for Common Ground (SFCG)'s Sierra Leone Programme. Saa has worked with SFCG for about seven years in various capacities, including Officer-in-Charge. Prior to his work with SFCG, Saa served as Technical Coordinator at Care International, and as Reintegration Officer for Sierra Leone’s National Committee for Disarmament, Demobilization and Reintegration (NCDDR).