Design, Monitoring and Evaluation for Peacebuilding


DM&E Tip: Participatory Evaluation Designs

Participatory project strategies are quite popular these days, and for good reason: study after study has found that participatory project strategies can increase local ownership, develop local capacities, and build trust, both between the organization and beneficiaries and among beneficiaries themselves.

Participatory evaluation designs put project stakeholders ‘in the driver’s seat’: all major decisions regarding the evaluation design, scope, questions, and data collection tools and strategies involve seeking input and feedback from project stakeholders. With this in mind, it is important to budget an appropriate amount of time for such input and feedback. Participatory evaluation designs also tend to place greater emphasis on learning, project improvement, and nurturing local critical thinking capacities than external summative evaluations do.

Literature on participatory evaluation designs abounds, and this extensive literature can help guide your choices in designing your own. Most of these documents outline several key steps in participatory evaluation, which are explored in brief below.

Identifying Stakeholders

The first step in conducting a participatory evaluation is to identify who the primary stakeholders are and to understand their interests in and needs for the evaluation. This information will inform the development of the evaluation’s scope, objectives, questions, etc., and can be collected through focus groups or key informant interviews (qualitative, semi-open methods are generally preferred here in order to solicit the right information from stakeholders). Stakeholder analysis is useful at this stage, but as project staff you will probably already have a good idea of who needs to be involved. Still, there are always surprises, and it is good practice to be thorough.

Developing Evaluation Scope

Developing the ‘right’ evaluation questions can be a time-consuming process, especially given the range of information needs different stakeholders have. “One way to deal with this is to work with stakeholders to envision how they would use evaluation information if they had it and what decisions they would make.”1 This will guide the stakeholders towards developing action- and decision-oriented evaluation strategies. If, however, this strategy proves too time consuming or the range of needs is too wide, you can always opt to design the evaluation around the project logframe, assessing the extent to which goals, objectives, outputs and outcomes were achieved.


In the participatory evaluation approach, the ‘evaluator’ acts as a facilitator, trainer and coach to the participatory process. This is quite a different role from the traditional role of an external evaluator, so the person designated for it should be selected carefully. Remember, evaluation questions should be determined by stakeholders; this requires organizational staff to negotiate a balance between donor reporting and accountability demands and the identified needs of stakeholders.

Data Collection

Participatory evaluation designs should specify the degree of participation sought, which stakeholders will be included, and in what. Much of this will have been determined in the initial stages of planning for the evaluation. Common data collection tools in participatory designs are ‘rapid appraisal’ methods, because they are quick and easy to use and understand. This can be particularly important when your evaluation targets illiterate or semi-literate stakeholders.

Generally, it is best to format the data collection to solicit information from the communities without removing it from the communities. In other words, the information provided for the evaluation by the community should be owned by the community so that there is a greater commitment to action and local capacity development and learning.

If you want stakeholders to be involved in data collection, it may be necessary to provide brief capacity development trainings so that the data collected are of good quality. This is a great way to nurture critical thinking and provide important skills that can be used for a range of other community development purposes.

Analyze the Data

Participatory analysis of data can be tricky. Splitting into small groups, each responsible for analyzing the same small set of data, can help in arriving at consensus on particular conclusions – but it is also time consuming. This method can also bring out the subtle worldview differences that cause individuals to view data in different ways, giving the organization greater insight into its participating stakeholders.

And, of course, disagreements will emerge. When they do, it is the responsibility of the evaluator-facilitator to mediate the disagreement and help the parties reach agreement.

Action Plan

Once the data have been analyzed and conclusions and recommendations agreed upon, the next step is to develop an action plan to build on past successes and to correct past shortcomings, gaps or failures. This might mean the development of a new project or a continuation of, or slight shift in, the old program – whatever the decision, it should be arrived at in a participatory manner: “empowered by knowledge, participants become agents of change and apply the lessons they have learned to improve performance.”2

Further Resources

How to Perform Evaluations: Participatory Evaluations by Canadian International Development Agency

TIPS: Conducting a Participatory Evaluation by USAID

Participatory Monitoring and Evaluation Systems: Improving the Performance of Poverty Reduction Programs and Building Capacity of Local Partners by Rolf Sartorius for Social Impact

Participatory Monitoring & Evaluation: Learning from Change by Institute of Development Studies

A Practical Guide for Engaging Stakeholders in Developing Evaluation Questions by Hallie Preskill and Natalie Jones

TIPS: Using Rapid Appraisal Methods by USAID

Jonathan White manages the Learning Portal for DM&E for Peacebuilding at Search for Common Ground. Views expressed herein do not represent SFCG, the Learning Portal or its partners or affiliates.

  • 1. Rolf Sartorius, Participatory Monitoring and Evaluation Systems: Improving Performance of Poverty Reduction Programs and Building Capacity of Local Partners, Social Impact, p. 5.
  • 2. USAID, Performance Monitoring and Evaluation TIPS: Conducting a Participatory Evaluation, 1996, p. 3.


Hi Jonathan:

While reading your description I noticed the strong ties that the Participatory Evaluation (PE) process has to mediation. In mediation, an important goal is to get the stakeholders to engage, in order to create and subsequently own their process and outcomes. Usually, the more stakeholders become engaged with the process, the higher the quality of the outcome. In mediation this amounts to a rich agreement in which both parties maximize their personal gains and which takes into account future changes and potential solutions to future contingencies. In the case of PE, the quality of process amounts to maximizing the creative potential of the stakeholders to design or modify an intervention that can maximize efficiency while adapting to changing operational challenges over time.

Also, to a point, the amount of resources (time and money) that can be brought to bear on the process is often proportional to the quality of the outcome. So, as you point out, a high-quality outcome is often time consuming and expensive. Operating under the assumption that people (when monetary exchange is involved) will usually try to minimize their outlay and maximize their ROI, there would seemingly be an economic threshold beyond which the cost of the process outweighs people’s willingness to pay for it. The question necessarily becomes: at what point do people (or organizations) no longer see the process outcome as worth the financial cost? Therein lies a dichotomy between a well-crafted, expensive and time-consuming process and an okay, cheaper and quicker alternative for moving forward, be it from conflict caused by a bad business venture or in building a better intervention.

Back to the parallel with mediation: in that field there is an army of willing volunteers and precious few paid professionals. This creates a situation where an over-supply of practitioners must compete vehemently for extremely scarce resources. Admittedly, I don’t know enough about the DM&E field to speculate as to whether a similar pattern holds there; however, from what I have seen, the field of dispute resolution generally suffers from a surplus of willing and competent practitioners coupled with scant demand for services.

This means that viable practitioners must adapt their services to the needs of the consumer, so grand ideas of deep stakeholder engagement and extended periods of time to create rich and sustainable process outcomes are simply not the norm. To be sure, I am hopeful that one day there will be a strong market for the creation of rich, complete and forward-looking process outcomes, one that can support the supply of interested and competent practitioners. Until then, the quality of process outcomes will necessarily be limited by market constraints, and practitioners will need to adapt to what the market can support by limiting the factors (time and money) required to achieve the best process outcomes.