Design, Monitoring and Evaluation for Peacebuilding


Adapting to Real-World Constraints: Managing Common Challenges in the Evaluation Process

On Thursday, July 24, 2014, the Peacebuilding Evaluation Consortium (PEC) and the Network for Peacebuilding Evaluation (NPE) were pleased to host the Thursday Talk "Adapting to Real-World Constraints: Managing Common Challenges in the Evaluation Process" with Mathias Kjaer, Evaluation Specialist at Social Impact.

This session deviated slightly from previous NPE Thursday Talks in both format and content. Prior to the session, participants were asked to view a video documentary of Social Impact’s mixed-method performance evaluation of the USAID “Growth with Equity in Mindanao III” (GEM-3) program, implemented in conflict-affected areas of Mindanao. The session then featured a facilitated discussion of some of the common challenges experienced during the evaluation and how the evaluation team was able to address them. Three common challenges were highlighted: (1) the importance of narrowing down the list of primary evaluation questions; (2) the need to prepare contingency data collection sites given site inaccessibility; and (3) the challenge of reconciling conflicting data during joint team analysis.

About the Speaker: Mr. Kjaer is a full-time Evaluation Specialist at Social Impact (SI). He has managed several of SI's priority contracts, including SI's evaluation of USAID's implementation of the Paris Declaration on Aid Effectiveness; a Democracy and Governance Assessment for USAID/Georgia to inform its next five-year strategic plan; a final performance evaluation of the USAID/Georgia Judicial Administration and Management Reform project; and a summative evaluation for The Carter Center on the effectiveness of an Irish Aid Block Grant to support peace, elections, and human rights programming. He is currently managing a major two-year contract with USAID/CMM to evaluate its People-to-People Reconciliation portfolio and pilot test its use of Developmental Evaluation methodology. Previously, Mr. Kjaer served as SI's Business Development Manager, where he was involved in over 80 proposals with USAID, MCC, DFID, AusAID, and other major international donors. He holds an MA in Conflict Resolution from Georgetown University and a BA (Honours) from McGill University.

 

View the video on GEM 3

 

Recording, Transcript, and PowerPoint

Read a summary of the conversation here

Review Mathias' PowerPoint here


Suggested Reading

Read the GEM 3 video brief here 

Read Social Impact's concept note on mixed methods here 

Learn more about Randomized Response Technique here

 


Thank you for a wonderful discussion this morning!

If you were unable to attend, please check back for a transcript of today's talk and discussion, today's PowerPoint, and a recording of the full session.

We were unable to respond to all of today's questions during our Q&A, and we hope to keep the discussion going here! Please see below for a list of questions from today:

Matthew Simonson (attendee): Randomized response technique sounds promising.  Certainly the interviewee who flips "heads" feels safer being honest. Yet there's still a chance he/she won't answer honestly. Do we have any way to measure or quantify this?

Abelardo Rodriguez (attendee): I would like to know how the team or teams summarize their daily achievements. In the video there was some reference to note taking or analysis at the end of the day, and in your talk you mentioned, "Sundowners". Could you elaborate on how you facilitate sharing information between team members? 

Claire-Lorentz Ugo-Ike (attendee): In a situation where the communities are inaccessible and the local evaluators may not understand the purpose of the evaluation, what do you do to ensure you are still getting usable data of value? 


Hello. Mathias Kjaer's presentation was thought-provoking. I dipped into the GEM project on the Social Impact website a bit more and saw a reference to an online community new to me -- Better Evaluation -- http://betterevaluation.org/about. Just wanted to pass along this resource on scholarship and practice. It's a good example of mixed support and collaboration, too, as it is funded by Rockefeller, the governments of Australia and the Netherlands, inter alia.

 

Thank you to all those who were able to attend the session last Thursday. I hope you enjoyed the session and that we’re able to continue our conversation both here and during future Thursday Talks! Please continue to post questions here and I’ll try to answer them as best I’m able.

@Matthew – I agree that RRT has promise but want to emphasize that it is not always easy to implement. You’ll need a relatively large sample size (it’ll vary based on your randomizing device—coin, dice, cards, etc.), and you’ll never be able to guarantee that the respondent is actually telling the truth when it’s their “turn” to do so. That said, if you’re interested in looking at the effect of an intervention, you’re likely interested in identifying trends or patterns, and for this RRT will still be useful. In other words, while you can’t guarantee that each respondent was actually telling the truth when it was their turn to do so, you can see, assuming that you have a large sample size and that other variables remain constant over time, a trend in the respondents’ self-reported behaviors which could be informative. When we’ve used this technique in the past here at Social Impact, we’ve used it in parallel with other techniques like unmatched count (see link on RRT above) or with surveys asking about both self- and peer-reported behavior.
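To make the sample-size caveat concrete, here is a minimal sketch (not from the talk, and not necessarily the exact protocol SI used) of one common forced-response RRT design: the respondent's coin flip forces a "yes" with probability 0.5 and a truthful answer otherwise. The function name and the example numbers below are purely illustrative.

```python
import math

def rrt_estimate(n_yes, n_total, p_forced=0.5):
    """Estimate the true prevalence (pi) of a sensitive behavior under a
    forced-response RRT design: with probability p_forced the randomizing
    device forces a "yes"; otherwise the respondent answers truthfully.

    Observed P(yes) = p_forced + (1 - p_forced) * pi
    """
    p_obs = n_yes / n_total
    pi_hat = (p_obs - p_forced) / (1 - p_forced)
    # The 1/(1 - p_forced) factor inflates the standard error relative to
    # asking the question directly, which is why RRT needs larger samples.
    se = math.sqrt(p_obs * (1 - p_obs) / n_total) / (1 - p_forced)
    return pi_hat, se

# Illustrative numbers only: 900 respondents, 520 observed "yes" answers.
pi_hat, se = rrt_estimate(n_yes=520, n_total=900)
print(f"Estimated prevalence: {pi_hat:.3f} +/- {1.96 * se:.3f} (95% CI)")
```

An estimate that is small relative to its confidence interval (or even slightly negative) is a practical sign that the sample is too small for the chosen randomizing device, which echoes the point above about needing a relatively large n.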

@Abelardo – the nightly debriefs are such an important part of our rolling analysis and I strongly recommend evaluation teams do this on a regular (nightly or bi-nightly) basis. Field days are long and it’s exhausting to interview, but these sessions are helpful for crystallizing what you’ve heard, contextualizing responses and identifying non-verbal/cultural cues, and distilling emerging patterns and trends that can be explored during future interviews. Generally, we try to keep these brief—the team will meet for 30-45 mins and discuss 3 questions: (1) what did we hear today that matches patterns we’ve heard before; (2) what did we hear that we hadn’t heard before; and (3) what do we need to investigate further/what questions are we not asking that we should be (or what groups of stakeholders are we not interviewing but should be). Usually team members take turns taking notes and post the notes in the team’s online folder for more detailed analysis (and refreshers) post-fieldwork.

@Claire-Lorentz – Good question. Honestly, the best answer is that we take several steps to avoid this situation in the first place. Our local evaluators are almost always full members of the team and we take several steps at the outset of the evaluation to clarify the evaluation purpose, questions, and approach to the team members. Our Team Planning Meetings are critical for this and the nightly debriefs also help ensure that team members are clear (and focused!) on the original evaluation purpose.

@dtrent – Yes, BetterEvaluation is a great resource and one that we use often here at SI (along with DMEforPeace, of course!).

Abelardo Rodríguez (attendee): Mathias, thanks a lot for making available the presentation and transcripts of the Q&A session. I found the reference to Utilization-Focused Evaluation very interesting and useful. Regarding my question in the last section of your presentation (Reconciling Conflicting Data) about the balance between qualitative and quantitative data, I would like you to provide more details. Perhaps I misunderstood the video, but I thought that the 900 surveys were done during the time when the qualitative data was gathered—how do you verify/compare the qualitative and quantitative data? This would be particularly challenging in only six or seven weeks of fieldwork.


Qualitative data is about how or why processes happen (as mentioned by Isabella Jean), under what circumstances or contexts, or "what if" scenarios, among others; in contrast, quantitative data is more about how much, or about the performance of the intervention(s) as measured by a set of indicators. Qualitative and quantitative questions related to a particular subject complement each other, but they are not necessarily comparable. In your case, with anonymous vs. public answers, you are possibly referring to the same question under two circumstances, and I do not see any problem with that. Could you share with us the final evaluation from the Philippines so we can read the methodology used? Thanks in advance for your willingness to entertain our questions and comments.

Comparing quantitative and qualitative data.

Great questions, Abelardo. The final evaluation report can be found on USAID’s DEC: http://pdf.usaid.gov/pdf_docs/pdacu710.pdf. We included the survey questionnaire in Annex 6 (pg. 99) and detailed our methodology on pgs. 3-5 in case you want to have a closer look. I should mention that while the survey was largely composed of closed-ended, quantitatively focused questions, it did include some more open-ended, qualitative questions that got at the “how” and “why.”

As you correctly pointed out, the limited time in-country prevented us from attempting a sequential analysis of our data, which could have been useful (but, alas, we faced some "real-world constraints"!). We instead had to rely on parallel analysis, where we first compared our data within each data type (key informant, FGD, survey, etc.) and then across the data types once data collection was completed. One of our Senior Evaluation Advisors, Michael Bamberger, wrote a nice piece on how to more systematically integrate quantitative and qualitative methodologies, which I highly recommend; it is linked above (see SI’s Concept Note on Mixed Methods).