Design, Monitoring and Evaluation for Peacebuilding

Framing an Evaluation: The Importance of Asking the Right Questions

The following is a cross-post from a discussion on the importance of asking the "right" evaluation questions on BetterEvaluation.org. Please let me know your thoughts!

http://betterevaluation.org/blog/framing_an_evaluation

52 weeks of BetterEvaluation: Week 28: Framing an evaluation: the importance of asking the right questions

 

BetterEvaluation recently published a paper presenting some of the confusion that can result when commissioners and evaluators don't spend enough time establishing basic principles and a shared understanding before beginning an evaluation. This blog, from Mathias Kjaer of Social Impact (SI), draws on a recent evaluation experience in the Philippines to offer some tips on how to choose the right questions to frame an evaluation.

Framing an evaluation: The importance of asking the right questions

This blog post deals with a challenge common to both commissioners and implementers of evaluations: deciding which, and how many, evaluation questions to ask. Both parties have an interest in ensuring that the evaluation provides sufficient breadth and depth of information. However, both parties are also constrained by finite resources and time. The process of refining and targeting the evaluation questions therefore involves a delicate balance between collecting as much information as possible and being realistic about which questions can be answered definitively and completely. Ultimately, the questions chosen will dictate the design and direction of the evaluation.

Here is an example from our recent experience evaluating the “Growth with Equity in Mindanao III” (GEM-3) program for USAID/Philippines. GEM-3 was the largest and most diverse program carried out in Mindanao, accounting for over 60% of total mission funding directed towards the region, so the evaluation was of interest to a wide range of stakeholders inside and outside the United States and Philippine Governments. The program included five distinct programming components (infrastructure development; workforce preparation; business growth; governance improvement; and former combatant reintegration) and two cross-cutting components (communications and public relations; and support services). This meant that the commissioner of the evaluation, USAID/Philippines, needed to consult a variety of internal stakeholders from different technical offices while drafting the evaluation scope of work. What resulted was a list of 54 evaluation questions that the evaluation team was asked to answer during six weeks of field work (we’ve included the list here if you are interested)!

In a particularly innovative move, SI negotiated with USAID to develop a video component for the evaluation, to be used in future USAID evaluation trainings. SI partnered with Quimera TV to produce a video that conveys some of the challenges the team faced in trying to answer this extensive list of evaluation questions, as well as some other data collection challenges common to these evaluations. We hope the video will serve as a reminder to commissioners and implementers to take the time needed at the beginning of an evaluation to make sure their evaluation questions are focused and prioritized. This will undoubtedly lead to a better and more informative final evaluation report. Watch the video below or on YouTube here.

While drafting our evaluation design for the original proposal, our team noticed that some of the evaluation questions were really just permutations of larger questions, and we were therefore able to reduce the list relatively quickly to 21 major evaluation questions - still five to six times more than we would ideally have hoped for. Fearing that further refinement might be interpreted as unresponsiveness to the Request for Proposals (RFP), we proposed that our evaluation team work with relevant USAID/Philippines staff to further reduce the number of questions following award and the team’s initial document review. Unfortunately, due to an administrative error, our technical approach was never attached to the final contract; only the original SOW from the RFP was included. As a result, the team was told during its in-brief and subsequent discussions that it would need to answer all 54 original questions in order to be compliant with the signed contract.

Eight tips for good evaluation questions:

  • Limit the number of main evaluation questions to 3-7. Each main evaluation question can include sub-questions, but these should be directly relevant to answering the main question under which they fall.
  • Prioritize and rank questions in terms of importance. In the GEM example, we realized that relevance, effectiveness, and sustainability were of most importance to the USAID Mission and tried to refine our questions to best get at these elements.
  • Link questions clearly to the evaluation purpose. In the GEM example, the evaluation purpose was to gauge the successes and failures of the program in developing and stabilizing conflict-affected areas of Mindanao. We thus tried to tailor our questions toward the program’s contributions to peace and stability rather than its longer-term economic development goals.
  • Make sure questions are realistic in number and kind given time and resources available. In the GEM example, this did not take place. The evaluation questions were too numerous and some were not appropriate to either the evaluation methods proposed or the level of data available (local, regional, and national).
  • Make sure questions can be answered definitively. Again, in the GEM example, this did not take place. For example, numerous questions asked about the efficiency/cost-benefit analysis of activity inputs and outputs. Unfortunately, much of the budget data needed to answer these questions was unavailable and some of the costs and benefits (particularly those related to peace and stability) were difficult to quantify. In the end, the evaluation team had to acknowledge that they did not have sufficient data to fully answer certain questions in their report.
  • Choose questions which reflect real stakeholders’ needs and interests. This issue centers on the question of utility. In the GEM example, the evaluation team discovered that a follow-on activity had already been designed prior to the evaluation, and that the evaluation would serve more to validate and tweak this design than truly shape it from scratch. The team thus tailored their questions to focus more on peace, security, and governance issues, given the focus of the follow-on activity.
  • Don’t use questions which contain two or more questions in one. See, for example, question #6 in the attached list - “out of the different types of infrastructure projects supported (solar dryers, box culverts, irrigation canals, boat landings, etc.), were there specific types that were more effective and efficient (from a cost and time perspective) in meeting targets and programmatic objectives?” Setting aside the fact that the evaluators simply did not have access to sufficient data to determine which of the more than 10 different types of infrastructure projects was most efficient (from both a cost and time perspective), the different projects had very different intended uses and reached very different numbers of beneficiaries. Thus, while box culverts (small bridges) might have been both efficient (in terms of cost and time) and effective (in terms of allowing people to cross), their overall effectiveness in developing and stabilizing conflict-affected areas of Mindanao was minimal.
  • Use questions which focus on what was achieved, how, and to what extent, not simple yes/no questions. In the GEM example, simply asking whether an activity had met its intended targets was much less informative than asking how those targets were set, whether they were appropriate, and how progress towards meeting them was tracked.

Additional resources

About Social Impact (SI)

SI is an international development consulting firm based near Washington, D.C., that helps international agencies, civil society, and governments become more effective agents of positive social and economic change. We offer a suite of services including Program Strategy and Design, Capacity Building and Facilitation, and Gender and Social Analysis, though we are probably best known for our Monitoring and Evaluation work. We currently provide the main M&E training for USAID and the Department of State, and hold 13 performance-management-related IQCs as a prime with USAID, the Department of State, USDA, MCC, DFID, and others.

For additional information on our GEM-3 evaluation or SI in general, please feel free to contact: Mathias Kjaer, Program Manager, at mkjaer@socialimpact.com.

 

Great tips - thanks for sharing! Would also be interested to hear how the process of answering all 54 questions worked out in practice. 

Thanks for the eight tips provided in the article. As a new practitioner, I think they clarified many issues I have had (so far only theoretically and academically) with the interview design process, and I will be referring to them when designing future evaluation interviews. They also have the potential to be a quick and effective resource to share with evaluation teams when constructing interviews, and will no doubt help when issues of clarity, question count, and question complexity arise.

I found this article extremely useful and appreciated the breakdown of eight helpful tips for achieving well-rounded evaluation questions. I’m currently a graduate student in a Reflective Practice and Evaluation course and just completed an assignment on data collection and analysis via interviewing. I used Raymond Gorden’s Basic Interviewing Skills text to guide my assignment, which consisted of reflecting on the development and execution of an interview process, and I found several parallels with the processes described in this article. While conducting my recent interview, I found the process more manageable when using Rubin and Rubin’s tree-to-branch approach from The Art of Hearing Data. This approach structures questions and sub-questions in a way that provides depth of analysis: the researcher divides the topic into roughly equal parts and plans to cover each part with a main question (a branch). Each main question is prepared around significant identified issues and followed up to obtain the same degree of depth. This approach could help practitioners limit the number of main questions.

 

In order to link questions clearly to the evaluation purpose, Gorden would argue that you must first clearly define goals and objectives before writing the evaluation questions. This aids in narrowing your scope and purpose, and in transitioning properly from question to question.