Design, Monitoring and Evaluation for Peacebuilding


The State of Development Evaluation 2013: Results from the SID-Washington/Charney Research Survey

What is happening in development evaluation?  How often do development projects include evaluation and of what types?  How are evaluators chosen?  How big are their budgets?  How satisfactory are their results?  To provide a snapshot of the current state of development evaluation, the Society for International Development’s Washington Chapter (SID-Washington) partnered with Charney Research to survey 624 development professionals, including funders, contractors, and evaluators, involved in development, stabilization, and humanitarian assistance.

Surveyed development professionals can utilize the report as a benchmark to assess their work and knowledge as compared to the field at large.  Funders will find it helpful to measure awareness and application of recommended evaluation practices.  The report also offers policy-makers a chance to see how far the field has come and ways to move it forward.

The SID-Washington and Charney Research report findings illustrate how to facilitate innovation in development evaluation.  The report found that projects seldom took a holistic approach.  For example, regardless of the overall project budget, fewer than half of the development projects and proposals included both impact and performance evaluations.  This was underscored by the finding that fifty percent of evaluations were budgeted below the USAID evaluation guidelines.  Both types of evaluation were more likely to be used when more resources were specifically allocated to evaluation. 

Although the report found that general methodological topics on development evaluation had the greatest appeal to survey respondents, the lack of a holistic approach was further highlighted by the fact that few projects included both baseline data collection and a final evaluation.  Despite this tendency, most respondents reported being “fairly satisfied” with the evaluations conducted, while few reported being either “very satisfied” or “dissatisfied.”  However, dissatisfaction appeared more frequently with evaluations whose spending ran below the USAID guidelines.  The report also found that some development professionals were unaware of USAID’s evaluation definitions and guidelines. 

The report found that education and access to information were crucial to a better understanding of development evaluation.  Although local evaluators worked alongside internationals on most evaluations, development professionals specifically asked for evaluation guidance on methodology, capacity building, and resources.  The findings also showed that websites are the leading source of information on development evaluation, with the World Bank and USAID Impact blogs the most widely accessed.  Websites and live events were likewise the most popular sources for evaluation education: development professionals expressed strong interest in SID-Washington initiatives in the field, especially online groups, one-day workshops, and webinars.  Ultimately, the report found that internet-based sources were the preferred resource for educating individuals on development evaluation.

The report had three main conclusions.  First, despite the tremendous amount of progress in the development evaluation field, there is still much to explore and accomplish.  Second, more action needs to be taken in the field, including widespread use of both performance and impact indicators, baseline and follow-up studies, beneficiary-based methods, and more adequate resourcing.  Lastly, funder, contractor, and evaluator staff need to become more aware of USAID’s requirements and how to meet the challenges of evaluation through effective training.

Based on these three conclusions, the report proposes recommendations for funders as well as for SID-W.  First, it recommended that funders comply with the USAID evaluation guidelines when evaluating proposals and make this policy widely known.  It also recommended separating the funding and contracting streams for evaluations, particularly for smaller projects, which tend to have inadequate evaluation budgets in both absolute and percentage terms. 

The survey of development professionals conducted by SID-Washington and Charney Research ultimately provides a benchmark for these professionals.  Its findings have illuminated gaps in awareness of USAID evaluation definitions and guidelines, as well as in knowledge of evaluation guidance on methodology, capacity building, and resources.  These findings, together with the report’s conclusions and its recommendations for funders and SID-W, should facilitate growth and innovation in the development evaluation field.

For the full report, please click here. 

Craig R. Charney, PhD, is President & CEO of Charney Research