
Improving impact evaluations

Summary of published research1

The ENN has long championed the need for more evidence-based learning in food and nutrition emergency programming, and in particular for more rigorous impact studies. The report summarised below, by the Centre for Global Development (CGD), sets out a broad agenda for impact assessment across all areas of social development and proposes institutional mechanisms for bringing this about (Ed).

In 2004, CGD convened the Evaluation Gap Working Group to find out why rigorous impact assessments of social development programmes are relatively rare and to develop proposals to stimulate more and better impact evaluations.

Many governments and organisations are taking initiatives to improve the evidence base in social development policy, but investment is still insufficient and the quality of evaluation studies is mixed. While these institutions do well in their normal data collection and evaluation tasks (related to monitoring inputs, improving operations, and assessing performance), they largely fail to build the kind of knowledge that requires studies outside normal budget and planning cycles. What is lacking are impact studies that show whether an intervention has changed the condition that the programme sought to alter, e.g. health status or income generation. While the knowledge gained from rigorous impact studies is in part a public good, the costs of producing such studies are borne by individual institutions or agencies.

Even when impact evaluations are commissioned, they frequently fail to yield useful information because they do not use rigorous methods or data. A systematic review of UNICEF reports estimated that 15% included impact assessments, but noted that many were unable to assess impact properly because of methodological shortcomings. Similarly, a review of 127 studies of 258 community health financing programmes found that only two studies were able to derive robust conclusions about impact.

The CGD report concluded that there are too few incentives to conduct good impact evaluations and too many technical, bureaucratic and political obstacles. However, tolerance for the evaluation gap is waning and donor countries are increasingly concerned that international financial assistance should generate results.

Concern about the evaluation gap is widespread as demonstrated by the many initiatives underway. However progress will be slow and investment insufficient without greater effort. The Evaluation Gap Working Group recommends that the full range of stakeholders should both reinforce existing initiatives and collaborate on a new set of actions to promote more and better impact evaluations.

Governments and agencies should reinforce efforts to generate and apply knowledge from impact evaluations of social programmes. This includes strengthening overall monitoring and evaluation systems; dedicating resources to impact evaluation; ensuring collaboration between policymakers, project managers, and evaluation experts; improving standards for evidence; facilitating access to knowledge; and building capacity in developing countries to conduct rigorous evaluations.

Progress is likely to be much faster if some countries and agencies collectively commit to increase the number of evaluations and adhere to high standards of quality. In one form of commitment, similar to a contract, each organisation would agree to do its part, while another form would see organisations support a common infrastructure to carry out joint functions. The working group identified the following characteristics of a successful new initiative:

  • Complementary to existing initiatives
  • Strategic in its choice of topics and studies
  • Opportunistic in its approach to supporting good impact studies
  • Linked directly and regularly engaged with policymakers, governments and agencies
  • Involving collective, voluntary commitment by a set of governments and public and private agencies to conduct their own studies or contribute funds for contracting such studies by others
  • Committed to independence, credibility and high standards of evidence

The Evaluation Gap Working Group developed consensus that some entity ('council') - whether a committee, network, secretariat or other organisation - is needed as a focal point for leading such an initiative. Council functions were identified as follows:

  • To establish quality standards for rigorous evaluations
  • To administer a standards-based review process for evaluation designs and completed studies, to help distinguish between stronger and weaker forms of evidence
  • To identify priority topics around which governments and agencies can cluster evaluations, and that will enable efforts to focus on the programmes most relevant to policymakers
  • To provide grants for impact evaluation design, whereby the council may catalyse impact evaluations that would not otherwise be undertaken or, in other cases, increase the likelihood that funded evaluations generate reliable and valid conclusions

Other council functions might include:

  • organising and disseminating information.
  • building capacity to produce, interpret and use knowledge by encouraging links between researchers, agency staff and project managers.
  • creating a directory of researchers for use by members and actively encouraging the use of qualified experts.
  • undertaking communication activities and public education programmes to explain benefits and uses of impact evaluation.
  • administering funds on behalf of members.

Some Working Group members were concerned that such a fund would divert financial resources from current impact evaluation efforts. Others argued that giving the council adequate funds to commission impact evaluations was essential to address the central concerns set out in the analysis.

The report also discusses how to constitute the council to best provide collectively beneficial services. Suggestions included an interagency committee, a network, a secretariat or an independent organisation. The authors recognise that the choice will depend on assessing the relevant trade-offs, and that the institutional design should ultimately be guided by whichever structure best fulfils a range of aims, including high technical standards, independence and legitimacy, operational efficiency and international leadership.


1When Will We Ever Learn? Improving Lives through Impact Evaluation. Policy Recommendations from the CGD Evaluation Gap Working Group, May 2006. http://www.cgdev.org/content/calendar/detail/7829/
