The use of evidence in humanitarian decision making

Summary of study¹

Location: Ethiopia, DRC and Philippines

What we know: Decision making in humanitarian response requires timely information and analysis, and there are ongoing efforts to coordinate and improve needs assessment to inform decision making. How decisions are made in practice is influenced by many factors.

What this adds: Evidence has limited influence on programmatic response: the options are constrained by previously decided strategic priorities, evidence is considered selectively by decision makers, its interpretation is shaped by personal experience, and it is typically used to justify rather than determine interventions. An ‘automatic’ response is common in chronic situations; limited response flexibility within agencies and among donors means there is little incentive or capacity to innovate according to need. The decision-making process lacks transparency and is poorly documented.

A recent paper reports the results of a study undertaken during 2012 by Tufts University to address the question of how assessments and other sources of information and analysis are used by humanitarian decision makers. The study is based on a combination of literature review, case studies (Ethiopia, Democratic Republic of the Congo (DRC) and Philippines), and key informant interviews.

The study asks three main questions. First, how do decision makers in the humanitarian sector currently use information and analysis? Second, what factors, other than information and analysis, are influential in making decisions? Third, what would enable better-informed response decisions?

To address these questions, the study looks first at some of the main processes of decision making in the humanitarian sector and the factors that appear to have most influence on decisions of different kinds. It goes on to look at the way information and analysis are currently generated in the humanitarian sector, through both formal and informal means, and at related questions of relevance and credibility. These two topics are then brought together to address how decision makers use information and what might enable more informed, evidence-based response decisions.

In reviewing the way decisions are made in practice, the study considers how such information is used (or not) at different points in the process, something that varies across different kinds of decisions in different contexts.

Findings

Overall, the study revealed high levels of diversity in the contexts for decision making, as well as in the use of information and analysis. Some patterns emerge, however. In contexts where strong governmental systems exist, the generation and use of information is either highly controlled by government (Ethiopia) or dominated by government-led systems (Philippines), with international actors playing only an auxiliary role. In these cases, most of the key decisions regarding resource allocation are in effect made by local and national government officials, on the basis of national or regional plans. Domestic political factors represent a significant potential bias that risks distorting the data available. That said, there are checks in most systems. In Ethiopia, for example, international actors partner with central and local government in both the assessment of need and the provision of relief. While political bias may affect which areas are prioritised for relief, major discrepancies between assessed and "stated" need are hard to disguise, and the larger international donors have substantial influence over the recipient government in this regard. Thus, although the validity of the published data may be questionable (Ethiopia), the process of micro-resource allocation and programme design is able, to a substantial degree, to iron out some of the more obvious anomalies at the local level.

In contexts where government is relatively absent from humanitarian decision making, a different set of factors is at play. In the most extreme cases (Somalia, Eastern DRC, parts of Afghanistan), government systems are almost completely absent or bypassed by the international system. Here the dominant political narrative is an international one, and it provides the backdrop for macro-resource allocation decisions. The biases in these cases come as much from predetermined international strategic priorities as from domestic factors; aid is provided as much in proportion to an area’s strategic significance as on the basis of assessed need. This is evident in the ebb and flow of funding in response to annual appeals (Consolidated Appeals Process (CAP), etc.), which fluctuates more according to foreign policy agendas, such as counterterrorism and stabilisation, than according to apparent need. This factor also affects countries like the Central African Republic, whose international profile and related strategic priority are low; the threshold for response is correspondingly higher in such contexts.

In this second category of contexts, the data available come mainly from international agencies. In most crisis contexts, however, there is a mix of government-generated data (e.g. from National Disaster Management Authorities (NDMAs)) and external agency-generated data and analysis. Increasingly, the mainly UN-led Clusters, or else government-led coordination bodies, are attempting to bridge the gap between the two. Joint assessment processes are one feature of this: an attempt to forge consensus and buy-in, as well as to streamline and harmonise data collection. This has potential strengths and weaknesses from the point of view of evidence-based response. The main strengths come from the comparability of data and ‘buy-in’ for the process and its results. The main weaknesses relate partly to the often cumbersome and slow nature of these joint processes, and partly to the potential for ‘group think’ to dominate the related analysis. In this regard, independent assessment and monitoring processes (e.g. by individual agencies) remain an essential part of the evidential picture, often acting as an early warning or corrective to the wider system, whose processes may not be responsive to significant changes at either the micro or macro level.

Despite the diversity of contexts for decision making, it is possible to draw a number of conclusions from the study.

Conclusions

Decision making

In most decision-making processes in the sector, the range of options is limited by previously decided strategic priorities, resource allocations and other biases. In some cases these parameters are set by host government authorities; in others they are set more by donors and implementing agencies. This significantly limits the extent to which decisions are open to influence by evidence, particularly where organisational incentives to generate and respond to new evidence are weak.

Children during the floods in the Philippines in 2009

The extent to which decisions are ‘predetermined’ varies according to the type of decision. In some of the cases reviewed, the dominant political narrative and the relative strategic priority given to the country or crisis in question had by far the most significant bearing on strategic decisions about crisis response (approach, level of funding, etc.). In protracted crises such as DRC and Ethiopia, the issue is more one of programmatic inertia: programmes ‘roll’ from year to year without fundamental reassessment of approach. Where programmes are more responsive to context, this tends to occur at the lower levels of decision making and at the more local levels of programming.

Decision makers may be highly selective in their uptake and interpretation of evidence. Personal biases, rules of thumb and mental models, as well as a variety of (dis)incentives, may prevent individuals and organisations from responding to a situation in the way the evidence appears to demand. It is common for experienced staff to base decisions mainly on past experience, instinct and assumptions, even in the face of contradictory evidence. In institutional terms, this in turn leads to agency capacity being built around established intervention types, which continue to be the ‘preferred response’ to each new crisis, irrespective of the available evidence.

The use of standard, predefined response packages is now being challenged, particularly in the area of food security and livelihoods. In evidential terms, choosing a response should involve combining evidence about the context with historic knowledge about "what works." Yet very few agencies conduct a formal or structured analysis of the various options and base decisions on the evidence pointing to the most appropriate response. Even where assessment is viewed as a priority for programme planning, agencies often disregard field-validated assessment as a precursor to intervention. Ultimately, the choice of response does not always involve an evidence-based, analytical process.

Current processes of decision making tend to be undocumented and opaque. It is therefore hard to judge whether or how information and evidence have been used to inform decisions. In particular, key assumptions are often left unstated and are therefore hard to test.

Generation of evidence

Relatively few documented needs assessments are available beyond the confines of the agencies that conduct them. There has been a rise in the number of joint (multi-agency, multi-sector) needs assessments, and an increased focus on the use of the Multi-Cluster Initial Rapid Assessment (MIRA) tool in rapid-onset crises. This is a significant advance, although there remains a lack of genuine multi-sectoral analysis, with the result that responses remain largely "siloed." To date, there has been less progress on joint assessment in protracted crises.

Even where documented assessments exist, the link between assessment and decision making appears weak. Assessments are still largely front-loaded and used to justify proposals or appeals (Flash Appeals, CAP, etc.). Most assessments are still conducted to substantiate a case for funding by a particular agency to do a particular thing. The inevitable biases that result lead to a lack of credibility, both of the analysis and of interventions proposed on the basis of it.

Clearly, there is still room for improvement in the processes of data collection and needs assessment, but this is not a silver bullet for improved decision making. First, larger systemic changes are needed to create better incentives for generating and using evidence in decision making. Second, ongoing assessment and situational monitoring must be more widely adopted; for this to improve humanitarian response, however, the wider humanitarian system must allow agencies the flexibility to adapt programmes to changing needs throughout the duration of a crisis. Third, quality analysis and the use of evidence must be valued more highly, through increased investment in diagnostics. Fourth, the evidence base on which humanitarian responses are most effective is extremely weak; investment is needed to consolidate evidence about what works in response to different kinds of needs in different contexts. Fifth, the way evidence is presented is often crucial to its uptake: knowing how to present it, to whom, and in what form may be essential to informed decision making.


¹Darcy J et al (2013). The use of evidence in humanitarian decision making. Assessment Capacities Project (ACAPS) operational learning paper. Feinstein International Centre, January 2013. http://www.acaps.org/img/documents/t-tufts_1306_acaps_3_online.pdf
