Year: 2021  |  Volume: 19  |  Issue: 1  |  Page: 4-14

Capturing Intervention in Its Context: The Next Frontier in Disaster Response Evaluation and Scale-Up Planning


Michel Dückers

ARQ National Psychotrauma Centre, Diemen, The Netherlands; Netherlands Institute of Health Services Research (NIVEL), Utrecht, The Netherlands; Faculty of Behavioural and Social Sciences, University of Groningen, Groningen, The Netherlands

Date of Submission: 29-Nov-2020
Date of Decision: 29-Jan-2021
Date of Acceptance: 12-Feb-2021
Date of Web Publication: 31-Mar-2021

Correspondence Address:
Michel Dückers
Professor of Crises, Safety and Health, ARQ National Psychotrauma Centre, Nienoord 5, 1112 XE Diemen
The Netherlands

Source of Support: None, Conflict of Interest: None


DOI: 10.4103/INTV.INTV_49_20

Abstract


Disasters and humanitarian crises threaten the health and wellbeing of people across the world, especially in more vulnerable regions. Many efforts are made to ensure that public health interventions, including mental health and psychosocial support (MHPSS), are based on the best available evidence. Important progress has been made in effectiveness research in recent decades. However, our understanding of the value of MHPSS programmes for individuals and communities confronted with adversity still depends heavily on expert opinion and educated guesses. This contribution proposes several steps to enhance our evaluation paradigm for the organised response to disasters. Obviously, we need to evaluate routinely, focusing beyond clinical outcomes, whilst applying a broader concept of the quality of mental health intervention. Moreover, disaster response evaluations need to be more attentive to capturing the intervention or disaster vulnerability context. This context includes risk and protective factors at different levels. The context might vary along the timeline of a particular event, but it remains a product of a locally unique interplay between exposure, history and culture. On the one hand, capturing this context is a prerequisite for understanding what constitutes a high-quality post-disaster response. On the other hand, it is a key component of a viable scale-up of promising interventions.

Keywords: disasters, evaluation, implementation, MHPSS, psychosocial, quality, scale-up


How to cite this article:
Dückers M. Capturing Intervention in Its Context: The Next Frontier in Disaster Response Evaluation and Scale-Up Planning. Intervention 2021;19:4-14

How to cite this URL:
Dückers M. Capturing Intervention in Its Context: The Next Frontier in Disaster Response Evaluation and Scale-Up Planning. Intervention [serial online] 2021 [cited 2023 Jun 2];19:4-14. Available from: http://www.interventionjournal.org//text.asp?2021/19/1/4/312729



Key implications for practice



  • Despite important progress in strengthening the evidence base for post-disaster public health interventions, we still have an insufficient understanding of how to ensure mental health support and attention for people confronted with adversity in different communities.
  • Research into mental health problems and risk and protective factors remains important, but should be subordinate to the evaluation of the quality of interventions within their disaster vulnerability context.
  • The context includes a variety of factors that, beyond disaster vulnerability, also delineate the implementation context; it therefore contains crucial information for developing scale-up initiatives that disseminate lessons from one time and place to another.



Introduction


What we can all agree upon is that disasters and humanitarian crises, whether linked to floods, earthquakes, extreme weather and climate change, war and conflict, terrorism, pandemics, industrial accidents or a combination thereof, pose a serious threat to the mental and physical health and wellbeing of people affected. This impact has been well established in the scientific literature of recent decades (Yzermans et al., 2009; Bonanno et al., 2010; Fazel et al., 2012; Doocy et al., 2013; Bonde et al., 2016; Ripoll Gallardo et al., 2018; Safarpour et al., 2020). The question of what needs to be done in response is a normative question that can be answered in different ways. From a mental health and psychosocial support (MHPSS) perspective, a strong general consensus exists among researchers on the nature of the required support. Several guidelines have been developed that reflect guiding principles and recommended directions for support, evidence-based health care interventions and evaluation models (Hobfoll et al., 2007; IASC, 2007, 2017; Te Brake et al., 2009; Bisson et al., 2010; Suzuki et al., 2012; WHO, 2013; Te Brake & Dückers, 2013; Juen et al., 2016; Chalmers et al., 2020; Snider & Hijazi, 2020). Guidelines like these are attempts to bring together the best available knowledge and lessons from the mental health response to past disasters for the benefit of the future events that will inevitably come. Few would disagree with the importance of identifying good practices and learning from bad practices, although these are complicated and subjective terms. Yes, we need to understand what works, what does not work and why. Only then can we transplant practices to new disaster contexts, ad hoc or through planned scale-up. But in order to fully comprehend the effects, enablers and barriers of intervention, we need to evaluate more than we do now, systematically yet practically. The importance of evaluation, the development of methodologies and tools, and the incorporation of evaluation and monitoring into our daily practices has been emphasised by many (Tol et al., 2011a, 2011b; Reifels et al., 2013; Dieltjens et al., 2014; Dückers & Thormar, 2015; IASC, 2017; Dückers et al., 2018; Généreux et al., 2019; Haroz et al., 2020; Cénat et al., 2020).

Focus of This Contribution

This contribution proposes several steps to enhance our evaluation paradigm for the organised response to disasters, in support of decision-making on implementation and scale-up. Firstly, findings from recent studies are briefly discussed to see what we can learn from initiatives to collect and synthesise information from programmes for different target groups in different contexts. The next step is to work out thematically several dimensions to consider when evaluating interventions and planning for scale-up, under the assumption that these two activities should be viewed in relation to each other.


What Can We Learn from Recent MHPSS Evaluations?


In recent years, an array of studies has produced results that help to answer this important question. Let us examine a selection of these works, starting with a systematic review by Bangpan et al. (2019), carried out with the aim of better understanding the effectiveness of MHPSS programmes delivered to adults affected by humanitarian emergencies in low- and middle-income countries. Thirty-five studies were analysed. The findings were partly linked to mental health problems. Overall, the programmes showed benefits in improving functioning and reducing posttraumatic stress disorder. The authors found some indications that cognitive behavioural therapy, narrative exposure therapy and other psychotherapy modalities may improve mental health outcomes. Bangpan et al. (2019) applied a broader perspective and recommended future research into the impact of basic services and security, and of community and family support. Future evaluations should take into account outcomes beyond mental health, including social aspects and cost-effectiveness. Moreover, it is important to consider social, cultural, methodological and ethical aspects when designing and implementing MHPSS programmes for different populations and contextual settings (Bangpan et al., 2019).

Kamali et al. (2020) reviewed 157 publications on MHPSS for women and children in conflict settings, again in low- and middle-income countries. Only 19 publications reported on MHPSS intervention coverage or effectiveness. Despite the growing literature, Kamali et al. (2020) recommend more efforts to further establish and better document MHPSS intervention research and practice in conflict settings. They also encourage multisectoral collaboration and better use of existing social support networks to increase reach and sustainability of MHPSS interventions.

Dickson and Bangpan (2018) reviewed the literature on barriers to, and facilitators of, implementing and receiving MHPSS programmes delivered to populations affected by humanitarian crises in low- and middle-income countries. The 14 studies included in the review differed in methodological reliability, in the depth and breadth of their findings and in their use of methods that enabled participants to express their views on implementing or engaging in programmes. The review concluded that community engagement is “a key mechanism to support the successful delivery and uptake of MHPSS programmes” in humanitarian settings. Furthermore, “mental health sensitisation and mobilisation strategies”, and the development of “effective partnerships with governments and local communities”, were seen as “pivotal to increasing overall programme accessibility and reach” (Dickson & Bangpan, 2018, p. 10).

Another review, conducted by Augustinavicius et al. (2018), identified 38 programme documents and 89 peer-reviewed articles describing monitoring and evaluation of a wide range of MHPSS activities. In both types of publications, the language used for goals and outcomes lacked specificity and showed overlap. Six themes could be distinguished in the focus of goals, outcomes and their indicators (more on this later). Well-validated, reliable instruments were rarely used in monitoring and evaluation practices (Augustinavicius et al., 2018).

Dückers et al. (2018) measured and analysed the quality of 40 MHPSS programmes in disaster and humanitarian crisis contexts, using data collected from programme coordinators with a standardised instrument developed in dialogue with experts. Programmes with a more developed organisational structure (e.g. coordinated multidisciplinary planning, based on guidelines, involvement of government, local individuals and trauma experts) implemented more measures and interventions described in evidence-informed guidelines. In such programmes, coordinators were more positive about the programme’s need-centeredness, effectiveness, equity and other quality criteria, as well as about the realisation of essential psychosocial principles (Dückers et al., 2018). Ideally, programmes are embedded in existing, regular healthcare and support structures, to ensure sustainability and an adequate follow-up where needed. However, this is not guaranteed, and our understanding of interventions within their contexts needs to be improved: “It is meaningful to learn more about the interrelation between contextual characteristics of the disaster setting and the programme. Which environmental features help or hinder implementation of a programme and particular components? Socioeconomic country characteristics matter, but little is known about the question of how this works. And inversely: what is needed to tailor a programme to different country settings, with different healthcare systems and institutional characteristics” (Dückers et al., 2018, p. 12).

A systematic review by Cénat et al. (2020) revealed that most mental health programmes in response to Ebola virus disease were implemented by international organisations in collaboration with local partners. Many programmes were implemented following WHO Mental Health Gap Action Programme (mhGAP) and Psychological First Aid (PFA) guidelines. Mental health programmes were implemented in hospitals, Ebola treatment centres and communities, among different categories of individuals exposed to Ebola virus disease, such as survivors, health workers and volunteers, other frontline workers, children and adults. Only two of the identified programmes, which integrated cultural factors, were empirically evaluated. The evaluations showed mental health improvement for children and adults. Like other studies, this review emphasises the need to increase efforts to systematically document and evaluate implemented programmes (Cénat et al., 2020).

Finally, perhaps the most complete synthesis of evaluation practices is provided in the review by Haroz et al. (2020) whose objective was to decipher what works in MHPSS programming in humanitarian contexts in low- and middle-income countries. Out of a massive number of unique records (n = 42,435) they included 211 records. Based on the analysis the authors conclude: “While there are many studies of interventions, it was challenging to identify the same intervention across studies, leaving almost no interventions with more than one rigorous study supporting their use and many interventions that are poorly described. This makes it difficult to choose between them or even to implement them. Future research should focus on replication of well-described interventions in multiple different sites (or stages of humanitarian response), to place future intervention selection on a more scientific basis. There is also a need to better understand the impact of psychosocial programmes in sectors other than health and protection, such as nutrition. These sectors may provide critical delivery mechanisms for psychosocial programming to broaden the reach of such interventions” (Haroz et al., 2020, p. 3).

It is impossible to give a comprehensive overview of all the information available from these and other studies. Nevertheless, the selection described here justifies several conclusions:



  1. Evaluations of MHPSS programmes, especially comparable ones, with strong study designs and long-term timeframes, are scarcely available in the international literature, which is an obstacle to evidence-informed preparedness planning and the response to future crises (Dückers et al., 2018; Cénat et al., 2020; Haroz et al., 2020; Kamali et al., 2020).
  2. Evaluations are heterogeneous in specificity and language of objectives, outcomes and indicators and vary in target population, interventions studied, methodology, methodological reliability and usefulness (Augustinavicius et al., 2018; Dickson & Bangpan, 2018; Haroz et al., 2020).
  3. Evaluations have a tendency to focus heavily on mental health problems and psychopathology, although a growing need is recognised to apply a broader focus on outcomes and other aspects of service provision (Augustinavicius et al., 2018; Bangpan et al., 2019; also see Tol et al., 2011a).
  4. Evaluations are modestly informative when it comes to descriptions of what is done by whom and when, and it is difficult to extract what helped or hindered the implementation and the evaluation itself (Dickson & Bangpan, 2018; Haroz et al., 2020; also see O'Connell et al., 2012).
  5. Evaluations confirm the importance of interventions being developed and implemented within communities in a setting of community engagement and multisectoral collaboration (Dickson & Bangpan, 2018).
  6. Evaluations highlight the importance of responsiveness to contextual factors such as the phase of the crisis/response, cultural and socioeconomic characteristics, although these aspects are not typically included in evaluations coherently and systematically (Dückers et al., 2018; Bangpan et al., 2019; Cénat et al., 2020; Haroz et al., 2020).



What Should We Consider When Evaluating Interventions?


These conclusions give us an idea of the aspects we should consider in order to enhance our evaluation paradigm. Clearly, we need to evaluate more than we do now, with strong study designs, in a reliable and replicable manner that contributes to the comparability of findings. One thing to think through at the start of any evaluation is what we are really interested in and want to learn. It follows that we should decide explicitly beforehand what the focal areas of the evaluation and the evaluation criteria will be.

Focal Areas

In the Inter-Agency Standing Committee (IASC) guidelines, MHPSS is defined as “Any type of local or outside support that aims to protect or promote psychosocial wellbeing and/or prevent or treat mental disorder” (IASC, 2007, p. 1). This description gives an idea of possible focal areas for evaluation. There are many potential providers and receivers of MHPSS, and many approaches we can focus upon, at the individual, family, community or society level. The focus can be more generic or more specific. In its narrowest sense, MHPSS is about dealing with mental health disorders through highly specialised, clinical interventions in response to exposure to potentially traumatic events. A broader approach might seek to reduce risks to wellbeing and mental health − after all, public health interventions in the form of policies and programmes are designed to modify the distribution of health determinants in a population (Minary et al., 2018).

How the bandwidth of services ranges from narrow to broad is reflected in the classical pyramid of the gold standard of MHPSS, the IASC guidelines, with specialised health care at its apex and broad generalist, community-based support at its base (IASC, 2007). In a similar model, the MHPSS continuum encompasses community-focused and individual-focused interventions, from “general humanitarian programming”, “social activities” and “psychological activities” to “treatment of disorders” (Haroz et al., 2020). An alternative categorisation of MHPSS programming was described by Dückers and Thormar (2015): “basic aid” (i.e. shelter, safety, food, drinking water, first aid and medication), “information” (i.e. about what has happened, about the fate of loved ones, about normal reactions), “social and emotional support” (i.e. comfort, a listening ear, recognition of grief, compassion, social acknowledgment), “practical help” (i.e. legal and financial issues, household) and “mental health services” (i.e. adequate detection and treatment of mental health problems).

In terms of outcomes, the focal areas of MHPSS evaluations can be equally diverse. Evaluators have focused on outcomes such as wellbeing, quality of life, emotional distress, coping, school enrolment, financial capability, particular mental disorders or symptomatology, violence and suicidality (Haroz et al., 2020). Augustinavicius et al. (2018) distinguished six themes among the goals, outcomes and their indicators of MHPSS programmes. MHPSS programmes generally seek to: (a) promote individual resilience and psychosocial wellbeing, and prevent mental health and psychosocial problems; (b) reduce specific mental health and psychosocial symptoms and functional impairment; (c) build capacity to identify, intervene on and monitor mental health and psychosocial problems; (d) enhance environments in which child development can flourish; (e) address macro-level goals and outcomes (e.g. peacekeeping between groups, restoration of social fabric) and (f) protect vulnerable groups of people, such as women, children, the elderly and people with disabilities (Augustinavicius et al., 2018).

In sum, evaluations can help us to acquire knowledge about what was done by whom, for whom, why, how and when, including the conditions contributing to intended and unintended effects and to the implementation of the intervention. Therefore, the focal areas for evaluation can include, although not exclusively, the following (a minimal structured sketch of such an evaluation record follows the list):



  1. what − particular types of support or interventions (one or multiple);
  2. by whom − single actor or multiple actors (professional, organisation, network, system), local or outside/close or distant;
  3. for whom − individuals or groups of beneficiaries (general adult population or potentially vulnerable populations such as older persons, migrants, women and children);
  4. why − from a prevention (or risk reduction), health promotion or treatment perspective, directed at a multitude of possible outcomes;
  5. how − behavioural attitude, working methods and instruments, technology, organisational dynamics, stakeholder interactions, utilisation of local capacity, level of multisectoral collaboration and community engagement, and conditions helping or hindering;
  6. when − in the preparedness, response or recovery phase (or moment linked to a notable event or situation in the crisis timeline), occasional or structural.
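
The six focal areas above can also be read as a minimal metadata schema for describing an evaluated intervention, so that evaluations of different programmes become easier to compare and pool. The sketch below is purely illustrative and only encodes the dimensions listed above; the names used (EvaluationRecord, Phase and the example values) are hypothetical and do not refer to any existing MHPSS instrument or toolkit.

from dataclasses import dataclass
from enum import Enum
from typing import List


class Phase(Enum):
    # "When": position of the evaluated activity in the crisis timeline.
    PREPAREDNESS = "preparedness"
    RESPONSE = "response"
    RECOVERY = "recovery"


@dataclass
class EvaluationRecord:
    # One evaluated intervention, described along the six focal areas.
    what: List[str]      # types of support or interventions
    by_whom: List[str]   # actors: professionals, organisations, networks, systems
    for_whom: List[str]  # beneficiaries or target groups
    why: List[str]       # prevention, promotion or treatment goals and intended outcomes
    how: List[str]       # working methods, collaboration, enabling or hindering conditions
    when: Phase          # preparedness, response or recovery phase
    notes: str = ""      # free text, e.g. occasional versus structural activity


# Hypothetical usage: a community-based programme evaluated in the response phase.
record = EvaluationRecord(
    what=["psychological first aid", "social and emotional support"],
    by_whom=["local volunteers", "national NGO"],
    for_whom=["displaced families", "children"],
    why=["reduce distress", "strengthen community support"],
    how=["community engagement", "multisectoral collaboration"],
    when=Phase.RESPONSE,
    notes="Embedded in existing local support networks.",
)
print(record.when.value, record.what)

Recording even this small amount of structure consistently across evaluations would already address part of conclusion 4 above, namely that it is often difficult to extract from published evaluations what was done, by whom and when.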


Evaluation Criteria

In addition to the focal areas, it is important to be thoughtful about the selection of evaluation criteria. Traditionally, the quality of MHPSS has been approached strongly in terms of effectiveness, which is without doubt imperative, yet quality has more faces. Typical health care quality criteria such as efficiency, need-centeredness, safety, timeliness and equity are applicable to MHPSS (Dückers & Thormar, 2015). In principle, any meaningful normative or ethical criterion might be relevant, especially if it matters to the beneficiaries − survivors, victims, patients, affected citizens. So, ask them what they expect and need (in the realisation that needs change over time), discuss whether this is feasible or realistic, and then evaluate whether services live up to it.

What is more, it is particularly ironic, in a journal carrying the title Intervention, to consider whether intervention is always better than waiting. If we assume that people are resilient or able to recover by themselves, it might not be necessary to intervene in every situation. In the two-dimensional evaluation model proposed by Dückers and Thormar (2015), MHPSS quality (reflected in evaluation scores) is linked to the behavioural attitude of interveners toward particular target groups of MHPSS beneficiaries. Evaluation scores can range from low to high and the behavioural attitude can range from passive to active, or waiting versus intervention. The model is shown in [Figure 1]. The attitude toward beneficiaries can be plotted somewhere along the parabolic shape, depending on the evaluation score. When a response after a disaster is situated in the upper half, it scores positively; (watchful) waiting or intervention is indicative of high quality or good practice until the threshold to negative scores (the horizontal marker) is crossed downwards into the realm of bad practice. On each side of the parabola, the quality of MHPSS deteriorates after crossing the threshold and reaches the point where an approach is either too passive or too active and, respectively, overestimates or underestimates the resilience of individuals and communities. Low evaluation scores on the passive end of the behavioural spectrum are caused by neglect, disregard or a lack of insight, capacity or opportunity. Low evaluation scores on the active side reflect overattention, newly created problems and wasted resources. Evaluation scores can be linked to different focal areas and can reflect different evaluation criteria. Examples are included in [Figure 1].
Figure 1 The Importance of Attitude in Disaster Response Evaluations. Note. Disaster response evaluations can result in positive or negative conclusions. Figure 1 shows how quality aspects of the response can be connected to the attitude toward affected individuals or groups. The parabolic shape depicts the possible positions of a particular response that can be more or less passive or active and more or less positively reviewed. The vertical line in the middle marks the boundary between a passive and an active response. The horizontal line in the middle marks the point where positive evaluations (high quality, “good practice”) can be distinguished from negative evaluations (low quality, “bad practice”). Evaluation findings can roughly be categorised in four quadrants. Examples of possible evaluation results are given for each quadrant. Ideally, evaluations include perspectives of different stakeholders in the response, including the target groups, policy makers and service providers. Source: Adapted from Dückers & Thormar (2015).
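
To make the parabolic shape concrete, one can read [Figure 1], purely as an illustrative sketch rather than as part of the original model, as a concave relationship between the behavioural attitude and the evaluation score. The notation below (a, a*, k, Q and Q_t) is introduced here for illustration only:

Q(a) = Q_max − k(a − a*)^2, with k > 0,

where a denotes the position on the passive–active spectrum, a* the context-appropriate balance between waiting and intervening, Q(a) the evaluation score and Q_t the threshold separating good from bad practice. A response then counts as good practice as long as Q(a) ≥ Q_t, which confines a to an interval around a*; responses that are too passive or too active fall below the threshold on either side, mirroring the four quadrants described in the figure note.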



Additionally, the two-dimensional model can be used in a more historical–methodological way to illustrate the Hegelian dialectic − thesis, antithesis and synthesis − of discussions within the MHPSS field, for instance in relation to so-called early psychosocial interventions. Several early interventions were recommended at first (e.g. psychological debriefing and psychoeducation) and criticised later, sometimes intensely, resulting in the recommendation to refrain from applying them in the future for being ineffective at best and harmful at worst (Mitchell, 1983; Rose et al., 2002; Sijbrandij et al., 2006; Wessely et al., 2008; Bisson et al., 2010; Kearns et al., 2012). Interventions can be popular at one moment, controversial at the next and eventually acceptable again in a modified frame or form. Perhaps this might even happen to contemporary approaches and interventions that yield positive evaluations of their use and effectiveness, such as Eye Movement Desensitisation and Reprocessing (EMDR), Problem Management Plus (PM+) and Psychological First Aid (PFA), when practitioners and researchers crave something new − or when the relative weight of other evaluation criteria increases.

Intervention and Disaster Vulnerability Context

This brings us to one of the more complex evaluation issues: how to distinguish the intervention from its context? In order to separate the two we need to identify three elements: elements belonging to the intervention (“and therefore participate in its effects and can be transferred”), elements belonging to the context that interact with the former to influence results (“and therefore must be taken into account when transferring the intervention”) and contextual elements irrelevant to the intervention (Minary et al., 2018, p. 319). Minary et al. (2018) describe context as a complex and dynamic construct existing within complex multilayered systems, with elements interacting not only with each other, but also with a broader environment, usually in nonlinear ways.

The intervention context was defined by Poland et al. (2008) as the spatial and temporal conjunction of social events, individuals and social interactions, which generate causal mechanisms that interact with the intervention and can modify the intervention effects. In our case the intervention context is a disaster vulnerability context. Disaster vulnerability can be defined as “the characteristics and circumstances of a community, system or asset that make it susceptible to the damaging effects of a hazard” (UNISDR, 2009, p. 30). The disaster vulnerability lens is applicable to individuals and groups within communities confronted with disasters and humanitarian crises, and allows for a definition of context similar to that given by Poland et al. (2008): the timeline before, during and after a disastrous event within a community comprises social events, individuals and social interactions, which generate causal mechanisms that interact with health, wellbeing and risk and protective factors, and can modify the intervention effects.

Contextual Variation

The context of intervention and disaster vulnerability is not static. This has implications for attempts to separate interventions or disaster health risks from their context. The dynamics can be traced along the timeline of the crisis, but contexts are likely to differ across wider dimensions.

The Crisis Timeline

Contextual changes along the crisis timeline are typically ignored in MHPSS evaluations. Yes, we can compare the situation at given moments T2 and T3 and see how a group with a certain risk profile, exposed to threats, loss and a given intervention, develops compared to another group with a more or less similar profile in terms of risks and exposure to threats, losses and an alternative (or no) intervention. Seasoned researchers will realise how difficult this is in itself. However, in reality it is even more complicated than this. The truth is that the groups we study at a given moment in relation to a particular intervention in a disaster vulnerability context are by no means comparable to a controlled laboratory setup. Disasters and community crises are dynamic events with complex disturbances of people’s social environment. This can be illustrated using the (simplified) models used in the literature to plot the increase (honeymoon) and loss (disillusionment) of social support or levels of collective wellbeing in different stages of an event (Raphael, 1986; Yzermans & Gersons, 2002; Dückers et al., 2017b; Ursano et al., 2020). In [Figure 2], the two-dimensional model is complemented by a time dimension with different time phases (T0–T4). The development of collective wellbeing or social support along the time phases is visualised as an erratic line. [Figure 2] helps to illustrate why assessments of MHPSS quality can reflect the social atmosphere of a phase within the microcosmos of a community or society confronted with a crisis. In the disillusionment phase (T2), in this example, mental health might be negatively influenced by a perceived lack of social support or by dissatisfaction about governmental responses, while researchers might incorrectly interpret the lower score as an indication of the ineffectiveness of the intervention. Conversely, a measurement conducted in the honeymoon phase (T1), where people experience cohesion and support, might result in positive wellbeing scores that suggest effectiveness of an intervention, while this might be a false positive attributable to the social support context.
Figure 2 The Importance of Time Phases in Disaster Response Evaluations. Note. Shown here is that disaster response evaluations might generate different results at different moments in the timeline of a disaster. The ribbon crossing the centre of the image is inspired by the disaster stages model by Raphael (1986). It highlights an increase in wellbeing and social support, then a decrease, followed by a recovery with setbacks. In this model it illustrates how information to assess effects can be collected before the event or the moment of impact (T0), in a climate of compassion, understanding and support for the people affected (T1), in a phase of disillusionment, frustration, anger and grief (T2), at a time when people are crawling back up (T3), and when they have regained control over their lives and over the impact of the event (T4). It is problematic to include information from different phases (or to combine measurements from two or more phases) in an evaluation without considering the different contexts.



Time and Place

This brings us to another aspect, typically disregarded in intervention studies and evaluations. Information collected in the microcosmos of a community confronted with death and loss due to a disaster should be interpreted against the background of the crisis timeline, but also along the two wider dimensions of time and place ([Figure 3]).
Figure 3 The Importance of Wider Dimensions of Vulnerability and History in Disaster Response Evaluations. Note. This figure emphasises the complexity of comparing evaluation results from one event to another. Specific disasters are placed here in a landscape shaped along two wider axes of vulnerability (place) and history (time). Cultural values and norms, habits, socioeconomic conditions as well as the capacity to provide services in line with certain standards or guidelines differ dramatically along the dimensions of place and time. Source: Information on the number of deaths was taken from Wikipedia on 25 January 2021.



Time − In the first place, it is problematic to exchange experiences from the Buffalo Creek disaster in 1972 (Erikson, 1976) or the Xenia tornado in 1974 (Clay et al., 2018) with more recent events such as Hurricane Katrina in 2005. This is not only problematic because of the differences in the scale of human losses and material damage between the various events, but also because of the social and technological changes that shaped communities in the United States over the course of decades. The same applies to comparisons of the response to the North Sea Flood in 1953, the biggest disaster in The Netherlands since the Second World War, and the fireworks explosion in Enschede in 2000, also in The Netherlands. Over time, history will limit our opportunity for inference, as ideas about MHPSS, and specific circumstances that we took for granted despite their having played a role in the quality of service delivery, can change. Also, ideas about mental health problems are far from stable (Jones & Wessely, 2005). The posttraumatic stress disorder (PTSD) diagnosis was established fairly recently, in the 1980s, and has undergone several modifications since. Prolonged grief disorder was proposed more recently (Prigerson et al., 2009). Similarly, intervention habits develop. Currently, terms like emotional ventilation and psychological debriefing are treated with caution, while PFA, EMDR and PM+ are reviewed favourably. The idea that disorders come and go, together with ideas about appropriate responses, affects the generalisability (and usefulness) of evaluation findings over time.

Place − This is linked to a second, wider dimension of geography. The disaster vulnerability of countries in terms of susceptibility and coping and adaptive capacities differs substantially (Welle & Birkmann, 2015; Day et al., 2019) and has been shown to be associated with cultural characteristics (Dückers et al., 2015), population mental health (Dückers et al., 2019) and the capacity to provide evidence-based MHPSS (Dückers et al., 2017a). From a global mental health viewpoint, this vulnerability of place is reflected in different models of mental health care capacity between resource settings (also see Patel et al., 2018).


What Should We Consider in Evaluations to Be Able to Prepare for Scale-Up?


The previous section suggests that evaluations of MHPSS in different events, places and times will not automatically yield lessons and good practices applicable to events elsewhere. This is connected to a third perspective on the context of MHPSS, in addition to the disaster vulnerability and intervention perspectives, namely the implementation context. The first context is about understanding disaster health risks, the second context is crucial to interrogate the working mechanism of particular interventions and the third context is about what determines implementation success. Different authors have emphasised the importance of factors linked to the intervention, target groups, providers and their interaction, organisational (change) capacity, available resources, and wide-ranging legal, cultural and societal factors (Greenhalgh et al., 2004; Rogers, 2010; Dückers et al., 2011, 2014a; Flottorp et al., 2013; Weiner, 2020). Nilsen (2015) concluded that the role ascribed to context in the implementation literature varies “from studies (…) that essentially view the context in terms of a physical environment or setting in which the proposed change is to be implemented (…) to studies (…) that assume that the context is something more active and dynamic that greatly affects the implementation process and outcomes” (p. 7). It would go too far to discuss all potential implementation factors concerning MHPSS practices in detail here. Still, evaluations that ignore them omit a crucial ingredient for future scale-up initiatives. Moreover, implementation factors cannot be seen as separate from the disaster vulnerability context and the intervention context. What these contexts have in common is that they are resource dependent: a larger availability of resources implies more capacity to provide professional health services and more opportunity to develop, evaluate and disseminate interventions − locally, without outside assistance.


Discussion


A variety of improvement points in evaluation research and practice were reviewed in this contribution with the aim of enhancing our evaluation paradigm.

A first key message is that evaluations need to look beyond (clinical) outcomes and apply a broader concept of the quality of mental health intervention. In this respect, the importance of a careful selection of focal areas and evaluation criteria was accentuated. Based on the work of Donabedian (1980), evaluations should cover the outcome (the effects on target groups), the process (the sum of all actions) and the structure (the factors that affect the context) of service delivery. When information is collected on each category and when the association between the three is plausible, it makes more sense to draw conclusions on MHPSS quality than when information covers one, two or even three categories without a logical connection (Donabedian, 1980; also see Dückers & Thormar, 2015; Dückers et al., 2018). Focal areas and evaluation criteria can be linked to all three categories. In order to understand the effectiveness and efficiency of MHPSS, this is arguably a requirement. Outcome assessments might be sufficient if the goal is to gain insight into how beneficiaries perceive services in terms of safety, need-centeredness or satisfaction. Yet, it is more informative if such outcomes can be linked to information on process and structure.

A second key message is that disaster response evaluations need to capture the intervention context, otherwise it is tremendously difficult − not to say impossible − to make progress in formulating context-specific guidance and evidence-informed scale-up. The intervention context is a multifaceted, complex and dynamic concept that, in the case of MHPSS, can be seen as a vulnerability context and an implementation context as well. In the end these contexts form a logical chain. Effective intervention might change the disaster vulnerability context by reducing risks and susceptibility and by strengthening coping and adaptive capacities. Also, it might make the implementation context more favourable toward change, that is, influence determinants of future scale-up. That said, regardless of the label used to specify contexts, the challenge for service providers, evaluators and scale-up planners remains roughly the same: where can the intervention be separated from contextual elements, and which elements should be utilised or modified, or can be ignored? The contextual information that is indispensable to understand the effects of the intervention or the potential for successful scale-up hardly makes it into meta-analyses of evaluations or practical guidance. This is an important omission. Apart from sociodemographic factors linked to individuals and groups, the disaster vulnerability context contains other mental health determinants that, in part, might change during the timeline of an event. Furthermore, the context is likely to differ substantially across the wider dimensions of place and time − where it is conceptually shaped by the interplay of culture, history and disaster exposure (Alexander, 2012).

Transferring Lessons from One Context to Another

What MHPSS guidelines and lessons from disaster evaluations have in common is that their value increases when the proposed principles, recommendations and practical solutions to challenges are broadly applicable. As elucidated earlier, the available capacity and required conditions to provide MHPSS, especially when it comes to specialised health care in the apex of the IASC pyramid, will not be the same in different vulnerability contexts or resource settings. Notwithstanding that we can agree to an extent about the importance of particular principles, their practical applicability locally, in complex crisis settings, is uncertain. This also applies to disaster lessons. Some lessons from disaster response evaluations sound rather universal despite variation in disaster vulnerability contexts. For instance, after the tsunami in Southeast Asia (2004), Hurricane Katrina (2005) and the earthquakes in L’Aquila (2009) and Haiti (2010), evaluators identified similar challenges in: “optimising cooperation and synergy between activities at different levels within the response system”, “connecting to local capacities and needs” (involve local actors, utilise skills and knowledge), “setting realistic expectations” (too much is promised to target populations) and “guarding the response system’s internal integrity” (to limit self-promotion, lack of transparency, corruption and abuse; Dückers et al., 2014b, pp. 1036-1037). At the same time, we cannot say beforehand whether solutions (e.g. community engagement and multisectoral collaboration strategies) to address these and other lessons from research will work along the same lines in different disaster vulnerability and implementation contexts. A scenario like the global COVID-19 pandemic might very well offer a unique opportunity to evaluate and compare health responses to similar exposure risks and health challenges in different countries, cultures or systems across the world. Multiple case studies can help to disentangle the association between the structure, process and outcome of disaster response efforts. Alternatively, from a realist review perspective, they can shed light on “explaining the relationship between the context in which the intervention is applied, the mechanisms by which it works and the outcomes which are produced” (Pawson et al., 2005, p. 21).

Implications for Planning and Evaluation

This contribution focused on what we can achieve, and seek to achieve, for people confronted with adversity in communities under extreme pressure from external and internal events. Some of the topics might appear somewhat distant from everyday struggles in disaster mental health risk reduction and humanitarian emergency work, and may seem impossible to apply in the hectic pressure cooker of an unfolding crisis or in a pre- or postcrisis situation with limited funding. However, they are at the core of the dilemmas and questions that researchers, policy-makers and practitioners need to address when planning and evaluating public health interventions (for examples see [Table 1]).
Table 1 Examples of Relevant Questions to Ask When Planning MHPSS Intervention or Evaluation



Tol et al. (2012) identified the following five priorities in MHPSS research: (a) the prevalence and burden of mental health and psychosocial difficulties in humanitarian settings; (b) how MHPSS implementation can be improved; (c) evaluation of specific MHPSS interventions; (d) the determinants of mental health and psychological distress and (e) improved research methods and processes (Tol et al., 2012). All five remain relevant; nevertheless, we know quite a lot today about the health impact of disasters and humanitarian crises and about risk and protective factors. A major problem, underscored repeatedly by now, is that we still know little about how to influence these factors in an effective, efficient and sustainable way, while engaging with communities and strengthening local capacity. Therefore, we need to evaluate promising strategies in different contexts and invest in a comprehensive evaluation toolkit − inspired by existing tools (O’Connell et al., 2012; IASC, 2017; Dückers et al., 2018) and complemented by new ones that adhere to quality standards of ethics and safety (Shah, 2012), consider contextual factors and allow for comparisons between geographical locations and moments in time. From this angle, themes c and e should be top priorities. Knowledge on how to improve MHPSS implementation is crucial for scale-up; consequently, its high position in the priority list deserves to be preserved.


Conclusion


This contribution was written with the ambition of enhancing our evaluation paradigm for the organised response to disasters. With an emphasis on MHPSS, several conclusions were drawn from recent literature and explored further. Evaluation practice will benefit from the uptake of a broader concept of what intervention quality entails and from a careful selection of focal areas and evaluation criteria. It is doubtful whether the importance of understanding intervention in relation to its context can be overstated. Contexts can differ during the dynamic crisis timeline and across the wider dimensions of time and place. A common denominator of disaster vulnerability and implementation contexts is that they include vital factors for a successful response to disaster health risks and for the dissemination of good practices from one time and place to another. Although it might seem that the reflections in this contribution are not making things easier, they are necessary to strengthen the knowledge base for organising high-quality responses to disasters in different contexts. After all, without collaborative, constructively critical, transparent and context-sensitive assessment we cannot be convinced that we are doing the right things, at the right time and in the right place.

Acknowledgement

The author wishes to acknowledge Marc Obbens for his work on [Figure 2].

Financial support and sponsorship

Nil.

Conflicts of interest

There are no conflicts of interest.







 
References

1. Alexander D. (2012). Models of social vulnerability to disasters. RCCS Annual Review. A selection from the Portuguese journal Revista Crítica de Ciências Sociais, 4(4), 22-40.
2. Augustinavicius J. L., Greene M. C., Lakin D. P., Tol W. A. (2018). Monitoring and evaluation of mental health and psychosocial support programs in humanitarian settings: A scoping review of terminology and focus. Conflict and Health, 12(1), 9.
3. Bangpan M., Felix L., Dickson K. (2019). Mental health and psychosocial support programmes for adults in humanitarian emergencies: A systematic review and meta-analysis in low and middle-income countries. BMJ Global Health, 4(5), e001484.
4. Bisson J. I., Tavakoly B., Witteveen A. B., Ajdukovic D., Jehel L., Johansen V. J., Nordanger D., Orengo-Garcia F., Punamaki R. L., Schnyder U., Sezgin A. U. (2010). TENTS guidelines: Development of post-disaster psychosocial care guidelines through a Delphi process. The British Journal of Psychiatry, 196(1), 69-74.
5. Bonanno G. A., Brewin C. R., Kaniasty K., Greca A. M. L. (2010). Weighing the costs of disaster: Consequences, risks, and resilience in individuals, families, and communities. Psychological Science in the Public Interest, 11(1), 1-49.
6. Bonde J. P., Utzon-Frank N., Bertelsen M., Borritz M., Eller N. H., Nordentoft M., Olesen K., Rod N. H., Rugulies R. (2016). Risk of depressive disorder following disasters and military deployment: Systematic review with meta-analysis. The British Journal of Psychiatry, 208(4), 330-336.
7. Cénat J. M., Mukunzi J. N., Noorishad P. G., Rousseau C., Derivois D., Bukaka J. (2020). A systematic review of mental health programs among populations affected by the Ebola virus disease. Journal of Psychosomatic Research, 131, 109966.
8. Chalmers K. J., Jorm A. F., Kelly C. M., Reavley N. J., Bond K. S., Cottrill F. A., Wright J. (2020). Offering mental health first aid to a person after a potentially traumatic event: A Delphi study to redevelop the 2008 guidelines. BMC Psychology, 8(1), 1-11.
9. Clay L. A., Greer A., Kendra J. (2018). Learning from historic disaster response: Reviewing old lessons on disaster mental health. Risk, Hazards & Crisis in Public Policy, 9(3), 303-331.
10. Day S., Forster T., Himmelsbach J., Korte L., Mucke P., Radtke K., Theilbörger P., Weller D. (2019). World Risk Report 2019 – Focus: Water Supply. Bündnis Entwicklung Hilft.
11. Dickson K., Bangpan M. (2018). What are the barriers to, and facilitators of, implementing and receiving MHPSS programmes delivered to populations affected by humanitarian emergencies? A qualitative evidence synthesis. Global Mental Health, 5, e21.
12. Dieltjens T., Moonens I., Van Praet K., De Buck E., Vandekerckhove P. (2014). A systematic literature search on psychological first aid: Lack of evidence to develop guidelines. PLoS One, 9(12), e114714.
13. Donabedian A. (1980). The definition of quality and approaches to its assessment. Health Administration Press.
14. Doocy S., Daniels A., Packer C., Dick A., Kirsch T. D. (2013). The human impact of earthquakes: A historical review of events 1980-2009 and systematic literature review. PLoS Currents, 5.
15. Dückers M. L., Frerks G., Birkmann J. (2015). Exploring the plexus of context and consequences: An empirical test of a theory of disaster vulnerability. International Journal of Disaster Risk Reduction, 13, 85-95.
16. Dückers M. L., Groenewegen P. P., Wagner C. (2014a). Quality improvement collaboratives and the wisdom of crowds: Spread explained by perceived success at group level. Implementation Science, 9(1), 91.
17. Dückers M. L., Rooze M., Alexander D. (2014b). Towards resilient organisation of recovery and care after disaster. In Bierens J. (Ed.), Drowning (pp. 1033-1038). Springer.
18. Dückers M. L., Reifels L., De Beurs D. P., Brewin C. R. (2019). The vulnerability paradox in global mental health and its applicability to suicide. The British Journal of Psychiatry, 215(4), 588-593.
19. Dückers M. L., Thormar S. B. (2015). Post-disaster psychosocial support and quality improvement: A conceptual framework for understanding and improving the quality of psychosocial support programs. Nursing & Health Sciences, 17(2), 159-165.
20. Dückers M. L., Thormar S. B., Juen B., Ajdukovic D., Newlove-Eriksson L., Olff M. (2018). Measuring and modelling the quality of 40 post-disaster mental health and psychosocial support programmes. PLoS One, 13(2), e0193285.
21. Dückers M. L., Wagner C., Vos L., Groenewegen P. P. (2011). Understanding organisational development, sustainability, and diffusion of innovations within hospitals participating in a multilevel quality collaborative. Implementation Science, 6(1), 18.
22. Dückers M. L., Witteveen A. B., Bisson J. I., Olff M. (2017a). The association between disaster vulnerability and post-disaster psychosocial service delivery across Europe. Administration and Policy in Mental Health and Mental Health Services Research, 44(4), 470-479.
23. Dückers M. L., Yzermans C. J., Jong W., Boin A. (2017b). Psychosocial crisis management: The unexplored intersection of crisis leadership and psychosocial support. Risk, Hazards & Crisis in Public Policy, 8(2), 94-112.
24. Erikson K. (1976). Everything in its path. Simon and Schuster.
25. Fazel M., Reed R. V., Panter-Brick C., Stein A. (2012). Mental health of displaced and refugee children resettled in high-income countries: Risk and protective factors. The Lancet, 379(9812), 266-282.
26. Flottorp S. A., Oxman A. D., Krause J., Musila N. R., Wensing M., Godycki-Cwirko M., Baker R., Eccles M. P. (2013). A checklist for identifying determinants of practice: A systematic review and synthesis of frameworks and taxonomies of factors that prevent or enable improvements in healthcare professional practice. Implementation Science, 8(1), 1-11.
27. Généreux M., Schluter P. J., Takahashi S., Usami S., Mashino S., Kayano R., Kim Y. (2019). Psychosocial management before, during, and after emergencies and disasters: Results from the Kobe expert meeting. International Journal of Environmental Research and Public Health, 16(8), 1309.
28. Greenhalgh T., Robert G., Macfarlane F., Bate P., Kyriakidou O. (2004). Diffusion of innovations in service organizations: Systematic review and recommendations. The Milbank Quarterly, 82(4), 581-629.
29. Haroz E. E., Nguyen A. J., Lee C. I., Tol W. A., Fine S. L., Bolton P. (2020). What works in psychosocial programming in humanitarian contexts in low- and middle-income countries: A systematic review of the evidence. Intervention, 18(1), 3.
30. Hobfoll S. E., Watson P., Bell C. C., Bryant R. A., Brymer M. J., Friedman M. J., Friedman M., Gersons B. P., De Jong J. T., Layne C. M., Maguen S. (2007). Five essential elements of immediate and mid-term mass trauma intervention: Empirical evidence. Psychiatry: Interpersonal and Biological Processes, 70(4), 283-315.
31. Inter-Agency Standing Committee (IASC). (2007). Guidelines on mental health and psychosocial support in emergency settings. WHO, IASC.
32. Inter-Agency Standing Committee (IASC). (2017). IASC common monitoring and evaluation framework for mental health and psychosocial support programmes in emergency settings. WHO, IASC.
33. Jones E., Wessely S. (2005). Shell shock to PTSD: Military psychiatry from 1900 to the Gulf War. Psychology Press.
34. Juen B., Warger R., Nindl S., Siller H., Lindenthal M. J., Huttner E., Thormar S. (2016). The comprehensive guideline on mental health and psychosocial support (MHPSS) in disaster settings. OPSIC.
35. Kamali M., Munyuzangabo M., Siddiqui F. J., Gaffey M. F., Meteke S., Als D., Jain R. P., Radhakrishnan A., Shah S., Ataullahjan A., Bhutta Z. A. (2020). Delivering mental health and psychosocial support interventions to women and children in conflict settings: A systematic review. BMJ Global Health, 5(3), e002014.
36. Kearns M. C., Ressler K. J., Zatzick D., Rothbaum B. O. (2012). Early interventions for PTSD: A review. Depression and Anxiety, 29(10), 833-842.
37. Minary L., Alla F., Cambon L., Kivits J., Potvin L. (2018). Addressing complexity in population health intervention research: The context/intervention interface. Journal of Epidemiology and Community Health, 72(4), 319-323.
38. Mitchell J. T. (1983). When disaster strikes: The critical incident stress debriefing process. Journal of Emergency Medical Services, 8(1), 36-39.
39. Nilsen P. (2015). Making sense of implementation theories, models and frameworks. Implementation Science, 10(1), 53.
40. O'Connell R., Poudyal B., Streel E., Bahgat F., Tol W., Ventevogel P. (2012). Who is where, when, doing what: Mapping services for mental health and psychosocial support in emergencies. Intervention, 10(2), 171-176.
41. Patel V., Saxena S., Lund C., Thornicroft G., Baingana F., Bolton P., Chisholm D., Collins P. Y., Cooper J. L., Eaton J., Herrman H. (2018). The Lancet Commission on global mental health and sustainable development. The Lancet, 392(10157), 1553-1598.
42. Pawson R., Greenhalgh T., Harvey G., Walshe K. (2005). Realist review: A new method of systematic review designed for complex policy interventions. Journal of Health Services Research & Policy, 10(1_Suppl), 21-34.
43. Poland B., Frohlich K. L., Cargo M. (2008). Context as a fundamental dimension of health promotion program evaluation. In Potvin L., McQueen D. V., Hall M., De Salazar L., Anderson L. M. (Eds.), Health promotion evaluation practices in the Americas (pp. 299-317). Springer.
44. Prigerson H. G., Horowitz M. J., Jacobs S. C., Parkes C. M., Aslan M., Goodkin K., Raphael B., Marwit S. J., Wortman C., Neimeyer R. A., Bonanno G. (2009). Prolonged grief disorder: Psychometric validation of criteria proposed for DSM-V and ICD-11. PLoS Medicine, 6(8), e1000121.
45. Raphael B. (1986). When disaster strikes: How individuals and communities cope with catastrophe. Basic Books.
46. Reifels L., Pietrantoni L., Prati G., Kim Y., Kilpatrick D. G., Dyb G., Halpern J., Olff M., Brewin C. R., O'Donnell M. (2013). Lessons learned about psychosocial responses to disaster and mass trauma: An international perspective. European Journal of Psychotraumatology, 4(1), 22897.
47. Ripoll Gallardo A., Pacelli B., Alesina M., Serrone D., Iacutone G., Faggiano F., Della Corte F., Allara E. (2018). Medium- and long-term health effects of earthquakes in high-income countries: A systematic review and meta-analysis. International Journal of Epidemiology, 47(4), 1317-1332.
48. Rogers E. M. (2010). Diffusion of innovations. Simon and Schuster.
49. Rose S. C., Bisson J., Churchill R., Wessely S. (2002). Psychological debriefing for preventing post traumatic stress disorder (PTSD). Cochrane Database of Systematic Reviews, 2002, CD000560.
50. Safarpour H., Sohrabizadeh S., Malekyan L., Safi-Keykaleh M., Pirani D., Daliri S., Bazyar J. (2020). Suicide death rate after disasters: A meta-analysis study. Archives of Suicide Research, 1-14.
51. Shah S. A. (2012). Ethical standards for transnational mental health and psychosocial support (MHPSS): Do no harm, preventing cross-cultural errors and inviting pushback. Clinical Social Work Journal, 40(4), 438-449.
52. Sijbrandij M., Olff M., Reitsma J. B., Carlier I. V., Gersons B. P. (2006). Emotional or educational debriefing after psychological trauma: Randomised controlled trial. The British Journal of Psychiatry, 189(2), 150-155.
53. Snider L., Hijazi Z. (2020). UNICEF community-based mental health and psychosocial support (MHPSS) operational guidelines. In Child, adolescent and family refugee mental health (pp. 101-119). Springer.
54. Suzuki Y., Fukasawa M., Nakajima S., Narisawa T., Kim Y. (2012). Development of disaster mental health guidelines through the Delphi process in Japan. International Journal of Mental Health Systems, 6(1), 1-11.
55. Te Brake H., Dückers M. (2013). Early psychosocial interventions after disasters, terrorism and other shocking events: Is there a gap between norms and practice in Europe? European Journal of Psychotraumatology, 4(1), 19093.
56. Te Brake H., Dückers M., De Vries M., Van Duin D., Rooze M., Spreeuwenberg C. (2009). Early psychosocial interventions after disasters, terrorism, and other shocking events: Guideline development. Nursing & Health Sciences, 11(4), 336-343.
57. Tol W. A., Barbui C., Galappatti A., Silove D., Betancourt T. S., Souza R., Golaz A., Van Ommeren M. (2011a). Mental health and psychosocial support in humanitarian settings: Linking practice and research. The Lancet, 378(9802), 1581-1591.
58. Tol W. A., Patel V., Tomlinson M., Baingana F., Galappatti A., Panter-Brick C., Silove D., Sondorp E., Wessells M., Van Ommeren M. (2011b). Research priorities for mental health and psychosocial support in humanitarian settings. PLoS Medicine, 8(9), e1001096.
59. Tol W. A., Patel V., Tomlinson M., Baingana F., Galappatti A., Silove D., Sondorp E., Van Ommeren M., Wessells M. G., Panter-Brick C. (2012). Relevance or excellence? Setting research priorities for mental health and psychosocial support in humanitarian settings. Harvard Review of Psychiatry, 20(1), 25-36.
60. UNISDR. (2009). UNISDR terminology on disaster risk reduction. United Nations.
61. Ursano R. J., Morganstein J. C., West J. C. (2020). Essential issues on terrorism: Planning for acute response and intervention. In Vermetten E., Frankova I., Carmi L., Chaban O., Zohar J. (Eds.), Risk management of terrorism induced stress: Guidelines for the golden hours (Who, What and When) (pp. 3-9). IOS Press.
62. Weiner B. J. (2020). A theory of organizational readiness for change. In Nilsen P., Birken S. A. (Eds.), Handbook on implementation science. Edward Elgar Publishing.
63. Welle T., Birkmann J. (2015). The World Risk Index – An approach to assess risk and vulnerability on a global scale. Journal of Extreme Events, 2(1), 1550003.
64. Wessely S., Bryant R. A., Greenberg N., Earnshaw M., Sharpley J., Hughes J. H. (2008). Does psychoeducation help prevent post traumatic psychological distress? Psychiatry: Interpersonal and Biological Processes, 71(4), 287-302.
65. WHO. (2013). Guidelines for the management of conditions that are specifically related to stress. World Health Organization.
66. Yzermans J. C., Gersons B. P. (2002). The chaotic aftermath of an airplane crash in Amsterdam. In Havenaar J., Cwikel J., Bromet E. (Eds.), Toxic turmoil (pp. 85-99). Springer.
67. Yzermans J. C., Van der Berg B., Dirkzwager A. J. E. (2009). Physical health problems after disasters. In Neria Y., Galea S., Norris F. H. (Eds.), Mental health and disasters (pp. 67-93). Cambridge University Press.

