Grow the .002% of all global development projects that are evaluated ex-post closure for sustainability

It seems like ‘fake news’ that, after decades of global development, so few evaluations have peered back in time to see what was sustained. While I was consulting for USAID’s Bureau for Policy, Planning and Learning, I asked the head of its M&E department, Cindy Clapp-Wincek, who does ex-post sustainability evaluation, as I knew USAID had done some in the 1980s. She answered, ‘No one; there are no incentives to do it.’ (She later became our advisor.)

Disbelieving, I did a year of secondary keyword research before devoting my professional consulting life to advocating for and conducting ex-post evaluations of sustained outcomes and impacts. I searched the databases of USAID, the OECD, and other bilateral and later multilateral donors, and found thousands of studies, most of them inaccurately labeled ‘ex-post’ or ‘post-closure’ studies. Of the roughly 1,000 projects I reviewed at USAID and the OECD under ‘ex-post’, ‘ex post’, and ‘post closure’, some were final evaluations that were slightly delayed; a few were evaluations conducted at least one year after closure, but were desk studies without interviews. Surprisingly, the vast majority were final evaluations that merely recommended an ex-post evaluation several years later to confirm projected sustainability.

In 2016, at the American Evaluation Association conference, a group of us gave a presentation. In it, I cited these statistics from the first year of Valuing Voices’ research:

  • Of the 900+ “ex-post”, “ex post”, and “post closure” documents in USAID’s DEC database, only 12 were actual post-project evaluations with fieldwork conducted in the previous 20 years
  • Of 12,000 World Bank projects, only 33 post-project evaluations asked ‘stakeholders’ for input, and only 3 showed clearly that they talked to participants
  • In 2010, the Asian Development Bank conducted 491 desk reviews of completed projects and returned to the field for 18 post-project evaluations that included participant voices; it has done only this one study
  • We found no evaluations by recipient governments of aid projects’ sustainability

Twelve years of research, advocacy, and fieldwork later, the ‘catalysts’ database on Valuing Voices now highlights 92 fieldwork-informed ex-post evaluations by 40 organizations that returned to the field to ask participants and project partners what was sustained.

How many ex-post project closure evaluations have been done? About 0.002% of all projects. The 0.002% statistic covers only public foreign development aid since 1960 (not counting private funding such as foundations or gifts to organizations, which is not tracked in any publicly available database). Aggregating OECD aid statistics (excluding private aid, for which only recent data exist) over 62 years yields $5.6 trillion by 2022 (thanks to Rebecca Regan-Sachs for the updated numbers).

I then estimated 3,000 actual ex-posts: 2,500 JICA projects plus almost 500 others that I either found searching databases across the spectrum of governments and multilaterals (almost 100 in our catalysts database) or assume were done in the 1980s–2000s by donors such as USAID and the World Bank (roughly 400 more).

Without a huge research team, it is impossible to aggregate data on the total number of projects by all donors, so I extrapolated from one year (2022) of project activity disbursements for Mali on www.foreignassistance.gov. In my 35 years of experience, Mali, where I did my doctoral research, typifies the average USAID aid recipient. It had 382 projects running in 2022. I rounded up to 400 projects x 70 years (since 1960, when OECD data began) x 100 countries for just one donor (of the 150 possible recipient countries, to be conservative). This comes to 2.8 million projects. Multiplying by 39 OECD donor countries (even though most have far less to give than the US), roughly 109 million publicly funded aid projects disbursed $5.6 trillion since 1960. While final evaluations are industry standard, only an estimated 0.002% of those 109 million projects were evaluated ex-post with data from local participants and partners.
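The back-of-the-envelope estimate above can be sketched in a few lines. Every input is an assumption stated in this article (the Mali-derived project count, 39 OECD donors, ~3,000 ex-posts), not verified aid statistics; the arithmetic yields a share on the order of the 0.002% figure cited above.

```python
# Sketch of the article's 0.002% estimate.
# All inputs are the article's own assumptions, not verified data.
projects_per_country_per_year = 400  # Mali's 382 projects in 2022, rounded up
years = 70                           # since 1960, per the article
recipient_countries = 100            # conservative, of ~150 possible recipients
donor_countries = 39                 # OECD donor countries

projects_one_donor = projects_per_country_per_year * years * recipient_countries
total_projects = projects_one_donor * donor_countries

ex_posts = 3_000                     # ~2,500 JICA + ~500 other estimated ex-posts
share = ex_posts / total_projects

print(f"{projects_one_donor:,} projects for one donor")  # 2,800,000
print(f"{total_projects:,} projects by all donors")      # 109,200,000
print(f"{share:.5%} evaluated ex-post")                  # 0.00275%
```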

This became Valuing Voices’ focus: we created an open-access database for learning and conducted our own ex-posts. My team and I identified 92 ex-posts that returned to ask locals what lasted, what didn’t, why, and what emerged from their own efforts. We also created evaluability checklists and designed a new evaluation type, the Sustained and Emerging Impacts Evaluation, which examines not only what donors put in place to last but also what outcomes emerged from local efforts to sustain results with more limited resources, partnerships, capacities, and local ownership/motivation (the four drivers identified by Rogers and Coates in their 2015 food security exit study for USAID). We have done 15 ex-posts for 9 clients since 2006 and shared Adaptation Fund ex-post training materials in 2023.

 

Yet the public assumes we know our development is sustainable. The 2015 ‘Sustainable Development Goals‘ focused aid on 17 themes and were projected to generate $12 trillion more in annual spending on SDG sectors beyond the $21 trillion already being invested each year. Nonetheless, a recent UN report states that there is now a $4 trillion annual financing gap to achieve the SDGs. All this funding goes to projects currently being implemented, not to evaluating what was sustained from past projects that have already closed. Such learning about what succeeded or failed, and what emerged from local efforts to keep activities and results going, is pivotal to improving current and future programming, yet it is almost wholly missing from the dialogue; I know, I asked multiple SDG evaluation experts.

 

Why do we return to learn so rarely? There are many reasons, the most prosaic among them being administrative.

  • When aid funds are spent over 2–10 years, projects are closed, evaluated at the end, and ‘handed over’ to national governments, and no additional funding exists to return ‘ex-post’ closure to learn.
  • Next is the push to keep improving lives through implementation, which means low rates of overhead are allocated to M&E and learning during projects, much less after closure.
  • Another is the assumption that ‘old’ projects differ greatly from new ones, when in fact there are few differences. After all, there are only so many ways to grow food, feed the malnourished, or educate children; evaluating ‘old’ projects can teach ‘new’ projects.
  • A last major one, which Valuing Voices’ 12 years of research suggests may be the largest: fear of admitting failure. Valuing Voices’ 2016 blog highlighted many lessons about funding, assumptions, and fears (Part 3). One US aid lobbyist told me in 2017 that I must not share this lack of learning about sustained impacts because it could imperil US aid funding; I told her I had to tell people because lives were at stake.
  • Overall, there is much to learn; most ex-post evaluations show mixed results. None show 100% sustainability, and while most show 30–60% sustainability, none show 0% either. If we don’t learn to replicate what worked and cease what didn’t, future programming will be as flawed, and successes, especially brilliant locally designed emerging ex-post outcomes such as Niger’s local funding of redesigned health incentives, will remain hidden.

 

Occasionally donors invest in sets of ex-post learning evaluations, such as USAID’s seven ‘Global Waters’ water/sanitation evaluations, linked to the E3 Bureau’s adoption of sustainability as a strategic goal. Yet the overall findings of USAID staff’s own Drivers of WASH study of these ex-posts were chilling. While 25 million people gained access to drinking water and 18 million to basic sanitation, ‘they have largely not endured.’ The good news in such research is that the donor learned that infrastructure fails when spare parts are not accessible and maintenance is not funded or performed, which can be planned for and addressed during implementation by investing in resources and partnerships. They learned that relying on volunteers is unreliable and that management needs to be bolstered, which can lead some implementation funding to be focused on capacities and local ownership. We can plan better for sustainability by learning from ex-post and exit studies (see Valuing Voices’ checklists in this 2023 article on Fostering Values-Driven Sustainability).

 

And since 2019, three climate funds, the Adaptation Fund, the Global Environment Facility, and the Climate Investment Funds, have turned to ex-post evaluations to look at sustainability, longer-term resilience, and even transformation, given that environmental shocks may take years to affect project sites. The Adaptation Fund has done four ex-posts, with more to come in 2024/25, and the CIF is beginning now. The GEF has done a Post-Completion Assessment Pilot for the Yellow Sea Region. Hopeful!

Fostering Values-Driven Sustainability Through an Ex-Post Capacities Lens (reposting a book chapter)

We all want our project results to be sustained, but without doing ex-post sustainability evaluations, we don’t know if they are. Ex-post evaluations can also teach us how to fund, design, monitor, and evaluate projects before they close. They require certain evaluator competencies, and the checklists below are designed to help build capacities that make implemented projects more sustained. This research was also informed by excellent work by INTRAC and CDA. Enjoy! You can also download it, along with a great array of evaluator competencies, via the Journal of Multidisciplinary Evaluation.

Fostering Values-Driven Sustainability Through an Ex-Post Evaluation Capacities Lens

 

Jindra Cekan/ova

Founder of Valuing Voices at Cekan Consulting LLC

Background: Ex-post evaluation of sustainability has been done for 40 years in global development. However, it has been done for far less than 1% of all global development projects, so there is little proof of whether “sustainable” development is or is not sustained. Similarly, foreign aid projects are implemented to foster sustainability, but without the benefit of evidence from ex-post evaluations of what drove it, and with limited research on the benefits of robust exit strategies.

 

Purpose: Transparency about the values we hold and the evaluative best practices we bring informs how our evaluations are done, with whom, and for what. Using the evidence base from ex-post evaluations and exit strategies led to these nine checklists. Professionals in monitoring and evaluation should use them to foster long-term sustainability and learning.

 

Setting: Drawing on primary and secondary research across 91 ex-post evaluations of foreign aid sustainability, plus two major studies of exit strategies, globally.

 

Intervention: Not applicable.

 

Research Design: The checklists were drafted based on sustainability and exit studies and then vetted with lead researchers of the two exit studies. They were revised, and additional research was done on both values-driven evaluation and evaluation competencies.

Data Collection and Analysis: Some primary data was collected during ex-post evaluations by the author, complemented by secondary research.

 

Findings: Sustained exit commitments and conditions checklists can build evaluator capacities in evaluating sustainability. Several have been used by Tufts, USAID, the GEF, and the Adaptation Fund, and have verified actual sustainability and its prospects. Evaluator capacities can also be built.

 

Keywords: ex-post evaluation; sustainability; monitoring and evaluation; values; competencies; M&E checklists

Abstract

 

Monitoring and evaluation (M&E) work is guided by an array of values held by funders, implementers, M&E experts, and project participants and partners. Some values are explicit, while others are assumed, such as the truth of “values-neutral” evaluation or that projects are sustainable in the long term. I espouse Patton’s (2022) “activist interventionist change-committed evaluation” both by advocating for ex-post evaluation of many development aid projects’ untested hypotheses about durability and by suggesting that ex-post lessons can shape development aid projects from design to closure. Ex-post lessons are valuable for current project planning, design, implementation, and M&E; using them can make development results more sustainable. Checklists created to ease monitoring and evaluation of prospects for sustainability should be used with country nationals. Six evaluator competencies support sustainability practice: systems thinking, collaboration, anticipatory, and reflective, technical, and situational practice competencies. Drawing on several studies that validate this approach, this paper shows how infrequently ex-post evaluations of sustainability are conducted, which seems to indicate that their lessons are not valued. Bringing lessons from rare ex-post evaluations to benefit current implementation and exit is the core of the checklists described in this article. Evaluating both the results expected by donors and new outcomes emerging from local efforts to sustain results also adds value to the canon. Ongoing learning and sharing of lessons around the project cycle, from participants to donors and among M&E experts, is vital, especially bringing those lessons back to new projects. The six competencies, the technical checklists, and evaluative thinking about sustainability can help shift programming toward locally led and sustainable development.

Introduction

 

This paper explores a range of values and capacities needed to support the sustainability of foreign aid development projects. It draws on 12 years of Valuing Voices research.[1] This initiative, aimed at increasing sustainable solutions for excellent impact through learning from ex-post project sustainability evaluations, also focuses on how evaluators can promote the design, monitoring, and evaluation of sustainability pre-closure, drawing on germane evaluator competencies. The paper explores a range of evaluators’ views on the values we bring as monitoring and evaluation experts, as well as the competencies needed to design, implement, monitor, and evaluate for long-term sustainability.

 

Both the implicit and explicit values that donors, implementers, and M&E commissioners bring to global development work influence how that work is done. Evaluators need to be aware of, and promote, the values that drive M&E work in order to build evaluation capacity that can ascertain which project results are sustained, by whom, for how long, and why.

 

Sustainability, i.e., the long-term durability of project results, does not happen by itself; it needs to be fostered during the project, but more needs to be known about the conditions required for sustainability to take root after project closure and exit. Valuing Voices’ founder, consultants, and clients believe that evaluating sustainability cannot be limited to desk studies; eliciting the views of country-based former project participants and partners is key. Based on lessons from 10 such ex-post sustainability and exit evaluations done by Valuing Voices and over 90 other studies that include participant responses from a variety of donors and implementers,[2] plus seminal studies of exit strategies by Lewis (2016) and CDA (2020), we found that nine elements need to be monitored and evaluated from project design to the ex-post years after closure. Development practitioners, including evaluators, need to build their knowledge of what has been sustained in ex-post evaluations and let this inform how they advocate to include these nine elements in project design, implementation, monitoring, and evaluation. Equal participation by national partners and participants will need to be built in throughout to foster long-term results and to allow new pathways to emerge.

 

The nine elements are presented below in the form of checklists, which function as evaluator capacity tools: by identifying which elements are needed to foster sustainability in programming, evaluators can inform clients and employers of what needs to be designed, implemented, monitored, and evaluated. The checklists cover two kinds of sustainability drivers: (a) commitments to sustainability, which include designing beyond the project lifetime through a theory of sustainability, thinking about how to foster sustainability through the process of exit/handover, and considering risks and resilience; and (b) building conditions within the project itself to foster lasting sustainability. The latter involves looking beyond resources as the only driver of durability to see what makes local ownership of results robust, and raises several questions: How should equitable partnerships be fostered for long-term results? What capacities exist to keep disseminating behavior change? How adaptive are the timeframe and exit in fostering sustainability? How accountable are projects in their communications to partners as they exit?

 

One of the greatest shocks threatening the sustainability of most global development aid investments is climate change, which is why the natural world and access to viable nature are part of both risks and resilience to shocks. It is discussed separately, given the urgency with which we need to monitor and evaluate its progression and effect on sustainability. Some evaluator competency-building resources that help to evaluate the natural world have been added (e.g., Brouselle, 2022; Rowe, 2019). This is because nature is assumed and often overlooked in much global development programming design and evaluation, as seen in the review of several hundred ex-posts, exit reports, webinars, and evaluations, including blog posts about sustainable development by Cekan (2020a; 2020b), and underscored by Rowe (2019). The natural world and its environmental sustainability are a missing link, while the oft-stated but rarely evaluated “resilience” is often unproven (except in new ex-post research by the Adaptation Fund, 2022). A viable natural world that continues to support lives and livelihoods underpins sustainability across much of global foreign aid and urgently needs inclusion in all evaluations.

 

Defining Evaluation, Its Values, and Sustainability

 

Michael Scriven defined evaluation this way: “Evaluation determines the merit, worth, or value of things” (Scriven, 1991, as cited in Coffman, 2004, p. 1). “Valuation” (measurement, estimation of worth) is embedded in our work as evaluators. Increasingly, the field of evaluation is discussing the values that underpin the work of evaluators. Thomas Archibald notes in a book review, “Schwandt, House, and Scriven—call into question the dubious ‘value-free doctrine’ of the social sciences… [and] emphasize[s] the obvious yet frequently ignored primacy of values and valuing in evaluation” (2016, p. 448). Evaluation, from the perspective of Michael Scriven, is filled with values:

 

If evaluators cling to a values-free philosophy, then the inevitable and necessary application of values in evaluation research can only be done indirectly, by incorporating the values of other persons who might be connected with the programs, such as program administrators, program users, or other stakeholders. (Encyclopedia.com, 2018, para. 26)

 

This opens a door for participatory input from those most closely connected to projects: the partners and the participants.

 

Michael Quinn Patton highlights tensions between evaluations that seek independent definitive judgments versus those that honor diverse perspectives. He values work done via participatory co-creation by activist, interventionist, change-committed evaluators, where the evaluation itself engages in change. This paper explicitly encourages those involved in monitoring and evaluation to work through participatory co-creation, because sustainability can only be maintained if it is locally driven. Evaluation also needs change-committed evaluators who embrace long-term sustainability.

 

The Development Assistance Committee of the Organisation for Economic Co-operation and Development (OECD/DAC) defines sustainability as the basis for ex-post project evaluation. Their definition includes that same reference to long-term sustainability, and its evaluation is part of the change needed in our field, namely a focus on longitudinal results: “the continuation of benefits from a development intervention after major development assistance has been completed…. [and] [t]he probability of continued long-term benefits. The resilience to risk of the net benefit flows over time” (2002, p. 37). In OECD/DAC’s updated and more detailed definition, evaluators are directed to consider sustainability

 

at each point of the results chain and the project cycle of an intervention. Evaluators should also reflect on sustainability in relation to resilience and adaptation in dynamic and complex environments. This includes the sustainability of inputs (financial or otherwise) after the end of the intervention and the sustainability of impacts in the broader context of the intervention. For example, an evaluation could assess whether an intervention considered partner capacities and built ownership at the beginning of the implementation period as well as whether there was willingness and capacity to sustain financing at the end of the intervention. In general, evaluators can examine the conditions for sustainability that were or were not created in the design of the intervention and by the intervention activities and whether there was adaptation where required…. If the evaluation is taking place ex post, the evaluator can also examine whether the planned exit strategy was properly implemented to ensure the continuation of positive effects as intended. (2019 Sustainability, para. 3, 6).

 

These key elements, especially the “conditions for sustainability,” inform the checklists in this paper.

 

The OECD also differentiates between durability and ecological sustainability, with the latter relegated to a secondary concern:

 

Confusion can arise between sustainability in the sense of the continuation of results, and environmental sustainability or the use of resources for future generations…. environmental sustainability is a concern (and may be examined under several criteria, including relevance, coherence, impact, and sustainability). (2019, Sustainability, para. 2)

 

Yet sustainability rests on our valuing the environment and planning for risks and resilience (see Figure 8). As evaluators, we need to push donors and implementers to examine the resilience of the natural systems on which supposedly unrelated sectors rely. For instance, the environment affects sectors such as income generation (e.g., natural products being processed by people generating income) and education (e.g., the gardens that subsidize teacher salaries, or the rain-fed farming that enables parents to afford school fees). In “Planting Seeds for Change,” evaluator Brouselle (2022) reminds us of the primacy of climate values in Evaluation’s COP26 compendium:

 

We must challenge the ways that evaluations are commissioned; how policies and programmes are framed¾to take risks, going beyond existing evaluation mandates, to improve equity, health and prosperity; reduce pollution; take care of our air, waters and lands; and protect biodiversity… we should use our facilitating skills to foster democracy and engagement. Evaluators can contribute to creating spaces for dialogue and debate with commissioners, participants, and stakeholders, on the socio-ecological impacts of projects, programmes and policies. (para. 4)

 

Linking Competencies and Capacities to Sustainability via Valuing Voices Sustained Exit Checklists

Six types of evaluator competencies are relevant when planning for sustainability during design and implementation, or when conducting an ex-post sustainability evaluation.

 

Evaluation as a field needs to embrace a variety of such competencies as we seek to address a range of complex problems. The first three competencies come from the United Nations Educational, Scientific and Cultural Organization (UNESCO), from a 2017 report called “Education for Sustainable Development Goals: Learning Objectives,” which informs the macro view for sustainability and locally led development.

 

Systems Thinking Competency

UNESCO (2017) defines this competency as “the abilities to recognize and understand relationships; to analyse complex systems; to think of how systems are embedded within different domains and different scales; and to deal with uncertainty” (p. 10). This is key, as interventions interact with complicated societies, often with wider aims than any one project seeks to achieve. Uncertainty affects projects in implementation, which is why adaptive management is a checklist item (see Figure 7). Further, because ex-posts are about contribution rather than direct attribution, given the complexity of communities, it is vital to look at a range of outside influences post–project closure that could explain the results (not) seen.

 

Collaboration Competency

 

This competency is pivotal in designing, implementing, monitoring, and evaluating sustainability, which rests on both “the abilities to learn from others; to understand and respect the needs, perspectives and actions of others… and to facilitate collaborative and participatory problem solving” (UNESCO, 2017, p. 10). Listening to those who will be tasked with sustaining results or innovating emerging outcomes involves close collaboration, as does using participatory methods both to design with local actors and to troubleshoot and problem-solve alongside them.

 

Anticipatory Competency

 

Anticipatory competency is “the ability to understand and evaluate multiple futures—possible, probable and desirable—and to create one’s own visions for the future, to apply the precautionary principle, to assess the consequences of actions, and to deal with risks and changes” (UNESCO, 2017, p. 10). This competency is central to sustainability as a field of study. Projects often assume sustainability will be the long-term result of development efforts. But, as Rogers and Coates (2015) note,

 

Hope is not a strategy. Sustainability plans that depend on the expectation, or hope, that individuals and organizations will continue to function without the key factors previously identified are not likely to achieve this goal. Such plans should take account of what is feasible within the economic, political, and social/cultural context of the areas in which they work. (p. 44)

 

This also relates to two other competencies, systems thinking (discussed above) and situational practice (discussed below).

 

The Canadian Evaluation Society (CES; 2018) provides the next three competency domains relevant to sustainability, which concern how M&E is done.

 

Reflective Practice Competencies

CES’s Reflective Practice domain includes competencies that “focus on the evaluator’s knowledge of evaluation theory and practice; application of evaluation standards, guidelines, and ethics; and awareness of self, including reflection on one’s practice and the need for continuous learning and professional growth” (2018, p. 5). This competency applies to the content of the sustainability methods presented below, as well as the knowledge evaluators will gain from evaluating prospects for sustainability and emerging outcomes (Figure 1) in projects. Additionally, this competency domain includes both considering “the well-being of human and natural systems in evaluation practice” and being “committed to transparency” (p. 6), which is the aim of using the checklists as a whole sustainability learning process. It is important in such reflection to clarify one’s values.

 

Technical Practice Competencies

These competencies focus on the “strategic, methodological, and interpretive decisions required to conduct an evaluation” (CES, 2018, p. 5), which directly applies to the five sustained exit commitments and conditions (see Figure 3). One competency, “assesses program evaluability,” is germane to ex-post evaluation and prospects for long-term sustainability. Cekan and Legro (2021) have applied the elements in the nine checklists which comprise the Embedding Sustainability in the Project Cycle framework to a World Bank sustainability study, and Cekan has used it in ex-post evaluations, such as a recent one for youth employment (USAID Mali, 2022). It has informed the training materials created for the Adaptation Fund (2023) on how to evaluate sustainability and resilience ex-post.

 

Situational Practice Competencies

As so few projects are “cookie-cutter” versions of each other, it is always vital to contextualize each project and its prospects for sustainability in its unique setting, applying CES’s third competency domain, Situational Practice: “Focus on understanding, analyzing, and attending to the many circumstances that make every evaluation unique, including culture, stakeholders, and context” (CES, 2018, p. 6). This means identifying how, specifically, the project has moved around the project cycle (see Figure 2); monitoring “organizational changes and changes in the program environment during the course of the evaluation” (p. 7); tracing changes that lead to likely sustainability post-project; and building evaluation capacity by “engag[ing] in reciprocal processes in which evaluation knowledge and expertise are shared between the evaluator and stakeholders” (p. 7) throughout both the analysis and the sharing of learning results.

 

Competencies that M&E professionals need can be used when monitoring and evaluating prospects for sustainability during project implementation as well as during ex-post evaluations. Sustainability prospects increase when they are designed and planned for, as Zivetz et al. (2017) found in researching ex-posts. There are clear advantages of planning for sustainability measurement from the outset of the project as well as measuring sustainability through the entire project cycle. Donors, implementers, and experts in monitoring and evaluation, as well as national partners, need to be trained in these competencies.

Evaluating Sustainability in Practice

Aid experts, including evaluators, embed values in their work in myriad ways, starting with how projects are funded and designed and by whom; this is why much M&E emphasis falls on final rather than ex-post evaluations and on learning from the former. Over $3.5 trillion has been spent on public foreign aid projects in the past 70 years (OECD, 2019). Yet the aid industry has evaluated fewer than 1% of these projects for sustainability (Cekan, 2015). Valuing Voices’ research on 39 organizations’ ex-post evaluations of sustainability shows that most project results decrease (10–90%) as early as 2 years ex-post (Valuing Voices, 2012).

 

Except for the Japan International Cooperation Agency (JICA), which has done over 2,500 ex-post evaluations on their grants, loans, and technical assistance, learning from what lasts is rare among international aid donors and implementers. An Asian Development Bank study (2010) of post-completion sustainability found that “some early evidence suggests that as many as 40% of all new activities are not sustained beyond the first few years after disbursement of external funding” (p. 1). The World Bank and Inter-American Development Bank, both multilateral banks, show less stellar investments in ex-post learning (Lopez, 2015; Cekan, 2022). Ex-post evaluations are rare, as is illustrated by a Sustainable Governance Indicators overview of EU member state policy evaluations, with most countries using them rarely or not at all (Sustainable Governance Indicators, n.d.).

 

Often in ex-post evaluations of sustained impact, we see some results fade as early as 2 years after closure. It is key to prioritize learning about what was sustained by asking our project participants and local/national partners directly, during implementation, about sustainability prospects. Field inquiry gives us time to test assumptions about the drivers of and barriers to sustainability under which the project is being implemented, and to test whether the optimistic trajectories widely assumed in the global development industry will hold post-closure. As Sridharan and Nakaima (2010) write:

 

There is no reason for the trajectory of performance outcomes to be linear or monotonic over time – this has important implications for an evaluation system… [and] should programs that do not have a ‘successful’ trajectory of ‘performance measures’ be terminated? (p. 144)

 

To make sustainability more likely, designing, implementing, monitoring, and evaluating for sustainability is key. While widespread ex-post learning would be most effective, we can also manifest our pro-sustainability values by extracting lessons from the ex-post evaluations and exit studies that have already been done. This is the aim of the rest of this article.

 

Most ex-posts have found mixed results, with some activities sustained and others not. Often, what was relevant and locally owned was sustained, whereas activities that relied on donor incentives, such as food aid, failed to continue (Catholic Relief Services [CRS], 2016). A 2020 ex-post study by Jones and Jordan of USAID Global Waters projects found that while 25 million people gained access to water and sanitation,

 

despite tremendous achievements within the life of our programs, they have largely not endured… Rural water systems that, at activity close, delivered safe water to households have fallen into disrepair. Basic latrine ownership and use have dwindled. Communities certified as open-defecation free are backsliding, and gains in handwashing have not been sustained. [Nonetheless,] where USAID invested in providing technical assistance to committed government partners and utilities, gains in service provision and local capacity were sustained, with local actors taking up and expanding upon best practices introduced during activity implementation. (para. 3, 4)

 

This again supports designing and implementing for sustainability during the project, which is the aim of this paper. Yet such reviews remain rare among donors.

 

The dearth of ex-post evaluations suggests that most global development evaluations currently being conducted are not value neutral. Commissioners seem to value short-term results rather than demonstrating and learning from sustained impacts. Further, donors and implementers design and fund aid projects and their evaluations. Country nationals need to be engaged throughout the project cycle (Figure 2), for they will be left to sustain results. As Scriven stated in discussions with Donaldson, Patton, and Fetterman (2010),

 

I want to hear, not just about intended use or users of the evaluation. I want to find out about impact on intended and actual impactees—the targeted and accidental recipients of the program, not just the people that get the evaluation. So I consider my task as an evaluator to find out who it is that this program is aimed at reaching and helping. (p. 23)

 

Emerging Outcomes

 

Typical ex-post evaluations focus on what lasted of what donors funded. Few evaluations return ex-post to also ask front-line users, project participants, and partners what lasted of the prior project and what emerged from their local efforts to sustain results with fewer or different resources, partnerships, and so on. This glaring omission speaks to a lack of valuing sustained results, much less learning from local capacities to sustain results differently. Thus, one innovation by Valuing Voices in evaluating sustainability, whether ex-post or while monitoring sustainability, is the search for emerging outcomes: what emerges from local efforts to sustain results, rather than only whether expected donor-designed pathways still exist.

 

The example in Figure 1 comes from 2023 Adaptation Fund training materials on ex-post evaluation; it draws on a three-year World Food Program ex-post evaluation of sustainability and resilience in Ecuador. The expected change was that improving the water supply for crops would lead to improved food security. While that was happening to some degree, other outcome pathways were unfolding as well. In some areas, more water was used to improve cultivation methods, which led to an emerging outcome: children returning home to their rural villages to help their parents, which continued to sustain food security and decreased family vulnerability. Elsewhere, maladaptive pathways also emerged: at one site a landslide eliminated the stable water reservoir source, leading farmers to revert to drawing water from a river via pump systems, which likely decreased the water available to the community.

Figure 1. Expected, Emerging, and Unexpected Outcomes Ex-Post

Note. From Training Material for Ex Post Pilots, by Adaptation Fund, 2023 (https://www.adaptation-fund.org/document/training-material-for-ex-post-pilots/).

 

Unless we look at both what was expected to be sustained and what local communities had to innovate to sustain results, the picture is incomplete. Both can be traced during implementation and at ex-post evaluation.

 

Sustainability Around the Project Cycle

We need to build sustainability in from the outset, from funding and design to implementation, while looking out for alternative paths that locals create (see the orange slices in Figure 2). Once local stakeholders are involved throughout the project cycle (green slices in Figure 2), results are more likely to be sustained, for the programming is done with the country nationals who will sustain results after donors leave. Assumptions need to be checked, adaptation to foster durability needs to be monitored and evaluated, and exit needs to include consultations on ownership, resources, partnerships, adaptation, resilience, and communications, much of which can be traced in a theory of sustainability.

 

Figure 2. Embedding Sustainability in the Project Cycle


Note. From “What Happens After the Project Ends?”, by J. Cekan, 2016 (https://valuingvoices.com/what-happens-after-the-project-ends-country-national-ownership-lessons-from-post-project-sustained-impact-evaluations-part-2/).

 

Ex-post evaluation of projects is an important missing link before exiting with participants and partners leading sustainability; this paper therefore focuses on lessons learned from the 90+ ex-posts reviewed. Lessons come from projects such as those below. Roughly 80% of the CRS Niger PROSAN food security project was sustained 3 years ex-post. It was implemented for sustainability by taking the final 18 months, rather than 3 to 6 months, to exit. National partners were co-implementers before project closure. The UK charity EveryChild similarly worked with INTRAC (Lewis, 2016; Morris et al., 2021) to evaluate sustainability during exit. They did so in four countries 5 years ex-post, learning similar lessons about phasing down and handing over before exiting sustainably.

Were national stakeholders to partner equally, these local “targeted recipients,” as Scriven calls them, could require projects not to close until further funding was secured, as EveryChild UK did. Donors, implementers, and evaluators need to listen to what locals want and can sustain. All of us who value sustainable development need to design M&E to incorporate sustainability. Exemplary studies include an ex-post tracing national primary teacher training (USAID Uganda, 2017) and a final evaluation projecting sustainability prospects pre-exit among migrants and NGOs in Bangladesh (Hasan, 2021).

Thus, the checklists below help foster sustainability through M&E that questions the assumptions that donors, implementers, partners, and participants hold about the sustainability of results. It means building capacities to monitor and evaluate conditions for sustained impact, embedded in a traceable, relevant way as projects are implemented. It means documenting and learning from data throughout implementation, planning a sustained exit beyond the final evaluation, and retaining data to be evaluated ex-post. It involves building understanding of, and capacities for, ex-post evaluation and for project planning (funding, design, implementation, and M&E) that fosters it. This includes national stakeholders and evaluators, who have a greater stake in their countries and can help foreign stakeholders focus on learning what excelled or failed and how to use it for future projects in-country.

 

Validation

 

Several sources of expertise inform and validate the checklists (see Figures 4 to 8). In their 2015 analysis of exit strategies and sustainability across four USAID Food for Peace countries, Rogers and Coates highlighted monitoring and evaluating the presence of four “drivers” of sustainability. These drivers create conditions that are both used to evaluate sustainability ex-post and likely indicators of how probable sustainability is (if such drivers were put in place during implementation, pre-exit). Rogers and Coates’ drivers are (a) sustained motivation/ownership by national stakeholders to sustain a project’s activities (if activities are yielding relevant results, they are far more likely to be sustained); (b) a sustained flow of resources from national or international sources; (c) sustained technical and managerial capacities passed on to new participants; and (d) linkages/partnerships with governmental, private, or other organizations for an array of support. Negi and Sohn (2022) confirmed the presence across Global Environment Facility (GEF) projects of the drivers created by Rogers and Coates and applied by Cekan and Legro (2022). Negi and Sohn’s review of 62 projects also confirmed that project design, a key sustainability driver, feeds into OECD’s (2019) Relevance criterion, as well as into Figure 4. Similarly, USAID Uganda (2017) found the same four drivers operating in sustainability.

These elements of sustainability draw on ex-post research by Cekan and on key studies about participatory implementation and exit. One is Anderson, Brown, and Jean’s (2012) report Time to Listen. They interviewed 6,000 recipients and implementers of international aid across 20 countries, from inside and outside the aid system. Their study focuses on unearthing stories “on the ways that people on the receiving side of aid suggest it can become more effective and accountable” (p. i). A second source is CDA’s (2020) case study research, led by Jean and a consortium of non-governmental organizations (NGOs) and focused on improving exit. This work, Stopping as Success, highlighted that a gradual exit process contributes to sustainability. It informs one of the commitments mentioned in Figure 3, namely phasing down, over time during implementation, to national partners before exiting. These studies underscore that global development should be informed by local conditions and country nationals. Local participation is important while checking on sustainability prospects, as is getting local feedback on how well exit is going pre-closure. The checklists below also draw on seminal research by Lewis for INTRAC (2016), from extensive work on exit among NGOs.

Sustained Exit Commitments and Conditions Checklists

 

Figure 3. Valuing Voices Sustained Exit Commitments and Conditions Checklists


Note. From “Exit for Sustainability Checklists,” by Valuing Voices, 2020 (https://valuingvoices.com/wp-content/uploads/2021/03/Exit-For-Sustainability-Checklists-Dec2020-2.pdf).

 

Now, let’s return to how the evaluator competencies articulated by UNESCO and CES fit into the Figure 3 commitments and conditions. Systems thinking competency leads us to consider what a theory of sustainability could consist of and how to plan for it, given the complex ecosystems in which any project is embedded. Collaborative and anticipatory competencies come into play when handing over projects during implementation, pre-exit. This is especially relevant for partnerships seeking to face unknown future risks to sustainability and to foster resilience to shocks pre-closure. Taking these commitments to heart predisposes projects to continuation. Another competency, reflective practice, is needed to discern which conditions of sustainability are driving change. Further, technical and situational practice are used in the field, examining whether and to what degree sustainability is driven by six conditions. Four of these conditions (ownership, resources, capacities, and partnerships) come from the Rogers and Coates study; two additional conditions have been found to be important in the exit literature: how well timeframes pre-exit can be shifted to enable sustainability, and how clear and accountable communication is between those closing out and those being left behind at closure. Consider using the nine checklists in Figures 4 through 8 along a high–medium–low scale and revisiting them periodically to gauge change.

Revising a theory of change into a theory of sustainability (Figure 4) is helpful to chart stakeholders, assumptions, trajectories, key questions, and whom to ask.

 

Figure 4. Sustainability Ex-Post Project: Theory of Sustainability


Ask all stakeholders involved, long before exit, how much they feel they “own” the project’s continuation and the resources needed. There is a wide range of resources to be explored, and questions to ask about how much the interventions are generating locally valued results (see Figure 5).

 

Figure 5. Designing for Exit: Ownership/Motivation and Resources


The questions in Figure 6 can be used during baseline and midterm evaluations. Some questions can also be selected for ongoing monitoring from the lists on ownership and resources (above) and on capacity strengthening and partnerships. With such data, evaluating sustainability during ex-post evaluations is much easier.

 

Figure 6. Checking Assumptions: Capacity Strengthening and Partnerships


Two of the elements that tell us most about the extent to which project implementation fosters sustainability are how much planning has gone into project exit and handover and how well timeframes are adapted to readiness for exit (see Figure 7).

 

Figure 7. Monitoring and Adaptation: Exit/Handover, Timeframe, and Adaptation of Implementation


Finally, long-term sustained and responsible exit that fosters local ownership is based on planning for the immediate term: communications about who leaves, who knows why the project is closing, how respectfully this is done, and with how much involvement by local partners. As shown in the two checklists in Figure 8, it is vital to examine how well consideration of present and future risks, and resilience to shocks, has been embedded in programming.

 

Figure 8. Exit Consultations and Close: Risks/Resilience and Accountable Communications


Conclusions

 

In addition to infusing sustainability into the project cycle during implementation, it is important to live one’s values and use evaluator capacities as guiding lights for one’s work. What also matters is monitoring and evaluating sustained ownership and the other hallmarks of sustainability in the checklists, both during programming and at ex-post evaluation. Further, it is important to look for the capacities that remain after projects close (emerging outcomes) and to learn from ex-post evaluations to inform current programming, facilitating sustainability while there are still sufficient resources, partnerships, capacities, and other conditions. Also important is fostering what national and local stakeholders want to sustain, through their commitments and conditions.

Six competencies equipping monitoring and evaluation experts to do this well have been outlined above: systems thinking, collaboration, and anticipatory competencies, and reflective, technical, and situational practice competencies. Such “evaluative thinking” lenses can and should be used, as Archibald (2021) argues in calling for “ethical accountability” in locally led development. Values-driven sustainability can be a powerful force for improving public accountability and good governance. Equipped with such skills, evaluators simultaneously bolster evaluation systems and capacities among national evaluators and program implementers alike.

For equitable, values-driven accountability for sustainability to happen, power needs to shift to people at national and local levels to determine what resources, partnerships, and capacities are needed and what is a priority for them to take ownership of. We can begin as soon as possible by building into the project cycle the conditions most likely to support sustainability and the commitments that foster sustainable exit. We have no time to lose; embracing such values-driven sustainability would be of great benefit.

 

 

References

 

Adaptation Fund. (2022). Training material for ex post evaluations. https://www.adaptation-fund.org/about/evaluation/publications/evaluations-and-studies/ex-post-evaluations/

Adaptation Fund. (2023). Training materials for ex post pilots. https://www.adaptation-fund.org/document/training-material-for-ex-post-pilots/

Anderson, M. B., Brown, D., & Jean, I. (2012, December 1). Time to listen: Hearing people on the receiving end of international aid. CDA Collaborative. https://www.cdacollaborative.org/publication/time-to-listen-hearing-people-on-the-receiving-end-of-international-aid/

Archibald, T. (2016). Evaluation foundations revisited: Cultivating a life of the mind for practice [Review of the book Evaluation foundations revisited: Cultivating a life of the mind for practice, by T. A. Schwandt]. American Journal of Evaluation, 37(3), 448–452. https://doi.org/10.1177/1098214016648794

Archibald, T. (2021, February 18). Critical and evaluative thinking skills for transformative evaluation. Eval4Action. https://www.eval4action.org/post/critical-and-evaluative-thinking-skills-for-transformative-evaluation

Asian Development Bank. (2010, October). Special evaluation study on post-completion sustainability of Asian Development Bank-assisted projects. Organisation for Economic Co-operation and Development. https://www.oecd.org/derec/adb/47186868.pdf

Brouselle, A. (2022). Planting seeds for change. Evaluation, 28(1), 7–35. https://journals.sagepub.com/doi/full/10.1177/13563890221074173

Canadian Evaluation Society. (2018, November). Competencies for Canadian evaluation practice. Evaluation Canada. https://evaluationcanada.ca/files/pdf/2_competencies_cdn_evaluation_practice_2018.pdf

Catholic Relief Services. (2016, October 7). Participation by all: The keys to sustainability of a CRS food security project in Niger. https://www.crs.org/our-work-overseas/research-publications/participation-all

CDA. (2020). Stopping as success: Research findings case studies. CDA, Peace Direct, Search for Common Ground. https://www.stoppingassuccess.org/resources/

Cekan, J. (2015). When funders move on. Stanford Social Innovation Review. https://ssir.org/articles/entry/when_funders_move_on

Cekan, J. (2016, February 19). What happens after the project ends? Country-national ownership lessons from post-project sustained impacts evaluations (Part 2). Valuing Voices. https://valuingvoices.com/what-happens-after-the-project-ends-country-national-ownership-lessons-from-post-project-sustained-impact-evaluations-part-2/

Cekan, J. (2020a, April 20). Sustaining sustainable development. Valuing Voices. https://valuingvoices.com/sustaining-sustainable-development/

Cekan, J. (2020b, October 28). Sustained exit? Prove it or improve it! [Webinar]. Valuing Voices. https://valuingvoices.com/interactive-webinar-sustained-exit-prove-it-or-improve-it-nov-6-2020/

Cekan, J. (2022, November 26). Inter-American Development Bank (IDB) – Where have your ex-post evaluations, and learning from them, gone? Valuing Voices. https://valuingvoices.com/inter-american-development-bank-idb-where-have-your-ex-post-evaluations-and-learning-from-them-gone/

Cekan, J., & Legro, S. (2022). Can we assume sustained impact? Verifying the sustainability of climate change mitigation results. In J. I. Uitto & G. Batra (Eds.), Transformational change for people and the planet: Evaluating environment and development. Springer. https://link.springer.com/book/10.1007/978-3-030-78853-7

Coffman, J. (2004). Michael Scriven on the differences between evaluation and social science research. Evaluation Exchange, 9(4). https://archive.globalfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research

Donaldson, S., Patton, M., Fetterman D., & Scriven, M. (2010). The 2009 Claremont debates: The promise and pitfalls of utilization-focused and empowerment evaluation. Journal of MultiDisciplinary Evaluation, 6(13). https://www.researchgate.net/publication/41391464_The_2009_Claremont_Debates_The_Promise_and_Pitfalls_of_Utilization-Focused_and_Empowerment_Evaluation

Encyclopedia.com. (2018, May 17). Evaluation research: Brief history. https://www.encyclopedia.com/social-sciences-and-law/sociology-and-social-reform/sociology-general-terms-and-concepts/evaluation-research

Hasan, A. A. (2021, January 17). Ex-post eval week: Are we serious about project sustainability and exit? American Evaluation Association AEA365. https://aea365.org/blog/ex-post-eval-week-are-we-serious-about-project-sustainability-and-exit-by-abu-ala-hasan/

Japan International Cooperation Agency. (n.d.). Ex-post evaluation (technical cooperation). https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/ex_post/index.html

Jones, A., & Jordan, E. (2020, October 19). Unpacking the drivers of WASH sustainability. USAID Global Waters. https://www.globalwaters.org/resources/blogs/unpacking-drivers-wash-sustainability

Lewis, S. (2016, January). Developing a timeline for exit strategies: Experiences from an Action Learning Set with the British Red Cross, EveryChild, Oxfam GB, Sightsavers and WWF-UK. INTRAC. https://www.intrac.org/wpcms/wp-content/uploads/2016/09/INTRAC-Praxis-Paper-31_Developing-a-timeline-for-exit-strategies.-Sarah-Lewis.pdf

Lopez, K. (2015, April 7). IEG blog series part II: Theory vs. practice at the World Bank. Valuing Voices. https://valuingvoices.com/ieg-blog-series-part-ii-theory-vs-practice-at-the-world-bank/

Morris, L., George, B., Gondwe, C., James, R., Mauney, R., & Tamang, D. D. (2021, June). Is there lasting change, five years after EveryChild’s exit? Lessons in designing programmes for lasting impact. INTRAC. https://www.intrac.org/wpcms/wp-content/uploads/2021/07/Praxis-Paper-13_EveryChild-exit.pdf

Negi, N. K., & Sohn, M. W. (2022). Sustainability after project completion: Evidence from the GEF. In J. I. Uitto & G. Batra (Eds.), Transformational change for people and the planet: Evaluating environment and development. Springer. https://doi.org/10.1007/978-3-030-78853-7_4

OECD. (2019). Applying evaluation criteria thoughtfully. [Chapter: Understanding the six criteria: Definitions, elements for analysis and key challenges]. OECD Publishing. https://www.oecd-ilibrary.org/sites/543e84ed-en/1/3/4/index.html?itemId=/content/publication/543e84ed-en&_csp_=535d2f2a848b7727d35502d7f36e4885&itemIGO=oecd&itemContentType=book#section-d1e4964

OECD/DAC. (2002). Evaluation and Aid Effectiveness No. 6 – Glossary of Key Terms in Evaluation and Results Based Management (in English, French and Spanish). https://read.oecd-ilibrary.org/development/evaluation-and-aid-effectiveness-no-6-glossary-of-key-terms-in-evaluation-and-results-based-management-in-english-french-and-spanish_9789264034921-en-fr#page37

Patton, M. Q. (2022, June 13). Why so many evaluation approaches: The short story version [Video]. YouTube. https://www.youtube.com/watch?v=6xY9jMUUorM

Rogers, B. L., & Coates, J. (2015, December). Sustaining development: A synthesis of results from a four-country study of sustainability and exit strategies among development food assistance projects. Food and Nutrition Technical Assistance III Project (FANTA III) for USAID. https://pdf.usaid.gov/pdf_docs/PA00M1SX.pdf

Rowe, A. (2019). Sustainability-ready evaluation: A call to action. New Directions for Evaluation, 162. https://onlinelibrary.wiley.com/doi/abs/10.1002/ev.20365

Sustainable Governance Indicators. (n.d.). Evidence-based instruments. https://www.sgi-network.org/2020/Good_Governance/Executive_Capacity/Evidence-based_Instruments/Quality_of_Ex_Post_Evaluation

Sridharan, S., & Nakaima, A. (2010). Ten steps to making evaluation matter. Evaluation and Program Planning, 34(2), 135–146. https://doi.org/10.1016/j.evalprogplan.2010.09.003

UNESCO. (2017). Education for sustainable development goals: Learning objectives. https://unesdoc.unesco.org/ark:/48223/pf0000247444

USAID Mali. (2022, December). Ex-post evaluation of the USAID/Mali Out Of School Youth Project (PAJE-NIETA): Final evaluation report. https://pdf.usaid.gov/pdf_docs/PA00ZTBJ.pdf

USAID Uganda. (2017, October 11). Uganda case study summary report: Evaluation of sustained outcomes. https://pdf.usaid.gov/pdf_docs/PBAAJ314.pdf

Valuing Voices. (2020, December). Exit for sustainability checklists. https://valuingvoices.com/wp-content/uploads/2021/03/Exit-For-Sustainability-Checklists-Dec2020-2.pdf

Valuing Voices. (2012). Catalysts for ex-post learning. https://valuingvoices.com/catalysts-2/

Zivetz, L., Cekan J., & Robbins, K. (2017, May). Building the evidence base for post-project evaluation: Case study review and evaluability checklists. Valuing Voices. https://valuingvoices.com/wp-content/uploads/2013/11/The-case-for-post-project-evaluation-Valuing-Voices-Final-2017.pdf

[1] https://valuingvoices.com/

[2] https://valuingvoices.com/catalysts-2/

This chapter in published form can be accessed at: https://www.researchgate.net/publication/376497201_Fostering_Values-Driven_Sustainability_Through_an_Ex-Post_Capacities_Lens

Inter-American Development Bank (IDB) – where have your ex-post evaluations, and learning from them, gone?


A LinkedIn colleague, Gillian Marcelle, PhD, recently asked me about ex-posts by the Inter-American Development Bank (IDB), as more Caribbean accelerators/incubators were being planned without learning from previous, identical tech investments. Here is what I found, and if anyone knows more, please contact me, as it is not reassuring. Some of these were internal ‘self-evaluations’, some were desk reviews, and only a few involved going to the field to ask aid recipients what lasted, which is typical of multilaterals (the ADB and the IBRD do the same). Given Valuing Voices’ focus on participants’ voices in results, there was an attempt to focus on those, but the report did not make clear which were which, so highlights are presented below.

In October 2003, IDB created an Ex Post Policy (EPP), which “mandated two new tasks to OVE: the review and validation of Project Completion Reports and the implementation of ex post project evaluations.” These were under the Board’s request for “a commitment to a ‘managing for results’ business model.” As 2004 was “the first year of the implementation of the EPP, all 16 evaluations can be considered part of the pilot and the findings presented in this report refer to the entire set of ex post evaluations.” Further, the general evaluative questions proposed by the EPP are, first, “… the extent to which the development objectives of IDB-financed projects have been attained” and, second, “… the efficiency with which those objectives have been attained.”

They spent over $300,000 unsuccessfully evaluating six of the projects, in part due to data quality: while “six had an evaluation strategy identified in the approval stage, most had abandoned the strategy during execution prior to project closure and, with one exception which produced data that could be used to calculate a treatment effect, none had produced quality evaluative information…. No [Project Completion Report] PCR provided adequate information regarding the evolution of development outcomes expected from the project or an update with respect to the evaluation identified at the time of approval.” For the other six, they found that expected results did not match sustained results; some were better than expected, while more were worse. “A critical finding across all projects is the lack of correspondence between the reflexive estimates and the treatment effect estimates. In practically all cases, the estimates were different.” How sustainably “Improved” are “Lives,” as IDB’s logo touts?


In 2004, IDB chose 16 projects to evaluate and dropped four for a variety of reasons. The remaining dozen projects evaluated ex-post concerned land development, neighborhood improvement, and cash programming. There were data quality and comparability issues from the outset. On land [tenure] ‘regularization’: “Six of the projects mention ex post evaluations in loan proposals, but none have been completed to date. OVE was successful in retrofitting a subset of outcomes expected for three projects: an attrition rate of 50%.” The neighborhood improvements had positive and negative results, with ‘retrofitting’ of data being needed. For the four evaluated, the overall conclusion was mixed: the projects led to “greater coverage of certain public services,” and in two cases, “this impact was more pronounced for the poorest segments included in the treated population.” Nonetheless, much more went unachieved: “Beyond this, very little else can be said. The impact on the objectives related to human capital formation and income were not demonstrated. In the case of health interventions, perhaps the intervention type most directly linked to sanitation services; there has been no demonstrated link between the interventions and outcomes, even for the poorest segments of the beneficiary population. There was also no consistent evidence showing an increase in variables related to housing values.”

Regarding cash programming, there were individual evaluations that showed promise, but only after statistical analysis against a control group, something sorely lacking in most foreign aid evaluations. An IDB project in Panama “shows that in some cases the reflexive evaluation, in fact, understated the true program treatment effect. The development outcome of this project was the reduction in poverty. A reflexive evaluation (the gross effects) of the incidence of poverty suggested that not only was the project unsuccessful but that it actually contributed to worsening poverty; the opposite of its intent. However, a treatment effect evaluation (the net effect) that compared “similar municipalities” shows that municipalities benefiting from FIS funds had a significant decline in poverty relative to comparable municipalities that did not receive FIS financing; the project had clear positive development outcomes”.

The IDB staff consulted in 2005 about the results “questioned whether the analysis of closed projects that were not required to include the necessary outcomes and data at the time of approval was a cost-effective use of Bank resources,” which may be a reason why the Bank decided against doing more, in spite of many ex-post findings contradicting expected results. Astonishingly, since then, only one summary of a Jordanian ex-post in 2007 was found, and it is questionable whether it is an actual ex-post closure evaluation. At a minimum, one would expect the Bank to ensure that data quality improved and that planned strategies were actually carried out.

Finally, presumably a bank cares about return on investment. As a former investment banker, I would be concerned about the lack of learning, given the low cost of such learning versus the discrepancies found between expected and actual sustainability. The six evaluations that were done cost only $113K each (more precisely, .001-.21% of program value), a pittance compared to the millions in loan values. Given that more were not done, or at least not publicly shared, in the 18 years since: sustainability-aware donors, beware.
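As a rough sanity check on those figures, the loan values implied by the stated cost percentages can be back-solved. This is illustrative arithmetic only; the source does not give the actual loan sizes:

```latex
% At the high end of the stated range (evaluation cost = 0.21\% of program value):
\frac{\$113{,}000}{0.0021} \approx \$54\ \text{million}
% At the low end (evaluation cost = 0.001\% of program value):
\frac{\$113{,}000}{0.00001} \approx \$11.3\ \text{billion}
```

In other words, even for the smallest implied program, a $113K ex-post evaluation is a rounding error relative to the loan value.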

Unlike this multilateral, I’ve been busy with two ex-post evaluations which I hope to share in the coming months… Let me know your thoughts!

Hard-wiring and Soft-wiring in Sustainability via health program examples: by Laurence Desvignes and Jindra Cekan/ova

Overview

We all want things to last. Most of us joined the ‘sustainable development’ industry hoping our foreign aid projects would not only do good while we are there but also long afterward. Following on last month’s blog on better learning about project design, implementation, and M&E, here are some things to do better.

Long-term sustainability rests on four pillars. The first is how the project is designed and implemented before exit; the second is the degree to which the conditions needed for the continuation of the results the project generated are put into place. While the first embeds sustainability into the project’s very results, the second invests in processes to foster the continuation of those results. The other two pillars, returning two or more years later to evaluate what results were sustained, and bringing those lessons back into funding, design, implementation, and shock-resilience (e.g., to climate change), are covered in other Valuing Voices blogs.

We focus on pillars 1 and 2 in this blog, using an analogy of hard-wiring and soft-wiring sustainability into the fabric of the project:

  1. Hardwiring, or ‘baking in’ sustainability, involves design and implementation that predispose results to lasting. This includes investing in Maternal Child Health and Nutrition’s first 1,000 days, from conception to age two, which are vital for child development. The baby’s physical development and nutrition are as important as maternal well-being, and investing in these early days leads to better health and nutrition throughout their lives. So too is buying local. Too often our projects rely on imported technology and inputs that are hard to replace if broken. A UNICEF hand-pump project suggested local purchase of pumps “designed to optimize the chances of obtaining good quality hand pumps and an assured provision of spare parts,” which involves both the hardware of the pump and also a “capacity building plan and a communication strategy.” Using local capacity and specialists when available, rather than external consultants, can also be key to building a project’s sustainability.

Another example of baking in sustainability is using participatory approaches to ensure that those implementing, such as communities and local authorities, are heard during design in terms of their priorities and how the project should be implemented. This includes targeting discussions and monitoring and evaluation done with and by communities. The seminal research of 6,000 interviews with aid recipients, Time to Listen, found that they want to participate rather than be passive recipients, and when they do, results are more likely to be sustained; there is ex-post proof that such programming is more ‘owned’ and more sustained.

Conducting in-depth needs assessments at design is usually the way to collect information about what is needed and how projects should be implemented to last. Unfortunately, time for proposal development is often very limited and (I)NGOs are under pressure of short submission deadlines, so needs assessments are either done quickly, collecting only very basic information, or not done at all. Yet time spent valuing the voices of participants can bring great richness. In 2022, the UN’s FAO did a monitoring and evaluation study in Malawi validating poverty indicators by asking communities from the start how they identify poverty. “Researchers were impressed at how accurately the people they interviewed were able to gauge the relative wealth of their neighbors.” We were not surprised, as the locals often know best. In another example, with Mines Advisory Group in Cambodia, we developed a community-based participatory approach for design whereby project staff would work with mine-affected communities to draw local maps of their villages, highlighting the locations of dangerous places and the key areas used by the communities. Staff and communities discussed the constraints, risks, needs, etc. to make their community safer, which the project would follow up with risk education, clearance, victim assistance, and/or alternative economic/development solutions. Other mine action agencies, e.g. the Danish Refugee Council (Danish Demining Group), are also now using safer-community approaches, involving local residents in deciding how to make their village safer depending on the community priorities[1].

Hardwiring in participatory feedback-loop learning from locals during implementation is also key, and implementing a community feedback strategy once the programme is running is essential. The community feedback mechanism (CFM) is a formal system established to enable affected populations to communicate information on their views, concerns, and experiences of a humanitarian agency or of the wider humanitarian system. It systematically captures, records, tracks, and follows up on the feedback it receives to improve elements of a response. A CFM is key:

  • to ensuring that people affected by crisis have access to avenues to hold humanitarian actors to account;
  • to offering affected people a formalized structure for raising concerns if they feel their needs are not being met, or if the assistance provided is having any unintended and harmful consequences;
  • to understanding and soliciting information on their experience of a humanitarian agency or response, as part of a broader commitment to quality and accountability that enables organizations to recognize and respond to any failures in response;
  • to promoting the voices and influence of people affected so their perspectives, rights, and priorities remain at the forefront of humanitarian/development work[2].

Promoting and implementing community engagement, such as a community feedback strategy, provides a basis for dialogue with affected people on what is needed and on how it might best be provided, especially as needs change during implementation. This helps identify priority needs and is a means to gauge beneficiaries’ understanding of activities being carried out, to assist in identifying local partners and establishing and following up on partnerships, and in the organizational development and capacity building of local institutions and authorities. It can strengthen the quality of assistance by facilitating dialogue and meaningful exchange between aid agencies and affected people at all stages of humanitarian response, and it results in the empowerment of those involved. Targeted people are viewed as social actors who can play an active role in decisions affecting their lives.

OXFAM’s project in Haiti, “Preventing the Cholera Epidemic by Improving WASH Services and Promoting Hygiene in the North and Northeast,” was launched in 2012 in response to the cholera epidemic that began in 2010, with the goal of contributing to cholera elimination. It experimented with the community feedback strategy as a means of gauging the recipients’ understanding of the activities carried out and of further strengthening the links between OXFAM and the communities during implementation. The initial process of community feedback was intended both to receive recommendations from project participants for better management of the action and to better understand the strengths and weaknesses of Oxfam interventions. Based on the information and recommendations received, OXFAM served as a bridge between the community and the actors involved in the implementation of the project (e.g., a private firm contracted to carry out renovation work on health centers or water systems). This is also part of Oxfam’s logic of placing more emphasis on accountability and community engagement.

The feedback-loop benefits of such a community process are manifold, especially for protection, human rights, and risk management, and, further below, for adapting implementation, improving M&E, and fostering organizational learning:

  • CFMs assist in promoting the well-being, rights, and protection of people by offering them a platform to have a voice and be heard
  • It fosters participation, transparency, and trust
  • It uses Do No Harm and conflict-sensitive programming
  • It helps identify staff misconduct
  • It functions as a risk management and early warning system

Adapting Implementation and Improving M&E:

  • This process makes it possible to adapt to the priorities of the beneficiaries and better meet their needs, hence ensuring the agency’s accountability to the affected population
  • It facilitates and guarantees a better quality of the project.
  • It represents a means of monitoring our approaches and our achievements.
  • It makes it possible to construct a common vision shared between the various actors and the project participants/targeted communities.

Organizational Learning:

  • Ensuring programme quality and accountability through the establishment of an appropriate accountability strategy (including transparency, feedback, participation, monitoring, and effectiveness) and relevant methodologies and tools from the planning stage of the project is a key exercise that allows thinking and planning for the sustainability of the programme at an early stage.
  • It allows us to gauge the strengths and weaknesses of the interventions while offering us the opportunity to learn from our experiences, hence allowing for programmatic learning and adaptive programming.
  • It conveys the impact of the project and the change brought about in the lives of the beneficiaries.
  • It is part of the logic of capitalizing on experiences to improve the quality of future projects.

 

2. Soft-wiring is creating conditions that make sustainability more likely for local communities and partners by thinking about how to replace what the projects’ donors and implementers have brought. This involves analysis as well as actions that put conditions for sustainability into place before and during the time that foreign aid projects close. Valuing Voices’ checklists for exiting sustainably cover local ownership, sufficient capacities and resources, viable partnerships, how well risks such as climate and economic shocks were identified and managed, and benchmarking for success 1-2 years before closure. Later it is important to return ex-post to check findings, comparing completion results to what was sustained 2-30 years later.

There are four categories of sustainability-fostering actions to take pre-exit, identified by Rogers and Coates of Tufts University for USAID in their research on sustained exit:

  1. RESOURCES:

Several blogs on Valuing Voices deal with resources, including the assumptions donors make. Donor resource investments cannot be assumed to be sustained. The checklists outline a wide array of questions to ask during design and, at the latest, a year pre-exit, including: what assumptions do aid projects make? USAID water/sanitation/hygiene investments have mostly not been sustained, due to a combination of lack of resources to maintain them and low ownership of the resources invested. Some key questions are:

  • Did the project consider how those taking over would get sufficient resources, e.g., grant funding or other income generation, renting out a facility or infrastructure they own, or shifting some activities to for-profit production sold to cover part of project costs?
  • Does the project or partner have a facility or infrastructure that they own and can rent out to increase resources outside donor funding, or can the project shift to for-profit activities, including institutional and individual in-kind products or technical knowledge and skills that can be sold to cover part of project costs?
  • What new equipment is needed, e.g. computers, vehicles, technical (e.g. weighing scales) for activities to continue, and which stakeholder will retain them?
  • Or are no resources needed at all, because some project activities will scale down, move elsewhere, or focus on a smaller number of locally sustainable activities, or the whole project will naturally phase out?

  2. PARTNERSHIPS:

The objective of the Oxfam Haiti project described above was to reduce the risks to communities placed in a situation of acute vulnerability to the cholera epidemic in two departments of Haiti (home to about 1.5 million inhabitants). It focused on sustainability by effectively supporting and accompanying governmental WASH and health structures in the rapid response to alerts and outbreaks recorded in the targeted communities. How? Through awareness-raising activities among the populations concerned, and by strengthening the epidemiological surveillance system and coordination between concerned stakeholders. The project also aimed to improve drinking water structures such as distribution points, water networks and systems, catchments, and boreholes. As part of this intervention, Oxfam worked in close collaboration with and in support of the Departmental Directorates of Health (DH), DINEPA (the government service responsible for water and sanitation), local authorities at the level of cities, towns, and neighborhoods, and community structures including civil protection teams. Oxfam and DINEPA staff intervened through mixed mobile response teams that included technical and managerial staff from the health department, to whom Oxfam provided ongoing technical support in WASH analysis and actions, WASH training, finance training and monitoring, as well as logistical support for the deployment of teams in the field (provision of vehicles and drivers). Oxfam therefore worked to ensure that cholera surveillance and mitigation actions were led by state and community actors, supporting state structures to build their capacities and take ownership of the various aspects of the fight against cholera. Concretely, this was done as follows:

  • Preliminary meetings and discussions were held with the concerned governmental authorities to agree on a plan of action based on needs, implementation means, priorities, and the budget for the governmental health and WASH services/teams to be able to function. This was followed by the signing of an MoU between Oxfam and the Departmental Directorates of Health (DH).
  • An action plan was set up with the DH and DINEPA (governmental water and sanitation agency) at the very beginning of the project.
  • Outbreak response teams were managed directly by the DH and the staff was recruited, managed, and paid by the DH. The DH and DINEPA implemented the activities, managed the staff of the mobile teams, and provided technical monitoring in coordination with Oxfam.
  • The epidemiological monitoring activities carried out by the DH were also monitored by the Oxfam epidemiologist who, in close coordination with the DH, built the capacities of epidemiologists and staff at the departmental level and at the level of the treatment centers to ensure adequate monitoring and communication.
  • An Oxfam social engineering officer worked with DINEPA to ensure that the various water committees at the sources/infrastructure rehabilitated by Oxfam were functional. Sources/infrastructures were rehabilitated in concert with DINEPA to ensure the proper ownership.
  • Oxfam provided funding and technical supervision, and wrote and submitted the final report to the donor, based on the DH’s regular activity reports, which Oxfam consolidated.
  • Teams were paid directly by the DH from funds received by Oxfam, based on the budget agreed by both Oxfam and the DH, and were based on government salary scales.
  • The Oxfam WASH team, which systematically accompanies case investigations in the field, further encouraged the participation of DINEPA and its community technicians, through regular meetings with the DINEPA departmental directors.
  • Overall, Oxfam made sure to provide support and capacity building to the DH, DINEPA, and the community actors involved in the fight against cholera, to ensure proper ownership and to avoid substituting for the health/WASH authorities.

  3. OWNERSHIP:

The type of peer-partnering at design and during implementation described above is vital for ownership and sustainability. Unless we consider people’s ownership of the project and their capacities to sustain results, those results won’t be sustained. See Cekan’s exiting-for-sustainability checklists on phasing over before phasing out and exiting, and on strengthening ownership, which brings us full circle to the participatory hard-wiring described above in Haiti.

  4. CAPACITIES-STRENGTHENING:

We have to strengthen capacities at the most sustainable level. IRC’s Sierra Leone gender-based violence (GBV) project shows what happens when capacity strengthening for local participants and partners to take over is not done right. In this case, there were two-year consultancies to the Ministry (MSWGCA) on strategic planning and gender training, but “it is not clear if this type of support has had a sustainable impact. The institutional memory often disappears with the departure of the consultant, leaving behind sophisticated and extensive plans and strategies that there is simply no capacity to implement.” The report found that community-based initiatives are the “primary sources of support for GBV victims living in rural areas in a more innovative and sustainable way that promotes local ownership. They also may yield more results,” yet most donor agencies find it hard to partner with community-based organizations, so the report recommended focusing on training and capacity building of mainstream health workers to respond to GBV and aiming for the government to assume control of service provision in approximately five years. The excellent manual by Sarriot et al. on sustainability planning, “Taking the Long View: A Practical Guide to Sustainability Planning and Measurement in Community-Oriented Health Programming,” puts local capacity strengthening at the core. We have to consult and collaborate throughout and create an ‘enabling environment’ so that the activities and results are theirs.

Source: Sarriot et al 2008

Obviously, we should check on the sustainability we hope for. As ITAD/CRS note, we should do and learn from more ex-post evaluations, which is much of what Valuing Voices advocates for.

 

Recommendations for fostering sustainability:

Few donors require information at closure on how hard-wired or soft-wired pre-exit programming was, which would make sustainability likely. Even fewer demand actual post-closure sustainability data to confirm assumptions made at exit; sadly, we believe most of our foreign aid has had limited sustained impacts. But this can change.

Donors need to be educated that the “localization” agenda is the new trend (just as gender, resilience, and climate change have been at one point). It goes beyond the “nationalization” of staff (e.g., replacing expatriates with national staff), which is only one element of localization. True localization promotes the local leadership of communities in their own ‘sustainable development’. While this is easier said than done, sustainability depends on it. We foster it through the hard-wiring and soft-wiring discussed above and through more steps, below. Here are specific steps from Laurence’s and Jindra’s experiences with the Global South:

  • Funds and additional time for local partnership and ownership need to be embedded in the design and planned for, which requires a different approach to which donors also need to be sensitized, educated, and won over through advocacy;
  • An in-depth needs assessment must be carried out just before or when an NGO sets up an operation. It usually takes time and should be integrated into any operation. Advocating this approach to donors is key so that it can be included in the budget (or the NGO needs to find its own funds to do so), and the NGO country and sector strategy can then be updated yearly to embed such activities into the (I)NGO DNA;
  • Conduct a capacity-strengthening assessment of the local authorities or partners with whom we are going to conduct the project. This can take between 3 and 6 months, depending on the number and type of actors involved, but it is an essential element to build self-sustaining local capacities and ensure that comprehensive capacity building is going to take place. This transparent step is also essential to ensure ownership by national/governmental stakeholders;
  • It is vital to allow time to plan for an exit strategy at an early stage, even as early as design. This requires time and needs to be included in the budget: for implementing the plan at least one year before the end, for phasing over to local implementing partners before the donors/Global North implementers exit, and possibly for strengthening capacities or extending programming to deliver on their timeline rather than ours before exiting. More on this from CRS’ Participation by All ex-post and of course the oft-cited “Stopping As Success: Locally-led Transitions in Development” by Peace Direct, Search for Common Ground, and CDA. Also do not forget the shared leadership noted in UK INTRAC’s “Investing in Exit”;
  • Finally, don’t forget to evaluate ex-post and to embed those lessons into future design, implementation, monitoring, and evaluation.

  Investing in sustainability by hard-wiring or soft-wiring works! Let us know what you do…      

[1] https://drc.ngo/our-work/what-we-do/core-sectors/humanitarian-disarmament-and-peacebuilding/

[2] https://www.drc.ngo/media/vzlhxkea/drc_global-cfm-guidance_web_low-res.pdf

Sustainability of what and how do we know? Measuring projects, programs, policies…

On my way to present at the European Evaluation Society’s annual conference, I wanted to close the loop on the Nordic and Netherlands ex-post analysis. The reason is that we’ll be discussing the intersection of different ways to evaluate ‘sustainability’ over the long- and short-term, and how we’re transforming evaluation systems. The session on Friday morning is called “Long- And Short-Term Dilemmas In Sustainability Evaluations” (Cekan, Bodnar, Hermans, Meyer, and Patterson). We come from academia as professors, consultancies to international organizations, international/national non-profits, and our European (Dutch, German, Czech), South African, and American governments. We’ll discuss it as a ‘fishbowl’ of ideas.

The session’s abstract adds the confounding factor of program versus project versus portfolio-wide evaluations of sustainability.

Details on our session, and why I’m juxtaposing it with the Nordic and Netherlands ex-posts in detail, come next. As we note in our EES ’22 session description, “One of the classic complications in sustainability is dealing with short-term – long-term dilemmas. Interventions take place in a local and operational setting, affecting the daily lives of stakeholders. Sustainability is at stake when short-term activities are compromising the long-term interests of these stakeholders and future generations, for instance, due to a focus on the achievement of shorter-term results rather than ensuring durable impacts for participants… Learning about progress towards the SDGs or the daunting task of keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels, for instance, requires more than nationally and internationally agreed indicator-systems, country monitoring, and reporting and good intentions.”

But there are wider ambitions for most sustainability activities undertaken by a range of donors, policy actors, project implementers, and others: Sustainability “needs to span both human-social and natural-ecological systems’ time scales. Furthermore, long-term sustainability, in the face of climate change and SDGs, demands a dynamic view, with due attention for complexity, uncertainty, resilience, and systemic transformation pathways…. the need for a transformation of current evaluation systems – seeing them as nested or networked systems… Their focus may range from focused operational projects to the larger strategic programmes of which these projects are part, to again the larger policies that provide the context or drivers for these programmes. Analogue to these nested layers runs a time dimension, from the short-term projects (months to years), to multi-year programmes, to policies with outlooks of a decade or more.” 

When Preston did his research in 2020-21, which I oversaw, we focused on projects precisely because that is where we believe ‘impact’ happens in a way measurable by participants and partners. Yet we found that many defined their parameters differently. Preston writes, “This paper focuses on what such research [on projects evaluated at least 2 years post-closure] yielded, not definitive findings of programs or multi-year country strategies that are funded for 20-30 years continuously, nor projects funded by country-level embassies which did not feature on the Ministry site. We focus on bilateral project evaluations, not multilateral funding of sectors. We also… received input that Sweden’s EBA has a non-project [not ex-post] portfolio of ‘country evaluations’ which looked back over 10- or even 20-year time horizons.”

So we present these compiled detailed studies on the Netherlands, Norway, Finland, Sweden, and Denmark for your consideration. Can we arrive at a unified definition of ‘sustainability’, or imagine a unified ‘sustainability evaluation’ definition and scope? I hope so; I’ll let you know after EES this week! What do you think, is it possible?

Can We Assume Sustained Impact? Verifying the Sustainability of Climate Change Mitigation Results (reposting a book chapter)

So excited to have our chapter verifying the ‘sustainability’ of projects funded by the Global Environment Facility Trust Fund (GEF) published, examining two tranches of evaluations. My co-writer colleague Susan Legro did a brilliant job pointing out flaws in estimated greenhouse gas (GHG) emission reductions. Given that climate change is in full swing, we must be able to trust the data we have.

It appeared in Transformational Change for People and the Planet: Evaluating Environment and Development, edited by Juha I. Uitto and Geeta Batra. Enjoy!

Abstract

The purpose of this research was to explore how public donors and lenders evaluate the sustainability of environmental and other sectoral development interventions. Specifically, the aim is to examine if, how, and how well post project sustainability is evaluated in donor-funded climate change mitigation (CCM) projects, including the evaluability of these projects. We assessed the robustness of current evaluation practice of results after project exit, particularly the sustainability of outcomes and long-term impact. We explored methods that could reduce uncertainty of achieving results by using data from two pools of CCM projects funded by the Global Environment Facility (GEF).

Evaluating sustainable development involves looking at the durability and continuation of net benefits from the outcomes and impacts of global development project activities and investments in various sectors in the post project phase, i.e., from 2 to 20 years after donor funding ends.1 Evaluating the sustainability of the environment is, according to the Organisation for Economic Co-operation and Development (OECD, ), at once a focus on natural systems of “biodiversity, climate change, desertification and environment” (p.1) that will need to consider the context in which these are affected by human systems of “linkages between poverty reduction, natural resource management, and development” (p. 3). This chapter focuses more narrowly on the continuation of net benefits from the outcomes and impacts of a pool of climate change mitigation (CCM) projects (see Table 1). The sustainability of CCM projects funded by the Global Environment Facility (GEF), as in a number of other bilateral and multilateral climate funds, rests on a theory of change that a combination of technical assistance and investments contribute to successfully durable market transformation, thus reducing or offsetting greenhouse gas (GHG) emissions.

 

Table 1: Changes in OECD DAC Criteria from 1991 to 2019

| Criterion | 1991 | 2019 |
| --- | --- | --- |
| SUSTAINABILITY (2019 adds: “Will the benefits last?”) | Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable. | The extent to which the net benefits of the intervention continue, or are likely to continue. Note: Includes an examination of the financial, economic, social, environmental, and institutional capacities of the systems needed to sustain net benefits over time. Involves analyses of resilience, risks, and potential trade-offs. |
| IMPACT | The positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental, and other development indicators. | The extent to which the intervention has generated or is expected to generate significant positive or negative, intended or unintended, higher-level effects. . . . It seeks to identify social, environmental, and economic effects of the intervention that are longer-term or broader in scope. |

Source: OECD/DAC Network on Development Evaluation, (); italics are emphasis added by Cekan

 

CCM projects lend themselves to such analysis, as most establish ex-ante quantitative mitigation estimates and their terminal evaluations often contain a narrative description and ranking of estimated sustainability beyond the project’s operational lifetime, including the achievement of project objectives. The need for effective means of measuring sustainability in mitigation projects is receiving increasing attention (GEF Independent Evaluation Office [IEO], ) and is increasingly important, as Article 13 of the Paris Agreement mandates that countries with donor-funded CCM projects report on their actions to address climate change (United Nations, ). As several terminal evaluations in our dataset stated, better data are urgently needed to track continued sustainability of past investments and progress against emissions goals to limit global warming.

Measuring Impact and Sustainability

Although impactful projects promoting sustainable development are widely touted as being the aim and achievement of global development projects, these achievements are rarely measured beyond the end of the project activities. Bilateral and multilateral donors, with the exception of the Japan International Cooperation Agency (JICA) and the U.S. Agency for International Development (USAID),2 have reexamined fewer than 1% of projects following a terminal evaluation, although examples exist of post project evaluations taking place as long as 15 years (USAID) and 20 years (Deutsche Gesellschaft für Internationale Zusammenarbeit [GIZ]) later (Cekan, ). Without such fieldwork, sustainability estimates can only rely on assumptions, and positive results may in fact not be sustained as little as 2 years after closure. An illustrative set of eight post project global development evaluations analyzed in 2017 for Michael Scriven's Faster Forward Fund showed a range of results: One project partially exceeded terminal evaluation results, two retained the sustainability assumed at inception, and the other five showed a decrease in results of 20%–100% as early as 2 years post-exit (Zivetz et al., ).

 

Since the year 2000, the U.S. government and the European Union have spent more than $1.6 trillion on global development projects, yet only a few hundred post project evaluations have been completed, so the extent to which outcomes and impacts are sustained is not known (Cekan, ). A review of most bilateral donors shows zero to two post project evaluations each (Valuing Voices, ). A rare, four-country, post project study of 12 USAID food security projects also found wide variability in expected trajectories, with most projects failing to sustain expected results beyond as little as 1 year (Rogers & Coates, ). The study's Tufts University team leaders noted that “evidence of project success at the time of exit (as assessed by impact indicators) did not necessarily imply sustained benefit over time” (Rogers & Coates, , p. v). Similarly, an Asian Development Bank (ADB) study of post project sustainability found that “some early evidence suggests that as many as 40% of all new activities are not sustained beyond the first few years after disbursement of external funding,” and that review examined fewer than 14 of 491 projects in the field (ADB, ). The same study described how assumed positive trajectories post funding fail to sustain and noted a

tendency of project holders to overestimate the ability or commitment of implementing partners—and particularly government partners—to sustain project activities after funding ends. Post project evaluations can shed light on what contributes to institutional commitment, capacity, and continuity in this regard. (ADB, , p. 1)

 

Learning from post project findings can be important to improve project design and secure new funding. USAID recently conducted six post project evaluations of water/sanitation projects and learned about needed design changes from the findings, and JICA analyzed the uptake of recommendations 7 years after closure (USAID, ; JICA, ). As USAID stated in their  guidance,

An end-of-project evaluation could address questions about how effective a sustainability plan seems to be, and early evidence concerning the likely continuation of project services and benefits after project funding ends. Only a post project evaluation, however, can provide empirical data about whether a project’s services and benefits were sustained. (para. 9)

 

Rogers and Coates () expanded the preconditions for sustainability beyond only funding, to include capacities, partnerships, and ownership. Cekan et al. () expanded ex-post project methods from examining the sustainability of expected project outcomes and impacts post closure to also evaluating emerging outcomes, namely “what communities themselves valued enough to sustain with their own resources or created anew from what [our projects] catalysed” (para. 19). In the area of climate change mitigation, rigorous evaluation of operational sustainability in the years following project closure should inform learning for future design and target donor assistance on projects that are most likely to continue to generate significant emission reductions.

How Are Sustainability and Impact Defined?

The original 1991 OECD Development Assistance Committee (DAC) criteria for evaluating global development projects included sustainability, and the criteria were revised in 2019. The revised definition of sustainability emphasizes the continuation of benefits rather than just activities, and it situates those benefits within a wider systemic context beyond the financial and environmental resources needed to sustain them, encompassing resilience, risks, and trade-offs, presumably for those sustaining the benefits. Similarly, the criteria for impact have shifted from simply positive/negative, intended/unintended changes to effects over the longer term (see Table 1).

 

In much of global development, including in GEF-funded projects, impact and sustainability are usually estimated only at project termination, “to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and [projected] sustainability” (OECD DAC, , p. 5). In contrast, actual sustainability can only be evaluated 2–20 years after all project resources are withdrawn, through desk studies, fieldwork, or both. The new OECD definitions present an opportunity to improve the measurement of sustained impact across global development, particularly via post project evaluations. Evaluations need to reach beyond projected to actual measurement across much of “sustainable development” programming, including that of the GEF.

 

GEF evaluations in recent years have been guided by the organization’s 2010 monitoring and evaluation (M&E) policy, which requires that terminal evaluations “assess the likelihood of sustainability of outcomes at project termination and provide a rating” (GEF IEO, , p. 31). Sustainability is defined as “the likely ability of an intervention to continue to deliver benefits for an extended period of time after completion; projects need to be environmentally as well as financially and socially sustainable” (GEF IEO, , p. 27).

 

In 2017, the GEF provided specific guidance to implementing agencies on how to capture sustainability in terminal evaluations of GEF-funded projects: “The overall sustainability of project outcomes will be rated on a four-point scale (Likely to Unlikely)” (GEF, , para. 8 and Annex 2):

  • Likely (L) = There are little or no risks to sustainability;

  • Moderately Likely (ML) = There are moderate risks to sustainability;

  • Moderately Unlikely (MU) = There are significant risks to sustainability;

  • Unlikely (U) = There are severe risks to sustainability; and

  • Unable to Assess (UA) = Unable to assess the expected incidence and magnitude of risks to sustainability.

 

Although this scale is a relatively common measure for estimating sustainability among donor agencies, it has not been tested for reliability, i.e., whether multiple raters would provide the same estimate from the same data. Nor has it been tested for construct validity, i.e., whether the scale is an effective predictive measure of post project sustainability. Validity issues include whether an estimate of risks to sustainability is a valid measure of the likelihood of post project sustainability; whether the narrative estimates of risk are ambiguous or double-barreled; and the efficacy of using a ranked, ordinal scale that treats sustainability as an either/or condition rather than a range (from no sustainability to 100% sustainability).
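The reliability testing described above has, to our knowledge, not been done. As a minimal sketch of what it could look like, the following computes Cohen's kappa (chance-corrected agreement between two raters) over ratings on the GEF scale; all of the ratings in the example are invented for illustration, not real GEF data.

```python
# Illustrative sketch (not from the chapter): testing inter-rater reliability
# of the GEF 4-point sustainability scale with Cohen's kappa.
from collections import Counter

SCALE = ["L", "ML", "MU", "U"]  # Likely ... Unlikely

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters over the same projects."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater assigned categories independently
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in SCALE) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical ratings of 10 terminal evaluations by two independent raters
rater_1 = ["L", "ML", "ML", "MU", "L", "ML", "U", "ML", "L", "MU"]
rater_2 = ["L", "ML", "MU", "MU", "ML", "ML", "U", "L", "L", "MU"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # 0.58 on these invented ratings
```

A kappa well below 1.0, as in this invented example, would indicate that the single sustainability rating carries substantial rater-dependent noise.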

 

Throughout this chapter, we identify projects by their GEF identification numbers, with a complete table of projects provided in the appendix.

The Limits of Terminal Evaluations

Terminal evaluations and even impact evaluations that mostly compare effectiveness rather than long-term impact were referenced as sources for evaluating sustainability in the GEF’s 2017 Annual Report on Sustainability (GEF IEO, ). Although they can provide useful information on relevance, efficiency, and effectiveness, neither is a substitute for post project evaluation of the sustainability of outcomes and impacts, because projected sustainability may or may not occur. In a terminal evaluation of Mexican Sustainable Forest Management and Capacity Building (GEF ID 4149), evaluators made the case for ex-post project monitoring and evaluation of results:

There is no follow-up that can measure the consolidation and long-term sustainability of these activities. . . . Without a proper evaluation system in place, nor registration, it is difficult to affirm that the rural development plans will be self-sustaining after the project ends, nor to what extent the communities are readily able to anticipate and adapt to change through clear decision-making processes, collaboration, and management of resources. . . . They must also demonstrate their sustainability as an essential point in development with social and economic welfare from natural resources, without compromising their future existence, stability, and functionality. (pp. 5–9)3

 

Returning to a project area after closure also fosters learning about the quality of funding, design, implementation, monitoring, and evaluation and the ability of those tasked with sustaining results to do so. Learning can include how well conditions for sustainability were built in, tracked, and supported by major stakeholders. Assumptions made at design and final evaluation can then also be tested, along with theories of change (Sridharan & Nakaima, ). Finally, post project evaluations can verify the attributional claims made at the time of the terminal evaluation. As John Mayne explained in his  paper:

In trying to measure the performance of a program, we face two problems. We can often—although frequently not without some difficulty—measure whether or not these outcomes are actually occurring. The more difficult question is usually determining just what contribution the specific program in question made to the outcome. How much of the success (or failure) can we attribute to the program? What has been the contribution made by the program? What influence has it had? (p. 3)

 

In donor- and lender-funded CCM projects, emission reduction estimates represent an obvious impact measure. They are generally based on a combination of direct effects—i.e., reductions due to project-related investments in infrastructure—and indirect effects—i.e., reductions due to the replication of “market transformation” investments from other funding or an increase in climate-friendly practices due to improvements in the policy and regulatory framework (Duval, ; Legro, ). Both of these effects are generally estimated over the lifetime of the mitigation technology involved, which is nearly always much longer than the operational duration of a given project (see Table 2).

 

Table 2: Typology of GHG Reductions Resulting from Typical Project Interventions (the terminal evaluation marks the boundary between the two monitoring periods)

Direct reductions

  • Project lifetime (quarterly/annual monitoring): Reductions directly financed by donor-funded pilot project(s) or investment(s)

  • Post project lifetime (post project evaluation): Continuing reductions from project-financed investments (through the end of the technology lifetime; e.g., 20 years for buildings, 10 years for industrial equipment, etc.)

Indirect reductions

  • Project lifetime (quarterly/annual monitoring): Reductions from policy uptake (e.g., reduced fossil fuel use from curtailment of subsidies, spillover effects from tax incentives, increased government support for renewable energy due to strategy development) (co-)funded by the donor; and reductions from market transformation (changes in availability of financing, increased willingness of lenders, reduction in perceived risk) supported by pilot demonstrations and/or outreach and awareness raising (co-)funded by the donor

  • Post project lifetime (post project evaluation): Continuing reductions from policy uptake (e.g., reduced fossil fuel use from curtailment of subsidies, spillover effects from tax incentives, increased government support for energy efficiency or renewable energy due to strategy development); continuing reductions from market transformation (changes in availability of financing, increased willingness of lenders, reduction in perceived risk) as a legacy of the pilot demonstrations and/or outreach and awareness raising funded by the donor-funded project; and new reductions from the continuation of the investment or financing mechanism established by the donor-funded project
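The asymmetry in Table 2 between what a terminal evaluation can observe and what only post project evaluation can verify can be illustrated numerically. The annual saving and durations below are hypothetical assumptions for illustration, not data from any GEF project:

```python
# Hypothetical illustration of Table 2's direct-reductions row: emission
# reductions from a project-financed investment continue over the technology
# lifetime, which typically far exceeds the project's operational duration.
def direct_reductions(annual_t_co2e, project_years, tech_lifetime_years):
    """Split lifetime reductions into what a terminal evaluation can observe
    and what only a post project evaluation could verify."""
    observed = annual_t_co2e * project_years                          # within project lifetime
    assumed = annual_t_co2e * (tech_lifetime_years - project_years)   # post project, unverified
    return observed, assumed

# E.g., a building retrofit saving 1,000 t CO2e/year; 4-year project,
# 20-year technology lifetime (the building lifetime cited in Table 2)
observed, assumed = direct_reductions(1_000, 4, 20)
print(observed, assumed)  # 4000 16000: 80% of claimed lifetime reductions are unverified at closure
```

Under these assumed figures, four fifths of the claimed direct reductions fall after closure, which is precisely the portion no terminal evaluation can confirm.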

 

The increasing use of financial mechanisms such as concessional loans and guarantees as components of donor-funded CCM projects, such as those funded by the Green Climate Fund (https://www.greenclimate.fund/), can also limit the ability of final evaluations to capture sustainability, because the bulk of the follow-on technology investments assumed under revolving funds will not take place during the project lifetime. A 2012 paper by the then-head of the GEF Independent Evaluation Office, Rob van den Berg, supported the need for post project evaluation, importantly noting that examining:

Barriers targeted by GEF projects, and the results achieved by GEF projects in addressing market transformation barriers . . . facilitate in understanding better whether the ex-post changes being observed in the market could be linked to GEF projects and pathways through which outcomes and intermediate states . . . [and] the extent GEF-supported CCM activities are reducing GHGs in the atmosphere . . . because it helps in ascertaining whether the incremental GHG reduction and/or avoidance is commensurate with the agreed incremental costs supported by GEF. . . . It is imperative that the ex-ante and ex-post estimates of GHG reduction and avoidance benefits are realistic and have a scientific basis. (GEF IEO, , p. 13)

 

This description of GHG-related impacts illustrates the difficulty of accurately drawing conclusions about sustainability from a single scale estimating “the likely ability [emphasis added] of an intervention to continue to deliver benefits for an extended period of time” (GEF IEO, , p. 35), for at least two reasons. First, the GEF’s 4-point scale is supposed to capture two different aspects of continuation: ongoing benefits from a project-related investment, and new benefits from the continuation of financing mechanisms. Without returning to evaluate the continued net benefits of the now-closed investment, such assumptions cannot be fully claimed. Second, the scale is supposed to capture benefits that can be estimated quantitatively (e.g., solar panels that offset the use of a certain amount of electricity from diesel generators); benefits that can be evaluated through policy or program evaluation (e.g., the introduction of a law on energy efficiency); and benefits that require careful, qualitative study to determine impacts (e.g., training programs for energy auditors or awareness-raising for energy consumers, leading to knowledge and decision changes). Aggregating and weighting such an array of methods into one ranking is methodologically on shaky ground, especially without post project measurements to confirm whether results occurred at any time after project closure.

Methodology

The impetus for this research was a sustainability analysis conducted by the GEF IEO that was summarized in the 2017 GEF Annual Performance Report (GEF IEO, ). The study stated: “The analysis found that outcomes of most of the GEF projects are sustained during the postcompletion period, and a higher percentage of projects achieve environmental stress reduction and broader adoption than at completion” (p. 17). Learning more about postcompletion outcomes and assessing how post project sustainability was evaluated was the aim of this work.

 

This chapter’s research sample consists of two sets of GEF project evaluations. We chose GEF-funded projects because of the large size of the total project pool; the Green Climate Fund, for example, lacks a large pool of mitigation projects that would be suitable for post project evaluation. Our first tranche was selected from the pool of CCM projects cited in the sustainability analysis, which included a range of projects with the earliest start date of 1994 and the latest closing date of 2013 (GEF IEO, ). These constituted $195.5 million in investments. The pool of projects in the climate change focal area (n = 17), comprising one third of the GEF IEO sample, was selected from the 53 projects listed in the report for further study. We then classified the selected projects according to whether they made any mention of field-based post project verification, using an evaluability checklist (Zivetz et al., ). This checklist highlights methodological considerations including: (a) data showing overall quality of the project at completion, including M&E documentation needed on original and post project data collection; (b) time postcompletion (at least 2 years); (c) site selection criteria; and (d) proof that project results were isolated from concurrent programming to ascertain contribution to sustained impacts (Zivetz et al., ).

 

Next, we reviewed GEF documentation to identify any actual quantitative or qualitative measures of post project outcomes and impacts. These could include: (a) changes in actual energy efficiency improvements against final evaluation measures used, (b) sustained knowledge or dissemination of knowledge change fostered through trainings, (c) evidence of ownership, or (d) continued or increased dissemination of new technologies. Such verification of assumptions in the final documents typically explores why the assumptions were or were not met, and what effects changes in these assumptions would have on impacts, such as CO2 emissions projections.

 

The second tranche consisted of projects in the climate change focal area that were included in the 2019 cohort of projects for which the GEF received terminal evaluations. As the GEF 2019 Annual Performance Report explained:

Terminal evaluations for 193 projects, accounting for $ 616.6 million in GEF grants, were received and validated during 2018–2019 and these projects constitute the 2019 cohort. Projects approved in GEF-5 (33 percent), GEF-4 (40 percent) and GEF-3 (20 percent) account for a substantial share of the 2019 cohort. Although 10 GEF Agencies are represented in the 2019 cohort, most of these projects have been implemented by UNDP [United Nations Development Programme] (56 percent), with World Bank (15 percent) and UNEP [United Nations Environment Programme] (12 percent) also accounting for a significant share. (GEF IEO, , p. 9)

 

We added the second tranche of projects to represent a more current view of project performance and evaluation practice.

The climate change focal area subset consisted of 38 completed GEF projects, which account for approximately $155.7 million in GEF grants (approximately 20% of the total cohort and 25% of the overall cohort budget). Projects included those approved in 1995–1998 (GEF-1; n = 1) and 2003–2006 (GEF-3; n = 2), but 68% were funded in 2006–2010 (GEF-4; n = 26), and 24% in 2010–2014 (GEF-5; n = 9), making them more recent as a group than the 2019 cohort as a whole. Six GEF agencies were represented: Inter-American Development Bank (IDB), International Fund for Agricultural Development (IFAD), UNDP, UNEP, United Nations Industrial Development Organization (UNIDO), and the World Bank.

 

We eliminated three projects listed in the climate focal area subset from consideration in the second tranche because they had not been completed, leaving a pool of 35 projects. Ex-ante project documentation, such as CEO endorsement requests, and terminal evaluation reports were then reviewed for initial estimates of certain project indicators, such as GHG emission reductions, and ratings of estimated sustainability on the 4-point scale, including the narrative documentation that accompanied the ratings.

Findings

The question of whether post project sustainability was being measured was based on the first tranche of projects and on the sustainability analysis in which they were included. Most of the documents cited in the sustainability analysis were either terminal evaluations or impact evaluations focused on efficiency (GEF IEO, ), and the documents and report analysis alike focused on estimated sustainability. Of the 53 “postcompletion verification reports,” as they are referred to in the review (GEF IEO, , p. 62), we found only 4% to contain adequate information to support the analysis of sustainability. Our wider search for publicly available post project evaluations, which would have constituted an evidence base for the sustained outcomes, environmental stress reduction, and adoption cited in the GEF IEO 2019 analysis, did not identify any post project evaluations. We were unable to replicate the finding that “84% of these projects that were rated as sustainable at closure also had satisfactory postcompletion outcomes. . . . Most projects with satisfactory outcome ratings at completion continued to have satisfactory outcome ratings at postcompletion” (GEF IEO, , p. 3) or to compare the CCM subset of projects with this conclusion. The report stated that “the analysis of the 53 selected projects is based on 61 field verification reports. For 81 percent of the projects, the field verification was conducted at least four years after implementation completion [emphasis added].” However, we found no publicly accessible documentation that could be used to confirm the approach to field verification for 8 of the 17 projects.

 

Similarly, the available documentation for the projects lacked the most typical post project hallmarks, such as methods of post project data collection, comparisons of changes from final to post project outcomes and impacts at least 2 years post closure, and tracing contribution of the project at the funded sites to the changes. Documentation focused on a rating of estimated sustainability with repeated references to only the terminal evaluations and closure reports. In summary, of the 17 projects selected for review in the first tranche, 14 had data consisting of terminal evaluations, and none was 2–20 years post closure. We did not find publicly available evidence to support measurement of post project sustainability other than statements that such evidence was gathered in a handful of cases. Of the pool of 17 projects, only two (both from India) made any reference to post project data regarding the sectors of activity in subsequent years. However, these two were terminal evaluations within a country portfolio review and could not be substantiated with publicly accessible data.

 

We then screened the first tranche of projects using the Valuing Voices evaluability checklist (Zivetz et al., ):

  • High-quality project data at least at terminal evaluation, with verifiable data at exit: Of 14 projects rated for sustainability, only six were rated likely to be sustained, and outcome and impact data were scant.

  • Clear ex-post methodology, sufficient samples: None of the evaluations available was a post project evaluation of sustainability or long-term impact. Although most projects fell within the evaluable window of 2–20 years post project (the projects had been closed for 4–20 years), none had proof of a return evaluation. There were no clear post project sampling frames, data collection processes (including identification of beneficiaries/informants), site selection criteria, methods for isolating legacy effects of the institution or other concurrent projects, or analytic methods.

  • Transparent benchmarks based on terminal, midterm, and/or baseline data on changes to outcomes or impacts (M&E documents showing measurable targets and indicators, and baseline vs. terminal evaluation methods comparable to those used in the post project period): For some of the 17 projects, project inception documents and terminal evaluations were available; in other cases, GEF evaluation reviews were available. Two had measurable environmental indicators that compared baseline to final, but none were measured after project closure.

  • Substantiated contribution vs. attribution of impacts: Examples of substantiated contribution were not identified.

 

Evaluation reports revealed several instances for which we could not confirm attribution. For example, evaluation of the project Development of High Rate BioMethanation Processes as Means of Reducing Greenhouse Gas Emissions (GEF ID 370), which closed in 2005, referenced the following subsequent market information:

As of Nov 2012, capacity installed from waste-to-energy projects running across the country for grid connected and captive power are 93.68MW and 110.74 MW respectively [versus 3.79KW from 8 sub-projects and 1-5 MW projects]. . . . The technologies demonstrated by the 16 sub-projects covered under the project have seen wide-scale replication throughout the country. . . . An installed capacity of 201.03MW within WTE [waste to energy] projects and the 50% of this is attributed to the GEF project. (GEF IEO, , vol. 2, p. 64)

 

Claims that “the technical institutes strengthened as a result of the project were not fully effective at the time of project completion but are now actively engaged in the promotion of various biomethanation technologies” are unsubstantiated in publicly available information; as a result, the ex-post methods behind the contribution/attribution data are not clear. Another project in India, Optimizing Development of Small Hydel [hydroelectric] Resources in Hilly Areas (GEF ID 386), projected that later investments in the government’s 5-year plans would happen and that the resulting hydropower production would be attributable to the original project (GEF IEO, ); again, this attributional analysis was not documented. Analysis of a third project in India, Coal Bed Methane Capture and Commercial Utilization (GEF ID 325), which closed in 2008, claimed results that could not be reproduced: “Notable progress has been made through replication of projects, knowledge sharing, and policy development” and “expertise was built” (GEF IEO, , Vol. 2, p. 90). Further claims that the project contributed to “the total coal bed methane production in the country and has increased to 0.32 mmscmd [million metric standard cubic meters per day], which is expected to rise to 7.4 mmscmd by the end of 2014” are without proof. The evaluation reported estimates of indirect GHG emission reduction, based on postcompletion methane gas production estimates of 0.2 million m3 per day:

1.0 Million tons equivalent per year, considering an adjustment factor of 0.5 as the GEF contribution [emphasis added], the indirect GHG emission reduction due to the influence of the project is estimated to be 0.5 million tons of CO2 equivalent per annum (2.5 million tons over the lifetime period of 5 years). (GEF IEO, , Vol. 2, p. 91)

 

Yet without verification of coal bed methane capture and commercial utilization continuing, this impact cannot be claimed.
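The quoted arithmetic is broadly reproducible from standard conversion factors, as the rough cross-check below shows. The density and global warming potential values are our assumptions, not the evaluation's stated method, and the underlying production figure itself remains field-unverified:

```python
# Rough replication (our assumptions, not the evaluation's stated method) of the
# quoted arithmetic: 0.2 million m3/day of methane production converted to
# annual CO2-equivalent, then scaled by the 0.5 GEF "adjustment factor".
CH4_DENSITY_KG_PER_M3 = 0.716   # assumption: methane at standard conditions
GWP_CH4 = 21                    # assumption: 100-year GWP from the IPCC Second Assessment Report

daily_m3 = 0.2e6
annual_t_ch4 = daily_m3 * 365 * CH4_DENSITY_KG_PER_M3 / 1000
annual_mt_co2e = annual_t_ch4 * GWP_CH4 / 1e6   # ~1.1, close to the quoted 1.0

gef_share = 0.5 * annual_mt_co2e                # quoted as 0.5 Mt CO2e/year
lifetime = 5 * gef_share                        # quoted as 2.5 Mt over 5 years
print(round(annual_mt_co2e, 2), round(gef_share, 2), round(lifetime, 2))
```

That the conversion arithmetic checks out does not rescue the claim: every figure downstream of the unverified 0.2 million m3/day production estimate inherits its uncertainty.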

How Is Sustainability Being Captured?

Fifteen of the 17 CCM projects we reviewed in the first tranche were rated on a 4-point scale at terminal evaluation. Of those 15, 12 had overall ratings of either satisfactory or marginally satisfactory, and one was rated highly satisfactory overall. Eleven of the sustainability ratings were either likely or marginally likely. Only two projects were rated marginally unlikely overall or for sustainability, and only one project received marginally unlikely in both categories (the Demand Side Management Demonstration energy conservation project that ended in 1999 [GEF ID 64]). Although none of the documents mentioned outcome indicators, eight of the 17 estimated direct and indirect CO2 impacts.

 

In the second pool of projects—the CCM subset of the 2019 cohort—63% of the projects were rated in the likely range for sustainability (n = 22; nine were rated likely and 13 marginally likely). This is slightly higher than the 2019 cohort as a whole, in which 59% were rated in the likely range. In turn, the 2019 annual performance report noted that “the difference between the GEF portfolio average and the 2019 cohort is not statistically significant for both outcome and sustainability rating” (GEF IEO, , p. 9). It is slightly lower than the percentage of CCM projects receiving an overall rating of marginally likely or higher in the 2017 portfolio review (68%, n = 265; GEF IEO, , p. 78).
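The report's statistical-significance remark presumably rests on a comparison of proportions of this kind. The sketch below is illustrative only: the count of 114 is our rounding of 59% of the 193-project cohort, and the CCM subset overlaps the cohort it is compared against, which a rigorous test would have to account for:

```python
# Sketch of the comparison behind a "not statistically significant" finding:
# a two-proportion z-test of likely-range sustainability ratings in the CCM
# subset (22 of 35) vs. the full 2019 cohort (~59% of 193 projects).
import math

x1, n1 = 22, 35     # CCM subset rated in the likely range
x2, n2 = 114, 193   # assumption: 59% of the 193-project cohort, rounded

p1, p2 = x1 / n1, x2 / n2
pooled = (x1 + x2) / (n1 + n2)
se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
print(f"{z:.2f}")   # well below the 1.96 threshold for significance at p < .05
```

With samples this small and overlapping, the difference between 63% and 59% is well within sampling noise, consistent with the report's conclusion.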

 

In this second set of projects, only two received a sustainability rating of marginally unlikely and only one a rating of unlikely. The remainder of the projects could not be classified using the 4-point rating scale because they had used an either/or estimate (one project), a 5-point scale (one project), or an estimate based on the assessment of risks to development outcome (two projects). Six projects could not be assessed at all due to the absence of a publicly accessible terminal evaluation in the GEF and implementing agency archives.

How Effectively Is Sustainability Being Captured?

The first set of reports, on which the sustainability analysis rested, claimed that “84% of these projects that were rated as sustainable at closure also had satisfactory postcompletion outcomes, as compared with 55% percent of the unsustainable projects” (GEF IEO, , p. 29). The data did not support the claim, even during implementation.

  • As a Brazilian project (GEF ID 2941) showed, sustainability is unlikely when project achievements are weak, and exit conditions and benchmarks need to be clear: “The exit strategy provided by IDB Invest is essentially based on financial-operational considerations but does not provide answers to the initial questions how an EEGM [energy efficiency guarantee mechanism] should be shaped in Brazil, how relevant it is and for whom, and to whom the EEGM should be handed over” (p. 25).

  • In Russia, the terminal evaluation for an energy efficiency project (GEF ID 292) cited project design flaws that seemed to belie its sustainability rating of likely: “From a design-for-replication point of view the virtually 100% grant provided by the GEF for project activities is certainly questionable” (Global Environment Facility Evaluation Office [GEF EO], , p. 20). Further, the assessment that “the project is attractive for replication, dissemination of results has been well implemented, and the results are likely to be sustainable [emphasis added] for the long-term, as federal and regional legislation support is introduced” (GEF EO, , p. 39), makes a major assumption regarding changes in the policy environment. (In fact, federal legislation was introduced 2 years post project, and the extent of enforcement would require examination.)

  • A Pacific regional project (GEF ID 1058) was rated as likely to be sustained, but its report notes that it “does not provide overall ratings for outcomes, risks to sustainability, and M&E” (p. 1).

  • The Renewable Energy Development project in China (GEF ID 446) that closed in 2007 was evaluated in 2009 (not post project, but a delayed final evaluation). The report considered the project sustainable with a continued effort to support off-grid rural electrification, claiming, “the market is now self-sustaining, and thus additional support is not required” (p. 11). The project estimated avoided CO2 emissions and reported 363% of the target as achieved; however, calculations were based on 2006 emissions values for the thermal power sector and data from all wind farms in China, without a bottom-up estimate. This extrapolation of the data lacks verification.

  • Similar sampling issues emerge in a project in Mexico (GEF ID 643): “A significant number of farmers . . . of an estimated 2,312 farmers who previously had had no electricity” (p. 20) saw their productivity and incomes increase as a result of their adoption of productive investments (e.g., photovoltaic-energy water-pumping systems and improved farming practices). A rough preliminary estimate is extrapolated from an evaluation of “three [emphasis added] beneficiary farms, leading to the conclusion that in these cases average on-farm increases in income more than doubled (rising by 139%)” (p. 21).

 

Baseline to terminal evaluation comparisons were rare, with the exception of photovoltaic energy projects in China and Mexico, and none were post project. Two were mid-term evaluations, which could not assess final outcomes, much less sustainability. Ex-post project evaluations far more typically focus on the contributions that projects made, because only in rare cases can attribution be isolated, especially for a project pool, where the focus is often on creating an enabling environment reliant on a range of actors. One such example is the Indian energy efficiency project approved in 1998 (GEF ID 404), in which

the project resulted in a favorable environment for energy-efficiency measures and the sub-projects inspired many other players in similar industries to adopt the demonstrated technologies. Although quantitative data for energy saved by energy efficiency technologies in India is not available, it is evident that due to the change in policy and financial structure brought by this project, there is an increase in investment in energy efficiency technologies in the industries. (GEF IEO, , Vol. 2., p. 95)

 

And while such GEF evaluators are asking for ex-post evaluation, the authors of an earlier volume, Evaluating Climate Change Action for Sustainable Development (Uitto et al., ), encouraged us to be “modest” in our expectations of extensive ex-post evaluations; exploration of ex-post’s confirmatory power seemingly has not occurred since:

The expectations have to be aligned with the size of the investment. The ex-post reconstruction of baselines and the assessment of quantitative results is an intensive and time-consuming process. If rigorous, climate change-related quantitative and qualitative data are not available in final reports or evaluations of the assessed projects, it is illusive to think that an assessment covering a portfolio of several hundred projects is able to fill that gap and to produce aggregated quantitative data, for example on mitigated GHG emissions. When producing data on proxies or qualitative assessments, the expectations must be realistic, not to say modest. (p. 89)

Project Evaluability

Following an analysis of the sustainability estimates in the first pool of projects, we screened project documentation and terminal evaluations for conditions that foster sustainability during planning, implementation, and exit. We also analyzed how well the projects reported on factors that could be measured in a post project evaluation and factors that would predispose projects to sustainability. These sustained impact conditions consisted of the following elements: (a) resources, (b) partnerships and local ownership, (c) capacity building, (d) emerging sustainability, (e) evaluation of risks and resilience, and (f) CO2 emissions (impacts).

 

Although documentation in evaluations did not verify sustainability, many examples exist of data collection that could support post project analyses of sustainability and sustained impacts in the future. Most reports cited examples of resources that had been generated, partnerships that had been fostered for local ownership and sustainability, and capacities that had been built through training. Some terminal evaluations also captured emerging impacts due to local efforts to sustain or extend impacts of the project that had not been anticipated ex-ante.

 

The Decentralized Power Generation project (GEF ID 4749) in Lebanon provides a good example of a framework to collect information on elements of sustainability planning at terminal (see Table 3).

 

Table 3: Sustainability Planning from a Decentralized Power Generation Project in Lebanon (GEF ID 4749)

Resources

Are there financial risks that may jeopardize the sustainability of project outcomes?

What is the likelihood of financial and economic resources not being available once GEF grant assistance ends?

Ownership

What is the risk, for instance, that the level of stakeholder ownership (including ownership by governments and other key stakeholders) will be insufficient to allow for the project outcomes/benefits to be sustained?

Do the various key stakeholders see that it is in their interest that project benefits continue to flow?

Is there sufficient public/stakeholder awareness in support of the project’s long-term objectives?

Partnerships

Do the legal frameworks, policies, and governance structures and processes within which the project operates pose risks that may jeopardize sustainability of project benefits?

Benchmarks, risks, & resilience

Are requisite systems for accountability and transparency, and required technical know-how, in place?

Are there ongoing activities that may pose an environmental threat to the sustainability of project outcomes?

Are there social or political risks that may threaten the sustainability of project outcomes?

Source: GEF ID 4749 Terminal Evaluation, p. 45. Note: Capacity building and emerging sustainability were missing from project 4749.

 

Tangible examples of the above categories at terminal evaluations include the following.

Resources

The most widespread assumption for sustainability was sufficient financial and in-kind resources, often reliant on continued national investments or new private international investments, which could be verified. National resources that could sustain results include terminal evaluation findings such as:

Funding for fuel cell and electric vehicle development by the Chinese Government had increased from Rmb 60 million (for the 1996-2000 period) to more than Rmb 800 million (for the 2001-2005 period). More recently, policymakers have now targeted hydrogen commercialization for the 2010-2020 period. (GEF ID 445, p. 17)

 

Another example is: “About 65 percent of [Indian] small Hydro electromechanical Equipment is sourced locally” (GEF ID 386; GEF IEO, , Vol. 2, p. 76). The terminal evaluation of a global IFC project stated that “Moser Baer is setting up 30 MW solar power plants with the success of the 5 MW project. Many private sector players have also emulated the success of the Moser Baer project by taking advantage of JNNSM scheme” (GEF ID 112, p. 3).

Local Ownership and Partnerships

The Russian Market Transformation for EE Buildings project (GEF ID 3593) showed in its recommendation to governmental stakeholders that their ownership would be essential for sustainability, describing “a suitable governmental institution to take over the ownership over the project web site along with the peer-to-peer network ensuring the sustainability of the tools [to] support the sustainability of the project results after the project completion” (p. xi). An Indian project (GEF ID 386) noted how partnerships could sustain outcomes:

By 2001, 16 small hydro equipment manufacturers, including international joint ventures (compared to 10 inactive firms in 1991) were operational. . . . State government came up with policies with financial incentives and other promotional packages such as help in land acquisition, getting clearances, etc. These profitable demonstrated projects attracted private sector and NGOs to set up similar projects. (GEF IEO, , Vol. 2, p. 74)

Capacity Building

The Renewable Energy for Agriculture project in Mexico (GEF ID 643) established the “percentage of direct beneficiaries surveyed who learned of the equipment through FIRCO’s promotional activities” (86%), “number of replica renewable energy systems installed” (847 documented replicas), and “total number of technicians and extensionists trained in renewable energy technologies” (p. 33). The last came to 3,022, or 121% of the original goal of 2,500, a good measure of how far the project exceeded this objective.

Emerging Sustainability

Recent post project evaluations also address outcomes that emerged after the project and were unrelated to the existing theory of change. These emerging findings are rarely documented in terminal evaluations, but some projects in the first pool included information about unanticipated activities or outcomes at terminal evaluation, and these could be used for future post project fieldwork follow-up. As a consequence of the hydroelectric resource project, for example, the Indian Institute “developed and patented the designs for water mills” (GEF ID 386; GEF IEO, , Vol. 2, p. 73). The terminal evaluation for another project stated that “following the UNDP-GEF project, the MNRE [Ministry of New and Renewable Energy] initiated its own programs on energy recovery from waste. Under these programs, the ministry has assisted 14 projects with subsidies of US$ 2.72 million” (GEF ID 370; GEF IEO, , Vol. 2, p. 62).

Benchmarks, Risks, and Resilience

As the GEF’s 2019 report itself noted, “The GEF could strengthen its approach to assessing sustainability further by explicitly addressing resilience” (GEF IEO, , p. 33). Not doing so is a risk as our climate changes. Two evaluations noted “no information on environmental risks to project sustainability”: the Jamaican pilot on Removal of Barriers to Energy Efficiency and Energy Conservation (GEF ID 64; p. 68) and a Pacific regional project (GEF ID 1058). For likelihood of sustainability, the Jamaican project was rated moderately unlikely and the Pacific Islands project was rated likely, but the latter “does not provide overall ratings for outcomes, risks to sustainability, and M&E” other than asserting that

the follow-up project, which has been approved by the GEF, will ensure that the recommendations entailed in the documents prepared as part of this project are carried out. Thus, financial risks to the benefits coming out of the project are low. (p. 3)

Greenhouse Gas Emissions (Impacts)

In GEF projects, timeframe is an important issue, which makes post project field verification that much more important. As the GEF IEO stated in 2018, “Many environmental results take more than a decade to manifest. Also, many environmental results of GEF projects may be contingent on future actions by other actors” (GEF IEO, , p. 34).

Uncertainty and Likelihood Estimates

Estimating the likelihood of sustainability of greenhouse gas emission reductions at terminal evaluation raises another challenge: the relatively high level of uncertainty concerning the achievement of project impacts related to GHG reduction. GHG reductions are the primary objective stated in the climate change focal area, and they appear as a higher-level impact across projects regardless of the terminology used. For a global project on bus rapid transit and nonmotorized transport, the objective was to “reduce GHG emissions for transportation sector globally” (GEF ID 1917, p. 9). For a national project on building sector energy efficiency, the project goal was “the reduction in the annual growth rate of GHG emissions from the Malaysia buildings sector” (GEF ID 3598; Aldover & Tiong, , p. i). For a land management project in Mexico, the project objective was to “mitigate climate change in the agricultural units selected . . . including the reduction of emissions by deforestation and the increase of carbon sequestration potential” (GEF ID 4149, p. 21). For a national project to phase out ozone-depleting substances, the project objective was to “reduce greenhouse gas emissions associated with industrial RAC (refrigeration and air conditioning) facilities in The Gambia” (GEF ID 5466, p. vii). Clearly, actual outcomes in GHG emissions need to be considered in any assessment of the likelihood of sustainability of outcomes.

 

Unlike projects in the carbon finance market, GEF projects estimate emissions for a project period that usually exceeds the duration of the GEF intervention. In most cases, ex-ante estimated GHG reductions in the post project period are larger than estimated GHG reductions during the project lifetime. In practice, this means that for projects for which the majority of emissions will occur after the terminal evaluation, evaluators are being asked to estimate the likelihood that benefits will not only continue, but will increase due to replication, market transformation, or changes in the technology or enabling environment. Table 4 provides several examples from the GEF 2019 cohort of how GHG reductions may be distributed over the project lifecycle.

 

Table 4: Distribution of Estimated GHG Reductions Ex-Ante for Selected Projects in the CCM Subset of the GEF 2019 Cohort

| GEF ID | Country | Sub-Sector | Ex-ante reductions during project lifetime (tCO2e) | Ex-ante total reductions (tCO2e) | % of reductions achieved by the terminal evaluation |
| --- | --- | --- | --- | --- | --- |
| 2941 | Brazil | EE Buildings | 705,000 | 9,588,000 | 7 |
| 2951 | China | EE Financing | 5,400,000 | 111,500,000 | 5 |
| 3216 | Russia | EE Standards / Labels | 7,820,000 | 123,600,000 | 6 |
| 3555 | India | EE Buildings | 454,000 | 5,970,000 | 8 |
| 3593 | Russia | EE Industry | 0 | 3,800,000 | 0 |
| 3598 | Malaysia | EE Buildings | 2,002,000 | 18,166,000 | 11 |
| 3755 | Vietnam | EE Lighting | 2,302,000 | 5,268,000 | 44 |
| 3771 | Philippines | EE Industry | 560,000 | 560,000 | 100 |

Sources: 2941 Project Document, pp. 35–37; 2951 PAD/CEO Endorsement Request, p. 88; 3216 Project Document, pp. 80–90; 3555 Terminal Evaluation; 3593 Terminal Evaluation, p. 23; 3598 Terminal Evaluation, p. 24; 3755 GEF CEO Endorsement Request; 3771 Terminal Evaluation, pp. 8–9

 

The range in Table 4 shows the substantial variation in uncertainty when estimating the likelihood of long-term project impacts. For projects designed to achieve all of their emission reductions during their operational lifetimes, the achievement of GHG reductions can be verified as a part of the terminal evaluation. However, most projects assume that nearly all estimated GHG reductions will occur in the post project period, so uncertainty levels are much higher and estimates may be more difficult to compile. In other evaluations, evaluators may identify inconsistent GHG estimates (e.g., GEF IDs 4157 and 5157) or recommend that the ex-ante estimates be downsized (e.g., GEF IDs 3922, 4008, and 4160). These trends may also be difficult to capture in likelihood estimates.
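The split between verifiable and assumed reductions can be computed directly from the Table 4 figures; a minimal sketch in Python (the data structure and variable names are ours, the tCO2e values come from the cited project documents):

```python
# Share of ex-ante GHG reductions expected during vs. after the project,
# using the Table 4 figures (tCO2e).
projects = {
    # GEF ID: (during_project_lifetime, total_reductions)
    2941: (705_000, 9_588_000),      # Brazil, EE Buildings
    2951: (5_400_000, 111_500_000),  # China, EE Financing
    3216: (7_820_000, 123_600_000),  # Russia, EE Standards / Labels
    3555: (454_000, 5_970_000),      # India, EE Buildings
    3593: (0, 3_800_000),            # Russia, EE Industry
    3598: (2_002_000, 18_166_000),   # Malaysia, EE Buildings
    3755: (2_302_000, 5_268_000),    # Vietnam, EE Lighting
    3771: (560_000, 560_000),        # Philippines, EE Industry
}

for gef_id, (during, total) in projects.items():
    # Verifiable at terminal evaluation: reductions within the project lifetime.
    achieved_pct = round(100 * during / total)
    # The remainder rests on assumed post project replication, market
    # transformation, or enabling-environment effects.
    post_project_pct = 100 - achieved_pct
    print(f"GEF {gef_id}: {achieved_pct}% verifiable, {post_project_pct}% post project")
```

For six of the eight projects, the arithmetic shows that roughly 90% or more of the estimated reductions fall outside what a terminal evaluation can verify.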

Conclusions and Recommendations

While sustainability has been estimated in nearly all of the projects in the two pools we considered, it has not been measured. Assessing the relationship between projected sustainability and actual post project outcomes was not possible due to insufficient data. Further, findings from the first pool of climate change mitigation projects did not support the conclusion that “outcomes of most of the GEF projects are sustained during the postcompletion period” (GEF IEO, , p. 17). In the absence of sufficient information regarding project sustainability, determining post project GHG emission reductions is not possible, because these are dependent on the continuation of project benefits following project closure.

 

We also conclude that although the 4-point rating scale is a common tool for estimating the likelihood of sustainability, the measure itself has not been evaluated for reliability or validity. The scale is often used to summarize diverse trends amid varying levels of uncertainty. The infrequency of the unlikely rating in terminal evaluations may result from this limitation: evaluators believe that some benefits (greater than 0%) will continue, but the 4-point scale cannot convey an estimate of what percentage of benefits will continue. Furthermore, the use of market studies to assess sustainability is not effective in the absence of attributional analysis linking results to the projects that ostensibly caused change.

 

As a result, the current evaluator’s toolkit still does not provide a robust means of estimating post project sustainability and is not suitable as a basis for postcompletion claims. That said, M&E practices in the CCM projects we studied supported the collection of information documenting conditions (e.g., resources, partnerships, and capacities) in a way that made projects evaluable, or suitable for post project evaluation. We recommend that donors provide financial and administrative support for project data repositories that retain data in-country at terminal evaluation for post project return and country-level learning, and that they include evaluability (control groups, sample sizes, and sites selected by evaluability criteria) in the assessment of project design. We also recommend sampling immediately from the 56 CCM projects in the two sets that have been closed for at least 2 years.

 

Donors’ allocation of sufficient resources for CCM project evaluations would allow verification of actual long-term, post project sustainability using the OECD DAC () definition of “the continuation of benefits from a development intervention after major development assistance has been completed” (p. 12). It would also enable evaluators to consider enumerating project components that are sustained rather than using an either/or designation (sustained/not sustained). Evaluation terms of reference should clarify the methods used for contribution vs. attribution claims, and they should consider decoupling estimates of direct and indirect impacts, which are difficult to measure meaningfully in a single measure. For the GEF portfolio specifically, the development of a postcompletion verification approach could be expanded from the biodiversity focal area to the climate change focal area (GEF IEO, ), and lessons could also be learned from the Adaptation Fund’s () commissioned work on post project evaluations. Bilateral donors such as JICA have developed rating scales for post project evaluations that assess impact in a way that captures both direct and indirect outcomes (JICA, ).

 

Developing country parties to the Paris Agreement have committed to providing “a clear understanding of climate change action” in their countries under Article 13 of the agreement (United Nations, ), and donors have a clear imperative to press for continued improvement in reporting on CCM project impacts and using lessons learned to inform future support.

Footnotes

  1.

    We use the term “postproject” evaluations to distinguish these longer term evaluations from terminal evaluations, which typically occur within 3 months of the end of donor funding. While some donors (JICA, ; USAID, ) use the term “ex-post evaluation” to refer to evaluations distinct from the terminal/final evaluation and occurring 1 year or more after project closure, other donors use the terms “terminal evaluation” and “ex-post evaluation” synonymously. Other terms include postcompletion, post-closure, and long-term impact.

  2.

    In a 2013 meta-evaluation, Hageboeck et al. found that only 8% of projects in the 2009–2012 USAID PPL/LER evaluation portfolio (26 of 315) were evaluated post-project following the termination of USAID funding.

  3.

    Page numbers provided with GEF ID numbers only refer to project terminal evaluations; see Appendix.

References

  1. Adaptation Fund. (2019). Report of the Adaptation Fund Board, note by the chair of the Adaptation Fund Board – Addendum. AFB/B.34–35/3. Draft – 8 November 2019. https://www.adaptation-fund.org/document/report-of-the-adaptation-fund-board-note-by-the-chair-of-the-adaptation-fund-board-addendum/
  2. Aldover, R. Z., & Tiong, T. C. (2017). UNDP/GEF project PIMS 3598: Building sector energy efficiency project (BSEEP): Terminal evaluation report. Global Environment Facility and United Nations Development Programme. https://erc.undp.org/evaluation/evaluations/detail/8919
  3. Asian Development Bank. (2010). Post-completion sustainability of Asian Development Bank-assisted projects. https://www.adb.org/documents/post-completion-sustainability-asian-development-bank-assisted-projects
  4. Cekan, J. (2015, March 13). When funders move on. Stanford Social Innovation Review. https://ssir.org/articles/entry/when_funders_move_on#
  5. Cekan, J., Zivetz, L., & Rogers, P. (2016). Sustained and emerging impacts evaluation. Better Evaluation. https://www.betterevaluation.org/en/themes/SEIE
  6. Duval, R. (2008). A taxonomy of instruments to reduce greenhouse gas emissions and their interactions. Organisation for Economic Co-operation and Development. https://doi.org/10.1787/236846121450
  7. Global Environment Facility. (2017). Guidelines for GEF agencies in conducting terminal evaluation for full-sized projects. https://www.gefieo.org/evaluations/guidelines-gef-agencies-conducting-terminal-evaluation-full-sized-projects
  8. Global Environment Facility Evaluation Office. (2008). Evaluation of the catalytic role of the GEF. https://www.gefieo.org/sites/default/files/ieo/ieo-documents/gef-catalytic-role-qualitative-analysis-project-documents.pdf
  9. Global Environment Facility Independent Evaluation Office. (2010). GEF monitoring and evaluation policy. https://www.gefieo.org/sites/default/files/ieo/evaluations/gef-me-policy-2010-eng.pdf
  10. Global Environment Facility Independent Evaluation Office. (2012). Approach paper: Impact evaluation of the GEF support to CCM: Transforming markets in major emerging economies. https://www.gefieo.org/sites/default/files/ieo/ieo-documents/ie-ccm-markets-emerging-economies.pdf
  11. Global Environment Facility Independent Evaluation Office. (2013). Country portfolio evaluation (CPE) India. http://www.gefieo.org/evaluations/country-portfolio-evaluation-cpe-india
  12. Global Environment Facility Independent Evaluation Office. (2017). Climate change focal area study. https://www.thegef.org/council-meeting-documents/climate-change-focal-area-study
  13. Global Environment Facility Independent Evaluation Office. (2018). Sixth overall performance study of the GEF: The GEF in the changing environmental finance landscape. https://www.thegef.org/sites/default/files/council-meeting-documents/GEF.A6.07_OPS6_0.pdf
  14. Global Environment Facility Independent Evaluation Office. (2019a). Annual performance report 2017. https://www.gefieo.org/evaluations/annual-performance-report-apr-2017
  15. Global Environment Facility Independent Evaluation Office. (2019b). A methodological approach for post-project completion. https://www.gefieo.org/council-documents/methodological-approach-post-completion-verification
  16. Global Environment Facility Independent Evaluation Office. (2020). Annual performance report 2019. https://www.gefieo.org/evaluations/annual-performance-report-apr-2019
  17. Hageboeck, M., Frumkin, M., & Monschein S. (2013). Meta-evaluation of quality and coverage of USAID evaluations. USAID. https://www.usaid.gov/evaluation/meta-evaluation-quality-and-coverage
  18. Japan International Cooperation Agency. (2004). Issues in ex-ante and ex-post evaluation. In JICA Guideline for Project Evaluation: Practical Methods for Project Evaluation (pp. 115–197). https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/guides/pdf/guideline01-01.pdf
  19. Japan International Cooperation Agency. (2017). Ex-post evaluation results. In JICA annual evaluation report 2017 (Part II, pp. 1–34). https://www.jica.go.jp/english/our_work/evaluation/reports/2017/c8h0vm0000d2h2gq-att/part2_2017_a4.pdf
  20. Japan International Cooperation Agency. (2020a). Ex-post evaluation (technical cooperation). https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/ex_post/index.html
  21. Japan International Cooperation Agency. (2020b). Ex-post evaluation (ODA loan). https://www.jica.go.jp/english/our_work/evaluation/oda_loan/post/index.html
  22. Legro, S. (2010, June 9–10). Evaluating energy savings and estimated greenhouse gas emissions in six projects in the CIS: A comparison between initial estimates and assessed performance [paper presentation]. International Energy Program Evaluation Conference, Paris, France. https://energy-evaluation.org/wp-content/uploads/2019/06/2010-paris-027-susan-legro.pdf
  23. Mayne, J. (2001). Assessing attribution through contribution analysis: Using performance measures sensibly. The Canadian Journal of Program Evaluation, 16(1), 1–24.
  24. OECD/DAC Network on Development Evaluation. (2019). Better criteria for better evaluation: Revised evaluation criteria definitions and principles for use. Organisation for Economic Co-operation and Development. http://www.oecd.org/dac/evaluation/revised-evaluation-criteria-dec-2019.pdf
  25. Organisation for Economic Co-operation and Development. (2015). OECD and post-2015 reflections. Element 4, Paper 1: Environmental sustainability. https://www.oecd.org/dac/environment-development/FINAL%20POST-2015%20global%20and%20local%20environmental%20sustainability.pdf
  26. Organisation for Economic Co-operation and Development, Development Assistance Committee. (1991). DAC criteria for evaluating development assistance. https://www.oecd.org/dac/evaluation/2755284.pdf
  27. Rogers, B. L., & Coates, J. (2015). Sustaining development: A synthesis of results from a four-country study of sustainability and exit strategies among development food assistance projects. FANTA III, Tufts University, & USAID. https://www.fantaproject.org/research/exit-strategies-ffp
  28. Sridharan, S., & Nakaima, A. (2019). Till time (and poor planning) do us part: Programs as dynamic systems—Incorporating planning of sustainability into theories of change. The Canadian Journal of Program Evaluation. https://evaluationcanada.ca/system/files/cjpe-entries/33-3-pre005.pdf
  29. Uitto, J., Puri, J., & van den Berg, R. (2017). Evaluating climate change action for sustainable development. Global Environment Facility Independent Evaluation Office. https://www.gefieo.org/sites/default/files/ieo/documents/files/cc-action-for-sustainable-development_0.pdf
  30. United Nations. (2015, December 12). Paris agreement. https://unfccc.int/sites/default/files/english_paris_agreement.pdf
  31. United States Agency for International Development. (2018). Project evaluation overview. https://www.usaid.gov/project-starter/program-cycle/project-design/project-evaluation-overview
  32. United States Agency for International Development. (2019). USAID’s impact: Ex-post evaluation series. https://www.globalwaters.org/resources/ExPostEvaluations
  33. Valuing Voices. (2020). Catalysts for ex-post learning. https://valuingvoices.com/catalysts-2/
  34. Zivetz, L., Cekan, J., & Robbins, K. (2017a). Building the evidence base for post project evaluation: A report to the faster forward fund. Valuing Voices. https://valuingvoices.com/wp-content/uploads/2013/11/The-case-for-post-project-evaluation-Valuing-Voices-Final-2017.pdf
  35. Zivetz, L., Cekan, J., & Robbins, K. (2017b). Checklists for sustainability. Valuing Voices. https://valuingvoices.com/wp-content/uploads/2017/08/Valuing-Voices-Checklists.pdf

Copyright information

 

© The Author(s) 2022

Open Access. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Cite: Cekan J., Legro S. (2022) Can We Assume Sustained Impact? Verifying the Sustainability of Climate Change Mitigation Results. In: Uitto J.I., Batra G. (eds) Transformational Change for People and the Planet. Sustainable Development Goals Series. Springer, Cham. https://doi.org/10.1007/978-3-030-78853-7_8