Grow the 0.002% of all global development projects that are evaluated ex-post closure for sustainability

It seems like ‘fake news’ that, after decades of global development, so few evaluations have peered back in time to see what was sustained. While I was consulting to the Policy Planning and Learning Bureau at USAID, I asked Cindy Clapp-Wincek, the head of its M&E department, who does ex-post sustainability evaluation, since I knew USAID had done some in the 1980s. She answered, ‘No one; there are no incentives to do it.’ (She later became our advisor.)

Disbelieving, I spent a year on secondary keyword research before devoting my professional consulting life to advocating for and doing ex-post evaluations of sustained outcomes and impacts. I searched the databases of USAID, the OECD, and other bilateral and, later, multilateral donors and found thousands of studies, most of them inaccurately labeled ‘ex-post’ or ‘post-closure’ studies. Of the roughly 1,000 projects at USAID and the OECD that came up under ‘ex-post’, ‘ex post’, or ‘post closure’, some were final evaluations that were slightly delayed; a few were evaluations conducted at least one year after closure, but they were desk studies without interviews. Surprisingly, the vast majority of the final evaluations found were those that only recommended an ex-post evaluation several years later to confirm projected sustainability.

In 2016, at the American Evaluation Association conference, a group of us gave a presentation. In it, I cited these statistics from the first year of Valuing Voices’ research:

  • Of 900+ “ex-post”, “ex post”, and “post closure” documents in USAID’s DEC database, only 12 were actual post-project evaluations with fieldwork done in the previous 20 years
  • Of 12,000 World Bank projects, only 33 post-project evaluations asked ‘stakeholders’ for input, and only 3 showed clearly that they talked to participants
  • In 2010 the Asian Development Bank conducted 491 desk reviews of completed projects and returned to the field for 18 post-project evaluations that included participant voices; it has done only this one study.
  • We found no evaluations by recipient governments of aid projects’ sustainability

Twelve years of research, advocacy, and fieldwork later, the ‘Catalysts’ database on Valuing Voices now highlights 92 ex-post evaluations by 40 organizations that actually returned to the field to ask participants and project partners what was sustained.

How many ex-post project closure evaluations have been done? 0.002% of all projects. The 0.002% statistic covers only public foreign development aid since 1960 (not counting private funding such as foundations or gifts to organizations, which is not tracked in any publicly available database). Aggregating OECD aid statistics (excluding private flows, for which only recent data exist) over 62 years yields $5.6 trillion by 2022 (thanks to Rebecca Regan-Sachs for the updated numbers).

I then estimated roughly 3,000 actual ex-posts: 2,500 JICA projects plus almost 500 other projects that I either found by searching databases across the spectrum of governments and multilaterals (almost 100 in our Catalysts database) or assume were done in the 1980s–2000s by donors such as USAID and the World Bank (roughly 400 more).

Without a huge research team it is impossible to aggregate data on the total number of projects by all donors, so I extrapolated from one year (2022) of project activity disbursements for Mali on the www.foreignassistance.gov page. In my 35 years of experience, Mali, where I did my doctoral research, typifies the average USAID aid recipient. It had 382 projects running in 2022. I rounded up to 400 projects × 70 years (since 1960, when OECD data began) × 100 countries for just one donor (of the roughly 150 possible recipient countries, to be conservative). This comes to 2.8 million projects. Multiplying by 39 OECD donor countries (given most have far less to give than the US), roughly 109 million publicly funded aid projects have disbursed $5.6 trillion since 1960. While final evaluations are an industry standard, an estimated 0.002% of those 109 million projects were evaluated ex-post with data from local participants and partners.
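The back-of-envelope arithmetic above can be reproduced in a few lines; every input is the author’s rough assumption from the text, not an audited statistic:

```python
# Reproducing the rough estimate above; all inputs are stated assumptions.
projects_per_country_per_year = 400   # rounded up from Mali's 382 in 2022
years = 70                            # since 1960, when OECD data began
recipient_countries = 100             # conservative, of ~150 possible recipients
donors = 39                           # OECD donor countries

total_projects = projects_per_country_per_year * years * recipient_countries * donors
ex_posts = 3_000                      # ~2,500 JICA + ~500 other ex-post evaluations

share = ex_posts / total_projects
print(f"{total_projects:,} projects, {share:.5%} evaluated ex-post")
# prints: 109,200,000 projects, 0.00275% evaluated ex-post
```

The result is on the order of 0.002–0.003%, the magnitude cited in the text.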

This became Valuing Voices’ focus. We created an open-access database for learning and conducted our own ex-posts. My team and I identified 92 ex-posts that returned to ask locals what lasted, what didn’t, why, and what emerged from their own efforts. We also created evaluability checklists and a new evaluation type, the Sustained and Emerging Impacts Evaluation, which examines not only what donors put in place to last but also what outcomes emerged from local efforts to sustain results with more limited resources, partnerships, capacities, and local ownership/motivation (the four drivers identified by Rogers and Coates in USAID’s 2015 food security exit study). We have done 15 ex-posts for 9 clients since 2006 and shared Adaptation Fund ex-post training materials in 2023.

 

Yet the public assumes we know our development is sustainable. 2015’s ‘Sustainable Development Goals‘ focused aid on 17 themes and were expected to generate $12 trillion more in annual spending on SDG sectors beyond the $21 trillion already being invested each year. Nonetheless, a recent UN report states that there is now a $4 trillion annual financing gap to achieve the SDGs. All this funding goes to projects currently being implemented, not to evaluating what has been sustained from past projects that already closed. Such learning, about what succeeded or failed and what emerged from local efforts to keep activities and results going, is pivotal to improving current and future programming, yet it is almost wholly missing from the dialogue; I know, because I asked multiple SDG evaluation experts.

 

Why do we return to learn so rarely? There are many reasons, the most prosaic among them being administrative.

  • When aid funds are spent over 2–10 years, projects are closed, evaluated at the end, and ‘handed over’ to national governments, and no additional funding exists to return ‘ex-post’ closure to learn.
  • Next is the push to continue improving lives through implementation, which means low rates of overhead allocated to M&E and learning during projects, much less after closure.
  • Another is the assumption that ‘old’ projects differ so much from new ones that little transfers, but there are few differences. After all, there are only so many ways to grow food, feed the malnourished, or educate children; evaluating ‘old’ projects can teach ‘new’ ones.
  • A last major one, which Valuing Voices’ 12 years of research suggests may be the largest: fear of admitting failure. Valuing Voices’ 2016 blog highlighted many Lessons about Funding, Assumptions and Fears (Part 3). One US aid lobbyist told me in 2017 that I must not share this lack of learning about sustained impacts because it could imperil US aid funding; I told her I had to tell people because lives were at stake.
  • Overall, there is much to learn; most ex-post evaluations show mixed results. None show 100% sustainability, and while most show 30–60% sustainability, none are 0% sustained either. If we don’t learn now to replicate what worked and cease what didn’t, future programming will be just as flawed, and successes, especially brilliant, locally designed outcomes that emerged ex-post, such as Niger’s local funding of redesigned health incentives, will remain hidden.

 

Occasionally donors invest in sets of ex-post learning evaluations, such as USAID’s seven ‘Global Waters’ water/sanitation evaluations, linked to the E3 Bureau’s adoption of sustainability as a strategic goal. Yet the overall findings of USAID’s own staff in the Drivers of WASH study of these ex-posts were chilling. While 25 million people gained access to drinking water and 18 million to basic sanitation, ‘they have largely not endured.’ But the good news in such research is that the donor learned that infrastructure fails when spare parts are not accessible and maintenance is not funded or performed, which can be planned for and addressed during implementation by investing in resources and partnerships. They also learned that relying on volunteers is unreliable and that management needs to be bolstered, which can lead to some implementation funding being focused on capacities and local ownership. We can plan better for sustainability by learning from ex-post and exit studies (see Valuing Voices’ checklists in this 2023 article on Fostering Values-Driven Sustainability).

 

And since 2019, three climate funds, the Adaptation Fund, the Global Environment Facility, and the Climate Investment Funds, have turned to ex-post evaluations to look at sustainability, longer-term resilience, and even transformation, given that environmental shocks may take years to affect project sites. The Adaptation Fund has done four ex-posts, with more to come in 2024/25, and the CIF is beginning now. The GEF has done a Post-Completion Assessment Pilot for the Yellow Sea Region. Hopeful!

Can We Assume Sustained Impact? Verifying the Sustainability of Climate Change Mitigation Results (reposting a book chapter)

So excited to share our chapter verifying the ‘sustainability’ of projects funded by the Global Environment Facility Trust Fund (GEF) by examining two tranches of evaluations. My co-author Susan Legro did a brilliant job pointing out flaws in estimated greenhouse gas (GHG) emission reductions. Given that climate change is in full swing, we must be able to trust the data we have.

It appeared in Transformational Change for People and the Planet: Evaluating Environment and Development, edited by Juha I. Uitto and Geeta Batra. Enjoy!

Abstract

The purpose of this research was to explore how public donors and lenders evaluate the sustainability of environmental and other sectoral development interventions. Specifically, the aim is to examine if, how, and how well post project sustainability is evaluated in donor-funded climate change mitigation (CCM) projects, including the evaluability of these projects. We assessed the robustness of current evaluation practice of results after project exit, particularly the sustainability of outcomes and long-term impact. We explored methods that could reduce uncertainty of achieving results by using data from two pools of CCM projects funded by the Global Environment Facility (GEF).

Evaluating sustainable development involves looking at the durability and continuation of net benefits from the outcomes and impacts of global development project activities and investments in various sectors in the post project phase, i.e., from 2 to 20 years after donor funding ends.1 Evaluating the sustainability of the environment is, according to the Organisation for Economic Co-operation and Development (OECD, ), at once a focus on natural systems of “biodiversity, climate change, desertification and environment” (p.1) that will need to consider the context in which these are affected by human systems of “linkages between poverty reduction, natural resource management, and development” (p. 3). This chapter focuses more narrowly on the continuation of net benefits from the outcomes and impacts of a pool of climate change mitigation (CCM) projects (see Table 1). The sustainability of CCM projects funded by the Global Environment Facility (GEF), as in a number of other bilateral and multilateral climate funds, rests on a theory of change that a combination of technical assistance and investments contribute to successfully durable market transformation, thus reducing or offsetting greenhouse gas (GHG) emissions.

 

Table 1: Changes in OECD DAC Criteria from 1991 to 2019

SUSTAINABILITY

1991: Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.

2019 (“Will the benefits last?”): The extent to which the net benefits of the intervention continue, or are likely to continue. Note: Includes an examination of the financial, economic, social, environmental, and institutional capacities of the systems needed to sustain net benefits over time. Involves analyses of resilience, risks, and potential trade-offs.

IMPACT

1991: The positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental, and other development indicators.

2019: The extent to which the intervention has generated or is expected to generate significant positive or negative, intended or unintended, higher-level effects. . . . It seeks to identify social, environmental, and economic effects of the intervention that are longer-term or broader in scope.

Source: OECD/DAC Network on Development Evaluation, (); italics are emphasis added by Cekan

 

CCM projects lend themselves to such analysis, as most establish ex-ante quantitative mitigation estimates and their terminal evaluations often contain a narrative description and ranking of estimated sustainability beyond the project’s operational lifetime, including the achievement of project objectives. The need for effective means of measuring sustainability in mitigation projects is receiving increasing attention (GEF Independent Evaluation Office [IEO], ) and is increasingly important, as Article 13 of the Paris Agreement mandates that countries with donor-funded CCM projects report on their actions to address climate change (United Nations, ). As several terminal evaluations in our dataset stated, better data are urgently needed to track continued sustainability of past investments and progress against emissions goals to limit global warming.

Measuring Impact and Sustainability

Although impactful projects promoting sustainable development are widely touted as being the aim and achievement of global development projects, these achievements are rarely measured beyond the end of the project activities. Bilateral and multilateral donors, with the exception of the Japan International Cooperation Agency (JICA) and the U.S. Agency for International Development (USAID),2 have reexamined fewer than 1% of projects following a terminal evaluation, although examples exist of post project evaluations taking place as long as 15 years (USAID) and 20 years (Deutsche Gesellschaft fur Internationale Zusammenarbeit [GIZ]) later (Cekan, ). Without such fieldwork, sustainability estimates can only rely on assumptions, and positive results may in fact not be sustained as little as 2 years after closure. An illustrative set of eight post project global development evaluations analyzed for the Faster Forward Fund of Michael Scriven in 2017 showed a range of results: One project partially exceeded terminal evaluation results, two retained the sustainability assumed at inception, and the other five showed a decrease in results of 20%–100% as early as 2 years post-exit (Zivetz et al., ).

 

Since the year 2000, the U.S. government and the European Union have spent more than $1.6 trillion on global development projects, but fewer than several hundred post project evaluations have been completed, so the extent to which outcomes and impacts are sustained is not known (Cekan, ). A review of most bilateral donors shows zero to two post project evaluations (Valuing Voices, ). A rare, four-country, post project study of 12 USAID food security projects also found a wide variability in expected trajectories, with most projects failing to sustain expected results beyond as little as 1 year (Rogers & Coates, ). The study’s Tufts University team leaders noted that “evidence of project success at the time of exit (as assessed by impact indicators) did not necessarily imply sustained benefit over time.” (Rogers & Coates, , p. v.). Similarly, an Asian Development Bank (ADB) study of post project sustainability found that “some early evidence suggests that as many as 40% of all new activities are not sustained beyond the first few years after disbursement of external funding,” and that review examined fewer than 14 of 491 projects in the field (ADB, ). The same study described how assumed positive trajectories post funding fail to sustain and noted a

tendency of project holders to overestimate the ability or commitment of implementing partners—and particularly government partners—to sustain project activities after funding ends. Post project evaluations can shed light on what contributes to institutional commitment, capacity, and continuity in this regard. (ADB, , p. 1)

 

Learning from post project findings can be important to improve project design and secure new funding. USAID recently conducted six post project evaluations of water/sanitation projects and learned about needed design changes from the findings, and JICA analysed the uptake of recommendations 7 years after closure (USAID, ; JICA, ). As USAID stated in their guidance,

An end-of-project evaluation could address questions about how effective a sustainability plan seems to be, and early evidence concerning the likely continuation of project services and benefits after project funding ends. Only a post project evaluation, however, can provide empirical data about whether a project’s services and benefits were sustained. (para. 9)

 

Rogers and Coates () expanded the preconditions for sustainability beyond only funding, to include capacities, partnerships, and ownership. Cekan et al. () expanded ex-post project methods from examining the sustainability of expected project outcomes and impacts post closure to also evaluating emerging outcomes, namely “what communities themselves valued enough to sustain with their own resources or created anew from what [our projects] catalysed” (para. 19). In the area of climate change mitigation, rigorous evaluation of operational sustainability in the years following project closure should inform learning for future design and target donor assistance on projects that are most likely to continue to generate significant emission reductions.

How Are Sustainability and Impact Defined?

The original 1991 OECD Development Assistance Committee (DAC) criteria for evaluating global development projects included sustainability, and the criteria were revised in 2019. The revisions related to the definition of sustainability and emphasize the continuation of benefits rather than just activities, and they include a wider systemic context beyond the financial and environmental resources needed to sustain those benefits, such as resilience, risk, and trade-offs, presumably for those sustaining the benefits. Similarly, the criteria for impact have shifted from simply positive/negative, intended/unintended changes to effects over the longer term (see Table 1).

 

In much of global development, including in GEF-funded projects, impact and sustainability are usually estimated only at project termination, “to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and [projected] sustainability” (OECD DAC, , p. 5). In contrast, actual sustainability can only be evaluated 2–20 years after all project resources are withdrawn, through desk studies, fieldwork, or both. The new OECD definitions present an opportunity to improve the measurement of sustained impact across global development, particularly via post project evaluations. Evaluations need to reach beyond projected to actual measurement across much of “sustainable development” programming, including that of the GEF.

 

GEF evaluations in recent years have been guided by the organization’s 2010 monitoring and evaluation (M&E) policy, which requires that terminal evaluations “assess the likelihood of sustainability of outcomes at project termination and provide a rating” (GEF IEO, , p. 31). Sustainability is defined as “the likely ability of an intervention to continue to deliver benefits for an extended period of time after completion; projects need to be environmentally as well as financially and socially sustainable” (GEF IEO, , p. 27).

 

In 2017, the GEF provided specific guidance to implementing agencies on how to capture sustainability in terminal evaluations of GEF-funded projects (GEF, , para. 8 and Annex 2): “The overall sustainability of project outcomes will be rated on a four-point scale (Likely to Unlikely)”:

  • Likely (L) = There are little or no risks to sustainability;

  • Moderately Likely (ML) = There are moderate risks to sustainability;

  • Moderately Unlikely (MU) = There are significant risks to sustainability;

  • Unlikely (U) = There are severe risks to sustainability; and

  • Unable to Assess (UA) = Unable to assess the expected incidence and magnitude of risks to sustainability

 

Although this scale is a relatively common measure for estimating sustainability among donor agencies, it is not a measure that has been tested for reliability, i.e., whether multiple raters would provide the same estimate from the same data. It has also not been tested for construct validity, i.e., whether the scale is an effective predictive measure of post project sustainability. Validity issues include whether an estimate of risks to sustainability is a valid measure of the likelihood of post project sustainability, whether the narrative estimates of risk are ambiguous or double-barreled; and the efficacy of using a ranked, ordinal scale that treats sustainability as an either/or condition rather than a range (from no sustainability to 100% sustainability).
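To make the reliability point concrete, one way to test the scale would be to have multiple raters score the same terminal-evaluation narratives and compute an inter-rater agreement statistic such as Cohen’s kappa. A minimal sketch, using entirely hypothetical ratings on the GEF scale (not real GEF data):

```python
# Hypothetical inter-rater reliability check for the L/ML/MU/U sustainability
# scale: Cohen's kappa corrects raw agreement for agreement expected by chance.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # observed agreement: share of items rated identically
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: product of each rater's marginal rating frequencies
    ca, cb = Counter(rater_a), Counter(rater_b)
    pe = sum((ca[c] / n) * (cb[c] / n) for c in set(ca) | set(cb))
    return (po - pe) / (1 - pe)

# Two hypothetical raters scoring the same 8 terminal evaluations
a = ["L", "ML", "MU", "L", "U", "ML", "L", "MU"]
b = ["L", "MU", "MU", "ML", "U", "ML", "L", "U"]
print(cohens_kappa(a, b))  # 0.5 here: only moderate agreement
```

Until such a test is run on real ratings, the scale’s reliability remains an open question.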

 

Throughout this chapter, we identify projects by their GEF identification numbers, with a complete table of projects provided in the appendix.

The Limits of Terminal Evaluations

Terminal evaluations and even impact evaluations that mostly compare effectiveness rather than long-term impact were referenced as sources for evaluating sustainability in the GEF’s 2017 Annual Report on Sustainability (GEF IEO, ). Although they can provide useful information on relevance, efficiency, and effectiveness, neither is a substitute for post project evaluation of the sustainability of outcomes and impacts, because projected sustainability may or may not occur. In a terminal evaluation of Mexican Sustainable Forest Management and Capacity Building (GEF ID 4149), evaluators made the case for ex-post project monitoring and evaluation of results:

There is no follow-up that can measure the consolidation and long-term sustainability of these activities. . . . Without a proper evaluation system in place, nor registration, it is difficult to affirm that the rural development plans will be self-sustaining after the project ends, nor to what extent the communities are readily able to anticipate and adapt to change through clear decision-making processes, collaboration, and management of resources. . . . They must also demonstrate their sustainability as an essential point in development with social and economic welfare from natural resources, without compromising their future existence, stability, and functionality. (pp. 5–9)3

 

Returning to a project area after closure also fosters learning about the quality of funding, design, implementation, monitoring, and evaluation and the ability of those tasked with sustaining results to do so. Learning can include how well conditions for sustainability were built in, tracked, and supported by major stakeholders. Assumptions made at design and final evaluation can then also be tested, along with theories of change (Sridharan & Nakaima, ). Finally, post project evaluations can verify the attributional claims made at the time of the terminal evaluation. As John Mayne explained in his paper:

In trying to measure the performance of a program, we face two problems. We can often—although frequently not without some difficulty—measure whether or not these outcomes are actually occurring. The more difficult question is usually determining just what contribution the specific program in question made to the outcome. How much of the success (or failure) can we attribute to the program? What has been the contribution made by the program? What influence has it had? (p. 3)

 

In donor- and lender-funded CCM projects, emission reduction estimates represent an obvious impact measure. They are generally based on a combination of direct effects—i.e., reductions due to project-related investments in infrastructure—and indirect effects—i.e., reductions due to the replication of “market transformation” investments from other funding or an increase in climate-friendly practices due to improvements in the policy and regulatory framework (Duval, ; Legro, ). Both of these effects are generally estimated over the lifetime of the mitigation technology involved, which is nearly always much longer than the operational duration of a given project (see Table 2).

 

Table 2: Typology of GHG Reductions Resulting from Typical Project Interventions

DIRECT REDUCTIONS

During the project lifetime (quarterly/annual monitoring, through the terminal evaluation): reductions directly financed by donor-funded pilot project(s) or investment(s).

Post project lifetime (post project evaluation): continuing reductions from project-financed investments (through the end of the technology lifetime; e.g., 20 years for buildings, 10 years for industrial equipment, etc.).

INDIRECT REDUCTIONS

During the project lifetime: reductions from policy uptake (e.g., reduced fossil fuel use from curtailment of subsidies, spillover effects from tax incentives, increased government support for renewable energy due to strategy development) (co-)funded by the donor; and reductions from market transformation (changes in availability of financing, increased willingness of lenders, reduction in perceived risk) supported by pilot demonstrations and/or outreach and awareness raising (co-)funded by the donor.

Post project lifetime: continuing reductions from policy uptake (e.g., reduced fossil fuel use from curtailment of subsidies, spillover effects from tax incentives, increased government support for energy efficiency or renewable energy due to strategy development); continuing reductions from market transformation (changes in availability of financing, increased willingness of lenders, reduction in perceived risk) as a legacy of the pilot demonstrations and/or outreach and awareness raising funded by the donor-funded project; and new reductions from the continuation of the investment or financing mechanism established by the donor-funded project.
 

The increasing use of financial mechanisms such as concessional loans and guarantees as a component of donor-funded CCM projects, such as those funded by the Green Climate Fund (https://www.greenclimate.fund/), can also limit the ability of final evaluations to capture sustainability, because the bulk of subsequent investments in technologies that are assumed with revolving funds will not take place during the project lifetime. A 2012 paper by then-head of the GEF Independent Evaluation Office, Rob van den Berg, supported the need for post project evaluation and importantly included:

Barriers targeted by GEF projects, and the results achieved by GEF projects in addressing market transformation barriers . . . facilitate in understanding better whether the ex-post changes being observed in the market could be linked to GEF projects and pathways through which outcomes and intermediate states . . . [and] the extent GEF-supported CCM activities are reducing GHGs in the atmosphere . . . because it helps in ascertaining whether the incremental GHG reduction and/or avoidance is commensurate with the agreed incremental costs supported by GEF. . . . It is imperative that the ex-ante and ex-post estimates of GHG reduction and avoidance benefits are realistic and have a scientific basis. (GEF IEO, , p. 13)

 

This description of GHG-related impacts illustrates the difficulties associated with accurately drawing conclusions about sustainability from using a single scale to estimate “the likely ability [emphasis added] of an intervention to continue to deliver benefits for an extended period of time” (GEF IEO, , p. 35) due to several factors. First, the GEF’s 4-point scale is supposed to capture two different aspects of continuation: ongoing benefits from a project-related investment, and new benefits from the continuation of financing mechanisms. Without returning to evaluate the continued net benefits of the now-closed investment, such assumptions cannot be fully claimed. Second, the scale is supposed to capture benefits that can be estimated in a quantitative way (e.g., solar panels that offset the use of a certain amount of electricity from diesel generators); benefits that can be evaluated through policy or program evaluation (e.g., the introduction of a law on energy efficiency); and benefits that will require careful, qualitative study to determine impacts (e.g., training programs for energy auditors or awareness-raising for energy consumers, leading to knowledge and decision changes). Aggregating and weighing such an array of methods into one ranking is methodologically on shaky ground, especially without post project measurements to confirm whether results happened at any time after project closure.

Methodology

The impetus for this research was a sustainability analysis conducted by the GEF IEO that was summarized in the 2017 GEF Annual Performance Report (GEF IEO, ). The study stated: “The analysis found that outcomes of most of the GEF projects are sustained during the postcompletion period, and a higher percentage of projects achieve environmental stress reduction and broader adoption than at completion” (p. 17). Learning more about postcompletion outcomes and assessing how post project sustainability was evaluated was the aim of this work.

 

This chapter’s research sample consists of two sets of GEF project evaluations. We chose projects funded by the GEF because of the large size of the total project pool. For example, the Green Climate Fund lacks a large pool of mitigation projects that would be suitable for post project evaluation. Our first tranche was selected from the pool of CCM projects cited in the sustainability analysis, which included a range of projects with the earliest start date of 1994 and the latest closing date of 2013 (GEF IEO, ). These constituted $195.5 million in investments. The pool of projects in the climate change focal area (n = 17), comprising one third of the GEF IEO sample, was then selected from the 53 projects listed in the report for further study. We then classified the selected projects by which ones had any mention of field-based post project verification according to an evaluability checklist (Zivetz et al., ). This list highlights methodological considerations including: (a) data showing overall quality of the project at completion, including M&E documentation needed on original and post project data collection; (b) time postcompletion (at least 2 years); (c) site selection criteria; and (d) proof that project results were isolated from concurrent programming to ascertain contribution to sustained impacts (Zivetz et al., ).

 

Next, we reviewed GEF documentation to identify any actual quantitative or qualitative measures of post project outcomes and impacts. These could include: (a) changes in actual energy efficiency improvements against final evaluation measures used, (b) sustained knowledge or dissemination of knowledge change fostered through trainings, (c) evidence of ownership, or (d) continued or increased dissemination of new technologies. Such verification of assumptions in the final documents typically explores why the assumptions were or were not met, and what effects changes in these assumptions would have on impacts, such as CO2 emissions projections.

 

The second tranche consisted of projects in the climate change focal area that were included in the 2019 cohort of projects for which the GEF received terminal evaluations. As the GEF 2019 Annual Performance Report explained:

Terminal evaluations for 193 projects, accounting for $616.6 million in GEF grants, were received and validated during 2018–2019 and these projects constitute the 2019 cohort. Projects approved in GEF-5 (33 percent), GEF-4 (40 percent) and GEF-3 (20 percent) account for a substantial share of the 2019 cohort. Although 10 GEF Agencies are represented in the 2019 cohort, most of these projects have been implemented by UNDP [United Nations Development Programme] (56 percent), with World Bank (15 percent) and UNEP [United Nations Environment Programme] (12 percent) also accounting for a significant share. (GEF IEO, , p. 9)

 

We added the second tranche of projects to represent a more current view of project performance and evaluation practice.

The climate change focal area subset consisted of 38 completed GEF projects, which account for approximately $155.7 million in GEF grants (approximately 20% of the total cohort and 25% of the overall cohort budget). Projects included those approved in 1995–1998 (GEF-1; n = 1) and 2003–2006 (GEF-3; n = 2), but 68% were funded in 2006–2010 (GEF-4; n = 26), and 24% in 2010–2014 (GEF-5; n = 9), making them more recent as a group than the 2019 cohort as a whole. Six GEF agencies were represented: Inter-American Development Bank (IDB), International Fund for Agricultural Development (IFAD), UNDP, UNEP, United Nations Industrial Development Organization (UNIDO), and the World Bank.

 

We eliminated three projects listed in the climate focal area subset from consideration in the second tranche because they had not been completed, leaving a pool of 35 projects. Ex-ante project documentation, such as CEO endorsement requests, and terminal evaluation reports were then reviewed for initial estimates of certain project indicators, such as GHG emission reductions, and ratings of estimated sustainability on the 4-point scale, including the narrative documentation that accompanied the ratings.

Findings

The question of whether post project sustainability was being measured was based on the first tranche of projects and on the sustainability analysis in which they were included. Most of the documents cited in the sustainability analysis were either terminal or impact evaluations focused on efficiency (GEF IEO, ), and most of the documents and the report’s analysis focused on estimated sustainability. Of the 53 “postcompletion verification reports,” as they are referred to in the review (GEF IEO, , p. 62), we found only 4% to contain adequate information to support the analysis of sustainability. Our wider search for publicly available post project evaluations, which would have constituted an evidence base for the sustained outcomes, environmental stress reduction, and adoption cited in the GEF IEO 2019 analysis, did not identify any. We were unable to replicate the finding that “84% of these projects that were rated as sustainable at closure also had satisfactory postcompletion outcomes. . . . Most projects with satisfactory outcome ratings at completion continued to have satisfactory outcome ratings at postcompletion” (GEF IEO, , p. 3), or to compare the CCM subset of projects with this conclusion. The report stated that “the analysis of the 53 selected projects is based on 61 field verification reports. For 81 percent of the projects, the field verification was conducted at least four years after implementation completion [emphasis added].” However, we found no publicly accessible documentation that could be used to confirm the approach to field verification for 8 of the 17 projects.

 

Similarly, the available documentation for the projects lacked the most typical post project hallmarks, such as methods of post project data collection, comparisons of changes from final to post project outcomes and impacts at least 2 years post closure, and tracing of the project’s contribution to changes at the funded sites. Documentation focused on a rating of estimated sustainability, with repeated references only to the terminal evaluations and closure reports. In summary, of the 17 projects selected for review in the first tranche, 14 had data consisting of terminal evaluations, and none was evaluated 2–20 years post closure. We did not find publicly available evidence to support measurement of post project sustainability other than statements, in a handful of cases, that such evidence was gathered. Of the pool of 17 projects, only two (both from India) made any reference to post project data regarding the sectors of activity in subsequent years. However, these two were terminal evaluations within a country portfolio review and could not be substantiated with publicly accessible data.

 

We then screened the first tranche of projects using the Valuing Voices evaluability checklist (Zivetz et al., ):

  • High-quality project data at least at terminal evaluation, with verifiable data at exit: Of the 14 projects rated for sustainability, only six were rated likely to be sustained, and outcome and impact data were scant.

  • Clear ex-post methodology, sufficient samples: None of the evaluations available was a post project evaluation of sustainability or long-term impact. Although most projects fell within the evaluable window of 2–20 years post project (the projects had been closed 4–20 years), none showed proof of a return evaluation. There were no clear post project sampling frames; no data collection processes, including identification of beneficiaries/informants; no site selection; no isolation of legacy effects of the institution or other concurrent projects; and no analytic methods.

  • Transparent benchmarks based on terminal, midterm, and/or baseline data on changes to outcomes or impacts (i.e., M&E documents showing measurable targets and indicators, and baseline vs. terminal evaluations with methods comparable to those used in the post project period): For some of the 17 projects, project inception documents and terminal evaluations were available; in other cases, GEF evaluation reviews were available. Two had measurable environmental indicators that compared baseline to final, but none extended past project closure.

  • Substantiated contribution vs. attribution of impacts: Examples of substantiated contribution were not identified.

 

Evaluation reports revealed several instances for which we could not confirm attribution. For example, evaluation of the project Development of High Rate BioMethanation Processes as Means of Reducing Greenhouse Gas Emissions (GEF ID 370), which closed in 2005, referenced the following subsequent market information:

As of Nov 2012, capacity installed from waste-to-energy projects running across the country for grid connected and captive power are 93.68MW and 110.74 MW respectively [versus 3.79KW from 8 sub-projects and 1-5 MW projects]. . . . The technologies demonstrated by the 16 sub-projects covered under the project have seen wide-scale replication throughout the country. . . . An installed capacity of 201.03MW within WTE [waste to energy] projects and the 50% of this is attributed to the GEF project. (GEF IEO, , vol. 2, p. 64)

 

Claims that “the technical institutes strengthened as a result of the project were not fully effective at the time of project completion but are now actively engaged in the promotion of various biomethanation technologies” are unsubstantiated in publicly available information; as a result, the ex-post methods behind the contribution/attribution data are not clear. Another project in India, Optimizing Development of Small Hydel [hydroelectric] Resources in Hilly Areas (GEF ID 386), projected that later investments in the government’s 5-year plans would happen and that the resulting hydropower production would be attributable to the original project (GEF IEO, ); again, this attributional analysis was not documented. Analysis of a third project in India, Coal Bed Methane Capture and Commercial Utilization (GEF ID 325), which closed in 2008, claimed results that could not be reproduced: “Notable progress has been made through replication of projects, knowledge sharing, and policy development” and “expertise was built” (GEF IEO, , Vol. 2, p. 90). Further claims that the project contributed to “the total coal bed methane production in the country and has increased to 0.32 mmscmd [million metric standard cubic meters per day], which is expected to rise to 7.4 mmscmd by the end of 2014” are without proof. The evaluation reported estimates of indirect GHG emission reduction, based on postcompletion methane gas production estimates of 0.2 million m3 per day:

1.0 Million tons equivalent per year, considering an adjustment factor of 0.5 as the GEF contribution [emphasis added], the indirect GHG emission reduction due to the influence of the project is estimated to be 0.5 million tons of CO2 equivalent per annum (2.5 million tons over the lifetime period of 5 years). (GEF IEO, , Vol. 2, p. 91)

 

Yet without verification of coal bed methane capture and commercial utilization continuing, this impact cannot be claimed.
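The order of magnitude of the quoted estimate can be roughly reproduced from the stated production figure. The methane density and 100-year global warming potential below are our assumptions (GWP 21, the IPCC Second Assessment value common in GEF-era reporting), not figures stated in the evaluation:

```python
# Rough reproduction of the quoted GHG estimate; density and GWP are assumed
CH4_DENSITY_KG_PER_M3 = 0.716   # assumed, methane at standard conditions
GWP_CH4 = 21                    # assumed 100-year GWP (IPCC SAR value)
GEF_ADJUSTMENT = 0.5            # stated in the evaluation
LIFETIME_YEARS = 5              # stated in the evaluation

daily_m3 = 0.2e6                # postcompletion methane production estimate
annual_t_ch4 = daily_m3 * 365 * CH4_DENSITY_KG_PER_M3 / 1000
annual_mt_co2e = annual_t_ch4 * GWP_CH4 / 1e6        # ~1.1 Mt CO2e per year

attributed_mt = annual_mt_co2e * GEF_ADJUSTMENT      # ~0.5 Mt per year
lifetime_mt = attributed_mt * LIFETIME_YEARS         # ~2.7 Mt over 5 years
print(round(annual_mt_co2e, 1), round(attributed_mt, 1), round(lifetime_mt, 1))
```

These values land in the same ballpark as the evaluation’s 1.0, 0.5, and 2.5 million ton figures, but the chapter’s point stands: without field verification that the gas is still being captured and used, none of these tonnes can be claimed.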

How Is Sustainability Being Captured?

Fifteen of the 17 CCM projects we reviewed in the first tranche were rated on a 4-point scale at terminal evaluation. Of those 15, 12 had overall ratings of either satisfactory or marginally satisfactory, and one was rated highly satisfactory overall. Eleven of the sustainability ratings were either likely or marginally likely. Only two projects were rated marginally unlikely overall or for sustainability, and only one project received marginally unlikely in both categories (the Demand Side Management Demonstration energy conservation project that ended in 1999 [GEF ID 64]). Although none of the documents mentioned outcome indicators, eight of the 17 reported estimated direct and indirect CO2 impacts.

 

In the second pool of projects—the CCM subset of the 2019 cohort—63% of the projects were rated in the likely range for sustainability (n = 22; nine were rated likely and 13 marginally likely). This is slightly higher than the 2019 cohort as a whole, in which 59% were rated in the likely range. In turn, the 2019 annual performance report noted that “the difference between the GEF portfolio average and the 2019 cohort is not statistically significant for both outcome and sustainability rating” (GEF IEO, , p. 9). It is slightly lower than the percentage of CCM projects receiving an overall rating of marginally likely or higher in the 2017 portfolio review (68%, n = 265; GEF IEO, , p. 78).

 

In this second set of projects, only two received a rating of marginally unlikely and only one received a sustainability rating of unlikely. The remainder of the projects could not be classified using the 4-point rating scale because they had used an either/or estimate (one project), a 5-point scale (one project), or an estimate based on the assessment of risks to development outcome (two projects). Six projects could not be assessed due to the absence of a publicly accessible terminal evaluation in the GEF and implementing agency archives.
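The classification difficulty can be made concrete in a short sketch: ratings arrive on incompatible scales, and there is no lossless mapping onto the 4-point likelihood scale. The non-4-point rating labels below are hypothetical illustrations, not quotes from the evaluations:

```python
# GEF 4-point likelihood scale, in ascending order
FOUR_POINT = ["unlikely", "marginally unlikely", "marginally likely", "likely"]

def classify(rating: str, scale: str):
    """Return the 4-point rating, or None when no defensible mapping exists."""
    if scale == "4-point" and rating in FOUR_POINT:
        return rating
    # Either/or designations, 5-point scales, and risk-based estimates cannot
    # be mapped onto the 4-point scale without losing or inventing
    # information, so they remain unclassified.
    return None

print(classify("marginally likely", "4-point"))  # marginally likely
print(classify("sustained", "either/or"))        # None
```

This is why the counts above exclude four rated-but-unclassifiable projects rather than forcing them onto the 4-point scale.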

How Effectively Is Sustainability Being Captured?

The first set of reports claimed that “84% of these projects that were rated as sustainable at closure also had satisfactory postcompletion outcomes, as compared with 55 percent of the unsustainable projects” (GEF IEO, , p. 29). The data did not support this claim, even during implementation.

  • As a Brazilian project (GEF ID 2941) showed, sustainability is unlikely when project achievements are weak, and exit conditions and benchmarks need to be clear: “The exit strategy provided by IDB Invest is essentially based on financial-operational considerations but does not provide answers to the initial questions how an EEGM [energy efficiency guarantee mechanism] should be shaped in Brazil, how relevant it is and for whom, and to whom the EEGM should be handed over” (p. 25).

  • In Russia, the terminal evaluation for an energy efficiency project (GEF ID 292) cited project design flaws that seemed to belie its sustainability rating of likely: “From a design-for-replication point of view the virtually 100% grant provided by the GEF for project activities is certainly questionable” (Global Environment Facility Evaluation Office [GEF EO], , p. 20). Further, the assessment that “the project is attractive for replication, dissemination of results has been well implemented, and the results are likely to be sustainable [emphasis added] for the long-term, as federal and regional legislation support is introduced” (GEF EO, , p. 39), makes a major assumption regarding changes in the policy environment. (In fact, federal legislation was introduced 2 years post project, and the extent of enforcement would require examination.)

  • A Pacific regional project (GEF ID 1058) was rated as likely to be sustained, but its report notes that it “does not provide overall ratings for outcomes, risks to sustainability, and M&E” (p. 1).

  • The Renewable Energy Development project in China (GEF ID 446), which closed in 2007, was evaluated in 2009 (not post project, but a delayed final evaluation). The report considered the project sustainable with a continued effort to support off-grid rural electrification, claiming, “the market is now self-sustaining, and thus additional support is not required” (p. 11). The project estimated avoided CO2 emissions and cited 363% as achieved; however, the calculations were based on 2006 emissions values for the thermal power sector and on data from all wind farms in China, without a bottom-up estimate. The interpolation of these data lacks verification.

  • Similar sampling issues emerge in a project in Mexico (GEF ID 643): “A significant number of farmers . . . of an estimated 2,312 farmers who previously had had no electricity” (p. 20) saw their productivity and incomes increase as a result of their adoption of productive investments (e.g., photovoltaic-energy water-pumping systems and improved farming practices). A rough preliminary estimate is extrapolated from an evaluation of “three [emphasis added] beneficiary farms, leading to the conclusion that in these cases average on-farm increases in income more than doubled (rising by 139%)” (p. 21).

 

Baseline to terminal evaluation comparisons were rare, with the exception of the photovoltaic energy projects in China and Mexico, and none were post project. Two were mid-term evaluations, which could assess neither final outcomes nor sustainability. Ex-post project evaluations far more typically focus on the contributions that projects made, because only in rare cases can attribution be isolated, especially for a project pool in which the focus is often on creating an enabling environment reliant on a range of actors. One such example is the Indian energy efficiency project approved in 1998 (GEF ID 404), in which

the project resulted in a favorable environment for energy-efficiency measures and the sub-projects inspired many other players in similar industries to adopt the demonstrated technologies. Although quantitative data for energy saved by energy efficiency technologies in India is not available, it is evident that due to the change in policy and financial structure brought by this project, there is an increase in investment in energy efficiency technologies in the industries. (GEF IEO, , Vol. 2., p. 95)

 

And while such GEF evaluators are asking for ex-post evaluation, the authors of an earlier volume, Evaluating Climate Change Action for Sustainable Development (Uitto et al., ), encouraged us to be “modest” in our expectations of extensive ex-post evaluations; exploration of ex-post evaluation’s confirmatory power seemingly has not occurred:

The expectations have to be aligned with the size of the investment. The ex-post reconstruction of baselines and the assessment of quantitative results is an intensive and time-consuming process. If rigorous, climate change-related quantitative and qualitative data are not available in final reports or evaluations of the assessed projects, it is illusive to think that an assessment covering a portfolio of several hundred projects is able to fill that gap and to produce aggregated quantitative data, for example on mitigated GHG emissions. When producing data on proxies or qualitative assessments, the expectations must be realistic, not to say modest. (p. 89)

Project Evaluability

Following an analysis of the sustainability estimates in the first pool of projects, we screened project documentation and terminal evaluations for conditions that foster sustainability during planning, implementation, and exit. We also analyzed how well the projects reported on factors that could be measured in a post project evaluation and factors that would predispose projects to sustainability. These sustained impact conditions consisted of the following elements: (a) resources, (b) partnerships and local ownership, (c) capacity building, (d) emerging sustainability, (e) evaluation of risks and resilience, and (f) CO2 emissions (impacts).

 

Although documentation in evaluations did not verify sustainability, many examples exist of data collection that could support post project analyses of sustainability and sustained impacts in the future. Most reports cited examples of resources that had been generated, partnerships that had been fostered for local ownership and sustainability, and capacities that had been built through training. Some terminal evaluations also captured emerging impacts due to local efforts to sustain or extend impacts of the project that had not been anticipated ex-ante.

 

The Decentralized Power Generation project (GEF ID 4749) in Lebanon provides a good example of a framework to collect information on elements of sustainability planning at terminal (see Table 3).

 

Table 3: Sustainability Planning from a Decentralized Power Generation Project in Lebanon (GEF ID 4749)

Resources

  • Are there financial risks that may jeopardize the sustainability of project outcomes?
  • What is the likelihood of financial and economic resources not being available once GEF grant assistance ends?

Ownership

  • What is the risk, for instance, that the level of stakeholder ownership (including ownership by governments and other key stakeholders) will be insufficient to allow for the project outcomes/benefits to be sustained?
  • Do the various key stakeholders see that it is in their interest that project benefits continue to flow?
  • Is there sufficient public/stakeholder awareness in support of the project’s long-term objectives?

Partnerships

  • Do the legal frameworks, policies, and governance structures and processes within which the project operates pose risks that may jeopardize sustainability of project benefits?

Benchmarks, risks, & resilience

  • Are requisite systems for accountability and transparency, and required technical know-how, in place?
  • Are there ongoing activities that may pose an environmental threat to the sustainability of project outcomes?
  • Are there social or political risks that may threaten the sustainability of project outcomes?

Source: 4749 Terminal Evaluation, p. 45. Note: Capacity Building and Emerging Sustainability were missing from project 4749.

 

Tangible examples of the above categories at terminal evaluations include the following.

Resources

The most widespread assumption for sustainability was sufficient financial and in-kind resources, often reliant on continued national investments or new private international investments, which could be verified. National resources that could sustain results include terminal evaluation findings such as:

Funding for fuel cell and electric vehicle development by the Chinese Government had increased from Rmb 60 million (for the 1996-2000 period) to more than Rmb 800 million (for the 2001-2005 period). More recently, policymakers have now targeted hydrogen commercialization for the 2010-2020 period. (GEF ID 445, p. 17)

 

Another example is: “About 65 percent of [Indian] small Hydro electromechanical Equipment is sourced locally” (GEF ID 386; GEF IEO, , Vol.2, p. 76). The terminal evaluation of a global IFC project stated that “Moser Baer is setting up 30 MW solar power plants with the success of the 5 MW project. Many private sector players have also emulated the success of the Moser Baer project by taking advantage of JNNSM scheme” (GEF ID 112, p. 3).

Local Ownership and Partnerships

The Russian Market Transformation for EE Buildings project (GEF ID 3593) showed in its recommendation to governmental stakeholders that their ownership would be essential for sustainability, describing “a suitable governmental institution to take over the ownership over the project web site along with the peer-to-peer network ensuring the sustainability of the tools [to] support the sustainability of the project results after the project completion” (p. xi). An Indian project (GEF ID 386) noted how partnerships could sustain outcomes:

By 2001, 16 small hydro equipment manufacturers, including international joint ventures (compared to 10 inactive firms in 1991) were operational. . . . State government came up with policies with financial incentives and other promotional packages such as help in land acquisition, getting clearances, etc. These profitable demonstrated projects attracted private sector and NGOs to set up similar projects. (GEF IEO, , Vol. 2, p. 74)

Capacity Building

The Renewable Energy for Agriculture project in Mexico (GEF ID 643) established the “percentage of direct beneficiaries surveyed who learned of the equipment through FIRCO’s promotional activities” (86%), the “number of replica renewable energy systems installed” (847 documented replicas), and the “total number of technicians and extensionists trained in renewable energy technologies” (p. 33). The last came to 3,022, or 121% of the original goal of 2,500, providing a clear measure of how far the project exceeded this objective.

Emerging Sustainability

Recent post project evaluations also address what emerged after the project that was unrelated to the existing theory of change. These emerging findings are rarely documented in terminal evaluations, but some projects in the first pool included information about unanticipated activities or outcomes at terminal evaluation, and these could be used for future post project fieldwork follow-up. As a consequence of the hydroelectric resource project, for example, the Indian Institute “developed and patented the designs for water mills” (GEF ID 386; GEF IEO, , Vol. 2, p. 73). The terminal evaluation for another project stated that “following the UNDP-GEF project, the MNRE [Ministry of New and Renewable Energy] initiated its own programs on energy recovery from waste. Under these programs, the ministry has assisted 14 projects with subsidies of US$ 2.72 million” (GEF ID 370; GEF IEO, , Vol. 2, p. 62).

Benchmarks, Risks, and Resilience

As the GEF’s 2019 report itself noted, “The GEF could strengthen its approach to assessing sustainability further by explicitly addressing resilience” (GEF IEO, , p. 33). Not doing so is a risk, as our climate changes. Two evaluations noted “no information on environmental risks to project sustainability;” these were the Jamaican pilot on Removal of Barriers to Energy Efficiency and Energy Conservation (GEF ID 64; p. 68) and a Pacific regional project (GEF ID 1058). For likelihood of sustainability, the Jamaican project was rated moderately unlikely and the Pacific Islands project was rated likely but “does not provide overall ratings for outcomes, risks to sustainability, and M&E” other than asserting that

the follow-up project, which has been approved by the GEF, will ensure that the recommendations entailed in the documents prepared as part of this project are carried out. Thus, financial risks to the benefits coming out of the project are low. (p. 3)

Greenhouse Gas Emissions (Impacts)

In GEF projects, timeframe is an important issue, which makes post project field verification that much more important. As the GEF IEO stated in 2018, “Many environmental results take more than a decade to manifest. Also, many environmental results of GEF projects may be contingent on future actions by other actors” (GEF IEO, , p. 34).

Uncertainty and Likelihood Estimates

Estimating the likelihood of sustainability of greenhouse gas emissions at terminal evaluation raises another challenge: the relatively high level of uncertainty concerning the achievement of project impacts related to GHG reduction. GHG reductions are the primary objective stated in the climate change focal area, and they appear as a higher level impact across projects regardless of the terminology used. For a global project on bus rapid transit and nonmotorized transport, the objective was to “reduce GHG emissions for transportation sector globally” (GEF ID 1917, p. 9). For a national project on building sector energy efficiency, the project goal was “the reduction in the annual growth rate of GHG emissions from the Malaysia buildings sector” (GEF ID 3598; Aldover & Tiong, , p. i). For a land management project in Mexico, the project objective was to “mitigate climate change in the agricultural units selected . . . including the reduction of emissions by deforestation and the increase of carbon sequestration potential” (GEF ID 4149, p. 21). For a national project to phase out ozone-depleting substances, the project objective was to “reduce greenhouse gas emissions associated with industrial RAC (refrigeration and air conditioning) facilities in The Gambia” (GEF ID 5466, p. vii). Clearly, actual outcomes in GHG emissions need to be considered in any assessment of the likelihood of sustainability of outcomes.

 

Unlike projects in the carbon finance market, GEF projects estimate emissions for a project period that usually exceeds the duration of the GEF intervention. In most cases, ex-ante estimated GHG reductions in the post project period are larger than estimated GHG reductions during the project lifetime. In practice, this means that for projects for which the majority of emissions will occur after the terminal evaluation, evaluators are being asked to estimate the likelihood that benefits will not only continue, but will increase due to replication, market transformation, or changes in the technology or enabling environment. Table 4 provides several examples from the GEF 2019 cohort of how GHG reductions may be distributed over the project lifecycle.

 

Table 4: Distribution of Estimated GHG Reductions Ex-Ante for Selected Projects in the CCM Subset of the GEF 2019 Cohort

GEF ID | Country     | Sub-Sector           | Ex-ante GHG reductions, project lifetime (tCO2e) | Ex-ante GHG reductions, total (tCO2e) | % of reductions achieved by the terminal evaluation
2941   | Brazil      | EE Buildings         | 705,000    | 9,588,000   | 7
2951   | China       | EE Financing         | 5,400,000  | 111,500,000 | 5
3216   | Russia      | EE Standards/Labels  | 7,820,000  | 123,600,000 | 6
3555   | India       | EE Buildings         | 454,000    | 5,970,000   | 8
3593   | Russia      | EE Industry          | 0          | 3,800,000   | 0
3598   | Malaysia    | EE Buildings         | 2,002,000  | 18,166,000  | 11
3755   | Vietnam     | EE Lighting          | 2,302,000  | 5,268,000   | 44
3771   | Philippines | EE Industry          | 560,000    | 560,000     | 100

Sources: 2941 Project Document, pp. 35–37; 2951 PAD/CEO Endorsement Request, p. 88; 3216 Project Document, pp. 80–90; 3555 Terminal Evaluation; 3593 Terminal Evaluation, p. 23; 3598 Terminal Evaluation, p. 24; 3755 GEF CEO Endorsement Request; 3771 Terminal Evaluation, pp. 8–9.

 

The range in Table 4 shows the substantial variation in uncertainty when estimating the likelihood of long-term project impacts. For projects designed to achieve all of their emission reductions during their operational lifetimes, the achievement of GHG reductions can be verified as a part of the terminal evaluation. However, most projects assume that nearly all estimated GHG reductions will occur in the post project period, so uncertainty levels are much higher and estimates may be more difficult to compile. In other evaluations, evaluators may identify inconsistent GHG estimates (e.g., GEF ID 4157 and 5157), or recommend that the ex-ante estimates be downsized (e.g., GEF ID 3922, 4008, and 4160). These trends may also be difficult to capture in likelihood estimates.
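The percentage column in Table 4 follows directly from the two ex-ante estimates; recomputing it is a quick consistency check on the figures extracted from the project documents:

```python
# (GEF ID, lifetime tCO2e, total tCO2e) as read from Table 4
rows = [
    (2941, 705_000, 9_588_000),
    (2951, 5_400_000, 111_500_000),
    (3216, 7_820_000, 123_600_000),
    (3555, 454_000, 5_970_000),
    (3593, 0, 3_800_000),
    (3598, 2_002_000, 18_166_000),
    (3755, 2_302_000, 5_268_000),
    (3771, 560_000, 560_000),
]

# Share of ex-ante reductions already achieved by the terminal evaluation
share = {gef_id: round(100 * lifetime / total) for gef_id, lifetime, total in rows}
print(share)
# {2941: 7, 2951: 5, 3216: 6, 3555: 8, 3593: 0, 3598: 11, 3755: 44, 3771: 100}
```

Only GEF ID 3771 planned to realize all of its reductions within the project lifetime; for the rest, the bulk of the claimed tonnes depends on post project continuation that terminal evaluations cannot verify.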

Conclusions and Recommendations

While sustainability has been estimated in nearly all of the projects in the two pools we considered, it has not been measured. Assessing the relationship between projected sustainability and actual post project outcomes was not possible due to insufficient data. Further, findings from the first pool of climate change mitigation projects did not support the conclusion that “outcomes of most of the GEF projects are sustained during the postcompletion period” (GEF IEO, , p. 17). In the absence of sufficient information regarding project sustainability, determining post project GHG emission reductions is not possible, because these are dependent on the continuation of project benefits following project closure.

 

We also conclude that although the 4-point rating scale is a common tool for estimating the likelihood of sustainability, the measure itself has not been evaluated for reliability or validity. The scale is often used to summarize diverse trends amid varying levels of uncertainty. The infrequency of the unlikely rating in terminal evaluations may result from this limitation: evaluators believe that some benefits (greater than 0%) will continue, but the 4-point scale cannot convey an estimate of what percentage of benefits will continue. Furthermore, the use of market studies to assess sustainability is not effective in the absence of attributional analysis linking results to the projects that ostensibly caused the change.

 

As a result, the current evaluator’s toolkit still does not provide a robust means of estimating post project sustainability and is not suitable as a basis for postcompletion claims. That said, M&E practices in the CCM projects we studied supported the collection of information documenting conditions (e.g., resources, partnerships, capacities) in a way that makes projects evaluable, or suitable for post project evaluation. We recommend that donors provide financial and administrative support for project data repositories to retain data in-country at terminal evaluation for post project return and country-level learning, and that they include evaluability (control groups, sampling sizes, and sites selected by evaluability criteria) in the assessment of project design. We also recommend sampling immediately from the 56 CCM projects in the two sets that have been closed at least 2 years.

 

Donors’ allocation of sufficient resources for CCM project evaluations would allow verification of actual long-term, post project sustainability using the OECD DAC () definition of “the continuation of benefits from a development intervention after major development assistance has been completed” (p. 12). It would also enable evaluators to enumerate which project components are sustained rather than using an either/or designation (sustained/not sustained). Evaluation terms of reference should clarify the methods used for contribution vs. attribution claims, and they should consider decoupling estimates of direct and indirect impacts, which are difficult to capture meaningfully in a single measure. For the GEF portfolio specifically, the postcompletion verification approach developed for the biodiversity focal area could be expanded to the climate change focal area (GEF IEO, ), and lessons could also be learned from the Adaptation Fund’s () commissioned work on post project evaluations. Bilateral donors such as JICA have developed rating scales for post project evaluations that assess impact in a way that captures both direct and indirect outcomes (JICA, ).

 

Developing country parties to the Paris Agreement have committed to providing “a clear understanding of climate change action” in their countries under Article 13 of the agreement (United Nations, ), and donors have a clear imperative to press for continued improvement in reporting on CCM project impacts and to use lessons learned to inform future support.

Footnotes

  1.

    We use the term “post-project” evaluations to distinguish these longer-term evaluations from terminal evaluations, which typically occur within 3 months of the end of donor funding. While some donors (JICA, ; USAID, ) use the term “ex-post evaluation” to refer to evaluations distinct from the terminal/final evaluation and occurring 1 year or more after project closure, other donors use the terms “terminal evaluation” and “ex-post evaluation” synonymously. Other terms include post-completion, post-closure, and long-term impact.

  2.

    In a meta-evaluation, Hageboeck et al. found that only 8% of projects in the 2009–2012 USAID PPL/LER evaluation portfolio (26 of 315) were evaluated post-project, following the termination of USAID funding.

  3.

    Page numbers provided with GEF ID numbers only refer to project terminal evaluations; see Appendix.

References

  1. Adaptation Fund. (2019). Report of the Adaptation Fund Board, note by the chair of the Adaptation Fund Board – Addendum. AFB/B.34–35/3. Draft – 8 November 2019. https://www.adaptation-fund.org/document/report-of-the-adaptation-fund-board-note-by-the-chair-of-the-adaptation-fund-board-addendum/
  2. Aldover, R. Z., & Tiong, T. C. (2017). UNDP/GEF project PIMS 3598: Building sector energy efficiency project (BSEEP): Terminal evaluation report. Global Environment Facility and United Nations Development Programme. https://erc.undp.org/evaluation/evaluations/detail/8919
  3. Asian Development Bank. (2010). Post-completion sustainability of Asian Development Bank-assisted projects. https://www.adb.org/documents/post-completion-sustainability-asian-development-bank-assisted-projects
  4. Cekan, J. (2015, March 13). When funders move on. Stanford Social Innovation Review. https://ssir.org/articles/entry/when_funders_move_on#
  5. Cekan, J., Zivetz, L., & Rogers, P. (2016). Sustained and emerging impacts evaluation. Better Evaluation. https://www.betterevaluation.org/en/themes/SEIE
  6. Duval, R. (2008). A taxonomy of instruments to reduce greenhouse gas emissions and their interactions. Organisation for Economic Co-operation and Development. https://doi.org/10.1787/236846121450
  7. Global Environment Facility. (2017). Guidelines for GEF agencies in conducting terminal evaluation for full-sized projects. https://www.gefieo.org/evaluations/guidelines-gef-agencies-conducting-terminal-evaluation-full-sized-projects
  8. Global Environment Facility Evaluation Office. (2008). Evaluation of the catalytic role of the GEF. https://www.gefieo.org/sites/default/files/ieo/ieo-documents/gef-catalytic-role-qualitative-analysis-project-documents.pdf
  9. Global Environment Facility Independent Evaluation Office. (2010). GEF monitoring and evaluation policy. https://www.gefieo.org/sites/default/files/ieo/evaluations/gef-me-policy-2010-eng.pdf
  10. Global Environment Facility Independent Evaluation Office. (2012). Approach paper: Impact evaluation of the GEF support to CCM: Transforming markets in major emerging economies. https://www.gefieo.org/sites/default/files/ieo/ieo-documents/ie-ccm-markets-emerging-economies.pdf
  11. Global Environment Facility Independent Evaluation Office. (2013). Country portfolio evaluation (CPE) India. http://www.gefieo.org/evaluations/country-portfolio-evaluation-cpe-india
  12. Global Environment Facility Independent Evaluation Office. (2017). Climate change focal area study. https://www.thegef.org/council-meeting-documents/climate-change-focal-area-study
  13. Global Environment Facility Independent Evaluation Office. (2018). Sixth overall performance study of the GEF: The GEF in the changing environmental finance landscape. https://www.thegef.org/sites/default/files/council-meeting-documents/GEF.A6.07_OPS6_0.pdf
  14. Global Environment Facility Independent Evaluation Office. (2019a). Annual performance report 2017. https://www.gefieo.org/evaluations/annual-performance-report-apr-2017
  15. Global Environment Facility Independent Evaluation Office. (2019b). A methodological approach for post-project completion. https://www.gefieo.org/council-documents/methodological-approach-post-completion-verification
  16. Global Environment Facility Independent Evaluation Office. (2020). Annual performance report 2019. https://www.gefieo.org/evaluations/annual-performance-report-apr-2019
  17. Hageboeck, M., Frumkin, M., & Monschein, S. (2013). Meta-evaluation of quality and coverage of USAID evaluations. USAID. https://www.usaid.gov/evaluation/meta-evaluation-quality-and-coverage
  18. Japan International Cooperation Agency. (2004). Issues in ex-ante and ex-post evaluation. In JICA Guideline for Project Evaluation: Practical Methods for Project Evaluation (pp. 115–197). https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/guides/pdf/guideline01-01.pdf
  19. Japan International Cooperation Agency. (2017). Ex-post evaluation results. In JICA annual evaluation report 2017 (Part II, pp. 1–34). https://www.jica.go.jp/english/our_work/evaluation/reports/2017/c8h0vm0000d2h2gq-att/part2_2017_a4.pdf
  20. Japan International Cooperation Agency. (2020a). Ex-post evaluation (technical cooperation). https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/ex_post/index.html
  21. Japan International Cooperation Agency. (2020b). Ex-post evaluation (ODA loan). https://www.jica.go.jp/english/our_work/evaluation/oda_loan/post/index.html
  22. Legro, S. (2010, June 9–10). Evaluating energy savings and estimated greenhouse gas emissions in six projects in the CIS: A comparison between initial estimates and assessed performance [paper presentation]. International Energy Program Evaluation Conference, Paris, France. https://energy-evaluation.org/wp-content/uploads/2019/06/2010-paris-027-susan-legro.pdf
  23. Mayne, J. (2001). Assessing attribution through contribution analysis: Using performance measures sensibly. The Canadian Journal of Program Evaluation, 16(1), 1–24.
  24. OECD/DAC Network on Development Evaluation. (2019). Better criteria for better evaluation: Revised evaluation criteria definitions and principles for use. Organisation for Economic Co-operation and Development. http://www.oecd.org/dac/evaluation/revised-evaluation-criteria-dec-2019.pdf
  25. Organisation for Economic Co-operation and Development. (2015). OECD and post-2015 reflections. Element 4, Paper 1: Environmental Sustainability. https://www.oecd.org/dac/environment-development/FINAL%20POST-2015%20global%20and%20local%20environmental%20sustainability.pdf
  26. Organisation for Economic Co-operation and Development, Development Assistance Committee. (1991). DAC criteria for evaluating development assistance. https://www.oecd.org/dac/evaluation/2755284.pdf
  27. Rogers, B. L., & Coates, J. (2015). Sustaining development: A synthesis of results from a four-country study of sustainability and exit strategies among development food assistance projects. FANTA III, Tufts University, & USAID. https://www.fantaproject.org/research/exit-strategies-ffp
  28. Sridharam, S., & Nakaima, A. (2019). Till time (and poor planning) do us part: Programs as dynamic systems—Incorporating planning of sustainability into theories of change. The Canadian Journal of Program Evaluation. https://evaluationcanada.ca/system/files/cjpe-entries/33-3-pre005.pdf
  29. Uitto, J., Puri, J., & van den Berg, R. (2017). Evaluating climate change action for sustainable development. Global Environment Facility Independent Evaluation Office. https://www.gefieo.org/sites/default/files/ieo/documents/files/cc-action-for-sustainable-development_0.pdf
  30. United Nations. (2015, December 12). Paris agreement. https://unfccc.int/sites/default/files/english_paris_agreement.pdf
  31. United States Agency for International Development. (2018). Project evaluation overview. https://www.usaid.gov/project-starter/program-cycle/project-design/project-evaluation-overview
  32. United States Agency for International Development. (2019). USAID’s impact: Ex-post evaluation series. https://www.globalwaters.org/resources/ExPostEvaluations
  33. Valuing Voices. (2020). Catalysts for ex-post learning. https://valuingvoices.com/catalysts-2/
  34. Zivetz, L., Cekan, J., & Robbins, K. (2017a). Building the evidence base for post project evaluation: A report to the faster forward fund. Valuing Voices. https://valuingvoices.com/wp-content/uploads/2013/11/The-case-for-post-project-evaluation-Valuing-Voices-Final-2017.pdf
  35. Zivetz, L., Cekan, J., & Robbins, K. (2017b). Checklists for sustainability. Valuing Voices. https://valuingvoices.com/wp-content/uploads/2017/08/Valuing-Voices-Checklists.pdf

Copyright information

 

© The Author(s) 2022

Open Access: This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.

Cite: Cekan J., Legro S. (2022) Can We Assume Sustained Impact? Verifying the Sustainability of Climate Change Mitigation Results. In: Uitto J.I., Batra G. (eds) Transformational Change for People and the Planet. Sustainable Development Goals Series. Springer, Cham. https://doi.org/10.1007/978-3-030-78853-7_8

 

Sustainable Development Goals (SDGs), Funding and Accountability for sustainable projects?


What are the Sustainable Development Goals? In 2015, the United Nations adopted the new post-2015 development agenda. The new proposals – to be achieved by 2030 – set 17 new ‘sustainable’ development goals (SDGs) and 169 targets. Some, like Oxfam, see the SDGs as a country budgeting and prioritization tool as well as an international fundraising tool. They cite that “government revenue currently funds 77% of spending…aligned with government priorities, balanced between investment and recurrent and easy to implement than donor-funded spending…” National investments are vital, but how much has the world used the SDGs to target investments and foster sustainable results?

Using results data such as that of the sectoral SDGs, countries can also ensure accountability for the policies implemented to reduce global and local inequities, but we must learn from the data. Over halfway to the goal, data is being collected, and while there is robust monitoring by countries that have built up their M&E systems, other countries are faltering. “A recent report by Paris21 found even highly developed countries are still not able to report more than 40-50% of the SDG indicators” and “only 44% of SDG indicators have sufficient data for proper global and regional monitoring”. Further, there is very little evaluation or transparent accountability. Some of the data illuminate what we vitally need to know for better programming. SDG data shows good news that Western and Asian countries have done better than most of the world in 2015-19… but there is a lot of missing data, while other data shows staggering inequities such as these:

  • In Vietnam, a child born into the majority Kinh, or Viet, ethnic group is three and a half times less likely to die in his or her first five years than a child from other Vietnamese ethnic groups.
  • In the United States, a black woman is four times more likely to die in childbirth than a white woman.

So are we using the SDG data to better target funding and improve design? This is the kind of evaluative learning (or at least sharing by those that are doing it :)) that is missing. As my colleague and friend Sanjeev Sridharan writes on Rethinking Evaluation, “As a field we need to more clearly understand evaluation’s role in addressing inequities and promoting inclusion” including “Promoting a Culture of Learning for Evaluation – these include focus on utilization and integration of evaluation into policy and programs.” How well learning is being integrated is unknown.

As a big picture update on the progress of the Sustainable Development Goals (SDGs) in 2021, with only nine years left to the goal: It’s not looking good. The scorecards show COVID-19 has slowed down or wiped out many achievements, with 100 million people pushed into extreme poverty, according to the IMF. Pre-Covid, our blog on sectoral SDG statistics on health, poverty, hunger, and climate, was already showing very mixed results and a lack of mutual accountability.

The private sector is increasingly being pushed to fund more of such development costs, so far with only marginal success, as public sector expenditures are squeezed. Yet the G20 estimates that $2.5 TRILLION is needed every year to meet the SDG goals. As we have seen at Impact Guild, the push to incentivize private commitments is faltering. “To ensure its sustainability, the private sector has specific interests in securing long-term production along commodity supply chains, while reducing their environmental and social impacts and mitigating risks… The long-term economic impacts of funding projects that support the sustainability agenda are, thus, clearly understood. However, additional capital needs to flow into areas that address the risks appropriately. For example, much remains to be done to factor climate change as a risk variable into emerging markets that face the largest financing gap in achieving the SDGs.” Further, if decreased funding trends continue, by 2030 at minimum 400 million people will still live on less than $1.25 a day, around 650 million people will be undernourished, and nearly 1 billion people will be without energy access. So we’re not meeting the SDGs, they’re being derailed by COVID in places, and we haven’t begun to cost out the need to address climate change and its effects on global development…. so now what?

From: https://www.g20-insights.org/policy_briefs/incentivizing-the-private-sector-to-support-the-united-nations-sustainable-development-goals/

To ensure that giving everyone a fair chance in life is more than just a slogan, accountability is crucial. This should include a commitment from world leaders to report on progress on “leaving no one behind” in the SDG follow-up and review framework established for the post-2015 agenda, and for the private sector to publicly track their investments across the SDGs. For as The Center for American Progress wrote, money and results are key: We must “measure success in terms of outcomes for people, rather than in inputs—such as the amount of money spent on a project—as well as in terms of national or global outcomes” and “policymakers at the global level and in each country should task a support team of researchers with undertaking an analysis of each commitment.”

A further concern: while we measure the statistics periodically and see funding allocated to SDG priorities, there are few causal links drawn between the intensity of investment in any SDG goal and sustained results. To what degree are the donations/investments into the SDGs linked to improvements? Without measuring causality or attribution, it could be a case of “a rising tide lifts all boats” as economies improve or, as Bill Gates noted last year, Covid-related economic decline wiping out 20 years of development gains. We need proof that trillions of dollars of international “sustainable development” programs have any sustained impact beyond the years of intervention.

We must do more evaluation and learn from SDG data to better target investments, and do ex-post sustainability evaluations to see what was most sustained, impactful, and relevant. Donors should raise more funds to meet needs and consider only funding what could be sustained locally. Given the still-uncounted demands on global development funding and the multiple crises pushing more of the world into need, we can no longer hope or wait for a global mobilization of trillions. Let’s focus now.

Interactive Webinar: Sustained Exit? Prove it or Improve it! (Nov 6 2020)


(reposted from Medium https://jindracekan.medium.com/sustained-exit-prove-it-or-improve-it-702ac507e2a5)

Do we exit global development projects knowing our impacts are sustained? We hope so. As Professor Bea Rogers of Tufts said after evaluating 12 projects 2 years post-closure (https://www.fsnnetwork.org/resource/exit-strategies-study), “Hope is Not a Strategy”, yet too often that is what projects that assume sustainability do. They/we hope. But is this good enough? For me, confirming that hope means evaluating beyond exit to ex-post, at least 2 years after donor investments end. 99% of the time, donors & development practitioners don’t return to see what lasted, what didn’t, why, nor what emerged from people’s own efforts. Yet we implement similar programs over and over, not learning lessons from the past. Sigh.

We need to evaluate what we expected to remain from our implemented projects. We also need to learn from what evaluator Bob Williams calls “the sustainability of the idea that underpinned the results (even if the results were no longer evident)”. This often lies beneath what emerged: our projects catalyzed locals’ desire to sustain activities, carrying them forward in locally manageable ways (changing how the development idea is implemented onward) or even having entirely new initiatives emerge from the participant groupings, from their own priorities, not ours. (For more on emerging impacts: https://www.betterevaluation.org/en/themes/SEIE)

Evaluation leaders talk about power; they talk about the environment. After 7 years of researching and conducting ex-post evaluations of projects, I have found there are no brilliant 100%-sustained projects, nor are there any 100% abjectly scorched-earth ones either. Our results are middling at best. And therein lies the rub. Projects are what donors want to give. Sometimes that overlaps with what recipient countries want, sometimes not. Most of the time the resources to sustain our multimillion-dollar, -euro, -yen, etc., investments aren’t there. We can use incentives (e.g. food aid or cash) that bolster short-term success while we spend, but once they are phased out, sustainability can fall off sharply as early as 2 years after we exit. It’s because ‘development’ is about ‘our’ spending on ‘our’ programs, about short-term success while we’re implementing, rather than about our equal partners’ priorities and ability to sustain it. We misuse our power. We care about ourselves far more than the people we ostensibly went there to ‘save’.

And as esteemed evaluators Andy Rowe and Michael Quinn Patton noted, given climate change we need to question even more assumptions about how sustained and resilient our programming can be, by evaluating the natural environment on which our programming relies before and during implementation, at exit, and ex-post closure. (More on sustained environment: https://valuingvoices.com/sustaining-sustainable-development/)

It also means we need to talk early on to those to whom we will eventually hand over, to make sure we’ve built in resilience to the climatic and economic shocks we know of so far. I recommend my colleagues Holta Trandafili’s and Isabella Jean’s presentations on partnering from a couple of months ago: https://valuingvoices.com/sustainability-ready-what-it-takes-to-support-measure-lasting-change-webinar/

Finally, I have come to see that to make sustainability more likely for years to come, we must fund, design, implement, and monitor/evaluate For Sustainability throughout the project cycle. Folks need guidance to help them integrate sustainability throughout, including environment & resilience, benchmarks, and more. We can learn from what ex-posts teach. Please join me to help craft more sustained development:

Upcoming Sustained Exit Webinar: 6 Nov 2020, 14:30–17 CEST, 8:30–11 EST

“Sustained Exit? Prove it or Improve it!” An interactive webinar discussion of ex-post sustainability evaluation lessons and how to integrate them into ongoing #aid programs. On Zoom; participants get resources: checklists, slides, and a recording. Join us to #sustain #impacts! Register (sliding scale): https://sustainedexit.eventbrite.com

Sustaining “Sustainable Development”

 


 

As a global development industry, we have almost no evidence of how (un)sustained the outcomes or impacts of 99% of our projects are, because we have never returned to evaluate them. But from early indications based on the ex-posts we have evaluated 2-20 years after donor departure, learning from what was and was not sustained is vital before replicating projects and assuming sustainability. Most results taper off quite quickly, showing 20-80% decreases as early as two years post-closure and donor exit. A few cases of good news also appear, but more trajectories falter and fail than rise or remain. Sustainability, then, is not a yes/no answer but a question of how much, yet too few ask… results that were resilient at closure are often less so, or not resilient at all, now.

 

At Valuing Voices we focus on the sustainability of projects after external support ends. Still, those projects are also dependent on the viability of the environment in which they are based. As Andy Rowe, an evaluator on the GEF’s Adaptation Fund board, noted at IDEAS’ Conference in Prague in late 2019 [1], there is a need for sustainability-ready evaluation to help us know how viable the resources are on which so many of our projects rest [2]. He states, “the evaluation we have today treats human and natural systems as unconnected and rarely considers the natural system”. He goes on to differentiate between biotic natural capital (air, water, plants, and trees) and abiotic natural capital sources (fossil fuels, minerals and metals, wind, and solar).

 

How much are projects designed assuming those resources are and will remain plentiful? How often do we evaluate how much our projects drain or rely on these environmental elements? Many projects are required to do environmental compliance and safeguarding against damage at project onset [3]. Others, such as agriculture and natural resource management or water/sanitation projects, often focus on improving the environment on which those activities rely, e.g., improving soil or terrain (e.g., terraces, zais), planting seedlings, and improving access to potable water for humans and animals. Still, many projects ‘assume’ inputs like rainfall, tree cover, or solar power, or do not consider the sustainability of natural resources for the communities in which they intervene. Examples include both projects that rely on natural systems and those supposedly beyond them, e.g., enterprise development, education, safety nets, etc. Yet many enterprises, schools, and safety nets do rely on a viable environment in which their participants trade, learn, and live, and all are subject to growing climate change disruptions.

 

Why is this urgent? The OECD/DAC reminds us that “Natural assets represent, on average 26% of the wealth of developing countries compared to 2% in OECD economies” [4]. Unless we protect them and address the demand for natural resources, demand will far outstrip supply. “By 2030, an additional 1 billion people are expected to live in severely water-stressed areas, and global terrestrial biodiversity is expected to decline an additional 10%, leading to a loss of essential ecosystem services. By 2050, growing levels of dangerous air emissions from transport and industry will increase the global number of premature deaths linked to airborne particulate matter to 3.6 million people a year, more than doubling today’s levels. Failure to act could also lead to a 50% increase in global greenhouse gas emissions by 2050, and global mean temperature increases of 3-6°C by the end of the century, in turn contributing to more severe and sometimes more frequent natural disasters… [so] reconciling development with environmental protection and sustainable resource management is broadly agreed as a central concern for the post-2015 development agenda.”

 

When we return to projects that are a mix of behavior change and environment, we find a wide range of results:

  • Some projects, such as JICA Vietnam’s water supply and irrigation infrastructure, reached 80% of the final results two years later [5]. And while the pilot projects were worse off (as low as 28% of irrigated hectares), longer-standing projects sustained as much as 72% of final results. While such agricultural development assumes continued water supply and access, does it evaluate it? No.
  • Some can define what ex-post lessons are more narrowly as functioning mechanisms: New ex-posts of water/ sanitation showed better – but still mixed results, such as USAID Senegal’s [6]. “While a majority (63 percent) of the water points remained functional, the performance varied significantly based on the technology used. Of the different technologies, the Erobon rope pumps performed poorly (27 percent functional), while the India Mark (74 percent functional) and mechanized pumps (70 percent functional) performed the best.”
  • Some projects that include environmental considerations illustrate our point by focusing only on behavior change, as this sanitation/hygiene ex-post from Madagascar did: results fell off precipitously three years ex-post, but water supply and quality were barely considered [7].


  • There can be useful learning when one combines an evaluation of both types of sustainability (ex-post and environmental). A JICA irrigation project in Cambodia shows that when irrigation canals were mostly sustained over the five years ex-post, they could serve increasing needs for land coverage and rice production [8]. The area of irrigated fields at the national level in 2010 reached the target, and the irrigated field area has since continued to increase in most areas. Even the largest drop [in area irrigated] post-closure was only 11%. They reported that the unit yield of rice at the end-line survey in 2012 at 11 sites was 3.24t/ha (average) versus 3.11t/ha at the ex-post evaluation in 2017, which [almost] maintains the 2012 level. The ex-post showed that “continuous irrigation development in the said site can be considered as the main reason for the increase in land area. Securing an adequate amount of water is an important factor in continuously improving rice productivity.” The research also found that 81% of agricultural incomes had increased as a result of the irrigation, 11% stayed the same, and 8% had decreased. Again, this looks to be among the most resilient projects: based on ex-post research, it included the environment, which was found to be as resilient as the livelihoods it was fostering.
  • Sometimes more bad than good news is important when tracking environment and ex-post sustainability: Food for the Hungry, ADRA, and CARE Kenya found that unreliable water supply reduced the motivation to pay for water, threatening the resources to maintain the system [6]. What improved prospects for sustainability was understanding why communities could not sustain water and sanitation results, based on willingness-to-pay models, as well as water being unavailable. Further, a lesson the organizations ideally learned was that “gradual exit, with the opportunity for project participants to operate independently prior to project closure, made it more likely that activities would be continued without project support.” So the question remains: what was learned by these organizations to avoid similar bad results and improve good, resilient results in similar circumstances?

 


 

Neither sustainability nor environmental quality can be assumed to continue, nor to have positive results. Both are extensively under-evaluated, and given climate change disruptions, this must change. Rowe concludes: “Climate change is a major threat to long-term sustainability, both attacking the natural systems (e.g. lower rainfall or higher floods, worse soil quality, increasing pests attacking crops, disappearing fish stocks, microplastics in our air and water, increasing sea levels from melting glaciers, worsening public health, etc.) and destabilizing our Earth’s regenerative capacity. Fortunately, technical barriers do not prevent us from starting to infuse sustainability into evaluation; the barriers are social and associated with the worldview and vision of evaluation.”

 

Sources:

[1] IDEAS 2019 Global Assembly. (n.d.). Retrieved from https://2019.global-assembly.org/

[2] Rowe, A. (2019). Sustainability‐Ready Evaluation: A Call to Action. New Directions for Evaluation, 162, 29-48. Retrieved from https://www.researchgate.net/publication/333616139_Sustainability-Ready_Evaluation_A_Call_to_Action

[3] USAID. (2013, October 31). Environmental Compliance Procedures. Retrieved from https://www.usaid.gov/our_work/environment/compliance/pdf/216

[4] OECD. (2015). Element 4, Paper 1: Global and local environmental sustainability, development and growth. Retrieved from https://www.oecd.org/dac/environment-development/FINAL%20POST-2015%20global%20and%20local%20environmental%20sustainability.pdf

[5] Haraguchi, T. (2017). Socialist Republic of Viet Nam: FY 2017 Ex-Post Evaluation of Japanese ODA Loan Project “Small-Scale Pro Poor Infrastructure Development Project (III)”. Retrieved from https://www2.jica.go.jp/en/evaluation/pdf/2017_VNXVII-5_4.pdf

[6] Coates, J., Kegode, E., Galante, T., & Blau, A. (2016, February). Sustaining Development: Results from a Study of Sustainability and Exit Strategies among Development Food Assistance Projects: Kenya Country Study. USAID. Retrieved from https://www.globalwaters.org/resources/assets/ex-post-evaluation-senegal-pepam

[7] Madagascar Rural Access To New Opportunities For Health And Prosperity (RANO-HP) Ex-Post Evaluation. (2017, June 1). USAID. Retrieved from https://www.globalwaters.org/resources/assets/madagascar-rural-access-new-opportunities-health-and-prosperity-rano-hp-ex-post-0

[8] Kobayashi, N. (2017). Kingdom of Cambodia: FY2017 Ex-Post Evaluation of Technical Cooperation Project: “Technical Service Center for Irrigation System Project – Phase 2 / The Improvement of Agricultural River Basin Management and Development Project (TSC3)”. Retrieved from https://www2.jica.go.jp/en/evaluation/pdf/2017_0900388_4.pdf

 

Setting a higher bar: Sustained Impacts are about All of us

Setting a higher bar: Sustained Impacts are about All of us
Global development aid has a problem which may already affect impact investing as well.

It is that we think it’s really all about us (individuals, wealthy donors and INGO implementers) not all of us (you, me, and project participants, their partners and governments). It’s also about us for a short time.

 

All too often, those of us in global development aid and Corporate Social Responsibility (CSR) funded projects that last 1-5 years track and report measurable results for two reasons:

1) Donors have compliance requirements for grantees to meet (money spent, not lost, and results met by fixed deadlines of 1-5 years – look at some of the European Commission contracting rules) and

2) Fund recipients and the participants they serve are accountable to ‘our’ donors and implementers, who take what happened through their philanthropic grants as ‘their’ results.

Both can skew how sustainably we get to create impacts. An example of such strictures on sustainability comes from USAID. As respected CGDev researchers Elliott and Dunning found in 2016 when assessing the ‘US Feed the Future Initiative: A New Approach to Food Security?‘, the $10.15 billion leveraged $20 billion from other funders for disbursement over three years (2013-16). “We are concerned that pressure to demonstrate results in the short term may undermine efforts to ensure any impact is sustainable…. Unfortunately, the pressure to show immediate results can encourage pursuit of agricultural investments unlikely to be sustained. For example, a common response to low productivity is to subsidize or facilitate access to improved inputs… it can deliver a quick payoff… however, if the subsidies become too expensive and are eliminated or reduced, fertilizer use and yields often fall…..

With so much focus on reporting early and often about the progress in implementing the initiative, there is a risk that it increases the pressure to disburse quickly and in ways that may not produce sustainable results. For example, for 2014, Feed the Future reports that nearly 7 million farmers applied “improved technologies or management practices as a result of U.S. Government assistance,” but only 1,300 received “long-term agricultural sector productivity.” Are the millions of others that are using improved inputs or management practices because of subsidies likely to have these practices sustained? And how likely are they to continue using improved practices once the project ends?”

 

3) Impact investors stick to the same two paths-to-results and add a new objective: market-competitive financial returns. They also need to show short-term results to their investors, albeit with social, environmental, and governance results like non-profits (future blog).

4) Altruists create things we want ‘beneficiaries’ (our participants) to have. For instance, a plethora of apps for refugees cropped up in recent years – over 5,000, it is estimated – which can be appropriate, or not so helpful. Much like #2 above, ‘we’re’ helping ‘them’, but again, it seems to be a ‘give a man a fish… and my fish is cool’ sort of solution… but do our participants want or need this?

 

How often is our work-for-change mostly about us, by us, and for us... when ideally it is mostly about ‘them’? (OK, given human self-interest, shouldn’t the changes we want at least be about all of us?)

All too often we want to be the solution, but really, our ‘grassroots’ clients, who are our true customers, need to generate their own solutions. Wouldn’t it be best if we listened and designed for long-term sustainability together?

 

As the Brilliant Sidekick Manifesto stated in two of its ten steps:

a) “I will step out of the spotlight: Sustainable solutions to poverty come from within, are bottom-up, and flow from local leaders who are taking the risks of holding their politicians accountable and challenging the status quo.”

b) “I will read ‘To Hell with Good Intentions’ again and again: Politicians, celebrities and billionaire philanthropists will tell me that I can be a hero. I cannot. The poor are not powerless or waiting to be saved. Illich will check my delusions of grandeur.”

 

We have examples of where we have stepped away and participants had to fend for themselves. At Valuing Voices, we’ve done post-project-exit evaluations 2-15 years afterward. What did participants value so much that they sustained it themselves (all about them, literally)? These Sustained and Emerging Impacts Evaluations (SEIE) also give us indications of Sustained Return on Investment (SusROI), a key missing metric. As respected evaluator Ricardo Wilson-Grau said in an email, “I think calculating cost-effectiveness of an intervention’s outcomes would be a wonderful challenge for a financial officer searching for new challenges – if not a Nobel prize in economics!”

Most of these evaluations are pretty bad news mixed with some good news about what folks could sustain after we left, what they couldn’t, and why not. (These are the ones folks expect to have great results; otherwise they wouldn’t share them!) While most clients are understandably interested in what of ‘theirs’ was still standing, and it was interesting disentangling whether the results were attributable to implementation, design, or partnership flaws or something else, what was mesmerizing was what came from ‘them’.

The key is looking beyond ‘unexpected’ results (aka what we didn’t expect but was still a direct result of our project, e.g. spare parts no longer being available to fix the water well pump once we left, or a drought rehabilitation water project that decreased violence against women) to the emerging impacts that are about ‘them’: results attributable not to us but only to our participants and partners who took over after our projects closed. One example: a Nepalese project ended, yet the credit groups of empowered women spawned support groups for battered women. Another: a child and maternal health project changed how it worked as women reverted to birthing at home after NGOs left; community leaders punished both parents with incarceration in the health clinic for a week if they didn’t give birth there (wow, did that work to sustain behavior change of both parents!).

Many of us at Valuing Voices are shocked that funders don’t seem that interested in this, as this is where participants not only take over (see picture: sustaining the project themselves) but are making it theirs, not ours. Imagine assuming the point of development is to BE SUSTAINABLE.

Source: Community Life Competence

Our participants and national stakeholder partners are our true clients, yet… Feedback Labs tells us Americans alone gave $358 billion to charities in 2014 (equivalent to the 2014 GDP of 20 countries), but how much of this was determined by what ‘beneficiaries’ want? Josh Woodard, a development expert, suggests a voucher approach where our true clients, our participants, would “purchase services from those competing organizations… [such an] approach to development would enable us all to see what services people actually value and want. And when we asked ourselves what our clients want, we would really mean the individuals in the communities we are in the business of working with and serving. Otherwise we’d be out of business pretty quickly.”

This opens the door to client feedback – imagine if participants could use social media to rate the sustained impacts on them of the projects they benefited from? A customer support expert wrote in Forbes, “Today, every customer has, or feels she has, a vote in how companies do business and treat customers. This is part of a new set of expectations among customers today that will only grow ... you can’t control product ratings, product discussions or much else in the way of reviews, except by providing the best customer experience possible and by being proactive in responding to negative trends that come to the surface in your reviews and ratings.”

So how well are we working with our participants for ‘development’ to be about them?

What do you think?