This chapter examines the sustainability of projects funded through the Global Environment Facility (GEF) Trust Fund by analyzing two tranches of evaluations. Co-author Susan Legro identified flaws in estimated greenhouse gas (GHG) emission reductions. With climate change in full swing, we must be able to trust the data we have.
The purpose of this research was to explore how public donors and lenders evaluate the sustainability of environmental and other sectoral development interventions. Specifically, the aim was to examine whether, how, and how well post project sustainability is evaluated in donor-funded climate change mitigation (CCM) projects, including the evaluability of these projects. We assessed the robustness of current practice in evaluating results after project exit, particularly the sustainability of outcomes and long-term impact, and we explored methods that could reduce uncertainty about achieving results, using data from two pools of CCM projects funded by the Global Environment Facility (GEF).
Evaluating sustainable development involves looking at the durability and continuation of net benefits from the outcomes and impacts of global development project activities and investments in various sectors in the post project phase, i.e., from 2 to 20 years after donor funding ends.1 Evaluating the sustainability of the environment is, according to the Organisation for Economic Co-operation and Development (OECD, 2015), at once a focus on natural systems of “biodiversity, climate change, desertification and environment” (p.1) that will need to consider the context in which these are affected by human systems of “linkages between poverty reduction, natural resource management, and development” (p. 3). This chapter focuses more narrowly on the continuation of net benefits from the outcomes and impacts of a pool of climate change mitigation (CCM) projects (see Table 1). The sustainability of CCM projects funded by the Global Environment Facility (GEF), as in a number of other bilateral and multilateral climate funds, rests on a theory of change that a combination of technical assistance and investments contribute to successfully durable market transformation, thus reducing or offsetting greenhouse gas (GHG) emissions.
Table 1: Changes in OECD DAC Criteria from 1991 to 2019

Sustainability, 1991 ("Will the benefits last?"): Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.

Sustainability, 2019: The extent to which the net benefits of the intervention continue, or are likely to continue. Note: Includes an examination of the financial, economic, social, environmental, and institutional capacities of the systems needed to sustain net benefits over time. Involves analyses of resilience, risks, and potential trade-offs.

Impact, 1991: The positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental, and other development indicators.

Impact, 2019: The extent to which the intervention has generated or is expected to generate significant positive or negative, intended or unintended, higher-level effects. . . . It seeks to identify social, environmental, and economic effects of the intervention that are longer-term or broader in scope.

Source: OECD/DAC Network on Development Evaluation (2019); italics are emphasis added by Cekan
CCM projects lend themselves to such analysis, as most establish ex-ante quantitative mitigation estimates and their terminal evaluations often contain a narrative description and ranking of estimated sustainability beyond the project’s operational lifetime, including the achievement of project objectives. The need for effective means of measuring sustainability in mitigation projects is receiving increasing attention (GEF Independent Evaluation Office [IEO], 2019a) and is increasingly important, as Article 13 of the Paris Agreement mandates that countries with donor-funded CCM projects report on their actions to address climate change (United Nations, 2015). As several terminal evaluations in our dataset stated, better data are urgently needed to track continued sustainability of past investments and progress against emissions goals to limit global warming.
Measuring Impact and Sustainability
Although impactful projects promoting sustainable development are widely touted as being the aim and achievement of global development projects, these achievements are rarely measured beyond the end of the project activities. Bilateral and multilateral donors, with the exception of the Japan International Cooperation Agency (JICA) and the U.S. Agency for International Development (USAID),2 have reexamined fewer than 1% of projects following a terminal evaluation, although examples exist of post project evaluations taking place as long as 15 years (USAID) and 20 years (Deutsche Gesellschaft für Internationale Zusammenarbeit [GIZ]) later (Cekan, 2015). Without such fieldwork, sustainability estimates can only rely on assumptions, and positive results may in fact not be sustained as little as 2 years after closure. An illustrative set of eight post project global development evaluations analyzed for the Faster Forward Fund of Michael Scriven in 2017 showed a range of results: One project partially exceeded terminal evaluation results, two retained the sustainability assumed at inception, and the other five showed a decrease in results of 20%–100% as early as 2 years post-exit (Zivetz et al., 2017a).
Since the year 2000, the U.S. government and the European Union have spent more than $1.6 trillion on global development projects, but only a few hundred post project evaluations have been completed, so the extent to which outcomes and impacts are sustained is not known (Cekan, 2015). A review of most bilateral donors shows zero to two post project evaluations each (Valuing Voices, 2020). A rare, four-country, post project study of 12 USAID food security projects also found wide variability in expected trajectories, with most projects failing to sustain expected results beyond as little as 1 year (Rogers & Coates, 2015). The study’s Tufts University team leaders noted that “evidence of project success at the time of exit (as assessed by impact indicators) did not necessarily imply sustained benefit over time” (Rogers & Coates, 2015, p. v). Similarly, an Asian Development Bank (ADB) study of post project sustainability found that “some early evidence suggests that as many as 40% of all new activities are not sustained beyond the first few years after disbursement of external funding,” and that review examined fewer than 14 of the 491 projects in the field (ADB, 2010). The same study described how assumed positive trajectories post funding fail to sustain and noted a
tendency of project holders to overestimate the ability or commitment of implementing partners—and particularly government partners—to sustain project activities after funding ends. Post project evaluations can shed light on what contributes to institutional commitment, capacity, and continuity in this regard. (ADB, 2010, p. 1)
Learning from post project findings can be important to improve project design and secure new funding. USAID recently conducted six post project evaluations of water/sanitation projects and learned about needed design changes from the findings, and JICA analyzed the uptake of recommendations 7 years after closure (USAID, 2019; JICA, 2020a, 2020b). As USAID stated in its 2018 guidance,
An end-of-project evaluation could address questions about how effective a sustainability plan seems to be, and early evidence concerning the likely continuation of project services and benefits after project funding ends. Only a post project evaluation, however, can provide empirical data about whether a project’s services and benefits were sustained. (para. 9)
Rogers and Coates (2015) expanded the preconditions for sustainability beyond only funding, to include capacities, partnerships, and ownership. Cekan et al. (2016) expanded ex-post project methods from examining the sustainability of expected project outcomes and impacts post closure to also evaluating emerging outcomes, namely “what communities themselves valued enough to sustain with their own resources or created anew from what [our projects] catalysed” (para. 19). In the area of climate change mitigation, rigorous evaluation of operational sustainability in the years following project closure should inform learning for future design and target donor assistance on projects that are most likely to continue to generate significant emission reductions.
How Are Sustainability and Impact Defined?
The original 1991 OECD Development Assistance Committee (DAC) criteria for evaluating global development projects included sustainability, and the criteria were revised in 2019. The revisions to the definition of sustainability emphasize the continuation of benefits rather than just activities, and they situate those benefits in a wider systemic context beyond the financial and environmental resources needed to sustain them, including resilience, risk, and trade-offs, presumably for those sustaining the benefits. Similarly, the criteria for impact have shifted from simply positive/negative, intended/unintended changes to effects over the longer term (see Table 1).
In much of global development, including in GEF-funded projects, impact and sustainability are usually estimated only at project termination, “to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and [projected] sustainability” (OECD DAC, 1991, p. 5). In contrast, actual sustainability can only be evaluated 2–20 years after all project resources are withdrawn, through desk studies, fieldwork, or both. The new OECD definitions present an opportunity to improve the measurement of sustained impact across global development, particularly via post project evaluations. Evaluations need to reach beyond projected to actual measurement across much of “sustainable development” programming, including that of the GEF.
GEF evaluations in recent years have been guided by the organization’s 2010 monitoring and evaluation (M&E) policy, which requires that terminal evaluations “assess the likelihood of sustainability of outcomes at project termination and provide a rating” (GEF Independent Evaluation Office [IEO], 2010, p. 31). Sustainability is defined as “the likely ability of an intervention to continue to deliver benefits for an extended period of time after completion; projects need to be environmentally as well as financially and socially sustainable” (GEF IEO, 2010, p. 27).
In 2017, the GEF provided specific guidance to implementing agencies on how to capture sustainability in terminal evaluations of GEF-funded projects (GEF, 2017, para. 8 and Annex 2): “The overall sustainability of project outcomes will be rated on a four-point scale (Likely to Unlikely)”:
Likely (L) = There are little or no risks to sustainability;
Moderately Likely (ML) = There are moderate risks to sustainability;
Moderately Unlikely (MU) = There are significant risks to sustainability;
Unlikely (U) = There are severe risks to sustainability; and
Unable to Assess (UA) = Unable to assess the expected incidence and magnitude of risks to sustainability
Although this scale is a relatively common measure for estimating sustainability among donor agencies, it has not been tested for reliability, i.e., whether multiple raters would provide the same estimate from the same data. It has also not been tested for construct validity, i.e., whether the scale is an effective predictive measure of post project sustainability. Validity issues include whether an estimate of risks to sustainability is a valid measure of the likelihood of post project sustainability; whether the narrative estimates of risk are ambiguous or double-barreled; and the efficacy of using a ranked, ordinal scale that treats sustainability as an either/or condition rather than a range (from no sustainability to 100% sustainability).
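The reliability concern can be made concrete. A minimal sketch, assuming two hypothetical raters scoring the same set of terminal evaluations on the GEF scale, computes Cohen's kappa, a standard measure of inter-rater agreement corrected for chance (the ratings below are invented for illustration, not drawn from any GEF dataset):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Inter-rater agreement on categorical ratings, corrected for chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: share of items where both raters chose the same category
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical ratings on the GEF scale: L, ML, MU, U
rater_1 = ["L", "ML", "ML", "MU", "L", "U", "ML", "L"]
rater_2 = ["L", "MU", "ML", "MU", "ML", "U", "ML", "L"]
print(round(cohens_kappa(rater_1, rater_2), 2))  # → 0.65
```

In this invented case, the two raters agree on 6 of 8 projects, yet chance-corrected agreement is only moderate; testing the actual scale would require multiple raters scoring real terminal evaluation narratives.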
Throughout this chapter, we identify projects by their GEF identification numbers, with a complete table of projects provided in the appendix.
The Limits of Terminal Evaluations
Terminal evaluations and even impact evaluations that mostly compare effectiveness rather than long-term impact were referenced as sources for evaluating sustainability in the GEF’s 2017 Annual Report on Sustainability (GEF IEO, 2019a). Although they can provide useful information on relevance, efficiency, and effectiveness, neither is a substitute for post project evaluation of the sustainability of outcomes and impacts, because projected sustainability may or may not occur. In a terminal evaluation of Mexican Sustainable Forest Management and Capacity Building (GEF ID 4149), evaluators made the case for ex-post project monitoring and evaluation of results:
There is no follow-up that can measure the consolidation and long-term sustainability of these activities. . . . Without a proper evaluation system in place, nor registration, it is difficult to affirm that the rural development plans will be self-sustaining after the project ends, nor to what extent the communities are readily able to anticipate and adapt to change through clear decision-making processes, collaboration, and management of resources. . . . They must also demonstrate their sustainability as an essential point in development with social and economic welfare from natural resources, without compromising their future existence, stability, and functionality. (pp. 5–9)3
Returning to a project area after closure also fosters learning about the quality of funding, design, implementation, monitoring, and evaluation and the ability of those tasked with sustaining results to do so. Learning can include how well conditions for sustainability were built in, tracked, and supported by major stakeholders. Assumptions made at design and final evaluation can then also be tested, along with theories of change (Sridharan & Nakaima, 2019). Finally, post project evaluations can verify the attributional claims made at the time of the terminal evaluation. As John Mayne explained in his 2001 paper:
In trying to measure the performance of a program, we face two problems. We can often—although frequently not without some difficulty—measure whether or not these outcomes are actually occurring. The more difficult question is usually determining just what contribution the specific program in question made to the outcome. How much of the success (or failure) can we attribute to the program? What has been the contribution made by the program? What influence has it had? (p. 3)
In donor- and lender-funded CCM projects, emission reduction estimates represent an obvious impact measure. They are generally based on a combination of direct effects—i.e., reductions due to project-related investments in infrastructure—and indirect effects—i.e., reductions due to the replication of “market transformation” investments from other funding or an increase in climate-friendly practices due to improvements in the policy and regulatory framework (Duval, 2008; Legro, 2010). Both of these effects are generally estimated over the lifetime of the mitigation technology involved, which is nearly always much longer than the operational duration of a given project (see Table 2).
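As a rough sketch of how such lifetime estimates are typically constructed (the figures and the causality factor below are illustrative assumptions, not the methodology of any specific GEF project):

```python
def lifetime_reductions(annual_direct_tco2, tech_lifetime_years,
                        annual_indirect_tco2=0.0, causality_factor=1.0):
    """Estimate total GHG reductions (tCO2e) over the technology lifetime.

    Direct effects: reductions from project-financed investments.
    Indirect effects: replication and policy effects, discounted by a
    causality factor reflecting the share attributed to the project.
    """
    direct = annual_direct_tco2 * tech_lifetime_years
    indirect = annual_indirect_tco2 * tech_lifetime_years * causality_factor
    return direct + indirect

# Illustrative only: a project installing equipment with a 20-year lifetime
# claims reductions far beyond its own operational duration.
total = lifetime_reductions(annual_direct_tco2=10_000, tech_lifetime_years=20,
                            annual_indirect_tco2=50_000, causality_factor=0.5)
print(f"{total:,.0f} tCO2e")  # → 700,000 tCO2e
```

The point of the sketch is that most of the claimed total accrues after project exit, which is precisely the period that terminal evaluations cannot observe.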
Table 2: Typology of GHG Reductions Resulting from Typical Project Interventions

During the project lifetime (quarterly/annual monitoring):
- Reductions directly financed by donor-funded pilot project(s) or investment(s)
- Reductions from policy uptake (e.g., reduced fossil fuel use from curtailment of subsidies, spillover effects from tax incentives, increased government support for renewable energy due to strategy development) (co-)funded by the donor
- Reductions from market transformation (changes in availability of financing, increased willingness of lenders, reduction in perceived risk) supported by pilot demonstrations and/or outreach and awareness raising (co-)funded by the donor

During the post project lifetime (post project evaluation):
- Continuing reductions from project-financed investments (through the end of the technology lifetime; e.g., 20 years for buildings, 10 years for industrial equipment, etc.)
- Continuing reductions from policy uptake (e.g., reduced fossil fuel use from curtailment of subsidies, spillover effects from tax incentives, increased government support for energy efficiency or renewable energy due to strategy development)
- Continuing reductions from market transformation (changes in availability of financing, increased willingness of lenders, reduction in perceived risk) as a legacy of the pilot demonstrations and/or outreach and awareness raising funded by the donor-funded project
- New reductions from the continuation of the investment or financing mechanism established by the donor-funded project
The increasing use of financial mechanisms such as concessional loans and guarantees as a component of donor-funded CCM projects, such as those funded by the Green Climate Fund (https://www.greenclimate.fund/), can also limit the ability of final evaluations to capture sustainability, because the bulk of subsequent investments in technologies that are assumed with revolving funds will not take place during the project lifetime. A 2012 paper by Rob van den Berg, then head of the GEF Independent Evaluation Office, supported the need for post project evaluation, notably stating:
Barriers targeted by GEF projects, and the results achieved by GEF projects in addressing market transformation barriers . . . facilitate in understanding better whether the ex-post changes being observed in the market could be linked to GEF projects and pathways through which outcomes and intermediate states . . . [and] the extent GEF-supported CCM activities are reducing GHGs in the atmosphere . . . because it helps in ascertaining whether the incremental GHG reduction and/or avoidance is commensurate with the agreed incremental costs supported by GEF. . . . It is imperative that the ex-ante and ex-post estimates of GHG reduction and avoidance benefits are realistic and have a scientific basis. (GEF IEO, 2012, p. 13)
This description of GHG-related impacts illustrates the difficulties associated with accurately drawing conclusions about sustainability from using a single scale to estimate “the likely ability [emphasis added] of an intervention to continue to deliver benefits for an extended period of time” (GEF IEO, 2010, p. 35) due to several factors. First, the GEF’s 4-point scale is supposed to capture two different aspects of continuation: ongoing benefits from a project-related investment, and new benefits from the continuation of financing mechanisms. Without returning to evaluate the continued net benefits of the now-closed investment, such assumptions cannot be fully claimed. Second, the scale is supposed to capture benefits that can be estimated in a quantitative way (e.g., solar panels that offset the use of a certain amount of electricity from diesel generators); benefits that can be evaluated through policy or program evaluation (e.g., the introduction of a law on energy efficiency); and benefits that will require careful, qualitative study to determine impacts (e.g., training programs for energy auditors or awareness-raising for energy consumers, leading to knowledge and decision changes). Aggregating and weighing such an array of methods into one ranking is methodologically on shaky ground, especially without post project measurements to confirm whether results happened at any time after project closure.
The impetus for this research was a sustainability analysis conducted by the GEF IEO that was summarized in the 2017 GEF Annual Performance Report (GEF IEO, 2019a). The study stated: “The analysis found that outcomes of most of the GEF projects are sustained during the postcompletion period, and a higher percentage of projects achieve environmental stress reduction and broader adoption than at completion” (p. 17). Learning more about postcompletion outcomes and assessing how post project sustainability was evaluated was the aim of this work.
This chapter’s research sample consists of two sets of GEF project evaluations. We chose projects funded by the GEF because of the large size of the total project pool. For example, the Green Climate Fund lacks a large pool of mitigation projects that would be suitable for post project evaluation. Our first tranche was selected from the pool of CCM projects cited in the sustainability analysis, which included a range of projects with the earliest start date of 1994 and the latest closing date of 2013 (GEF IEO, 2019a). These projects constituted $195.5 million in investments. The pool of projects in the climate change focal area (n = 17), comprising one third of the GEF IEO sample, was then selected from the 53 projects listed in the report for further study. We then classified the selected projects by which ones had any mention of field-based post project verification according to an evaluability checklist (Zivetz et al., 2017a). This list highlights methodological considerations including: (a) data showing overall quality of the project at completion, including M&E documentation needed on original and post project data collection; (b) time postcompletion (at least 2 years); (c) site selection criteria; and (d) proof that project results were isolated from concurrent programming to ascertain contribution to sustained impacts (Zivetz et al., 2017a).
Next, we reviewed GEF documentation to identify any actual quantitative or qualitative measures of post project outcomes and impacts. These could include: (a) changes in actual energy efficiency improvements against final evaluation measures used, (b) sustained knowledge or dissemination of knowledge change fostered through trainings, (c) evidence of ownership, or (d) continued or increased dissemination of new technologies. Such verification of assumptions in the final documents typically explores why the assumptions were or were not met, and what effects changes in these assumptions would have on impacts, such as CO2 emissions projections.
The second tranche consisted of projects in the climate change focal area that were included in the 2019 cohort of projects for which the GEF received terminal evaluations. As the GEF 2019 Annual Performance Report explained:
Terminal evaluations for 193 projects, accounting for $ 616.6 million in GEF grants, were received and validated during 2018–2019 and these projects constitute the 2019 cohort. Projects approved in GEF-5 (33 percent), GEF-4 (40 percent) and GEF-3 (20 percent) account for a substantial share of the 2019 cohort. Although 10 GEF Agencies are represented in the 2019 cohort, most of these projects have been implemented by UNDP [United Nations Development Programme] (56 percent), with World Bank (15 percent) and UNEP [United Nations Environment Programme] (12 percent) also accounting for a significant share. (GEF IEO, 2020, p. 9)
We added the second tranche of projects to represent a more current view of project performance and evaluation practice.
The climate change focal area subset consisted of 38 completed GEF projects, which account for approximately $155.7 million in GEF grants (approximately 20% of the total cohort and 25% of the overall cohort budget). Projects included those approved in 1995–1998 (GEF-1; n = 1) and 2003–2006 (GEF-3; n = 2), but 68% were funded in 2006–2010 (GEF-4; n = 26), and 24% in 2010–2014 (GEF-5; n = 9), making them more recent as a group than the 2019 cohort as a whole. Six GEF agencies were represented: Inter-American Development Bank (IDB), International Fund for Agricultural Development (IFAD), UNDP, UNEP, United Nations Industrial Development Organization (UNIDO), and the World Bank.
We eliminated three projects listed in the climate focal area subset from consideration in the second tranche because they had not been completed, leaving a pool of 35 projects. Ex-ante project documentation, such as CEO endorsement requests, and terminal evaluation reports were then reviewed for initial estimates of certain project indicators, such as GHG emission reductions, and ratings of estimated sustainability on the 4-point scale, including the narrative documentation that accompanied the ratings.
The question of whether post project sustainability was being measured was based on the first tranche of projects and on the sustainability analysis in which they were included. Most of the documents cited in the sustainability analysis were either terminal or impact evaluations focused on efficiency (GEF IEO, 2019a), and most of the documents and report analysis focused on estimated sustainability. Of the 53 “postcompletion verification reports,” as they are referred to in the review (GEF IEO, 2019a, p. 62), we found only 4% to contain adequate information to support the analysis of sustainability. Our wider search for publicly available post project evaluations, which would have constituted an evidence base for sustained outcomes and environmental stress reduction and adoption cited in the GEF IEO 2019 analysis, did not identify any post project evaluations. We were unable to replicate the finding that “84% of these projects that were rated as sustainable at closure also had satisfactory postcompletion outcomes. . . . Most projects with satisfactory outcome ratings at completion continued to have satisfactory outcome ratings at postcompletion” (GEF IEO, 2019a, p. 3) or to compare the CCM subset of projects with this conclusion. The report stated that “the analysis of the 53 selected projects is based on 61 field verification reports. For 81 percent of the projects, the field verification was conducted at least four years after implementation completion [emphasis added].” However, we found no publicly accessible documentation that could be used to confirm the approach to field verification for 8 of the 17 projects.
Similarly, the available documentation for the projects lacked the most typical post project hallmarks, such as methods of post project data collection, comparisons of changes from final to post project outcomes and impacts at least 2 years post closure, and tracing contribution of the project at the funded sites to the changes. Documentation focused on a rating of estimated sustainability with repeated references to only the terminal evaluations and closure reports. In summary, of the 17 projects selected for review in the first tranche, 14 had data consisting of terminal evaluations, and none was 2–20 years post closure. We did not find publicly available evidence to support measurement of post project sustainability other than statements that such evidence was gathered in a handful of cases. Of the pool of 17 projects, only two (both from India) made any reference to post project data regarding the sectors of activity in subsequent years. However, these two were terminal evaluations within a country portfolio review and could not be substantiated with publicly accessible data.
We then screened the first tranche of projects using the Valuing Voices evaluability checklist (Zivetz et al., 2017b):
High-quality project data at least at terminal evaluation, with verifiable data at exit: Of 14 projects rated for sustainability, only six were rated likely to be sustained, and outcome and impact data were scant.
Clear ex-post methodology, sufficient samples: None of the evaluations available was a post project evaluation of sustainability or long-term impact. Although most projects fell within the evaluable 2–20 years post project (the projects had been closed 4–20 years), none had proof of return evaluation. There were no clear post project sampling frames, data collection processes including identification of beneficiaries/informants, site selection, isolating legacy effects of the institution or other concurrent projects, or analytic methods.
Transparent benchmarks based on terminal, midterm, and/or baseline data on changes to outcomes or impacts (i.e., M&E documents show measurable targets and indicators, and baseline vs. terminal evaluations use methods comparable to those used in the post project period): For some of the 17 projects, project inception documents and terminal evaluations were available; in other cases, GEF evaluation reviews were available. Two had measurable environmental indicators that compared baseline to final values, but none measured changes after project closure.
Substantiated contribution vs. attribution of impacts: Examples of substantiated contribution were not identified.
Evaluation reports revealed several instances for which we could not confirm attribution. For example, evaluation of the project Development of High Rate BioMethanation Processes as Means of Reducing Greenhouse Gas Emissions (GEF ID 370), which closed in 2005, referenced the following subsequent market information:
As of Nov 2012, capacity installed from waste-to-energy projects running across the country for grid connected and captive power are 93.68MW and 110.74 MW respectively [versus 3.79KW from 8 sub-projects and 1-5 MW projects]. . . . The technologies demonstrated by the 16 sub-projects covered under the project have seen wide-scale replication throughout the country. . . . An installed capacity of 201.03MW within WTE [waste to energy] projects and the 50% of this is attributed to the GEF project. (GEF IEO, 2013, vol. 2, p. 64)
Claims that “the technical institutes strengthened as a result of the project were not fully effective at the time of project completion but are now actively engaged in the promotion of various biomethanation technologies” are unsubstantiated in publicly available information; as a result, the ex-post methods behind the contribution/attribution data are not clear. Another project in India, Optimizing Development of Small Hydel [hydroelectric] Resources in Hilly Areas (GEF ID 386), projected that later investments in the government’s 5-year plans would happen and that the resulting hydropower production would be attributable to the original project (GEF IEO, 2013); again, this attributional analysis was not documented. Analysis of a third project in India, Coal Bed Methane Capture and Commercial Utilization (GEF ID 325), which closed in 2008, claimed results that could not be reproduced: “Notable progress has been made through replication of projects, knowledge sharing, and policy development” and “expertise was built” (GEF IEO, 2013, Vol. 2, p. 90). The further claim that the project contributed to “the total coal bed methane production in the country and has increased to 0.32 mmscmd [million metric standard cubic meters per day], which is expected to rise to 7.4 mmscmd by the end of 2014” is without proof. The evaluation reported estimates of indirect GHG emission reduction, based on postcompletion methane gas production estimates of 0.2 million m3 per day:
1.0 Million tons equivalent per year, considering an adjustment factor of 0.5 as the GEF contribution [emphasis added], the indirect GHG emission reduction due to the influence of the project is estimated to be 0.5 million tons of CO2 equivalent per annum (2.5 million tons over the lifetime period of 5 years). (GEF IEO, 2013, Vol. 2, p. 91)
Yet without verification of coal bed methane capture and commercial utilization continuing, this impact cannot be claimed.
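The arithmetic behind the quoted estimate is easy to reproduce, which underscores that the result hinges entirely on the unverified assumption that methane production continued (figures as given in the terminal evaluation):

```python
# Figures as quoted in GEF IEO (2013, Vol. 2, p. 91)
gross_annual_mt = 1.0        # Mt CO2e/yr, from postcompletion production estimates
gef_adjustment_factor = 0.5  # share of the reduction attributed to the GEF project
lifetime_years = 5

# Indirect reduction attributed to the project's influence
indirect_annual_mt = gross_annual_mt * gef_adjustment_factor   # 0.5 Mt CO2e/yr
indirect_lifetime_mt = indirect_annual_mt * lifetime_years     # 2.5 Mt CO2e over 5 years
```

Both the adjustment factor and the continued production level are assumptions; a post project evaluation could test either, but neither was verified in the field.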
How Is Sustainability Being Captured?
Fifteen of the 17 CCM projects we reviewed in the first tranche were rated on a 4-point scale at terminal evaluation. Of those 15, 12 had overall ratings of satisfactory or marginally satisfactory, and one was rated highly satisfactory overall. Eleven of the sustainability ratings were likely or marginally likely. Only two projects were rated marginally unlikely overall or for sustainability, and only one project received marginally unlikely in both categories (the Demand Side Management Demonstration energy conservation project that ended in 1999 [GEF ID 64]). Although none of the documents mentioned outcome indicators, eight of the 17 included estimates of direct and indirect CO2 impacts.
In the second pool of projects—the CCM subset of the 2019 cohort—63% of the projects were rated in the likely range for sustainability (n = 22; nine were rated likely and 13 marginally likely). This is slightly higher than the 2019 cohort as a whole, in which 59% were rated in the likely range. In turn, the 2019 annual performance report noted that “the difference between the GEF portfolio average and the 2019 cohort is not statistically significant for both outcome and sustainability rating” (GEF IEO, 2020, p. 9). It is slightly lower than the percentage of CCM projects receiving an overall rating of marginally likely or higher in the 2017 portfolio review (68%, n = 265; GEF IEO, 2017, p. 78).
In this second set of projects, only two received a rating of marginally unlikely, and only one received a sustainability rating of unlikely. The remainder of the projects could not be classified using the 4-point rating scale because they had used an either/or estimate (one project), a 5-point scale (one project), or an estimate based on the assessment of risks to development outcome (two projects). Six projects could not be assessed due to the absence of a publicly accessible terminal evaluation in the GEF and implementing agency archives.
How Effectively Is Sustainability Being Captured?
Across the first set of reports, sustainability was claimed on the basis that “84% of these projects that were rated as sustainable at closure also had satisfactory postcompletion outcomes, as compared with 55% of the unsustainable projects” (GEF IEO, 2019a, p. 29). The data did not support this claim, even during implementation.
As a Brazilian project (GEF ID 2941) showed, sustainability is unlikely when project achievements are weak, and exit conditions and benchmarks need to be clear: “The exit strategy provided by IDB Invest is essentially based on financial-operational considerations but does not provide answers to the initial questions how an EEGM [energy efficiency guarantee mechanism] should be shaped in Brazil, how relevant it is and for whom, and to whom the EEGM should be handed over” (p. 25).
In Russia, the terminal evaluation for an energy efficiency project (GEF ID 292) cited project design flaws that seemed to belie its sustainability rating of likely: “From a design-for-replication point of view the virtually 100% grant provided by the GEF for project activities is certainly questionable” (Global Environment Facility Evaluation Office [GEF EO], 2008, p. 20). Further, the assessment that “the project is attractive for replication, dissemination of results has been well implemented, and the results are likely to be sustainable [emphasis added] for the long-term, as federal and regional legislation support is introduced” (GEF EO, 2008, p. 39), makes a major assumption regarding changes in the policy environment. (In fact, federal legislation was introduced 2 years post project, and the extent of enforcement would require examination.)
A Pacific regional project (GEF ID 1058) was rated as likely to be sustained, but its report notes that it “does not provide overall ratings for outcomes, risks to sustainability, and M&E” (p. 1).
The Renewable Energy Development project in China (GEF ID 446), which closed in 2007, was evaluated in 2009 (not post project, but a delayed final evaluation). The report considered the project sustainable with a continued effort to support off-grid rural electrification, claiming, “the market is now self-sustaining, and thus additional support is not required” (p. 11). The project estimated avoided CO2 emissions and cited 363% of its target as achieved; however, the calculations were based on 2006 emissions values for the thermal power sector and data from all wind farms in China, without a bottom-up estimate. The interpolation of these data lacks verification.
Similar sampling issues emerge in a project in Mexico (GEF ID 643): “A significant number of farmers . . . of an estimated 2,312 farmers who previously had had no electricity” (p. 20) saw their productivity and incomes increase as a result of their adoption of productive investments (e.g., photovoltaic-energy water-pumping systems and improved farming practices). A rough preliminary estimate is extrapolated from an evaluation of “three [emphasis added] beneficiary farms, leading to the conclusion that in these cases average on-farm increases in income more than doubled (rising by 139%)” (p. 21).
Baseline to terminal evaluation comparisons were rare, with the exception of photovoltaic energy projects in China and Mexico, and none were post project. Two were mid-term evaluations, which could not assess final outcomes much less sustainability. Ex-post project evaluations far more typically focus on the contributions that projects made, because only in rare cases can the attribution be isolated, especially for a project pool, where the focus is often on creating an enabling environment reliant on a range of actors. One such example is the Indian energy efficiency project approved in 1998 (GEF ID 404), in which
the project resulted in a favorable environment for energy-efficiency measures and the sub-projects inspired many other players in similar industries to adopt the demonstrated technologies. Although quantitative data for energy saved by energy efficiency technologies in India is not available, it is evident that due to the change in policy and financial structure brought by this project, there is an increase in investment in energy efficiency technologies in the industries. (GEF IEO, 2013, Vol. 2., p. 95)
And while GEF evaluators are asking for ex-post evaluation, in an earlier version of this book, Evaluating Climate Change Action for Sustainable Development (Uitto et al., 2017), the authors encouraged us to be “modest” in our expectations of extensive ex-post evaluations; exploration of ex-post evaluation’s confirmatory power seemingly has not occurred since:
The expectations have to be aligned with the size of the investment. The ex-post reconstruction of baselines and the assessment of quantitative results is an intensive and time-consuming process. If rigorous, climate change-related quantitative and qualitative data are not available in final reports or evaluations of the assessed projects, it is illusive to think that an assessment covering a portfolio of several hundred projects is able to fill that gap and to produce aggregated quantitative data, for example on mitigated GHG emissions. When producing data on proxies or qualitative assessments, the expectations must be realistic, not to say modest. (p. 89)
Following an analysis of the sustainability estimates in the first pool of projects, we screened project documentation and terminal evaluations for conditions that foster sustainability during planning, implementation, and exit. We also analyzed how well the projects reported on factors that could be measured in a post project evaluation and factors that would predispose projects to sustainability. These sustained impact conditions consisted of the following elements: (a) resources, (b) partnerships and local ownership, (c) capacity building, (d) emerging sustainability, (e) evaluation of risks and resilience, and (f) CO2 emissions (impacts).
Although documentation in evaluations did not verify sustainability, many examples exist of data collection that could support post project analyses of sustainability and sustained impacts in the future. Most reports cited examples of resources that had been generated, partnerships that had been fostered for local ownership and sustainability, and capacities that had been built through training. Some terminal evaluations also captured emerging impacts due to local efforts to sustain or extend impacts of the project that had not been anticipated ex-ante.
The Decentralized Power Generation project (GEF ID 4749) in Lebanon provides a good example of a framework to collect information on elements of sustainability planning at terminal (see Table 3).
Table 3: Sustainability Planning from a Decentralized Power Generation Project in Lebanon (GEF ID 4749)

Resources
- Are there financial risks that may jeopardize the sustainability of project outcomes?
- What is the likelihood of financial and economic resources not being available once GEF grant assistance ends?

Local ownership and partnerships
- What is the risk, for instance, that the level of stakeholder ownership (including ownership by governments and other key stakeholders) will be insufficient to allow for the project outcomes/benefits to be sustained?
- Do the various key stakeholders see that it is in their interest that project benefits continue to flow?
- Is there sufficient public/stakeholder awareness in support of the project’s long-term objectives?

Benchmarks, risks, & resilience
- Do the legal frameworks, policies, and governance structures and processes within which the project operates pose risks that may jeopardize sustainability of project benefits?
- Are requisite systems for accountability and transparency, and required technical know-how, in place?
- Are there ongoing activities that may pose an environmental threat to the sustainability of project outcomes?
- Are there social or political risks that may threaten the sustainability of project outcomes?

Source: GEF ID 4749 Terminal Evaluation, p. 45. Note: Capacity building and emerging sustainability were missing from project 4749.
Tangible examples of the above categories at terminal evaluations include the following.
Resources

The most widespread assumption for sustainability was sufficient financial and in-kind resources, often reliant on continued national investments or new private international investments, which could be verified. National resources that could sustain results include terminal evaluation findings such as:
Funding for fuel cell and electric vehicle development by the Chinese Government had increased from Rmb 60 million (for the 1996-2000 period) to more than Rmb 800 million (for the 2001-2005 period). More recently, policymakers have now targeted hydrogen commercialization for the 2010-2020 period. (GEF ID 445, p. 17)
Another example is: “About 65 percent of [Indian] small Hydro electromechanical Equipment is sourced locally” (GEF ID 386; GEF IEO, 2013, Vol. 2, p. 76). The terminal evaluation of a global IFC project stated that “Moser Baer is setting up 30 MW solar power plants with the success of the 5 MW project. Many private sector players have also emulated the success of the Moser Baer project by taking advantage of JNNSM scheme” (GEF ID 112, p. 3).
Local Ownership and Partnerships
The Russian Market Transformation for EE Buildings project (GEF ID 3593) showed in its recommendation to governmental stakeholders that their ownership would be essential for sustainability, describing “a suitable governmental institution to take over the ownership over the project web site along with the peer-to-peer network ensuring the sustainability of the tools [to] support the sustainability of the project results after the project completion” (p. xi). An Indian project (GEF ID 386) noted how partnerships could sustain outcomes:
By 2001, 16 small hydro equipment manufacturers, including international joint ventures (compared to 10 inactive firms in 1991) were operational. . . . State government came up with policies with financial incentives and other promotional packages such as help in land acquisition, getting clearances, etc. These profitable demonstrated projects attracted private sector and NGOs to set up similar projects. (GEF IEO, 2013, Vol. 2, p. 74)
The Renewable Energy for Agriculture project in Mexico (GEF ID 643) established the “percentage of direct beneficiaries surveyed who learned of the equipment through FIRCO’s promotional activities” (86%), “number of replica renewable energy systems installed” (847 documented replicas), and “total number of technicians and extensionists trained in renewable energy technologies” (p. 33). This came to 3022, or 121% of the original goal of 2500, which provides a good measure of how the project exceeded this objective.
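The percentage-of-goal figure cited here is straightforward to reproduce. A sketch of the arithmetic (the helper name is ours, purely illustrative):

```python
# Achievement-versus-target arithmetic for the Mexico project (GEF ID 643):
# 3,022 technicians and extensionists trained against an original goal of 2,500.

def percent_of_goal(achieved: int, goal: int) -> int:
    """Achievement as a rounded percentage of the original target."""
    return round(achieved / goal * 100)

print(percent_of_goal(3022, 2500))  # 121
```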
Recent post project evaluations also address what emerged after the project that was unrelated to the existing theory of change. These emerging findings are rarely documented in terminal evaluations, but some projects in the first pool included information about unanticipated activities or outcomes at terminal evaluation, and these could be used for future post project fieldwork follow-up. As a consequence of the hydroelectric resource project, for example, the Indian Institute “developed and patented the designs for water mills” (GEF ID 386; GEF IEO, 2013, Vol. 2, p. 73). The terminal evaluation for another project stated that “following the UNDP-GEF project, the MNRE [Ministry of New and Renewable Energy] initiated its own programs on energy recovery from waste. Under these programs, the ministry has assisted 14 projects with subsidies of US$ 2.72 million” (GEF ID 370; GEF IEO, 2013, Vol. 2, p. 62).
Benchmarks, Risks, and Resilience
As the GEF’s 2019 report itself noted, “The GEF could strengthen its approach to assessing sustainability further by explicitly addressing resilience” (GEF IEO, 2019a, p. 33). Not doing so is a risk, as our climate changes. Two evaluations noted “no information on environmental risks to project sustainability”: the Jamaican pilot on Removal of Barriers to Energy Efficiency and Energy Conservation (GEF ID 64; p. 68) and a Pacific regional project (GEF ID 1058). For likelihood of sustainability, the Jamaican project was rated moderately unlikely and the Pacific Islands project was rated likely, though its evaluation “does not provide overall ratings for outcomes, risks to sustainability, and M&E” other than asserting that
the follow-up project, which has been approved by the GEF, will ensure that the recommendations entailed in the documents prepared as part of this project are carried out. Thus, financial risks to the benefits coming out of the project are low. (p. 3)
Greenhouse Gas Emissions (Impacts)
In GEF projects, timeframe is an important issue, which makes post project field verification that much more important. As the GEF IEO stated in 2018, “Many environmental results take more than a decade to manifest. Also, many environmental results of GEF projects may be contingent on future actions by other actors” (GEF IEO, 2018, p. 34).
Uncertainty and Likelihood Estimates
Estimating the likelihood of sustainability of greenhouse gas emissions at terminal evaluation raises another challenge: the relatively high level of uncertainty concerning the achievement of project impacts related to GHG reduction. GHG reductions are the primary objective stated in the climate change focal area, and they appear as a higher level impact across projects regardless of the terminology used. For a global project on bus rapid transit and nonmotorized transport, the objective was to “reduce GHG emissions for transportation sector globally” (GEF ID 1917, p. 9). For a national project on building sector energy efficiency, the project goal was “the reduction in the annual growth rate of GHG emissions from the Malaysia buildings sector” (GEF ID 3598; Aldover & Tiong, 2017, p. i). For a land management project in Mexico, the project objective was to “mitigate climate change in the agricultural units selected . . . including the reduction of emissions by deforestation and the increase of carbon sequestration potential” (GEF ID 4149, p. 21). For a national project to phase out ozone-depleting substances, the project objective was to “reduce greenhouse gas emissions associated with industrial RAC (refrigeration and air conditioning) facilities in The Gambia” (GEF ID 5466, p. vii). Clearly, actual outcomes in GHG emissions need to be considered in any assessment of the likelihood of sustainability of outcomes.
Unlike projects in the carbon finance market, GEF projects estimate emissions for a project period that usually exceeds the duration of the GEF intervention. In most cases, ex-ante estimated GHG reductions in the post project period are larger than estimated GHG reductions during the project lifetime. In practice, this means that for projects for which the majority of emissions will occur after the terminal evaluation, evaluators are being asked to estimate the likelihood that benefits will not only continue, but will increase due to replication, market transformation, or changes in the technology or enabling environment. Table 4 provides several examples from the GEF 2019 cohort of how GHG reductions may be distributed over the project lifecycle.
Table 4: Distribution of Estimated GHG Reductions Ex-Ante for Selected Projects in the CCM Subset of the GEF 2019 Cohort
[Columns: project (e.g., EE standards/labels); ex-ante GHG reduction estimates during project lifetime (tCO2e); total reductions (tCO2e); % of reductions achieved by the terminal evaluation.]
Sources: 2941 Project Document, pp. 35–37; 2951 PAD/CEO Endorsement Request, p. 88; 3216 Project Document, pp. 80–90; 3555 Terminal Evaluation; 3593 Terminal Evaluation, p. 23; 3598 Terminal Evaluation, p. 24; 3755 GEF CEO Endorsement Request; 3771 Terminal Evaluation pp. 8–9
The range in Table 4 shows the substantial variation in uncertainty when estimating the likelihood of long-term project impacts. For projects designed to achieve all of their emission reductions during their operational lifetimes, the achievement of GHG reductions can be verified as a part of the terminal evaluation. However, most projects assume that nearly all estimated GHG reductions will occur in the post project period, so uncertainty levels are much higher and estimates may be more difficult to compile. In other evaluations, evaluators may identify inconsistent GHG estimates (e.g., GEF ID 4157 and 5157), or recommend that the ex-ante estimates be downsized (e.g., GEF ID 3922, 4008, and 4160). These trends may also be difficult to capture in likelihood estimates.
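The distinction Table 4 draws can be expressed as a simple ratio: the share of ex-ante reductions that is verifiable by terminal evaluation versus the share deferred to the post project period. A sketch with invented figures (not taken from Table 4):

```python
# Illustrative only: what share of ex-ante estimated GHG reductions falls
# within the project lifetime (verifiable at terminal evaluation) versus the
# post project period (unverifiable without ex-post fieldwork)?
# The project names and figures below are made up for illustration.

def verifiable_share(lifetime_tco2e: float, total_tco2e: float) -> float:
    """Fraction of total ex-ante reductions achieved by the terminal evaluation."""
    return lifetime_tco2e / total_tco2e

projects = {
    "Project A (all reductions in-project)": (100_000, 100_000),
    "Project B (mostly post project)": (50_000, 500_000),
}
for name, (lifetime, total) in projects.items():
    share = verifiable_share(lifetime, total)
    print(f"{name}: {share:.0%} verifiable, {1 - share:.0%} post project")
```

The lower the verifiable share, the more an evaluator’s sustainability rating rests on projection rather than measurement.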
Conclusions and Recommendations
While sustainability has been estimated in nearly all of the projects in the two pools we considered, it has not been measured. Assessing the relationship between projected sustainability and actual post project outcomes was not possible due to insufficient data. Further, findings from the first pool of climate change mitigation projects did not support the conclusion that “outcomes of most of the GEF projects are sustained during the postcompletion period” (GEF IEO, 2019a, p. 17). In the absence of sufficient information regarding project sustainability, determining post project GHG emission reductions is not possible, because these are dependent on the continuation of project benefits following project closure.
We also conclude that although the 4-point rating scale is a common tool for estimating the likelihood of sustainability, the measure itself has not been evaluated for reliability or validity. The scale is often used to summarize diverse trends amid varying levels of uncertainty. The infrequency of the unlikely rating in terminal evaluations may result from this limitation: evaluators believe that some benefits (greater than 0%) will continue, but the 4-point scale cannot convey an estimate of what percentage of benefits will continue. Furthermore, the use of market studies to assess sustainability is not effective in the absence of attributional analysis linking results to the projects that ostensibly caused change.
As a result, the current evaluator’s toolkit still does not provide a robust means of estimating post project sustainability and is not suitable as a basis for postcompletion claims. That said, M&E practices in the CCM projects we studied supported the collection of information documenting conditions (e.g., resources, partnerships, capacities) in a way that makes projects evaluable, i.e., suitable for post project evaluation. We recommend that donors provide financial and administrative support for project data repositories that retain data in-country at terminal evaluation for post project return and country-level learning, and that they include evaluability (control groups, sample sizes, and sites selected by evaluability criteria) in the assessment of project design. We also recommend sampling immediately from the 56 CCM projects in the two sets that have been closed for at least 2 years.
Donors’ allocation of sufficient resources for CCM project evaluations would allow verification of actual long-term, post project sustainability using the OECD DAC (2019) definition of “the continuation of benefits from a development intervention after major development assistance has been completed” (p. 12). It would also enable evaluators to consider enumerating project components that are sustained rather than using an either/or designation (sustained/not sustained). Evaluation terms of reference should clarify the methods used for contribution vs. attribution claims, and they should consider decoupling estimates of direct and indirect impacts, which are difficult to measure meaningfully in a single measure. For the GEF portfolio specifically, the development of a postcompletion verification approach could be expanded from the biodiversity focal area to the climate change focal area (GEF IEO, 2019b), and lessons could also be learned from the Adaptation Fund’s (2019) commissioned work on post project evaluations. Bilateral donors such as JICA have developed rating scales for post project evaluations that assess impact in a way that captures both direct and indirect outcomes (JICA, 2017).
Developing country parties to the Paris Agreement have committed to providing “a clear understanding of climate change action” in their countries under Article 13 of the agreement (United Nations, 2015), and donors have a clear imperative to press for continued improvement in reporting on CCM project impacts and using lessons learned to inform future support.
We use the term “postproject” evaluations to distinguish these longer term evaluations from terminal evaluations, which typically occur within 3 months of the end of donor funding. While some donors (JICA, 2004; USAID, 2019) use the term “ex-post evaluation” to refer to evaluations distinct from the terminal/final evaluation and occurring 1 year or more after project closure, other donors use the terms “terminal evaluation” and “ex-post evaluation” synonymously. Other terms include postcompletion, post-closure, and long-term impact.
In a 2013 meta-evaluation, Hageboeck et al. found that only 8% of projects in the 2009–2012 USAID PPL/LER evaluation portfolio (26 of 315) were evaluated post-project following the termination of USAID funding.
Rogers, B. L., & Coates, J. (2015). Sustaining development: A synthesis of results from a four-country study of sustainability and exit strategies among development food assistance projects. FANTA III, Tufts University, & USAID. https://www.fantaproject.org/research/exit-strategies-ffp
Open Access. This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Cite: Cekan J., Legro S. (2022) Can We Assume Sustained Impact? Verifying the Sustainability of Climate Change Mitigation Results. In: Uitto J.I., Batra G. (eds) Transformational Change for People and the Planet. Sustainable Development Goals Series. Springer, Cham. https://doi.org/10.1007/978-3-030-78853-7_8
I am quoting liberally and highlighting our work from the Adaptation Fund’s website where their commitment to learning from what lasts is clear. “Ex post evaluations are a key element of the AF-TERG FY21-FY23 strategy and work programme, originating from the request of the Adaptation Fund Board to develop post-implementation learning for Fund projects and programmes and provide accountability of results financed by the Fund. They intend to evaluate aspects of both sustainability of outcomes and climate resilience, and over time feed into ex-post-evaluation-informed adjustments within the Fund’s Monitoring Evaluation and Learning (MEL) processes.”
How are we defining sustainability’s path to evaluate it? Here is a flowchart from our training:
There are four phases, from 0 to 3. Phase 0, Foundational Review: This work was preceded by months of background research on both the evaluability of the Fund’s young portfolio (e.g., fewer than 20 of the 100 projects funded had been closed for at least three years, one of our selection criteria) and secondary research on evidence of ex-post sustainability evaluation in climate change/resilience across the Adaptation Fund’s sectors.
Phase 1, Framework and Pilots Shortlist: Our Phase 1 report from mid-2021 provided an overview of the first stage of ex-post evaluations, outlining methods and identifying a list of potential projects for ex-post evaluation pilots from the Fund’s 17 completed, evaluated projects. The framework presented in the report introduced possible methods to evaluate the sustainability of project outcomes, considering the characteristics, strengths, and weaknesses of the Fund portfolio. It also presented an analysis tool to assess climate resilience, bearing in mind that this area is pivotal to climate change adaptation yet has rarely been measured.
Vetting and pilot selection followed, with a revised design for evaluating sustained outcomes related to resilience to climate change. Key aspects were: 1) timing (3–5 years since closure, or projects at least 4 years long closed within the last 5 years, with seasonality matching the final evaluation); 2) good quality of implementation and M&E, with measurable outputs and outcomes traceable to impact(s); and 3) safety to do fieldwork (COVID, civil peace, etc.).
We (my so-clever colleagues Meg Spearman and Dennis Bours) introduced a new resilience analysis tool that considers climate disturbances and the human and natural systems (and their nexus) affected by and affecting project outcomes. The tool includes five characteristics of resilience in outcomes (presence of feedback loops, operating at scale, plus being diverse, dynamic, and redundant) and the means/actions to support those outcomes. Resilience can be identified via a clear summary of the structures (S) and functions (F) that typify resistance, resilience, and transformation, showing where a project is and where it is moving. This typology of resistance-resilience-transformation (RRT) lets the overall project be mapped based on how its actions are designed to maintain or change existing structures and functions; it was integrated into the Adaptation Fund resilience evaluation approach.
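As a schematic illustration, the RRT typology can be read as a classification over whether project actions maintain or change structures (S) and functions (F). The mapping below is our own simplification for exposition, not the AF-TERG tool itself:

```python
# Schematic sketch of the resistance-resilience-transformation (RRT) typology:
# classify a project by whether its actions are designed to maintain or change
# existing structures (S) and functions (F). Simplified illustration only.

def classify_rrt(maintain_structures: bool, maintain_functions: bool) -> str:
    if maintain_structures and maintain_functions:
        return "Resistance"      # keep both S and F as they are
    if maintain_functions:
        return "Resilience"      # change structures to preserve functions
    return "Transformation"      # change functions (and usually structures)

print(classify_rrt(True, True))    # Resistance
print(classify_rrt(False, True))   # Resilience
print(classify_rrt(False, False))  # Transformation
```

A real mapping would, of course, assess each of a project’s outcomes and supporting actions rather than a single binary pair.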
Phase 2, Methods Testing and Ex-post Field-testing: Training of national evaluators and piloting two ex-post evaluations per year includes selecting among these methods to evaluate sustainability ex-post, plus the RRT and resilience measures above. The first ex-post, of Samoa’s “Enhancing Resilience of Samoa’s Coastal Communities to Climate Change” (UNDP), is happening in December 2021 through qualitative evaluation of wall infrastructure. For the second, Ecuador’s “Enhancing resilience of communities to the adverse effects of climate change on food security, in Pichincha Province and the Jubones River basin” (WFP), training is completed and fieldwork should take place from January 2022, likely covering food security assets, with methods TBD.
Phase 3, Evaluations Continue, with MEL Capacity Building: Two more years of ex-post pilot evaluations (two per year), with lessons informing integration into the MEL of the Adaptation Fund. We are already learning lessons about rigor, knowledge management, and the unexpected benefits of returning years after closure, including indications of the sustainability and resilience of the assets, with much more learning to come.
Innovations abound. Given “the relative novelty of climate change adaptation portfolios and the limited body of work on ex post evaluation for adaptation, [the report] presents possible methods that will be piloted in field-tested ex post evaluations in fiscal year 2022 (FY22).” This includes piloting the shockingly rare evaluation of oft-promised resilience. The update to the AF’s Board three months ago transparently outlined the shortlisting of five completed projects as potential candidates for the pilots, of which two were selected for ex post evaluations. It also outlined our process of co-creating the evaluations with national partners to prioritize their learning needs while building national capacity to assess the sustainability and resilience of project outcomes in the field.
Also, training materials for ex post pilots are being shared to foster country and industry learning, focusing on evaluating projects at ex-post and emerging sustainability and resilience, as well as presenting and adapting methods to country and project realities.
The training had three sessions (which could not have happened without colleague Caroline’s expertise):
Part A: Understanding ex-post & resilience evaluations. Introduce and understand ex-post evaluations of sustainability and resilience, especially in the field of climate change adaptation
Part B: Discussing country-specific outcome priorities and co-creating learning with stakeholders. Discuss the project and its data more in-depth to understand and select what outcome(s) will be evaluated at ex-post
Part C: Developing country-specific methods and approaches. Discuss range of methods with the national evaluator and M&E experts to best evaluate the selected outcome(s) and impact(s)
Ex-post Eval Week: Exiting For Sustainability by Jindra Cekan
Reblogged from AEA: https://aea365.org/blog/ex-post-eval-week-exiting-for-sustainability-by-jindra-cekan-2/ January 22, 2021
Hello. My name is Jindra Cekan, and I am the Founder and Catalyst of Valuing Voices at Cekan Consulting LLC. Our research, evaluation, and advocacy network has been working on post-project (ex-post) evaluations since 2013. I have loved giraffes for decades and fund conservation efforts (see pix).
Our planet is in trouble, as are millions of species, including these twiga giraffes and billions of Homo sapiens. Yet in global development we evaluate projects based on their sectoral (e.g., economic, social, educational, human rights) results, with barely a glance at the natural systems on which they rest. IDEAS Prague featured Andy Rowe and Michael Quinn Patton, who showed that I too have been blind to this aspect of sustainability.
I have argued ad nauseam that the OECD’s definitions of projected sustainability and impact don’t give a hoot about sustaining lives and livelihoods. If we did, we would not just claim we do ‘sustainable development’ and invest in ‘Sustainable Development Goals’ but would go about proving how well, for how long, and by whom results are sustained after closeout.
After hearing Rowe, I added to my Sustained Exit Checklists new elements about how we must evaluate Risks to Sustainability and Resilience to Shocks that included the natural environment. I added Adaptation to Implementation based on feedback on how much implementation would need to change based in part on climatic changes.
Yet new evaluation thinking by Rowe, Michael Quinn Patton, and Astrid Brouselle/Jim McDavid takes us a quantum leap beyond. We must ask how any intervention can be sustained without evaluating the context in which it operates. Is it resilient to environmental threats? Can participants adapt to shocks? Have we assessed and mitigated the environmental impacts of our interventions? As Professor Brouselle writes, we must be “changing our way of thinking about interventions when designing and evaluating them . . . away from our many exploitation systems that lead to exhaustion of resources and extermination of many species.”
This 2020 new thinking includes ascertaining:
(Andy Rowe) Ecosystems of biotic natural capital and abiotic natural capital (from trees to minerals) with effects on health, education, public safety/ climate risk and community development
(Astrid Brouselle and Jim McDavid) Human systems that affect our interventions, including power relations, prosperity, and equity; we also need to make trade-offs between environmental and development goals clear.
We have miles to go of systems and values to change. Please read this and let’s start sustaining NOW.
Rowe, A. (2019). Sustainability-ready evaluation: A call to action. In G. Julnes (Ed.), Evaluating Sustainability: Evaluative Support for Managing Processes in the Public Interest. New Directions for Evaluation, 162, 29–48.
This week, AEA365 is celebrating Ex-post Eval Week, during which blog authors share lessons from project exits and ex-post evaluations. I am grateful to the American Evaluation Association that we could share these resources.
So how are we to get there? This year's Sustainable Brands Conference gets there through companies being clear about their own consumption, and USAID is no different. USAID Forward is putting its money where its keyboards are (so to speak), toward more sustainable local delivery, by directing a full 30 percent of its funding to "local solutions" through procurement in coming years. This framework is to "support the 'new model of development' that USAID Administrator Rajiv Shah has touted, which entails a shift away from hiring U.S.-based development contractors and NGOs to implement projects, and toward channeling money through host-country governments and local organizations to build their capacity to do the work themselves and sustain programs after funding dries up." I, and others, celebrate the investments this will enable local firms to make in their own capacity, in leading development!
Of course, all sorts of safeguards are needed, and ideally U.S. firms would provide capacity development, but shouldn't we have been doing this all along, moving toward transferring 'development' to the countries themselves?
Also vital to sustainable development is learning from what works and doing more of it. USAID is finally planning to incorporate more ex-post evaluations into its toolkit for evaluating sustainability! Two weeks ago, PPL/LER shared its great new policy document, "Local Systems: A Framework for Supporting Sustained Development," on how it can better incorporate local systems thinking into policy as well as DIME (Design, Implementation, Monitoring and Evaluation). Industry insider DevEx tells us that even though the agency plans to use ex-post evaluations to measure whether development projects are successful, these evaluations will not focus on "specific contractor performance" but instead consider the "types of approaches that contribute to more sustainable outcomes… to inform USAID's country strategies and project design." While PVO implementing partners will not [yet?] be required to do ex-post evaluations as part of their projects, having this door cracked open is exciting. Notably, it is a 'back to the future' moment: 30 years ago USAID led the development world in post-project evaluations, yet in the last 24 years it has done none (or at least not published any) except for the Food for Peace retrospective below, as I found in our Valuing Voices research of USAID's Development Experience Clearinghouse.
There is far more to watch. In our view, the whole development industry needs to grapple with the perceived barrier that funding ends with projects (note: a trust could be set up to document post-project impact 1, 3, and 5 years later and retain results, much as 3ie does now for impact evaluations) and with the view that one cannot discern attributable project impact with a time lag of several years. Yet even the General Accounting Office is asking for longitudinal data; it reviewed USAID's document and wants to see clear measures of success at Mission and HQ level, using different indicators of local institutional sustainability and impact four years on.
Why should we care? As Chelsea Clinton of the Clinton Global Initiative puts it, "you can't measure everything, but you can measure almost everything through quantitative or qualitative means, so that we know what we're disproportionately good at. And, candidly, what we're not so good at, so we can stop doing that."
Yes! Development should be about doing more of what works, sustainably, and less of what doesn't. USAID's Local Systems Framework found the best could also be free, as this Food for Peace evaluation shows:
Returning to Chelsea Clinton, I'll conclude by stating something obvious. She "wants to see some evidence of why we're making decisions, as opposed to the anecdotes," which is what getting post-project evaluation data from our true clients, our participants, is all about. Clinton says this will transform CGI into a smart, accountable, and sustainable support system for philanthropic disrupters around the world. USAID, with its Local Systems investments, is radical for me today… my neighborhood disrupter.
Are you such a disrupter too? Who else is one whom we can celebrate together?
Pineapple, Apple: what differentiates Impact from Self-Sustainability Evaluation?
There is great news. Impact evaluation is getting attention and being funded to do excellent research, such as by the International Initiative for Impact Evaluation (3ie), and by donors such as the World Bank, USAID, UKAid, and the Bill and Melinda Gates Foundation, in countries around the world. Better Evaluation tells us that USAID, for example, uses the following definition: "Impact evaluations measure the change in a development outcome that is attributable to a defined intervention; impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change."
William Savedoff of CGD reports in the Evaluation Gap newsletter that whole countries are setting up such evaluation institutes: "Germany's new independent evaluation institute for the country's development policies, based in Bonn, is a year old. DEval has a mandate that looks similar to Britain's Independent Commission for Aid Impact (discussed in a previous newsletter), because it will not only conduct its own evaluations but also help the Federal Parliament monitor the effectiveness of international assistance programs and policies. DEval's 2013–2015 work program is ambitious and wide-ranging, from specific studies of health programs in Rwanda to overviews of microfinance and studies regarding mitigation of climate change and aid for trade." There is even a huge compendium of impact evaluation databases.
There is definitely a key place for impact evaluations in analyzing which activities are likely to have the most statistically significant impact (i.e., change unlikely to be due to chance). One such study in Papua New Guinea found that including SMS (mobile text) in teaching made a significant difference in student test scores compared to the non-participating 'control group' who did not get the SMS texts. Another study, the Tuungane I evaluation by a group of Columbia University scholars, showed clearly that an International Rescue Committee program on community-level reconstruction did not change participant behaviors. The study was as well designed as an RCT can be, and its conclusions are very convincing. But as the authors note, we don't actually know why the intervention failed. To find that out, we need the kind of thick, descriptive qualitative data that only a mixed-methods study can provide.
Harvard economist Michael Kremer says, "The vast majority of development projects are not subject to any evaluation of this type, but I'd argue the number should at least be greater than it is now." Impact evaluations use 'randomized control trials', comparing the group that got project assistance to a similar group that didn't in order to gauge the change. A recent article about treating poverty as a science experiment says: "nongovernmental organizations and governments have been slow to adopt the idea of testing programs to help the poor in this way. But proponents of randomization—“randomistas,” as they’re sometimes called—argue that many programs meant to help the poor are being implemented without sufficient evidence that they’re helping, or even not hurting." However we get there, we want to know the real (or at least likely) impact of our programming, helping us focus funds wisely.
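For readers curious what "statistically significant" mechanically means in an RCT: it boils down to asking whether the gap between the treatment group's average outcome and the control group's is large relative to the noise in the data. A minimal sketch in Python, with invented test scores for illustration only (the data and function name are mine, not from the Papua New Guinea study cited above):

```python
import statistics

def welch_t(treatment, control):
    """Welch's t statistic: how many standard errors apart the two
    group means are. A larger |t| means the observed gap is less
    likely to be due to chance alone."""
    m1, m2 = statistics.mean(treatment), statistics.mean(control)
    v1, v2 = statistics.variance(treatment), statistics.variance(control)
    n1, n2 = len(treatment), len(control)
    standard_error = (v1 / n1 + v2 / n2) ** 0.5
    return (m1 - m2) / standard_error

# Invented scores: students who received SMS lessons vs. those who did not
sms_group = [72, 75, 78, 74, 77, 76]
control_group = [65, 68, 66, 70, 64, 67]

t = welch_t(sms_group, control_group)
print(round(t, 2))  # well above ~2, the usual rough threshold for significance
```

In practice an evaluator would use a statistics package (e.g., SciPy's `ttest_ind` with `equal_var=False`) and report a p-value, but the logic is the same: randomized assignment makes the control group a credible counterfactual, so a large t statistic can be attributed to the intervention.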
Data gleaned from impact evaluations is excellent information to have before design and during implementation. While impact evaluations are a rigorous addition to the evaluation field, experts recommend they be done from the beginning of implementation. And while they ask "Are impacts likely to be sustainable?", "To what extent did the impacts match the needs of the intended beneficiaries?", and, importantly, "Did participants/key informants believe the intervention had made a difference?", they focus only on possible sustainability, using indicators we expect to see at project end rather than tangible proof of the sustainability of the activities and impacts that communities define themselves and that we actually return to measure 2–10 years later.
That is the role for something that has rarely been used in 30 years: post-project (ex-post) evaluations, looking at:
The resilience of expected impacts of the project 2, 5, 10 years after close-out
Which activities the communities and NGOs are able to sustain themselves
Positive and negative unintended impacts of the project, especially 2 years after, while still in clear living memory
Kinds of activities the community and NGOs felt were successes which could not be maintained without further funding
Lessons across projects on what was most resilient (what communities valued enough to continue themselves, or NGOs valued enough to get other funding for), as well as what was not resilient.
Where is this systematically happening already? There are a few catalyst ex-post evaluation organizations drawing on communities' wisdom. Here and there are other glimpses of Valuing Voices, mainly used to inform current programming, such as these two interesting approaches:
Ned Breslin, CEO of Water For People, talks about "Rethinking Social Entrepreneurism: Moving from Bland Rhetoric to Impact (Assessment)". His new water and sanitation program, Everyone Forever, does not focus on inputs and outputs, such as water provided or girls returning to school. It centers instead on attaining the ideal vision of what a community would look like with improved water and sanitation, and on working to achieve that goal. Instead of working on fundraising only, Breslin wants to redefine the meaning of success as a world in which everyone has access to clean water.
We need a combination. We need to know how good our programming is now, through rigorous randomized control trials, and we need to ask communities and NGOs how sustainable the impacts are. Remember: 99% of all development projects, worth hundreds of millions of dollars a year, are not currently evaluated for long-term self-sustainability by their ultimate consumers, the communities they were designed to help.
We need an Institute of Self-Sustainable Evaluation and a Ministry of Sustainable Development in every emerging nation, funded by donors who support national learning to shape international assistance. We need a self-sustainability global database, mandatory to be referred to in all future project planning. We need to care enough about the well-being of our true client to listen, learn and act.
Development = A Jeep (motor optional)… Resilience? Only if within 5 years!
Imagine being given a lovely new Jeep. You get a driver (think driving school) to help you learn to steer it around the pothole-strewn, scantily lit roads. Eventually you take over the controls and steer the Jeep directly, driving off-road, with the copilot praising your good driving and grabbing the wheel only to avoid catastrophe. You are told that one day the Jeep will be yours.
That day arrives. The development agency hands you the keys to the Jeep. You wave goodbye, return to the Jeep, and turn the key. Dead.
Looking under the hood, you realize the motor is gone. Checking the rest of the Jeep, you realize there is no fuel and the tires are flat. That is what development projects look like from the community's view after close-out. The local NGO to whom the project has been 'handed over' has scant financial or human resources to continue (no engine). In the last few months' scramble to close out, the implementing agency put in few systems for communities to continue the programming without the local NGO's support or all the resources it had poured in (no fuel). And there is little to help you move the Jeep except your own feet and the capacity building learned early on, which wasn't built to last on local materials (flat tires). Sustainability isn't programmed into projects that have set timelines and donor-set markers of success mandating close-out.
So you own the Jeep but have little power to move it, very much like the countless well-meant tractors donated for development agriculture before you.
There are several glimmers of hope. What communities have is the human power that exists locally, fuelled by participation coupled with information transmission, such as WorkWithUs and MakingAllVoicesCount (based on the moral imperative that it is Their Development, after all), and ALNAP's push to use evaluation for learning in international development.
Resilience could be the doorway to getting community-defined sustainable programming to break the cycle of recurrent emergencies that divert resources from long-term development. Imagine: we could ask citizens what will make them resilient! A rare, shining example is a USAID-funded Ethiopia project with a mandate to use participatory impact assessments (process monitoring plus participatory input to capture local perceptions of benefits) to learn from communities. A USAID Solicitation tells us: "seventeen impact assessments on different program activities were undertaken to inform best practice and to develop guidelines and policies. A major impact was the development and adoption of Emergency Livestock Guidelines by the Ethiopian government. These were based on best practice assessments in many countries (including Kenya) and action research on different types of interventions. Emergency de-stocking – selling livestock early in a drought to preserve their price and leave more fodder and water for remaining animals – was found to be particularly effective, with a 40:1 benefit cost ratio. Emergency livestock vaccination campaigns, on the other hand, were found to have no impact on livestock mortality, and were dropped in favor of other health interventions including parasite control and de-stocking."
Excellent Valuing of pastoralist Voices! How are such locally-informed excellent processes and findings being widely shared and implemented? What do you think?
Jindra Cekan, Ph.D., has used participatory methods for 30 years to connect with participants, ranging from villagers in Africa, Central and Latin America, and the Balkans to policy makers and ministers around the world, for her international clients. Their voices have informed the new Sustained and Emerging Impacts Evaluation, other M&E, stakeholder analysis, strategic planning, knowledge management, and organizational learning.