Hard-wiring and Soft-wiring Sustainability via Health Program Examples, by Laurence Desvignes and Jindra Cekan/ova
Overview
We all want things to last. Most of us joined the 'sustainable development' industry hoping our foreign aid projects do good not only while we are there but long afterward. Following last month's blog on better learning about project design, implementation, and M&E, here are some things to do better.
Long-term sustainability rests on four pillars. The first is how the project is designed and implemented before exit; the second is the degree to which the conditions needed for the continuation of the results the project generated are put into place. While the first embeds sustainability into the project's very results, the second invests in processes to foster the continuation of those results. The other two pillars (returning to evaluate the sustainability of results two or more years later and bringing those lessons back to funding, design, and implementation, and building in shock-resilience, e.g., to climate change) are covered in other Valuing Voices blogs.
We focus on pillars 1 and 2 in this blog, using the analogy of hard-wiring and soft-wiring sustainability into the fabric of the project:
1. Hard-wiring, or 'baking in' sustainability, involves design and implementation that predispose results to last. This includes investing in Maternal Child Health and Nutrition's first 1,000 days from conception to age two, which are vital for child development: the baby's physical development and nutrition are as important as maternal well-being, and investing in these early days leads to better health and nutrition throughout life. So too is buying local. Too often our projects rely on imported technology and inputs that are hard to replace if broken. A UNICEF project on hand pumps suggested that local purchase of those "designed to optimize the chances of obtaining good quality hand pumps and an assured provision of spare parts" involves both the hardware of the pump and a "capacity building plan and a communication strategy." Using local capacity and specialists when available, rather than external consultants, can also be key to building the sustainability of a project.
Another example of baking in sustainability is using participatory approaches so that those implementing, such as communities and local authorities, shape the project. In this example, it is grassroots participants who are heard during design about their priorities and how the project should be implemented. This includes targeting discussions and monitoring and evaluation done with and by communities. The seminal research of 6,000 interviews with aid recipients, Time to Listen, found that they want to participate rather than be passive recipients, and that when they do, results are more likely to be sustained; there is ex-post proof that such programming is more 'owned' and more sustained.
Conducting in-depth needs assessments at design is usually the way to collect information about what is needed and how projects should be implemented so they last. Unfortunately, time for proposal development is often very limited and (I)NGOs are under pressure to submit by short deadlines, so needs assessments are either done quickly, collecting only very basic information, or not done at all. Yet time spent valuing the voices of participants can bring great richness. In 2022, the UN's FAO did a monitoring and evaluation study in Malawi validating poverty indicators by asking communities from the start how they identify poverty. "Researchers were impressed at how accurately the people they interviewed were able to gauge the relative wealth of their neighbors." We were not surprised, as locals often know best. In another example, with Mines Advisory Group in Cambodia, we developed a community-based participatory approach to design whereby project staff worked with mine-affected communities to draw local maps of their villages, highlighting the dangerous places and the key areas used by the communities. Staff and communities discussed the constraints, risks, and needs involved in making their community safer, which the project would follow up with risk education, clearance, victim assistance, and/or alternative economic/development solutions. Other mine action agencies, e.g., the Danish Refugee Council (Danish Demining Group), are also now using safer community approaches, involving local residents in deciding how to make their village safer depending on community priorities[1].
Hard-wiring participatory feedback-loop learning from locals during implementation is also key, which is why implementing a community feedback strategy once the programme is running is essential. A community feedback mechanism (CFM) is a formal system established to enable affected populations to communicate their views, concerns, and experiences of a humanitarian agency or of the wider humanitarian system. It systematically captures, records, tracks, and follows up on the feedback it receives to improve elements of a response. A CFM is key to: ensuring that people affected by crisis have avenues to hold humanitarian actors to account; offering affected people a formalized structure for raising concerns if they feel their needs are not being met or if the assistance provided is having unintended and harmful consequences; understanding and soliciting information on their experience of a humanitarian agency or response; supporting a broader commitment to quality and accountability that enables organizations to recognize and respond to any failures in response; and promoting the voices and influence of affected people so their perspectives, rights, and priorities remain at the forefront of humanitarian/development work[2].
Promoting and implementing community engagement, such as a community feedback strategy, provides a basis for dialogue with affected people on what is needed and how it might best be provided, especially as needs change during implementation. This helps identify priority needs; it is also a means to gauge beneficiaries' understanding of the activities being carried out, to identify local partners, to establish and follow up on partnerships, and to support the organizational development and capacity building of local institutions and authorities. It can strengthen the quality of assistance by facilitating dialogue and meaningful exchange between aid agencies and affected people at all stages of humanitarian response and result in the empowerment of those involved. Targeted people are viewed as social actors who can play an active role in decisions affecting their lives.
Oxfam's project in Haiti, which began in 2012 in response to the cholera epidemic that started in 2010 ("Preventing the Cholera Epidemic by Improving WASH Services and Promoting Hygiene in the North and Northeast"), had the goal of contributing to cholera elimination. It experimented with a community feedback strategy as a means of gauging recipients' understanding of the activities carried out and of further strengthening the links between Oxfam and the communities during implementation. The initial community feedback process was intended both to receive recommendations from project participants for better management of the action and to better understand the strengths and weaknesses of Oxfam's interventions. Based on the information and recommendations received, Oxfam served as a bridge between the community and the actors involved in implementation (e.g., a private firm contracted to carry out renovation work on health centers or water systems). This is also part of Oxfam's logic of placing more emphasis on accountability and community engagement.
The feedback-loop benefits of such a community process are manifold, spanning protection, human rights, and risk management and, further below, adapting implementation, improving M&E, and fostering organizational learning:
Protection, Human Rights, and Risk Management:
CFMs assist in promoting the well-being, rights, and protection of people by offering them a platform to have a voice and be heard
They foster participation, transparency, and trust
They apply Do No Harm and conflict-sensitive programming
They help identify staff misconduct
They function as a risk management and early warning system
Adapting Implementation and Improving M&E:
This process makes it possible to adapt to the priorities of the beneficiaries and to better meet their needs, hence ensuring the agency's accountability to the affected population
It facilitates and helps guarantee better project quality.
It represents a means of monitoring our approaches and our achievements.
It makes it possible to construct a common vision shared between the various actors and the project participants/targeted communities.
Organizational Learning:
Ensuring programme quality and accountability through an appropriate accountability strategy (including transparency, feedback, participation, monitoring, and effectiveness) and relevant methodologies and tools, from the planning stage of the project, is a key exercise that allows organizations to think about and plan for the sustainability of the programme at an early stage.
It allows us to gauge the strengths and weaknesses of interventions while offering the opportunity to learn from our experiences, allowing for programmatic learning and adaptive programming.
It conveys the impact of the project and the change brought about in the lives of the beneficiaries.
It is part of the logic of capitalizing on experiences to improve the quality of future projects.
2. Soft-wiring is creating conditions to make sustainability more likely for local communities and partners by thinking about how to replace what has been brought by the projects' donors and implementers. This involves analysis as well as actions that put the conditions for sustainability into place before and while foreign aid projects close. Valuing Voices' checklists for exiting sustainably cover local ownership, sufficient capacities and resources, viable partnerships, how well risks such as climate and economic shocks were identified and managed, and benchmarking for success 1-2 years before closure. Later, it is important to return and check those findings ex-post, comparing results at project completion to what was sustained 2-30 years later.
1. RESOURCES:
Several blogs on Valuing Voices deal with resources, including the assumptions donors make; donor resource investments cannot be assumed to be sustained. The checklists outline a wide array of questions to ask during design and, at the latest, a year before exit, including: what assumptions do aid projects make? USAID water/sanitation/hygiene investments have mostly not been sustained, due to a combination of a lack of resources to maintain them and low ownership of the resources invested. Some key questions are:
Did the project consider how those taking over would get sufficient resources, e.g., grant funding or other income generation, renting out a facility or infrastructure they own, or shifting some activities to for-profit production sold to cover part of project costs?
Does the project or a partner own a facility or infrastructure that can be rented out to raise resources beyond donor funding, or can the project shift to for-profit activity, including institutional and individual in-kind products or technical knowledge and skills that can be sold to cover part of project costs?
What new equipment is needed, e.g., computers, vehicles, or technical equipment (e.g., weighing scales), for activities to continue, and which stakeholder will retain it?
Or are no new resources needed, because some project activities will scale down, move elsewhere, or focus on a smaller number of locally sustainable activities, or because the whole project will naturally phase out?
2. PARTNERSHIPS:
The objective of that Oxfam project was to reduce the risks to communities placed in a situation of acute vulnerability to the cholera epidemic in two departments of Haiti (home to about 1.5 million inhabitants). It focused on sustainability by effectively supporting and accompanying governmental WASH and health structures in the rapid response to alerts and outbreaks recorded in the targeted communities. How? Through awareness-raising activities among the populations concerned and by strengthening the epidemiological surveillance system and coordination among the stakeholders concerned. The project also aimed to improve drinking water infrastructure such as distribution points, water networks or systems, catchments, and boreholes. As part of this intervention, Oxfam worked in close collaboration with and in support of the Departmental Directorates of Health (DH), DINEPA (the government agency responsible for water and sanitation), local authorities at the level of cities, towns, and neighborhoods, and community structures including civil protection teams. Oxfam and DINEPA staff intervened through mixed mobile response teams that included technical and managerial staff from the health department, to whom Oxfam provided ongoing technical support in WASH analysis and actions, WASH training, finance training, and monitoring, as well as logistical support for the deployment of teams in the field (provision of vehicles and drivers). Oxfam therefore worked to ensure that cholera surveillance and mitigation actions were led by state and community actors, supporting state structures to build their capacities and take ownership of the various aspects of the fight against cholera. Concretely, this was done as follows:
Preliminary meetings and discussions were held with the governmental authorities concerned to agree on a plan of action based on needs, implementation means, priorities, and the budget required for the governmental health and WASH services/teams to be able to function. This was followed by the signing of an MoU between Oxfam and the Departmental Directorates of Health (DH).
An action plan was set up with the DH and DINEPA (governmental water and sanitation agency) at the very beginning of the project.
Outbreak response teams were managed directly by the DH, and their staff were recruited, managed, and paid by the DH. The DH and DINEPA implemented the activities, managed the staff of the mobile teams, and provided technical monitoring in coordination with Oxfam.
The epidemiological monitoring activities carried out by the DH were also monitored by the Oxfam epidemiologist who, in close coordination with the DH, built the capacities of epidemiologists and staff at the departmental level and at the level of the treatment centers to ensure adequate monitoring and communication.
An Oxfam social engineering officer worked with DINEPA to ensure that the various water committees at the sources/infrastructure rehabilitated by Oxfam were functional. Sources and infrastructure were rehabilitated in concert with DINEPA to ensure proper ownership.
Oxfam provided funding and technical supervision and wrote and submitted the final report to the donor, based on the DH's regular activity reports, which Oxfam consolidated.
Teams were paid directly by the DH from funds received from Oxfam, according to the budget agreed by both Oxfam and the DH and following government salary scales.
The Oxfam WASH team, which systematically accompanied case investigations in the field, further encouraged the participation of DINEPA and its community technicians through regular meetings with the DINEPA departmental directors.
Overall, Oxfam made sure to provide support and capacity building to the DH, DINEPA, and the community actors involved in the fight against cholera, to ensure proper ownership and to avoid substituting for the health/WASH authorities.
3. OWNERSHIP:
The type of peer-partnering at design and during implementation described above is vital for ownership and sustainability. Unless we consider people's ownership of the project and their capacities to sustain results, results won't be sustained. See Cekan's exiting-for-sustainability checklists on phasing over to local partners before phasing out and exiting, which strengthens ownership and brings us full circle to the participatory hard-wiring described above in Haiti.
4. CAPACITY-STRENGTHENING:
We have to strengthen capacities at the most sustainable level. An example from IRC's Sierra Leone gender-based violence (GBV) project shows what happens when capacity training for local participants and partners to take over is not done right. In this case, there were two-year consultancies to the Ministry (MSWGCA) on strategic planning and gender training, but "it is not clear if this type of support has had a sustainable impact. The institutional memory often disappears with the departure of the consultant, leaving behind sophisticated and extensive plans and strategies that there is simply no capacity to implement." The report found that community-based initiatives are the "primary sources of support for GBV victims living in rural areas in a more innovative and sustainable way that promotes local ownership. They also may yield more results." Yet most donor agencies find it hard to partner with community-based organizations, so the report recommended focusing on training and capacity-building of mainstream health workers to respond to GBV, with the aim that the government would assume control of service provision in approximately five years. The excellent manual by Sarriot et al. on sustainability planning, "Taking the Long View: A Practical Guide to Sustainability Planning and Measurement in Community-Oriented Health Programming," puts local capacity strengthening at the core. We have to consult and collaborate throughout and create an 'enabling environment' so that the activities and results are theirs.
Few donors require information at closure on how hard-wired or soft-wired programming was pre-exit, which would make sustainability likely. Even fewer demand actual post-closure sustainability data to confirm the assumptions made at exit; sadly, we believe most of our foreign aid has had limited sustained impacts. But this can change. Donors need to be educated that the "localization" agenda is the new trend (just as gender, resilience, and climate change have been at one point). It goes beyond the "nationalization" of staff (e.g., replacing expatriates with national staff), which is only one element of localization. True localization is to promote the local leadership of communities in their own 'sustainable development'. While this is easier to say than to do, sustainability depends on it. We foster it through the hard-wiring and soft-wiring discussed above and through further steps, below. Here are specific steps from Laurence's and Jindra's experiences in the Global South:
Funds and additional time for local partnership and ownership need to be embedded in the design and planned for, which requires a different approach that donors need to be sensitized to, educated about, and advocated for;
An in-depth needs assessment must be carried out just before or when an NGO sets up an operation; it usually takes time and should be integrated into any operation. Advocating this approach to donors is key so that it can be included in the budget (or the NGO needs to find its own funds to do so), and the NGO country and sector strategy can then be updated yearly to embed such activities into the (I)NGO's DNA;
Conduct a capacity-strengthening assessment of the local authorities or partners with whom we are going to conduct the project. This can take 3 to 6 months, depending on the number and type of actors involved, but it is essential to building self-sustaining local capacities and ensuring that comprehensive capacity building takes place. This transparent step is also essential to ensuring ownership by national/governmental stakeholders;
It is vital to allow time to plan for an exit strategy at an early stage, even as early as design. This requires time and needs to be included in the budget: for implementing the plan at least one year before the end, for phasing over to local implementing partners before the donors/Global North implementers exit, and possibly for strengthening capacities or extending programming to deliver on their timeline rather than ours before exiting. More on this from CRS' Participation by All ex-post and, of course, the oft-cited "Stopping As Success: Locally-led Transitions in Development" by Peace Direct, Search for Common Ground, and CDA. Also do not forget the shared leadership noted in UK-based INTRAC's "Investing in Exit";
Finally, don't forget to evaluate projects ex-post and to embed those lessons into future design, implementation, and monitoring and evaluation.
Investing in sustainability by hard-wiring or soft-wiring works! Let us know what you do…
We are excited to have our chapter verifying the 'sustainability' of projects funded by the Global Environment Facility Trust Fund (GEF) by examining two tranches of evaluations. My co-author Susan Legro did a brilliant job pointing out flaws in estimated greenhouse gas (GHG) emission reductions. Given that climate change is in full swing, we must be able to trust the data we have.
The purpose of this research was to explore how public donors and lenders evaluate the sustainability of environmental and other sectoral development interventions. Specifically, the aim was to examine if, how, and how well post project sustainability is evaluated in donor-funded climate change mitigation (CCM) projects, including the evaluability of these projects. We assessed the robustness of current evaluation practice of results after project exit, particularly the sustainability of outcomes and long-term impact. We explored methods that could reduce uncertainty of achieving results by using data from two pools of CCM projects funded by the Global Environment Facility (GEF).
Evaluating sustainable development involves looking at the durability and continuation of net benefits from the outcomes and impacts of global development project activities and investments in various sectors in the post project phase, i.e., from 2 to 20 years after donor funding ends.1 Evaluating the sustainability of the environment is, according to the Organisation for Economic Co-operation and Development (OECD, 2015), at once a focus on natural systems of “biodiversity, climate change, desertification and environment” (p.1) that will need to consider the context in which these are affected by human systems of “linkages between poverty reduction, natural resource management, and development” (p. 3). This chapter focuses more narrowly on the continuation of net benefits from the outcomes and impacts of a pool of climate change mitigation (CCM) projects (see Table 1). The sustainability of CCM projects funded by the Global Environment Facility (GEF), as in a number of other bilateral and multilateral climate funds, rests on a theory of change that a combination of technical assistance and investments contribute to successfully durable market transformation, thus reducing or offsetting greenhouse gas (GHG) emissions.
Table 1: Changes in OECD DAC Criteria from 1991 to 2019

SUSTAINABILITY
1991: Sustainability is concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable.
2019 ("Sustainability: Will the benefits last?"): The extent to which the net benefits of the intervention continue, or are likely to continue. Note: Includes an examination of the financial, economic, social, environmental, and institutional capacities of the systems needed to sustain net benefits over time. Involves analyses of resilience, risks, and potential trade-offs.

IMPACT
1991: The positive and negative changes produced by a development intervention, directly or indirectly, intended or unintended. This involves the main impacts and effects resulting from the activity on the local social, economic, environmental, and other development indicators.
2019: The extent to which the intervention has generated or is expected to generate significant positive or negative, intended or unintended, higher-level effects. . . . It seeks to identify social, environmental, and economic effects of the intervention that are longer-term or broader in scope.

Source: OECD/DAC Network on Development Evaluation (2019); italics are emphasis added by Cekan
CCM projects lend themselves to such analysis, as most establish ex-ante quantitative mitigation estimates and their terminal evaluations often contain a narrative description and ranking of estimated sustainability beyond the project’s operational lifetime, including the achievement of project objectives. The need for effective means of measuring sustainability in mitigation projects is receiving increasing attention (GEF Independent Evaluation Office [IEO], 2019a) and is increasingly important, as Article 13 of the Paris Agreement mandates that countries with donor-funded CCM projects report on their actions to address climate change (United Nations, 2015). As several terminal evaluations in our dataset stated, better data are urgently needed to track continued sustainability of past investments and progress against emissions goals to limit global warming.
Measuring Impact and Sustainability
Although impactful projects promoting sustainable development are widely touted as being the aim and achievement of global development projects, these achievements are rarely measured beyond the end of the project activities. Bilateral and multilateral donors, with the exception of the Japan International Cooperation Agency (JICA) and the U.S. Agency for International Development (USAID),2 have reexamined fewer than 1% of projects following a terminal evaluation, although examples exist of post project evaluations taking place as long as 15 years (USAID) and 20 years (Deutsche Gesellschaft fur Internationale Zusammenarbeit [GIZ]) later (Cekan, 2015). Without such fieldwork, sustainability estimates can only rely on assumptions, and positive results may in fact not be sustained as little as 2 years after closure. An illustrative set of eight post project global development evaluations analyzed for the Faster Forward Fund of Michael Scriven in 2017 showed a range of results: One project partially exceeded terminal evaluation results, two retained the sustainability assumed at inception, and the other five showed a decrease in results of 20%–100% as early as 2 years post-exit (Zivetz et al., 2017a).
Since the year 2000, the U.S. government and the European Union have spent more than $1.6 trillion on global development projects, but fewer than several hundred post project evaluations have been completed, so the extent to which outcomes and impacts are sustained is not known (Cekan, 2015). A review of most bilateral donors shows zero to two post project evaluations (Valuing Voices, 2020). A rare, four-country, post project study of 12 USAID food security projects also found a wide variability in expected trajectories, with most projects failing to sustain expected results beyond as little as 1 year (Rogers & Coates, 2015). The study’s Tufts University team leaders noted that “evidence of project success at the time of exit (as assessed by impact indicators) did not necessarily imply sustained benefit over time.” (Rogers & Coates, 2015, p. v.). Similarly, an Asian Development Bank (ADB) study of post project sustainability found that “some early evidence suggests that as many as 40% of all new activities are not sustained beyond the first few years after disbursement of external funding,” and that review examined fewer than 14 of 491 projects in the field (ADB, 2010). The same study described how assumed positive trajectories post funding fail to sustain and noted a
tendency of project holders to overestimate the ability or commitment of implementing partners—and particularly government partners—to sustain project activities after funding ends. Post project evaluations can shed light on what contributes to institutional commitment, capacity, and continuity in this regard. (ADB, 2010, p. 1)
Learning from post project findings can be important to improve project design and secure new funding. USAID recently conducted six post project evaluations of water/sanitation projects and learned about needed design changes from the findings, and JICA analysed the uptake of recommendations 7 years after closure (USAID, 2019; JICA, 2020a, 2020b). As USAID stated in their 2018 guidance,
An end-of-project evaluation could address questions about how effective a sustainability plan seems to be, and early evidence concerning the likely continuation of project services and benefits after project funding ends. Only a post project evaluation, however, can provide empirical data about whether a project’s services and benefits were sustained. (para. 9)
Rogers and Coates (2015) expanded the preconditions for sustainability beyond only funding, to include capacities, partnerships, and ownership. Cekan et al. (2016) expanded ex-post project methods from examining the sustainability of expected project outcomes and impacts post closure to also evaluating emerging outcomes, namely “what communities themselves valued enough to sustain with their own resources or created anew from what [our projects] catalysed” (para. 19). In the area of climate change mitigation, rigorous evaluation of operational sustainability in the years following project closure should inform learning for future design and target donor assistance on projects that are most likely to continue to generate significant emission reductions.
How Are Sustainability and Impact Defined?
The original 1991 OECD Development Assistance Committee (DAC) criteria for evaluating global development projects included sustainability, and the criteria were revised in 2019. The revisions to the definition of sustainability emphasize the continuation of benefits rather than just activities, and they include a wider systemic context beyond the financial and environmental resources needed to sustain those benefits, such as resilience, risk, and trade-offs, presumably for those sustaining the benefits. Similarly, the criteria for impact have shifted from simply positive/negative, intended/unintended changes to effects over the longer term (see Table 1).
In much of global development, including in GEF-funded projects, impact and sustainability are usually estimated only at project termination, “to determine the relevance and fulfilment of objectives, development efficiency, effectiveness, impact and [projected] sustainability” (OECD DAC, 1991, p. 5). In contrast, actual sustainability can only be evaluated 2–20 years after all project resources are withdrawn, through desk studies, fieldwork, or both. The new OECD definitions present an opportunity to improve the measurement of sustained impact across global development, particularly via post project evaluations. Evaluations need to reach beyond projected to actual measurement across much of “sustainable development” programming, including that of the GEF.
GEF evaluations in recent years have been guided by the organization's 2010 measurement and evaluation (M&E) policy, which requires that terminal evaluations "assess the likelihood of sustainability of outcomes at project termination and provide a rating" (GEF IEO, 2010, p. 31). Sustainability is defined as "the likely ability of an intervention to continue to deliver benefits for an extended period of time after completion; projects need to be environmentally as well as financially and socially sustainable" (GEF IEO, 2010, p. 27).
In 2017, the GEF provided specific guidance to implementing agencies on how to capture sustainability in terminal evaluations of GEF-funded projects (GEF, 2017, para. 8 and Annex 2): “The overall sustainability of project outcomes will be rated on a four-point scale (Likely to Unlikely)”:
Likely (L) = There are little or no risks to sustainability;
Moderately Likely (ML) = There are moderate risks to sustainability;
Moderately Unlikely (MU) = There are significant risks to sustainability;
Unlikely (U) = There are severe risks to sustainability; and
Unable to Assess (UA) = Unable to assess the expected incidence and magnitude of risks to sustainability
Although this scale is a relatively common measure for estimating sustainability among donor agencies, it is not a measure that has been tested for reliability, i.e., whether multiple raters would provide the same estimate from the same data. It has also not been tested for construct validity, i.e., whether the scale is an effective predictive measure of post project sustainability. Validity issues include whether an estimate of risks to sustainability is a valid measure of the likelihood of post project sustainability; whether the narrative estimates of risk are ambiguous or double-barreled; and the efficacy of using a ranked, ordinal scale that treats sustainability as an either/or condition rather than a range (from no sustainability to 100% sustainability).
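Neither reliability nor validity testing appears in the documents we reviewed. As one illustration of what a reliability test could look like, the minimal sketch below computes weighted agreement between two hypothetical raters applying the scale to the same set of terminal evaluations; the ratings are invented for illustration and do not come from any GEF dataset.

```python
# Illustrative only: a minimal sketch of how inter-rater reliability of the
# GEF sustainability scale could be tested. The ratings below are hypothetical.
from sklearn.metrics import cohen_kappa_score

SCALE = ["U", "MU", "ML", "L"]  # ordered from Unlikely to Likely

# Hypothetical ratings of the same 10 terminal evaluations by two independent raters
rater_a = ["L", "ML", "ML", "MU", "L", "U", "ML", "L", "MU", "ML"]
rater_b = ["ML", "ML", "L", "MU", "L", "MU", "ML", "ML", "U", "ML"]

# Quadratic weighting treats the categories as ordered, penalizing large
# disagreements (e.g., L vs. U) more heavily than adjacent ones (e.g., L vs. ML).
kappa = cohen_kappa_score(rater_a, rater_b, labels=SCALE, weights="quadratic")
print(f"Weighted Cohen's kappa: {kappa:.2f}")  # values near 1 indicate high agreement
```

Testing construct validity would require the further step this chapter argues for: comparing the ratings against post project evidence of whether benefits actually continued.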
Throughout this chapter, we identify projects by their GEF identification numbers, with a complete table of projects provided in the appendix.
The Limits of Terminal Evaluations
Terminal evaluations and even impact evaluations that mostly compare effectiveness rather than long-term impact were referenced as sources for evaluating sustainability in the GEF’s 2017 Annual Report on Sustainability (GEF IEO, 2019a). Although they can provide useful information on relevance, efficiency, and effectiveness, neither is a substitute for post project evaluation of the sustainability of outcomes and impacts, because projected sustainability may or may not occur. In a terminal evaluation of Mexican Sustainable Forest Management and Capacity Building (GEF ID 4149), evaluators made the case for ex-post project monitoring and evaluation of results:
There is no follow-up that can measure the consolidation and long-term sustainability of these activities. . . . Without a proper evaluation system in place, nor registration, it is difficult to affirm that the rural development plans will be self-sustaining after the project ends, nor to what extent the communities are readily able to anticipate and adapt to change through clear decision-making processes, collaboration, and management of resources. . . . They must also demonstrate their sustainability as an essential point in development with social and economic welfare from natural resources, without compromising their future existence, stability, and functionality. (pp. 5–9)3
Returning to a project area after closure also fosters learning about the quality of funding, design, implementation, monitoring, and evaluation and the ability of those tasked with sustaining results to do so. Learning can include how well conditions for sustainability were built in, tracked, and supported by major stakeholders. Assumptions made at design and final evaluation can then also be tested, along with theories of change (Sridharam & Nakaima, 2019). Finally, post project evaluations can verify the attributional claims made at the time of the terminal evaluation. As John Mayne explained in his 2001 paper:
In trying to measure the performance of a program, we face two problems. We can often—although frequently not without some difficulty—measure whether or not these outcomes are actually occurring. The more difficult question is usually determining just what contribution the specific program in question made to the outcome. How much of the success (or failure) can we attribute to the program? What has been the contribution made by the program? What influence has it had? (p. 3)
In donor- and lender-funded CCM projects, emission reduction estimates represent an obvious impact measure. They are generally based on a combination of direct effects—i.e., reductions due to project-related investments in infrastructure—and indirect effects—i.e., reductions due to the replication of “market transformation” investments from other funding or an increase in climate-friendly practices due to improvements in the policy and regulatory framework (Duval, 2008; Legro, 2010). Both of these effects are generally estimated over the lifetime of the mitigation technology involved, which is nearly always much longer than the operational duration of a given project (see Table 2).
Table 2: Typology of GHG Reductions Resulting from Typical Project Interventions
(The terminal evaluation marks the boundary between the project lifetime, monitored quarterly or annually, and the post project lifetime, which only a post project evaluation can capture.)

Direct reductions
During the project lifetime: Reductions directly financed by donor-funded pilot project(s) or investment(s).
During the post project lifetime: Continuing reductions from project-financed investments (through the end of the technology lifetime; e.g., 20 years for buildings, 10 years for industrial equipment, etc.).

Indirect reductions
During the project lifetime: Reductions from policy uptake (e.g., reduced fossil fuel use from curtailment of subsidies, spillover effects from tax incentives, increased government support for renewable energy due to strategy development) (co-)funded by the donor; and reductions from market transformation (changes in availability of financing, increased willingness of lenders, reduction in perceived risk) supported by pilot demonstrations and/or outreach and awareness raising (co-)funded by the donor.
During the post project lifetime: Continuing reductions from policy uptake (e.g., reduced fossil fuel use from curtailment of subsidies, spillover effects from tax incentives, increased government support for energy efficiency or renewable energy due to strategy development); continuing reductions from market transformation (changes in availability of financing, increased willingness of lenders, reduction in perceived risk) as a legacy of the pilot demonstrations and/or outreach and awareness raising funded by the donor-funded project; and new reductions from the continuation of the investment or financing mechanism established by the donor-funded project.
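As a simplified sketch of how such lifetime estimates are typically combined (the numbers below are hypothetical and purely illustrative, not drawn from any GEF project):

```latex
% A simplified, hypothetical illustration (not from any GEF project):
% lifetime reductions = annual reductions x technology lifetime.
\[
E_{\text{lifetime}} = a_{\text{annual}} \times T_{\text{tech}},
\qquad
E_{\text{total}} = E_{\text{direct}} + E_{\text{indirect}}
\]
\[
\text{Example: } a_{\text{annual}} = 0.1\ \text{Mt CO}_2\text{e/yr},\;
T_{\text{tech}} = 10\ \text{yr}
\;\Rightarrow\;
E_{\text{lifetime}} = 1.0\ \text{Mt CO}_2\text{e}
\]
```

Because the technology lifetime typically far exceeds a project's own operational duration (often only a few years), most of the credited reductions fall outside any period that a terminal evaluation can observe.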
The increasing use of financial mechanisms such as concessional loans and guarantees as a component of donor-funded CCM projects, such as those funded by the Green Climate Fund (https://www.greenclimate.fund/), can also limit the ability of final evaluations to capture sustainability, because the bulk of the subsequent technology investments assumed under revolving funds will not take place during the project lifetime. A 2012 paper by the then-head of the GEF Independent Evaluation Office, Rob van den Berg, supported the need for post project evaluation and importantly included:
Barriers targeted by GEF projects, and the results achieved by GEF projects in addressing market transformation barriers . . . facilitate in understanding better whether the ex-post changes being observed in the market could be linked to GEF projects and pathways through which outcomes and intermediate states . . . [and] the extent GEF-supported CCM activities are reducing GHGs in the atmosphere . . . because it helps in ascertaining whether the incremental GHG reduction and/or avoidance is commensurate with the agreed incremental costs supported by GEF. . . . It is imperative that the ex-ante and ex-post estimates of GHG reduction and avoidance benefits are realistic and have a scientific basis. (GEF IEO, 2012, p. 13)
This description of GHG-related impacts illustrates the difficulties associated with accurately drawing conclusions about sustainability from using a single scale to estimate “the likely ability [emphasis added] of an intervention to continue to deliver benefits for an extended period of time” (GEF IEO, 2010, p. 35) due to several factors. First, the GEF’s 4-point scale is supposed to capture two different aspects of continuation: ongoing benefits from a project-related investment, and new benefits from the continuation of financing mechanisms. Without returning to evaluate the continued net benefits of the now-closed investment, such assumptions cannot be fully claimed. Second, the scale is supposed to capture benefits that can be estimated in a quantitative way (e.g., solar panels that offset the use of a certain amount of electricity from diesel generators); benefits that can be evaluated through policy or program evaluation (e.g., the introduction of a law on energy efficiency); and benefits that will require careful, qualitative study to determine impacts (e.g., training programs for energy auditors or awareness-raising for energy consumers, leading to knowledge and decision changes). Aggregating and weighing such an array of methods into one ranking is methodologically on shaky ground, especially without post project measurements to confirm whether results happened at any time after project closure.
Methodology
The impetus for this research was a sustainability analysis conducted by the GEF IEO that was summarized in the 2017 GEF Annual Performance Report (GEF IEO, 2019a). The study stated: “The analysis found that outcomes of most of the GEF projects are sustained during the postcompletion period, and a higher percentage of projects achieve environmental stress reduction and broader adoption than at completion” (p. 17). Learning more about postcompletion outcomes and assessing how post project sustainability was evaluated was the aim of this work.
This chapter's research sample consists of two sets of GEF project evaluations. We chose projects funded by the GEF because of the large size of the total project pool; by contrast, the Green Climate Fund lacks a large pool of mitigation projects that would be suitable for post project evaluation. Our first tranche was selected from the pool of CCM projects cited in the sustainability analysis, which included a range of projects with the earliest start date of 1994 and the latest closing date of 2013 (GEF IEO, 2019a). These constituted $195.5 million of investments. The pool of projects in the climate change focal area (n = 17), comprising one third of the GEF IEO sample, was selected from the 53 projects listed in the report for further study. We then classified the selected projects by which ones made any mention of field-based post project verification, according to an evaluability checklist (Zivetz et al., 2017a). This checklist highlights methodological considerations including: (a) data showing the overall quality of the project at completion, including the M&E documentation needed on original and post project data collection; (b) time postcompletion (at least 2 years); (c) site selection criteria; and (d) proof that project results were isolated from concurrent programming to ascertain contribution to sustained impacts (Zivetz et al., 2017a).
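One way to make such a screen concrete is sketched below; the field names and thresholds are our illustrative paraphrase of the checklist criteria, not an existing tool.

```python
# Illustrative encoding of the post project evaluability checklist (Zivetz et al., 2017a).
# Field names are ours; criteria thresholds paraphrase the checklist described above.
from dataclasses import dataclass

@dataclass
class EvaluabilityScreen:
    project_id: str
    has_completion_quality_data: bool      # (a) M&E documentation on original and post project data
    years_since_completion: float          # (b) at least 2 years post completion
    has_site_selection_criteria: bool      # (c) documented site selection
    isolates_concurrent_programming: bool  # (d) contribution can be traced to this project

    def is_evaluable(self) -> bool:
        return (
            self.has_completion_quality_data
            and self.years_since_completion >= 2
            and self.has_site_selection_criteria
            and self.isolates_concurrent_programming
        )

# Example: a hypothetical project screened against the checklist
screen = EvaluabilityScreen("GEF-XXXX", True, 4.5, False, False)
print(screen.is_evaluable())  # False: no site selection criteria or contribution tracing
```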
Next, we reviewed GEF documentation to identify any actual quantitative or qualitative measures of post project outcomes and impacts. These could include: (a) changes in actual energy efficiency improvements against final evaluation measures used, (b) sustained knowledge or dissemination of knowledge change fostered through trainings, (c) evidence of ownership, or (d) continued or increased dissemination of new technologies. Such verification of assumptions in the final documents typically explores why the assumptions were or were not met, and what effects changes in these assumptions would have on impacts, such as CO2 emissions projections.
The second tranche consisted of projects in the climate change focal area that were included in the 2019 cohort of projects for which the GEF received terminal evaluations. As the GEF 2019 Annual Performance Report explained:
Terminal evaluations for 193 projects, accounting for $ 616.6 million in GEF grants, were received and validated during 2018–2019 and these projects constitute the 2019 cohort. Projects approved in GEF-5 (33 percent), GEF-4 (40 percent) and GEF-3 (20 percent) account for a substantial share of the 2019 cohort. Although 10 GEF Agencies are represented in the 2019 cohort, most of these projects have been implemented by UNDP [United Nations Development Programme] (56 percent), with World Bank (15 percent) and UNEP [United Nations Environment Programme] (12 percent) also accounting for a significant share. (GEF IEO, 2020, p. 9)
We added the second tranche of projects to represent a more current view of project performance and evaluation practice.
The climate change focal area subset consisted of 38 completed GEF projects, which account for approximately $155.7 million in GEF grants (approximately 20% of the total cohort and 25% of the overall cohort budget). Projects included those approved in 1995–1998 (GEF-1; n = 1) and 2003–2006 (GEF-3; n = 2), but 68% were funded in 2006–2010 (GEF-4; n = 26), and 24% in 2010–2014 (GEF-5; n = 9), making them more recent as a group than the 2019 cohort as a whole. Six GEF agencies were represented: Inter-American Development Bank (IDB), International Fund for Agricultural Development (IFAD), UNDP, UNEP, United Nations Industrial Development Organization (UNIDO), and the World Bank.
We eliminated three projects listed in the climate focal area subset from consideration in the second tranche because they had not been completed, leaving a pool of 35 projects. Ex-ante project documentation, such as CEO endorsement requests, and terminal evaluation reports were then reviewed for initial estimates of certain project indicators, such as GHG emission reductions, and ratings of estimated sustainability on the 4-point scale, including the narrative documentation that accompanied the ratings.
Findings
We examined whether post project sustainability was being measured using the first tranche of projects and the sustainability analysis in which they were included. Most of the documents cited in the sustainability analysis were either terminal or impact evaluations focused on efficiency (GEF IEO, 2019a), and most of the documents and report analysis focused on estimated sustainability. Of the 53 "postcompletion verification reports," as they are referred to in the review (GEF IEO, 2019a, p. 62), we found only 4% to contain adequate information to support the analysis of sustainability. Our wider search for publicly available post project evaluations, which would have constituted an evidence base for sustained outcomes and environmental stress reduction and adoption cited in the GEF IEO 2019 analysis, did not identify any post project evaluations. We were unable to replicate the finding that "84% of these projects that were rated as sustainable at closure also had satisfactory postcompletion outcomes. . . . Most projects with satisfactory outcome ratings at completion continued to have satisfactory outcome ratings at postcompletion" (GEF IEO, 2019a, p. 3) or to compare the CCM subset of projects with this conclusion. The report stated that "the analysis of the 53 selected projects is based on 61 field verification reports. For 81 percent of the projects, the field verification was conducted at least four years after implementation completion [emphasis added]." However, we found no publicly accessible documentation that could be used to confirm the approach to field verification for 8 of the 17 projects.
Similarly, the available documentation for the projects lacked the most typical post project hallmarks, such as methods of post project data collection, comparisons of changes from final to post project outcomes and impacts at least 2 years post closure, and tracing contribution of the project at the funded sites to the changes. Documentation focused on a rating of estimated sustainability with repeated references to only the terminal evaluations and closure reports. In summary, of the 17 projects selected for review in the first tranche, 14 had data consisting of terminal evaluations, and none was 2–20 years post closure. We did not find publicly available evidence to support measurement of post project sustainability other than statements that such evidence was gathered in a handful of cases. Of the pool of 17 projects, only two (both from India) made any reference to post project data regarding the sectors of activity in subsequent years. However, these two were terminal evaluations within a country portfolio review and could not be substantiated with publicly accessible data.
We then screened the first tranche of projects using the Valuing Voices evaluability checklist (Zivetz et al., 2017b):
High-quality project data at least at terminal evaluation, with verifiable data at exit: Of 14 projects rated for sustainability, only six were rated likely to be sustained and outcome and impact data were scant.
Clear ex-post methodology, sufficient samples: None of the evaluations available was a post project evaluation of sustainability or long-term impact. Although most projects fell within the evaluable 2–20 years post project (the projects had been closed 4–20 years), none had proof of return evaluation. There were no clear post project sampling frames, data collection processes including identification of beneficiaries/informants, site selection, isolating legacy effects of the institution or other concurrent projects, or analytic methods.
Transparent benchmarks based on terminal, midterm, and/or baseline data on changes to outcomes or impacts (M&E documents show measurable targets and indicators, and baseline vs. terminal evaluations use methods comparable to those used in the post project period): For some of the 17 projects, project inception documents and terminal evaluations were available; in other cases, GEF evaluation reviews were available. Two had measurable environmental indicators that compared baseline to final, but none were after project closure.
Substantiated contribution vs. attribution of impacts: Examples of substantiated contribution were not identified.
Evaluation reports revealed several instances for which we could not confirm attribution. For example, evaluation of the project Development of High Rate BioMethanation Processes as Means of Reducing Greenhouse Gas Emissions (GEF ID 370), which closed in 2005, referenced the following subsequent market information:
As of Nov 2012, capacity installed from waste-to-energy projects running across the country for grid connected and captive power are 93.68MW and 110.74 MW respectively [versus 3.79KW from 8 sub-projects and 1-5 MW projects]. . . . The technologies demonstrated by the 16 sub-projects covered under the project have seen wide-scale replication throughout the country. . . . An installed capacity of 201.03MW within WTE [waste to energy] projects and the 50% of this is attributed to the GEF project. (GEF IEO, 2013, vol. 2, p. 64)
Claims that "the technical institutes strengthened as a result of the project were not fully effective at the time of project completion but are now actively engaged in the promotion of various biomethanation technologies" are unsubstantiated by publicly available information; as a result, the ex-post methods behind the contribution/attribution data are not clear. Another project in India, Optimizing Development of Small Hydel [hydroelectric] Resources in Hilly Areas (GEF ID 386), projected that later investments in the government's 5-year plans would happen, and that the resulting hydropower production would be attributable to the original project (GEF IEO, 2013); again, this attributional analysis was not documented. Analysis of a third project in India, Coal Bed Methane Capture and Commercial Utilization (GEF ID 325), which closed in 2008, claimed results that could not be reproduced: "Notable progress has been made through replication of projects, knowledge sharing, and policy development" and "expertise was built" (GEF IEO, 2013, Vol. 2, p. 90). Further claims that the project contributed to "the total coal bed methane production in the country and has increased to 0.32 mmscmd [million metric standard cubic meters per day], which is expected to rise to 7.4 mmscmd by the end of 2014" are without proof. The evaluation reported estimates of indirect GHG emission reduction, based on postcompletion methane gas production estimates of 0.2 million m3 per day:
1.0 Million tons equivalent per year, considering an adjustment factor of 0.5 as the GEF contribution [emphasis added], the indirect GHG emission reduction due to the influence of the project is estimated to be 0.5 million tons of CO2 equivalent per annum (2.5 million tons over the lifetime period of 5 years). (GEF IEO, 2013, Vol. 2, p. 91)
Yet without verification of coal bed methane capture and commercial utilization continuing, this impact cannot be claimed.
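Spelling out the arithmetic behind that reported estimate makes the chain of unverified assumptions visible (this is a reconstruction of the figures quoted above, not an independent calculation):

```latex
\[
\underbrace{1.0\ \text{Mt CO}_2\text{e/yr}}_{\text{from }0.2\ \text{million m}^3\text{/day of methane}}
\times \underbrace{0.5}_{\text{GEF adjustment factor}}
= 0.5\ \text{Mt CO}_2\text{e/yr};
\qquad
0.5 \times 5\ \text{yr} = 2.5\ \text{Mt CO}_2\text{e}
\]
```

Every step rests on the assumptions that post-completion methane capture continued at the estimated rate and that half of it is attributable to the GEF project; neither assumption was verified in the field.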
How Is Sustainability Being Captured?
Fifteen of the 17 CCM projects we reviewed in the first tranche were rated on a 4-point scale at terminal evaluation. Of those 15, 12 had overall ratings of either satisfactory or marginally satisfactory, and one was rated highly satisfactory overall. Eleven of the sustainability ratings were either likely or marginally likely. Only two projects were rated marginally unlikely overall or for sustainability, and only one project received marginally unlikely in both categories (the Demand Side Management Demonstration energy conservation project that ended in 1999 [GEF ID 64]). Although none of the documents mentioned outcome indicators, eight of the 17 included estimates of direct and indirect CO2 impacts.
In the second pool of projects—the CCM subset of the 2019 cohort—63% of the projects were rated in the likely range for sustainability (n = 22; nine were rated likely and 13 marginally likely). This is slightly higher than the 2019 cohort as a whole, in which 59% were rated in the likely range. In turn, the 2019 annual performance report noted that “the difference between the GEF portfolio average and the 2019 cohort is not statistically significant for both outcome and sustainability rating” (GEF IEO, 2020, p. 9). It is slightly lower than the percentage of CCM projects receiving an overall rating of marginally likely or higher in the 2017 portfolio review (68%, n = 265; GEF IEO, 2017, p. 78).
In this second set of projects, only two received a rating of marginally unlikely and only one received a sustainability rating of unlikely. The remainder of the projects could not be classified using the 4-point rating scale, either because they had used an either/or estimate (one project), a 5-point scale (one project), or an estimate based on the assessment of risks to development outcome (two projects). Six projects could not be assessed due to the absence of a publicly accessible terminal evaluation in the GEF and implementing agency archives.
How Effectively Is Sustainability Being Captured?
Throughout the first set of reports on which the sustainability analysis was based, it was claimed that "84% of these projects that were rated as sustainable at closure also had satisfactory postcompletion outcomes, as compared with 55 percent of the unsustainable projects" (GEF IEO, 2019a, p. 29). The data did not support the claim, even during implementation.
As a Brazilian project (GEF ID 2941) showed, sustainability is unlikely when project achievements are weak, and exit conditions and benchmarks need to be clear: "The exit strategy provided by IDB Invest77 is essentially based on financial-operational considerations but does not provide answers to the initial questions how an EEGM [energy efficiency guarantee mechanism] should be shaped in Brazil, how relevant it is and for whom, and to whom the EEGM should be handed over" (p. 25).
In Russia, the terminal evaluation for an energy efficiency project (GEF ID 292) cited project design flaws that seemed to belie its sustainability rating of likely: “From a design-for-replication point of view the virtually 100% grant provided by the GEF for project activities is certainly questionable” (Global Environment Facility Evaluation Office [GEF EO], 2008, p. 20). Further, the assessment that “the project is attractive for replication, dissemination of results has been well implemented, and the results are likely to be sustainable [emphasis added] for the long-term, as federal and regional legislation support is introduced” (GEF EO, 2008, p. 39), makes a major assumption regarding changes in the policy environment. (In fact, federal legislation was introduced 2 years post project, and the extent of enforcement would require examination.)
A Pacific regional project (GEF ID 1058) was rated as likely to be sustained, but its report notes that it “does not provide overall ratings for outcomes, risks to sustainability, and M&E” (p. 1).
The Renewable Energy Development project in China (GEF ID 446), which closed in 2007, was evaluated in 2009 (not post project, but a delayed final evaluation). The report considered the project sustainable with a continued effort to support off-grid rural electrification, claiming, "the market is now self-sustaining, and thus additional support is not required" (p. 11). The project estimated avoided CO2 emissions and cited 363% of the target as achieved; however, the calculations were based on 2006 emissions values for the thermal power sector and data from all wind farms in China, without a bottom-up estimate. The interpolation of these data lacks verification.
Similar sampling issues emerge in a project in Mexico (GEF ID 643): “A significant number of farmers . . . of an estimated 2,312 farmers who previously had had no electricity” (p. 20) saw their productivity and incomes increase as a result of their adoption of productive investments (e.g., photovoltaic-energy water-pumping systems and improved farming practices). A rough preliminary estimate is extrapolated from an evaluation of “three [emphasis added] beneficiary farms, leading to the conclusion that in these cases average on-farm increases in income more than doubled (rising by 139%)” (p. 21).
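A rough back-of-the-envelope check shows why an extrapolation from three farms is so fragile. The farm-level figures below are hypothetical; only the sample size of three and the reported average increase of roughly 139% come from the terminal evaluation. With n = 3, the 95% confidence interval around the mean is wide enough that the true average gain could plausibly be anywhere from a few percent to well over 200%.

```python
# Hypothetical check on the precision of a three-farm extrapolation.
# The individual farm values are invented; only the sample size (n = 3) and
# the ~139% average income increase come from the GEF ID 643 evaluation.
import math
import statistics

income_increase_pct = [90.0, 130.0, 197.0]  # assumed farm-level increases, mean = 139%

n = len(income_increase_pct)
mean = statistics.mean(income_increase_pct)
std_err = statistics.stdev(income_increase_pct) / math.sqrt(n)

t_crit = 4.303  # two-sided 95% t value with n - 1 = 2 degrees of freedom
low, high = mean - t_crit * std_err, mean + t_crit * std_err

print(f"Sample mean increase: {mean:.0f}%")
print(f"95% confidence interval: {low:.0f}% to {high:.0f}%")
```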
Baseline-to-terminal-evaluation comparisons were rare, with the exception of photovoltaic energy projects in China and Mexico, and none were post project. Two were mid-term evaluations, which could not assess final outcomes, much less sustainability. Ex-post project evaluations far more typically focus on the contributions that projects made, because attribution can be isolated only in rare cases, especially for a project pool where the focus is often on creating an enabling environment reliant on a range of actors. One such example is the Indian energy efficiency project approved in 1998 (GEF ID 404), in which
the project resulted in a favorable environment for energy-efficiency measures and the sub-projects inspired many other players in similar industries to adopt the demonstrated technologies. Although quantitative data for energy saved by energy efficiency technologies in India is not available, it is evident that due to the change in policy and financial structure brought by this project, there is an increase in investment in energy efficiency technologies in the industries. (GEF IEO, 2013, Vol. 2., p. 95)
And while GEF evaluators are asking for ex-post evaluation, the authors of an earlier version of this book, Evaluating Climate Change Action for Sustainable Development (Uitto et al., 2017), encouraged us to be “modest” in our expectations of extensive ex-post evaluations; meanwhile, exploration of the confirmatory power of ex-post evaluation has seemingly not occurred:
The expectations have to be aligned with the size of the investment. The ex-post reconstruction of baselines and the assessment of quantitative results is an intensive and time-consuming process. If rigorous, climate change-related quantitative and qualitative data are not available in final reports or evaluations of the assessed projects, it is illusive to think that an assessment covering a portfolio of several hundred projects is able to fill that gap and to produce aggregated quantitative data, for example on mitigated GHG emissions. When producing data on proxies or qualitative assessments, the expectations must be realistic, not to say modest. (p. 89)
Project Evaluability
Following an analysis of the sustainability estimates in the first pool of projects, we screened project documentation and terminal evaluations for conditions that foster sustainability during planning, implementation, and exit. We also analyzed how well the projects reported on factors that could be measured in a post project evaluation and factors that would predispose projects to sustainability. These sustained impact conditions consisted of the following elements: (a) resources, (b) partnerships and local ownership, (c) capacity building, (d) emerging sustainability, (e) evaluation of risks and resilience, and (f) CO2 emissions (impacts).
Although documentation in evaluations did not verify sustainability, many examples exist of data collection that could support post project analyses of sustainability and sustained impacts in the future. Most reports cited examples of resources that had been generated, partnerships that had been fostered for local ownership and sustainability, and capacities that had been built through training. Some terminal evaluations also captured emerging impacts due to local efforts to sustain or extend impacts of the project that had not been anticipated ex-ante.
The Decentralized Power Generation project (GEF ID 4749) in Lebanon provides a good example of a framework for collecting information on elements of sustainability planning at terminal evaluation (see Table 3).
Table 3: Sustainability Planning from a Decentralized Power Generation Project in Lebanon (GEF ID 4749)
Resources
Are there financial risks that may jeopardize the sustainability of project outcomes?
What is the likelihood of financial and economic resources not being available once GEF grant assistance ends?
Ownership
What is the risk, for instance, that the level of stakeholder ownership (including ownership by governments and other key stakeholders) will be insufficient to allow for the project outcomes/benefits to be sustained?
Do the various key stakeholders see that it is in their interest that project benefits continue to flow?
Is there sufficient public/stakeholder awareness in support of the project’s long-term objectives?
Partnerships
Do the legal frameworks, policies, and governance structures and processes within which the project operates pose risks that may jeopardize sustainability of project benefits?
Benchmarks, risks, & resilience
Are requisite systems for accountability and transparency, and required technical know-how, in place?
Are there ongoing activities that may pose an environmental threat to the sustainability of project outcomes?
Are there social or political risks that may threaten the sustainability of project outcomes?
Source: 4749 Terminal Evaluation, p. 45. Note: Capacity Building and Emerging Sustainability were missing from project 4749.
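One way to make a framework like Table 3 reusable across projects is to encode its questions as structured data that evaluators can score consistently at terminal and post project evaluation. The sketch below is a hypothetical encoding (the dictionary layout is our assumption, not an existing GEF tool); it also makes gaps visible, such as the capacity building and emerging sustainability categories missing from project 4749.

```python
# Hypothetical encoding of a Table 3-style sustainability checklist.
# Category names follow the chapter; the structure itself is an assumption.

CHECKLIST = {
    "Resources": [
        "Are there financial risks that may jeopardize the sustainability of project outcomes?",
        "What is the likelihood of financial and economic resources not being available once GEF grant assistance ends?",
    ],
    "Ownership": [
        "Is stakeholder ownership sufficient for project outcomes/benefits to be sustained?",
        "Do key stakeholders see it as in their interest that project benefits continue to flow?",
        "Is there sufficient public/stakeholder awareness in support of the project's long-term objectives?",
    ],
    "Partnerships": [
        "Do legal frameworks, policies, and governance structures pose risks to the sustainability of project benefits?",
    ],
    "Benchmarks, risks, & resilience": [
        "Are systems for accountability and transparency, and required technical know-how, in place?",
        "Are there ongoing activities that may pose an environmental threat to project outcomes?",
        "Are there social or political risks that may threaten the sustainability of project outcomes?",
    ],
    # Missing in project 4749, so left empty to flag the gap:
    "Capacity building": [],
    "Emerging sustainability": [],
}

missing = [category for category, questions in CHECKLIST.items() if not questions]
print("Categories with no evaluation questions:", ", ".join(missing))
```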
Tangible examples of the above categories at terminal evaluations include the following.
Resources
The most widespread assumption for sustainability was the availability of sufficient financial and in-kind resources, often reliant on continued national investments or new private international investments, an assumption that could be verified post project. Terminal evaluation findings on national resources that could sustain results include the following:
Funding for fuel cell and electric vehicle development by the Chinese Government had increased from Rmb 60 million (for the 1996-2000 period) to more than Rmb 800 million (for the 2001-2005 period). More recently, policymakers have now targeted hydrogen commercialization for the 2010-2020 period. (GEF ID 445, p. 17)
Another example is: “About 65 percent of [Indian] small Hydro electromechanical Equipment is sourced locally” (GEF ID 386; GEF IEO, 2013, Vol. 2, p. 76). The terminal evaluation of a global IFC project stated that “Moser Baer is setting up 30 MW solar power plants with the success of the 5 MW project. Many private sector players have also emulated the success of the Moser Baer project by taking advantage of JNNSM scheme” (GEF ID 112, p. 3).
Local Ownership and Partnerships
The Russian Market Transformation for EE Buildings project (GEF ID 3593) made clear in its recommendations that ownership by governmental stakeholders would be essential for sustainability, calling for “a suitable governmental institution to take over the ownership over the project web site along with the peer-to-peer network ensuring the sustainability of the tools [to] support the sustainability of the project results after the project completion” (p. xi). An Indian project (GEF ID 386) noted how partnerships could sustain outcomes:
By 2001, 16 small hydro equipment manufacturers, including international joint ventures (compared to 10 inactive firms in 1991) were operational. . . . State government came up with policies with financial incentives and other promotional packages such as help in land acquisition, getting clearances, etc. These profitable demonstrated projects attracted private sector and NGOs to set up similar projects. (GEF IEO, 2013, Vol. 2, p. 74)
Capacity Building
The Renewable Energy for Agriculture project in Mexico (GEF ID 643) established indicators for the “percentage of direct beneficiaries surveyed who learned of the equipment through FIRCO’s promotional activities” (86%), the “number of replica renewable energy systems installed” (847 documented replicas), and the “total number of technicians and extensionists trained in renewable energy technologies” (p. 33). The last of these came to 3,022, or 121% of the original goal of 2,500, a clear measure of the degree to which the project exceeded this objective.
Emerging Sustainability
Recent post project evaluations also address what emerged after the project that was unrelated to the existing theory of change. These emerging findings are rarely documented in terminal evaluations, but some projects in the first pool included information about unanticipated activities or outcomes at terminal evaluation, and these could be used for future post project fieldwork follow-up. As a consequence of the hydroelectric resource project, for example, the Indian Institute “developed and patented the designs for water mills” (GEF ID 386; GEF IEO, 2013, Vol. 2, p. 73). The terminal evaluation for another project stated that “following the UNDP-GEF project, the MNRE [Ministry of New and Renewable Energy] initiated its own programs on energy recovery from waste. Under these programs, the ministry has assisted 14 projects with subsidies of US$ 2.72 million” (GEF ID 370; GEF IEO, 2013, Vol. 2, p. 62).
Benchmarks, Risks, and Resilience
As the GEF’s 2019 report itself noted, “The GEF could strengthen its approach to assessing sustainability further by explicitly addressing resilience” (GEF IEO, 2019a, p. 33). Not doing so is a risk, as our climate changes. Two evaluations noted “no information on environmental risks to project sustainability;” these were the Jamaican pilot on Removal of Barriers to Energy Efficiency and Energy Conservation (GEF ID 64; p. 68) and a Pacific regional project (GEF ID 1058). For likelihood of sustainability, the Jamaican project was rated moderately unlikely and the Pacific Islands project was rated likely but “does not provide overall ratings for outcomes, risks to sustainability, and M&E” other than asserting that
the follow-up project, which has been approved by the GEF, will ensure that the recommendations entailed in the documents prepared as part of this project are carried out. Thus, financial risks to the benefits coming out of the project are low. (p. 3)
Greenhouse Gas Emissions (Impacts)
In GEF projects, timeframe is an important issue, which makes post project field verification that much more important. As the GEF IEO stated in 2018, “Many environmental results take more than a decade to manifest. Also, many environmental results of GEF projects may be contingent on future actions by other actors” (GEF IEO, 2018, p. 34).
Uncertainty and Likelihood Estimates
Estimating the likelihood of sustainability of greenhouse gas emission reductions at terminal evaluation raises another challenge: the relatively high level of uncertainty concerning the achievement of project impacts related to GHG reduction. GHG reductions are the primary objective stated in the climate change focal area, and they appear as a higher-level impact across projects regardless of the terminology used. For a global project on bus rapid transit and nonmotorized transport, the objective was to “reduce GHG emissions for transportation sector globally” (GEF ID 1917, p. 9). For a national project on building sector energy efficiency, the project goal was “the reduction in the annual growth rate of GHG emissions from the Malaysia buildings sector” (GEF ID 3598; Aldover & Tiong, 2017, p. i). For a land management project in Mexico, the project objective was to “mitigate climate change in the agricultural units selected . . . including the reduction of emissions by deforestation and the increase of carbon sequestration potential” (GEF ID 4149, p. 21). For a national project to phase out ozone-depleting substances, the project objective was to “reduce greenhouse gas emissions associated with industrial RAC (refrigeration and air conditioning) facilities in The Gambia” (GEF ID 5466, p. vii). Clearly, actual outcomes in GHG emissions need to be considered in any assessment of the likelihood of sustainability of outcomes.
Unlike projects in the carbon finance market, GEF projects estimate emissions for a project period that usually exceeds the duration of the GEF intervention. In most cases, ex-ante estimated GHG reductions in the post project period are larger than estimated GHG reductions during the project lifetime. In practice, this means that for projects for which the majority of emissions will occur after the terminal evaluation, evaluators are being asked to estimate the likelihood that benefits will not only continue, but will increase due to replication, market transformation, or changes in the technology or enabling environment. Table 4 provides several examples from the GEF 2019 cohort of how GHG reductions may be distributed over the project lifecycle.
Table 4: Distribution of Estimated GHG Reductions Ex-Ante for Selected Projects in the CCM Subset of the GEF 2019 Cohort
GEF ID | Country | Sub-sector | Ex-ante GHG reductions during project lifetime (tCO2e) | Ex-ante total GHG reductions (tCO2e) | % of reductions achieved by the terminal evaluation
2941 | Brazil | EE Buildings | 705,000 | 9,588,000 | 7
2951 | China | EE Financing | 5,400,000 | 111,500,000 | 5
3216 | Russia | EE Standards / Labels | 7,820,000 | 123,600,000 | 6
3555 | India | EE Buildings | 454,000 | 5,970,000 | 8
3593 | Russia | EE Industry | 0 | 3,800,000 | 0
3598 | Malaysia | EE Buildings | 2,002,000 | 18,166,000 | 11
3755 | Vietnam | EE Lighting | 2,302,000 | 5,268,000 | 44
3771 | Philippines | EE Industry | 560,000 | 560,000 | 100
Sources: 2941 Project Document, pp. 35–37; 2951 PAD/CEO Endorsement Request, p. 88; 3216 Project Document, pp. 80–90; 3555 Terminal Evaluation; 3593 Terminal Evaluation, p. 23; 3598 Terminal Evaluation, p. 24; 3755 GEF CEO Endorsement Request; 3771 Terminal Evaluation pp. 8–9
The range in Table 4 shows the substantial variation in uncertainty when estimating the likelihood of long-term project impacts. For projects designed to achieve all of their emission reductions during their operational lifetimes, the achievement of GHG reductions can be verified as a part of the terminal evaluation. However, most projects assume that nearly all estimated GHG reductions will occur in the post project period, so uncertainty levels are much higher and estimates may be more difficult to compile. In other evaluations, evaluators may identify inconsistent GHG estimates (e.g., GEF ID 4157 and 5157), or recommend that the ex-ante estimates be downsized (e.g., GEF ID 3922, 4008, and 4160). These trends may also be difficult to capture in likelihood estimates.
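The final column of Table 4 can be reproduced directly from the two ex-ante columns: it is simply the during-project share of total estimated reductions. A minimal sketch of that arithmetic, using the figures as listed in the table:

```python
# Share of ex-ante GHG reductions achieved by terminal evaluation (Table 4):
# during-project reductions divided by total estimated reductions.

projects = {
    # GEF ID: (during-project tCO2e, total tCO2e)
    2941: (705_000, 9_588_000),
    2951: (5_400_000, 111_500_000),
    3216: (7_820_000, 123_600_000),
    3555: (454_000, 5_970_000),
    3593: (0, 3_800_000),
    3598: (2_002_000, 18_166_000),
    3755: (2_302_000, 5_268_000),
    3771: (560_000, 560_000),
}

for gef_id, (during_project, total) in projects.items():
    share = during_project / total * 100
    print(f"GEF ID {gef_id}: {share:.0f}% of estimated reductions fall within the project lifetime")
```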
Conclusions and Recommendations
While sustainability has been estimated in nearly all of the projects in the two pools we considered, it has not been measured. Assessing the relationship between projected sustainability and actual post project outcomes was not possible due to insufficient data. Further, findings from the first pool of climate change mitigation projects did not support the conclusion that “outcomes of most of the GEF projects are sustained during the postcompletion period” (GEF IEO, 2019a, p. 17). In the absence of sufficient information regarding project sustainability, determining post project GHG emission reductions is not possible, because these are dependent on the continuation of project benefits following project closure.
We also conclude that although the 4-point rating scale is a common tool for estimating the likelihood of sustainability, the measure itself has not been evaluated for reliability or validity. The scale is often used to summarize diverse trends amid varying levels of uncertainty. The infrequency of the unlikely rating in terminal evaluations may result from this limitation: evaluators believe that some benefits (greater than 0%) will continue, but the 4-point scale cannot convey an estimate of what percentage of benefits will continue. Furthermore, the use of market studies to assess sustainability is not effective in the absence of attributional analysis linking results to the projects that ostensibly caused the change.
As a result, the current evaluator’s toolkit still does not provide a robust means of estimating post project sustainability and is not suitable as a basis for postcompletion claims. That said, M&E practices in the CCM projects we studied supported the collection of information documenting conditions (e.g., resources, partnerships, and capacities) in a way that makes projects evaluable, that is, suitable for post project evaluation. We recommend that donors provide financial and administrative support for project data repositories that retain data in-country at terminal evaluation for post project return and country-level learning, and that they include evaluability (control groups, sample sizes, and sites selected by evaluability criteria) in the assessment of project design. We also recommend sampling immediately from the 56 CCM projects in the two pools that have been closed for at least 2 years.
Donors’ allocation of sufficient resources for CCM project evaluations would allow verification of actual long-term, post project sustainability using the OECD DAC (2019) definition of “the continuation of benefits from a development intervention after major development assistance has been completed” (p. 12). It would also enable evaluators to consider enumerating project components that are sustained rather than using an either/or designation (sustained/not sustained). Evaluation terms of reference should clarify the methods used for contribution vs. attribution claims, and they should consider decoupling estimates of direct and indirect impacts, which are difficult to measure meaningfully in a single measure. For the GEF portfolio specifically, the development of a postcompletion verification approach could be expanded from the biodiversity focal area to the climate change focal area (GEF IEO, 2019b), and lessons could also be learned from the Adaptation Fund’s (2019) commissioned work on post project evaluations. Bilateral donors such as JICA have developed rating scales for post project evaluations that assess impact in a way that captures both direct and indirect outcomes (JICA, 2017).
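Enumerating sustained components rather than applying an either/or label could be operationalized with a very simple per-project record, as in the hypothetical sketch below. The component names, statuses, and weighting rule are our illustrative assumptions, not an existing GEF, JICA, or OECD DAC instrument; the point is that such a record supports a statement like “about 62% of components sustained” where a binary designation cannot.

```python
# Hypothetical component-level record of sustained benefits, as an alternative
# to a single sustained/not-sustained designation. Components, statuses, and
# weights are invented for illustration.

components = {
    "Revolving energy-efficiency finance facility": "sustained",
    "Building code enforcement unit": "partially sustained",
    "Technician training curriculum": "sustained",
    "Public awareness campaign": "not sustained",
}

weights = {"sustained": 1.0, "partially sustained": 0.5, "not sustained": 0.0}
share_sustained = sum(weights[s] for s in components.values()) / len(components)

print(f"Share of components sustained: {share_sustained:.0%}")  # 2.5 of 4 -> ~62%
```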
Developing country parties to the Paris Agreement have committed to providing “a clear understanding of climate change action” in their countries under Article 13 of the agreement (United Nations, 2015), and donors have a clear imperative to press for continued improvement in reporting on CCM project impacts and to use lessons learned to inform future support.
We use the term “postproject” evaluations to distinguish these longer term evaluations from terminal evaluations, which typically occur within 3 months of the end of donor funding. While some donors (JICA, 2004; USAID, 2019) use the term “ex-post evaluation” to refer to evaluations distinct from the terminal/final evaluation and occurring 1 year or more after project closure, other donors use the terms “terminal evaluation” and “ex-post evaluation” synonymously. Other terms include postcompletion, post-closure, and long-term impact.
In a 2013 meta-evaluation, Hageboeck et al. found that only 8% of projects in the 2009–2012 USAID PPL/LER evaluation portfolio (26 of 315) were evaluated post-project following the termination of USAID funding.
Aldover, R. Z., & Tiong, T. C. (2017). UNDP/GEF project PIMS 3598: Building sector energy efficiency project (BSEEP): Terminal evaluation report. Global Environment Facility and United Nations Development Programme. https://erc.undp.org/evaluation/evaluations/detail/8919
Legro, S. (2010, June 9–10). Evaluating energy savings and estimated greenhouse gas emissions in six projects in the CIS: A comparison between initial estimates and assessed performance [paper presentation]. International Energy Program Evaluation Conference, Paris, France. https://energy-evaluation.org/wp-content/uploads/2019/06/2010-paris-027-susan-legro.pdf
Mayne, J. (2001). Assessing attribution through contribution analysis: Using performance measures sensibly. The Canadian Journal of Program Evaluation, 16(1), 1–24.
Organisation for Economic Co-operation and Development, Development Assistance Committee. (1991). DAC criteria for evaluating development assistance. https://www.oecd.org/dac/evaluation/2755284.pdf
Rogers, B. L., & Coates, J. (2015). Sustaining development: A synthesis of results from a four-country study of sustainability and exit strategies among development food assistance projects. FANTA III, Tufts University, & USAID. https://www.fantaproject.org/research/exit-strategies-ffp
Sridharan, S., & Nakaima, A. (2019). Till time (and poor planning) do us part: Programs as dynamic systems—Incorporating planning of sustainability into theories of change. The Canadian Journal of Program Evaluation. https://evaluationcanada.ca/system/files/cjpe-entries/33-3-pre005.pdf
Open Access: This chapter is licensed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license and indicate if changes were made.
Cite: Cekan J., Legro S. (2022) Can We Assume Sustained Impact? Verifying the Sustainability of Climate Change Mitigation Results. In: Uitto J.I., Batra G. (eds) Transformational Change for People and the Planet. Sustainable Development Goals Series. Springer, Cham. https://doi.org/10.1007/978-3-030-78853-7_8
Jindra Cekan, Ph.D., has used participatory methods for 30 years to connect with participants, ranging from villagers in Africa, Central/Latin America, and the Balkans to policy makers and ministers around the world for her international clients. Their voices have informed the new Sustained and Emerging Impacts Evaluation, other M&E, stakeholder analysis, strategic planning, knowledge management, and organizational learning.