Fostering Values-Driven Sustainability Through an Ex-Post Capacities Lens (reposting a book chapter)

We all want our project results to be sustained, but without doing ex-post sustainability evaluations, we don’t know whether they are. Ex-post evaluations can also teach us how to fund, design, monitor, and evaluate projects before they close. They require certain evaluator competencies, and the checklists below are designed to help build capacities to make implemented projects more sustainable. This research was also informed by excellent research by INTRAC and CDA. Enjoy! You can also download the chapter, along with a great array of evaluator competencies, via the Journal of MultiDisciplinary Evaluation.

Fostering Values-Driven Sustainability Through an Ex-Post Evaluation Capacities Lens

 

Jindra Cekan/ova

Founder of Valuing Voices at Cekan Consulting LLC

Background: Ex-post evaluation of sustainability has been done for 40 years in global development. However, it has been done for far less than 1% of all global development projects, so there is little proof of whether “sustainable” development is sustained or not. Similarly, foreign aid projects are implemented to foster sustainability, but without the benefit of evidence from ex-post evaluations of what drove it, and with limited research on the benefits of robust exit strategies.

 

Purpose: Transparency about the values we hold and the evaluative best practices we bring to our evaluations informs how they are done, with whom, and for what. The evidence base from ex-post evaluations and exit strategies led to these nine checklists. Professionals in monitoring and evaluation should use them to foster long-term sustainability and learning.

 

Setting: Drawing on primary and secondary research across 91 ex-post evaluations of foreign aid sustainability plus two major studies of exit strategies globally.

 

Intervention: Not applicable.

 

Research Design: The checklists were drafted based on sustainability and exit studies and then vetted with lead researchers of the two exit studies. They were revised, and additional research was done on both values-driven evaluation and evaluation competencies.

Data Collection and Analysis: Some primary data was collected during ex-post evaluations by the author, complemented by secondary research.

 

Findings: Sustained exit commitments and conditions checklists can build evaluator capacities in evaluating sustainability. Several have been used by Tufts, USAID, the GEF, and the Adaptation Fund and have verified actual sustainability and its prospects, while also building evaluator capacities.

 

Keywords: ex-post evaluation; sustainability; monitoring and evaluation; values; competencies; M&E checklists

Abstract

 

Monitoring and evaluation (M&E) work is guided by an array of values held by funders, implementers, M&E experts, and project participants and partners. Some values are explicit, while others are assumed, such as the truth of “values-neutral” evaluation or that projects are sustainable in the long term. I espouse Patton’s (2022) “activist interventionist change-committed evaluation” by both advocating for ex-post evaluation of many development aid projects’ untested hypotheses about durability and suggesting that ex-post lessons can shape development aid projects from design to closure. Ex-post lessons are valuable for current project planning, design, implementation, and M&E; using them can make development results more sustainable. Checklists created to ease monitoring and evaluation of prospects for sustainability should be used with country nationals. Six evaluator competencies support sustainability practice, namely systems thinking competency, collaboration competency, anticipatory competency, and reflective, technical, and situational practice competencies. Drawing on several studies that validate this approach, this paper shows how infrequently ex-post evaluations of sustainability are conducted, which suggests that the lessons they offer are not valued. Bringing lessons from rare ex-post evaluations and exit studies to benefit current implementation and exit is the core of the checklists described in this article; such learning informs current aid projects and helps results last. The paper also builds evaluator competencies. Evaluating both the results expected by donors and new, locally emerging outcomes from local efforts to sustain results also adds value to the canon. Ongoing learning and sharing of lessons around the project cycle, from participants to donors and among M&E experts, is vital, especially bringing those lessons back to new projects. The six competencies, the technical checklists, and evaluative thinking about sustainability can help shift programming toward locally led and sustainable development.

Introduction

 

This paper explores a range of values and capacities needed to support the sustainability of foreign aid development projects. It draws on 12 years of Valuing Voices research.[1] This initiative, aimed at increasing sustainable solutions for excellent impact through learning from ex-post project sustainability evaluations, also focuses on how evaluators can promote the design, monitoring, and evaluation of sustainability pre-closure and draw on germane evaluator competencies. The paper then explores a range of evaluators’ views on the values we bring as monitoring and evaluation experts, as well as the competencies needed to design, implement, monitor, and evaluate for long-term sustainability.

 

Both implicit and explicit values that donors, implementers, and M&E commissioners bring to global development work influence how that work is done. Evaluators need to be aware of, and promote, the explicit and implicit values that drive M&E work, and to build evaluation capacity that manifests those values in ascertaining which project results are sustained, by whom, for how long, and why.

 

Sustainability, i.e., the long-term durability of project results, does not happen by itself; it needs to be fostered during the project, but more needs to be known about the conditions required for sustainability to take root after project closure and exit. Valuing Voices’ founder, consultants, and clients believe that evaluating sustainability cannot be limited to desk studies, and that eliciting the views of country-based former project participants and partners is key. Based on the lessons from 10 such ex-post sustainability and exit evaluations done by Valuing Voices and over 90 other studies that include participant responses from a variety of donors and implementers,[2] plus seminal studies of exit strategies from Lewis (2016) and CDA (2020), we found nine elements that need to be monitored and evaluated from project design to the ex-post years after closure. Development practitioners, including evaluators, need to build their knowledge about what has been sustained in ex-post evaluations and have this inform how they advocate to include these nine elements in project design, implementation, monitoring, and evaluation. This requires equal participation by national partners and participants throughout, both to foster long-term results and to allow new pathways to emerge.

 

The nine elements are presented below in the form of checklists, which function as evaluator capacity tools. By identifying what elements are needed to foster sustainability in programming, evaluators can inform clients and employers of what needs to be designed, implemented, monitored, and evaluated. The checklists cover two kinds of sustainability drivers: (a) commitments to sustainability, which include designing beyond the project lifetime through a theory of sustainability, thinking about how to foster sustainability through the process of exit/handover, and considering risks and resilience; and (b) building conditions within the project itself to foster lasting sustainability. The latter involves looking beyond resources as the only driver of durability to seeing what makes local ownership of results robust. It includes considering several questions: How should equitable partnerships be fostered for long-term results? What capacities exist to keep disseminating behavior change? How adaptive are the timeframe and exit in fostering sustainability? How accountable are projects in their communications to partners as they exit?

 

One of the greatest shocks that threatens the sustainability of most global development aid investments is climate change, which is why the natural world and access to viable nature are part of both risks and resilience to shocks. It is discussed separately, given the urgency with which we need to monitor and evaluate its progression and effect on sustainability. Some evaluator competency-building resources that help to evaluate the natural world have been added (e.g., Brouselle, 2022; Rowe, 2019). This is because nature is assumed and often overlooked in much global development programming design and evaluation, as seen in the review of several hundred ex-posts, exit reports, webinars, and evaluations, including blog posts about sustainable development by Cekan (2020a; 2020b), and underscored by Rowe (2019). The natural world and its environmental sustainability are a missing link, while the oft-stated but rarely evaluated “resilience” is often unproven (except for new ex-post research by the Adaptation Fund, 2022). A viable natural world continuing to support lives and livelihoods underpins sustainability across so much of global foreign aid and urgently needs inclusion in all evaluations.

 

Defining Evaluation, Its Values, and Sustainability

 

Michael Scriven defined evaluation this way: “Evaluation determines the merit, worth, or value of things” (Scriven, 1991, as cited in Coffman, 2004, p. 1). “Valuation” (measurement, estimation of worth) is embedded in our work as evaluators. Increasingly, the field of evaluation is discussing the values that underpin the work of evaluators. Thomas Archibald notes in a book review, “Schwandt, House, and Scriven—call into question the dubious ‘value-free doctrine’ of the social sciences… [and] emphasize[s] the obvious yet frequently ignored primacy of values and valuing in evaluation” (2016, p. 448). Evaluation, from the perspective of Michael Scriven, is filled with values:

 

If evaluators cling to a values-free philosophy, then the inevitable and necessary application of values in evaluation research can only be done indirectly, by incorporating the values of other persons who might be connected with the programs, such as program administrators, program users, or other stakeholders. (Encyclopedia.com, 2018, para. 26)

 

This opens a door for participatory input from those most closely connected to projects: the partners and the participants.

 

Michael Quinn Patton highlights tensions between evaluations that seek independent definitive judgments and those that honor diverse perspectives. He values work done via participatory co-creation by activist, interventionist, change-committed evaluators, where the evaluation itself engages in change. This paper explicitly encourages those involved in monitoring and evaluation to work through participatory co-creation, because sustainability can only be maintained if it is locally driven. Evaluation also needs change-committed evaluators who embrace long-term sustainability.

 

The Development Assistance Committee of the Organisation for Economic Co-operation and Development (OECD/DAC) defines sustainability as the basis for ex-post project evaluation. Their definition includes that same reference to long-term sustainability, and its evaluation is part of the change needed in our field, namely a focus on longitudinal results: “the continuation of benefits from a development intervention after major development assistance has been completed…. [and] [t]he probability of continued long-term benefits. The resilience to risk of the net benefit flows over time” (2002, p. 37). In OECD/DAC’s updated and detailed definition, evaluators are directed to consider sustainability

 

at each point of the results chain and the project cycle of an intervention. Evaluators should also reflect on sustainability in relation to resilience and adaptation in dynamic and complex environments. This includes the sustainability of inputs (financial or otherwise) after the end of the intervention and the sustainability of impacts in the broader context of the intervention. For example, an evaluation could assess whether an intervention considered partner capacities and built ownership at the beginning of the implementation period as well as whether there was willingness and capacity to sustain financing at the end of the intervention. In general, evaluators can examine the conditions for sustainability that were or were not created in the design of the intervention and by the intervention activities and whether there was adaptation where required…. If the evaluation is taking place ex post, the evaluator can also examine whether the planned exit strategy was properly implemented to ensure the continuation of positive effects as intended. (2019 Sustainability, para. 3, 6).

 

These key elements, especially the “conditions for sustainability,” inform the checklists in this paper.

 

The OECD also differentiates between durability and ecological sustainability, with the latter relegated to the following:

 

Confusion can arise between sustainability in the sense of the continuation of results, and environmental sustainability or the use of resources for future generations…. environmental sustainability is a concern (and may be examined under several criteria, including relevance, coherence, impact, and sustainability). (2019, Sustainability, para. 2)

 

Yet sustainability rests on our valuing the environment and planning for risks and resilience to shocks (see Figure 8). As evaluators, we need to push donors and implementers to examine the resilience of the natural systems on which supposedly unrelated sectors rely. For instance, the environment affects sectors such as income generation (e.g., natural products being processed by people generating income) and education (e.g., the gardens that subsidize teacher salaries, or the rain-fed farming that enables parents to afford school fees). In “Planting Seeds for Change,” evaluator Brouselle (2022) reminds us of the primacy of climate values in Evaluation’s COP26 compendium:

 

We must challenge the ways that evaluations are commissioned; how policies and programmes are framed—to take risks, going beyond existing evaluation mandates, to improve equity, health and prosperity; reduce pollution; take care of our air, waters and lands; and protect biodiversity… we should use our facilitating skills to foster democracy and engagement. Evaluators can contribute to creating spaces for dialogue and debate with commissioners, participants, and stakeholders, on the socio-ecological impacts of projects, programmes and policies. (para. 4)

 

Linking Competencies and Capacities to Sustainability via Valuing Voices Sustained Exit Checklists

Six types of evaluator competencies are relevant when planning for sustainability during design and implementation or when conducting an ex-post sustainability evaluation.

 

Evaluation as a field needs to embrace a variety of such competencies as we seek to address a range of complex problems. The first three competencies come from the United Nations Educational, Scientific and Cultural Organization (UNESCO), from a 2017 report called “Education for Sustainable Development Goals: Learning Objectives,” which informs the macro view for sustainability and locally led development.

 

Systems Thinking Competency

UNESCO (2017) defines this competency as “the abilities to recognize and understand relationships; to analyse complex systems; to think of how systems are embedded within different domains and different scales; and to deal with uncertainty” (p. 10). This is key, as interventions interact with complicated societies that often have wider aims than what any one project wants to achieve. Uncertainty affects projects in implementation, which is why adaptive management is a checklist item (see Figure 7). Further, because ex-posts are about contribution rather than direct attribution, given the complexity of communities, it is vital to look at a range of outside influences post–project closure that could explain the results (not) seen.

 

Collaboration Competency

 

This competency is pivotal in designing, implementing, monitoring, and evaluating sustainability. It comprises “the abilities to learn from others; to understand and respect the needs, perspectives and actions of others… and to facilitate collaborative and participatory problem solving” (UNESCO, 2017, p. 10). Listening to those who will be tasked with sustaining results or innovating emerging outcomes requires close collaboration, as does using participatory methods to design with them and to troubleshoot and problem-solve alongside them.

 

Anticipatory Competency

 

Anticipatory competency is “the ability to understand and evaluate multiple futures—possible, probable and desirable—and to create one’s own visions for the future, to apply the precautionary principle, to assess the consequences of actions, and to deal with risks and changes” (UNESCO, 2017, p. 10). This competency is key to sustainability as a field of study. Often projects assume sustainability will be the long-term result of development efforts. But, as Rogers and Coates (2015) note,

 

Hope is not a strategy. Sustainability plans that depend on the expectation, or hope, that individuals and organizations will continue to function without the key factors previously identified are not likely to achieve this goal. Such plans should take account of what is feasible within the economic, political, and social/cultural context of the areas in which they work. (p. 44)

 

This also relates to two other competencies, systems thinking (discussed above) and situational practice (discussed below).

 

The Canadian Evaluation Society (CES; 2018) provides the remaining three competency domains relevant to sustainability, which concern how the M&E itself is done.

 

Reflective Practice Competencies

CES’s Reflective Practice domain includes competencies that “focus on the evaluator’s knowledge of evaluation theory and practice; application of evaluation standards, guidelines, and ethics; and awareness of self, including reflection on one’s practice and the need for continuous learning and professional growth” (2018, p. 5). This competency applies to the content of the sustainability methods presented below, as well as the knowledge evaluators will gain from evaluating prospects for sustainability and emerging outcomes (Figure 1) in projects. Additionally, this competency domain includes both considering “the well-being of human and natural systems in evaluation practice” and being “committed to transparency” (p. 6), which is the aim of using the checklists as a whole sustainability learning process. It is important in such reflection to clarify one’s values.

 

Technical Practice Competencies

These competencies focus on the “strategic, methodological, and interpretive decisions required to conduct an evaluation” (CES, 2018, p. 5), which directly applies to the five sustained exit commitments and conditions (see Figure 3). One competency, “assesses program evaluability,” is germane to ex-post evaluation and prospects for long-term sustainability. Cekan and Legro (2022) have applied the elements in the nine checklists that comprise the Embedding Sustainability in the Project Cycle framework to a World Bank sustainability study, and Cekan has used it in ex-post evaluations, such as a recent one for youth employment (USAID Mali, 2022). It has also informed the training materials created for the Adaptation Fund (2023) on how to evaluate sustainability and resilience ex-post.

 

Situational Practice Competencies

As so few projects are “cookie-cutter” versions of each other, it is always vital to situate each project and its prospects for sustainability in its unique context, applying CES’s third competency domain, Situational Practice: “Focus on understanding, analyzing, and attending to the many circumstances that make every evaluation unique, including culture, stakeholders, and context” (CES, 2018, p. 6). This means identifying how specifically the project has moved around the project cycle (see Figure 2), particularly monitoring “organizational changes and changes in the program environment during the course of the evaluation” (p. 7), tracing changes that lead to likely sustainability post-project, and building evaluation capacity by “engag[ing] in reciprocal processes in which evaluation knowledge and expertise are shared between the evaluator and stakeholders” (p. 7) throughout both the analysis and the sharing of the learning results.

 

The competencies that M&E professionals need can be used when monitoring and evaluating prospects for sustainability during project implementation as well as during ex-post evaluations. Sustainability prospects increase when they are designed and planned for, as Zivetz et al. (2017) found in researching ex-posts. There are clear advantages to planning for sustainability measurement from the outset of the project, as well as to measuring sustainability through the entire project cycle. Donors, implementers, and experts in monitoring and evaluation, as well as national partners, need to be trained in these competencies.

Evaluating Sustainability in Practice

Aid experts, including evaluators, embed values in their work in myriad ways, starting with how projects are funded and designed, and by whom; one result is that much M&E emphasis is on final rather than ex-post evaluations and learning from them. Over $3.5 trillion has been spent on public foreign aid projects in the past 70 years (OECD, 2019). Yet the aid industry has evaluated fewer than 1% of these projects for sustainability (Cekan, 2015). Valuing Voices’ research on 39 organizations’ ex-post evaluations of sustainability shows that most project results decrease (by 10–90%) as early as 2 years ex-post (Valuing Voices, 2012).

 

Except for the Japan International Cooperation Agency (JICA), which has done over 2,500 ex-post evaluations of its grants, loans, and technical assistance, learning from what lasts is rare among international aid donors and implementers. An Asian Development Bank study (2010) of post-completion sustainability found that “some early evidence suggests that as many as 40% of all new activities are not sustained beyond the first few years after disbursement of external funding” (p. 1). The World Bank and the Inter-American Development Bank, both multilateral banks, show less stellar investments in ex-post learning (Lopez, 2015; Cekan, 2022). Ex-post evaluations are rare, as is illustrated by a Sustainable Governance Indicators overview of EU member state policy evaluations, with most countries using them rarely or not at all (Sustainable Governance Indicators, n.d.).

 

Often in the ex-post evaluation of sustained impact, we see some results fade as early as 2 years ex-post. It is key to prioritize learning from what was sustained by asking our project participants and local/national partners directly, during implementation, about sustainability prospects. Without such field inquiry, there is no time to test assumptions about the drivers and barriers under which the project is being implemented, nor to test whether optimistic trajectories will hold post-closure, as is widely assumed in the global development industry. For as Sridharan and Nakaima (2010) write:

 

There is no reason for the trajectory of performance outcomes to be linear or monotonic over time—this has important implications for an evaluation system… [and] should programs that do not have a ‘successful’ trajectory of ‘performance measures’ be terminated? (p. 144)

 

To make sustainability more likely, it is key to design, implement, monitor, and evaluate for it, which makes successful trajectories more probable. While widespread ex-post learning would be the most effective approach, we can still manifest our values of pro-sustainable development by extracting lessons from the ex-post evaluations and exit studies that have been done. This is the aim of the rest of this article.

 

Most ex-posts have found mixed results, with some activities being sustained and others not. Often, what was relevant and locally owned was sustained, whereas activities that relied on donor incentives such as food aid failed to continue (Catholic Relief Services [CRS], 2016). A 2020 Jones and Jordan ex-post study of USAID Global Waters projects found that while 25 million people have gained access to water and sanitation,

 

despite tremendous achievements within the life of our programs, they have largely not endured… Rural water systems that, at activity close, delivered safe water to households have fallen into disrepair. Basic latrine ownership and use have dwindled. Communities certified as open-defecation free are backsliding, and gains in handwashing have not been sustained. [Nonetheless,] where USAID invested in providing technical assistance to committed government partners and utilities, gains in service provision and local capacity were sustained, with local actors taking up and expanding upon best practices introduced during activity implementation. (para. 3, 4)

 

This again supports designing and implementing for sustainability during the project, which is the aim of this paper. But such reviews are rare among donors.

 

The dearth of ex-post evaluations suggests that most global development evaluations currently being conducted are not value neutral. Commissioners seem to value short-term results rather than showing and learning from sustained impacts. Further, donors and implementers design and fund aid projects and their evaluations. Country nationals need to be engaged throughout the project cycle (Figure 2), for they will be left to sustain results. As Scriven stated in discussions with Donaldson, Patton and Fetterman (2010),

 

I want to hear, not just about intended use or users of the evaluation. I want to find out about impact on intended and actual impactees—the targeted and accidental recipients of the program, not just the people that get the evaluation. So I consider my task as an evaluator to find out who it is that this program is aimed at reaching and helping. (p. 23)

 

Emerging Outcomes

 

Typical ex-post evaluations focus on what lasted from what donors funded. Few evaluations return ex-post to also ask the front-line users, project participants, and partners what lasted of the prior project, and what emerged from their local efforts to sustain results with fewer or different resources, partnerships, etc. This glaring omission speaks to a lack of valuing sustained results, much less learning from local capacities to sustain results differently. Thus, an innovation by Valuing Voices in evaluating sustainability, whether ex-post or while monitoring sustainability, is the search for emerging outcomes, namely what emerges from local efforts to sustain results, rather than focusing only on whether expected donor-designed pathways still exist.

 

The example in Figure 1 comes from 2023 Adaptation Fund training materials on ex-post evaluation; it draws on a three-year World Food Programme ex-post evaluation of sustainability and resilience in Ecuador. The expected change was that improving the water supply for crops would lead to improved food security. While that was happening to some degree, other outcome pathways were happening as well. In some areas, more water was used to improve cultivation methods, which led to an emerging outcome of children returning home to their rural villages to help their parents, sustaining food security and decreasing family vulnerability. Elsewhere, maladaptive pathways also emerged: a landslide eliminated the stable water reservoir source in one site, leading farmers to revert to drawing water from a river via pump systems, which likely decreased the water available to the community.

Figure 1. Expected, Emerging, and Unexpected Outcomes Ex-Post


Note. From Training Material for Ex Post Pilots, by Adaptation Fund, 2023 (https://www.adaptation-fund.org/document/training-material-for-ex-post-pilots/).

 

Unless we look at both what was expected to be sustained and what local communities had to innovate to maintain results, the picture is incomplete. Both can be traced during implementation and at ex-post evaluation.

 

Sustainability Around the Project Cycle

We need to build sustainability in from the outset, from funding and design to implementation, while looking out for alternative paths that locals create (see the orange slices in Figure 2). When local stakeholders are involved throughout the project cycle (green slices in Figure 2), results are more likely to be sustained, for the programming is done with the country nationals who will sustain results after donors leave. Assumptions need to be checked, adaptation to foster durability needs to be monitored and evaluated, and exit needs to include consultations on ownership, resources, partnerships, adaptation, resilience, and communications, much of which can be traced in a theory of sustainability.

 

Figure 2. Embedding Sustainability in the Project Cycle


Note. From “What Happens After the Project Ends?”, by J. Cekan, 2016 (https://valuingvoices.com/what-happens-after-the-project-ends-country-national-ownership-lessons-from-post-project-sustained-impact-evaluations-part-2/ ).

 

Ex-post evaluation is an important missing link between project exit and participants and partners leading sustainability; this paper therefore focuses on lessons learned from the 90+ ex-posts reviewed. Lessons come from projects such as those below. Roughly 80% of the CRS Niger PROSAN food security project was sustained 3 years ex-post. It was implemented for sustainability by taking the final 18 months to exit, rather than 3 to 6 months. National partners were co-implementers pre–project closure. The UK charity EveryChild similarly worked with INTRAC (Lewis, 2016; Morris et al., 2021) to evaluate sustainability during exit. They did so in four countries 5 years ex-post, learning similar lessons about phasing down and phasing over before exiting sustainably.

Were national stakeholders to partner equally, these local “targeted recipients,” as Scriven calls them, could require that projects not close until further funding was secured, as EveryChild UK did. Donors, implementers, and evaluators need to listen to what locals want and can sustain. All of us who value sustainable development need to design M&E to incorporate sustainability. Exemplary studies include an ex-post tracing national primary teacher training (USAID Uganda, 2017) and a final evaluation projecting sustainability prospects pre-exit among migrants and NGOs in Bangladesh (Hasan, 2021).

Thus, the checklists below help foster sustainability through M&E that involves questioning assumptions that donors, implementers, partners, and participants hold about the sustainability of results. It means building capacities to monitor and evaluate conditions for sustainable impact that are embedded in a traceable, relevant way as projects are implemented. It means documenting and learning from data throughout implementation, planning sustained exit beyond the final evaluation, and retaining data to be evaluated ex-post. This involves building understanding and capacities for ex-post evaluation and for project planning (funding, design, implementation, and M&E) that fosters it. It includes national stakeholders and evaluators, who have a greater stake in their countries and who can help foreign stakeholders focus on learning what excelled or failed and how to use it for future projects in-country.

 

Validation

 

Several sources of expertise inform and validate the checklists (see Figures 4 to 8). In their 2015 analysis of exit strategies and sustainability in four USAID Food for Peace countries, Rogers and Coates highlighted monitoring and evaluating the presence of four “drivers” of sustainability. These drivers create conditions that can be used both to evaluate sustainability ex-post and as indicators of how likely sustainability is, if such drivers were put in place during implementation pre-exit. Rogers and Coates’ drivers are (a) sustained motivation/ownership by national stakeholders to sustain a project’s activities (if activities are yielding relevant results, they are far more likely to be sustained); (b) a sustained flow of resources from national or international sources; (c) sustained technical and managerial capacities passed on to new participants; and (d) linkages/partnerships with governmental, private, or other organizations for an array of support. Negi and Sohn (2022) confirmed the presence across Global Environment Facility (GEF) projects of these drivers, created by Rogers and Coates and applied by Cekan and Legro (2022). Negi and Sohn’s review of 62 projects also confirmed that project design, a key sustainability driver, feeds into OECD’s (2019) Relevance criterion, as well as into Figure 4. Similarly, USAID Uganda (2017) found the same four drivers to be operational in sustainability.

These elements of sustainability draw on ex-post research by Cekan and key studies about participatory implementation and exit. One is Anderson, Brown, and Jean’s (2012) report Time to Listen. They interviewed 6,000 recipients and implementers of international aid across 20 countries, from inside and outside the aid system. Their study focuses on unearthing stories “on the ways that people on the receiving side of aid suggest it can become more effective and accountable” (p. i). A second source was CDA’s (2020) case study research, led by Jean and a consortium of non-governmental organizations (NGOs) and focused on improving exit. This work, Stopping as Success, highlighted that a gradual exit process contributes to sustainability. It informs one of the commitments mentioned in Figure 3, namely phasing down over time during implementation and phasing over to national partners before exiting. These studies underscore that global development should be informed by local conditions and country nationals. Local participation is important while checking on sustainability prospects, as is getting local feedback on how well exit is going pre-closure. The checklists below also draw on seminal research by Lewis for INTRAC (2016), based on extensive work on exit among NGOs.

Sustained Exit Commitments and Conditions Checklists

 

Figure 3. Valuing Voices Sustained Exit Commitments and Conditions Checklists


Note. From “Exit for Sustainability Checklists,” by Valuing Voices, 2020 (https://valuingvoices.com/wp-content/uploads/2021/03/Exit-For-Sustainability-Checklists-Dec2020-2.pdf).

 

Now, let’s return to reflect on how the evaluator competencies articulated by UNESCO and CES fit into the Figure 3 commitments and conditions. Systems thinking competency leads us to consider what a theory of sustainability could consist of, and how to plan for it, given the complex ecosystems any project is embedded in. Collaborative and anticipatory competencies are brought into play when handing over projects during implementation, pre-exit. This is especially relevant to partnerships seeking to best face unknown future risks to sustainability and to foster resilience to shocks pre-closure. Taking these commitments to heart predisposes projects to continuation. Another competency, reflective practice, needs to be used to discern which conditions of sustainability are driving change. Further, technical and situational practice are used in the field, examining whether and to what degree sustainability is driven by these six conditions. While four of the six conditions driving sustainability (ownership, resources, capacities, and partnerships) come from the Rogers and Coates study, two additional conditions have been found to be important in the exit literature: how well timeframes pre-exit can be shifted to enable sustainability, and how clear and accountable the communication is between those closing out and those being left behind before closure. Consider using the nine checklists listed in Figures 4 through 8 along a scale of high–medium–low and revisiting them periodically to gauge change.
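As a minimal, hypothetical sketch of how such periodic high/medium/low ratings could be recorded and compared over time (the checklist name, review points, and ratings below are illustrative assumptions, not part of the Valuing Voices checklists themselves), one could track them as follows:

```python
from dataclasses import dataclass, field

# Hypothetical numeric mapping for the high/medium/low scale.
SCALE = {"high": 3, "medium": 2, "low": 1}

@dataclass
class ChecklistTracker:
    """Records periodic high/medium/low ratings per checklist to gauge change."""
    ratings: dict = field(default_factory=dict)  # {checklist: [(review_point, rating), ...]}

    def rate(self, checklist: str, review_point: str, rating: str) -> None:
        if rating not in SCALE:
            raise ValueError(f"rating must be one of {sorted(SCALE)}")
        self.ratings.setdefault(checklist, []).append((review_point, rating))

    def trend(self, checklist: str) -> str:
        """Compare the first and most recent rating for one checklist."""
        history = self.ratings.get(checklist, [])
        if len(history) < 2:
            return "insufficient data"
        first, latest = SCALE[history[0][1]], SCALE[history[-1][1]]
        return "improving" if latest > first else "declining" if latest < first else "stable"

# Illustrative use with assumed checklist name, review points, and ratings:
tracker = ChecklistTracker()
tracker.rate("ownership/motivation", "baseline", "low")
tracker.rate("ownership/motivation", "midterm", "medium")
print(tracker.trend("ownership/motivation"))  # -> improving
```

Any real tracking would, of course, follow the wording and rating guidance of the checklists in Figures 4 through 8 rather than this simplified structure.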

Revising a theory of change into a theory of sustainability (Figure 4) is helpful to chart stakeholders, assumptions, trajectories, key questions, and whom to ask.

 

Figure 4. Sustainability Ex-Post Project: Theory of Sustainability


Ask all stakeholders involved long before exit about how much they feel they “own” the project’s continuation and the resources needed. There is a wide range of resources to be explored and questions to ask about how much the interventions are generating local results that are valued (see Figure 5).

 

Figure 5. Designing for Exit: Ownership/Motivation and Resources


The questions in Figure 6 can be used during baseline and midterm evaluations. Some questions can also be selected, as part of ongoing monitoring, from the lists of resources and ownership (above) and capacity strengthening and partnerships. With such data, evaluating sustainability during ex-post evaluations is much easier.

 

Figure 6. Checking Assumptions: Capacity Strengthening and Partnerships


Two of the elements that tell the most about the extent to which project implementation fosters sustainability are the amount of planning that has gone into project exit and handover and the adaptation of timeframes to readiness for exit (see Figure 7).

 

Figure 7. Monitoring and Adaptation: Exit/Handover, Timeframe, and Adaptation of Implementation


Finally, long-term sustained and responsible exit fostering local ownership is based on planning for the immediate term (communications about who leaves and who knows why the project is closing, how respectfully this is done, and how much local partners are involved). As shown in the two checklists in Figure 8, it is also vital to examine how well consideration of present and future risks and resilience to shocks has been embedded in programming.

 

Figure 8. Exit Consultations and Close: Risks/Resilience and Accountable Communications


Conclusions

 

In addition to infusing sustainability into the project cycle during implementation, it is important to live one’s values and use evaluator capacities as guiding lights for one’s work. What also matters is monitoring and evaluating sustained ownership and the other hallmarks of sustainability within the checklists during programming and at ex-post evaluation. Further, it is important to look for the capacities that remain after projects close (emerging outcomes) and to learn from ex-post evaluations to inform current programming, facilitating sustainability while there are sufficient resources, partnerships, capacities, and other conditions. Also important is fostering what national and local stakeholders want to sustain through their commitments and conditions. Six competencies equipping monitoring and evaluation experts to do this well have been outlined above, namely systems thinking competency, collaboration competency, anticipatory competency, and reflective, technical, and situational practice competencies. Such “evaluative thinking” lenses can and should be used, as Archibald (2021) calls for, to build “ethical accountability” in locally led development. Values-driven sustainability can be a powerful driving force to improve public accountability and good governance. Equipped with such skills, evaluators simultaneously bolster evaluation systems and capacities among national evaluators and program implementers alike. For equitable, values-driven accountability for sustainability to happen, power needs to shift to people at national and local levels to determine what resources, partnerships, and capacities are needed and what is a priority for them to take ownership of. We can begin as soon as possible by building the most likely conditions for sustainability, and commitments to foster sustainable exit, into the project cycle. We have no time to lose; embracing such values-driven sustainability would be of great benefit.

 

 

References

 

Adaptation Fund. (2022). Training material for ex post evaluations. https://www.adaptation-fund.org/about/evaluation/publications/evaluations-and-studies/ex-post-evaluations/

Adaptation Fund. (2023). Training materials for ex post pilots. https://www.adaptation-fund.org/document/training-material-for-ex-post-pilots/

Anderson, M. B., Brown, D., & Jean, I. (2012, December 1). Time to listen: Hearing people on the receiving end of international aid. CDA Collaborative. https://www.cdacollaborative.org/publication/time-to-listen-hearing-people-on-the-receiving-end-of-international-aid/

Archibald, T. (2016). Evaluation foundations revisited: Cultivating a life of the mind for practice [Review of the book Evaluation foundations revisited: Cultivating a life of the mind for practice, by T. A. Schwandt]. American Journal of Evaluation, 37(3), 448–452. https://doi.org/10.1177/1098214016648794

Archibald, T. (2021, February 18). Critical and evaluative thinking skills for transformative evaluation. Eval4Action. https://www.eval4action.org/post/critical-and-evaluative-thinking-skills-for-transformative-evaluation

Asian Development Bank. (2010, October). Special evaluation study on post-completion sustainability of Asian Development Bank-assisted projects. Organisation for Economic Co-operation and Development. https://www.oecd.org/derec/adb/47186868.pdf

Brouselle, A. (2022). Planting seeds for change. Evaluation, 28(1), 7–35. https://journals.sagepub.com/doi/full/10.1177/13563890221074173

Canadian Evaluation Society. (2018, November). Competencies for Canadian evaluation practice. Evaluation Canada. https://evaluationcanada.ca/files/pdf/2_competencies_cdn_evaluation_practice_2018.pdf

Catholic Relief Services. (2016, October 7). Participation by all: The keys to sustainability of a CRS food security project in Niger. https://www.crs.org/our-work-overseas/research-publications/participation-all

CDA. (2020). Stopping as success: Research findings case studies. CDA, Peace Direct, Search for Common Ground. https://www.stoppingassuccess.org/resources/

Cekan, J. (2015). When funders move on. Stanford Social Innovation Review. https://ssir.org/articles/entry/when_funders_move_on

Cekan, J. (2016, February 19). What happens after the project ends? Country-national ownership lessons from post-project sustained impacts evaluations (Part 2). Valuing Voices. https://valuingvoices.com/what-happens-after-the-project-ends-country-national-ownership-lessons-from-post-project-sustained-impact-evaluations-part-2/

Cekan, J. (2020a, April 20). Sustaining sustainable development. Valuing Voices. https://valuingvoices.com/sustaining-sustainable-development/

Cekan, J. (2020b, October 28). Sustained exit? Prove it or improve it! [Webinar]. Valuing Voices. https://valuingvoices.com/interactive-webinar-sustained-exit-prove-it-or-improve-it-nov-6-2020/

Cekan, J. (2022, November 26). Inter-American Development Bank (IDB) – Where have your ex-post evaluations, and learning from them, gone? Valuing Voices. https://valuingvoices.com/inter-american-development-bank-idb-where-have-your-ex-post-evaluations-and-learning-from-them-gone/

Cekan, J., & Legro, S. (2022). Can we assume sustained impact? Verifying the sustainability of climate change mitigation results. In J. I. Uitto & G. Batra (Eds.), Transformational change for people and the planet: Evaluating environment and development. Springer. https://link.springer.com/book/10.1007/978-3-030-78853-7

Coffman, J. (2004). Michael Scriven on the differences between evaluation and social science research. Evaluation Exchange, 9(4). https://archive.globalfrp.org/evaluation/the-evaluation-exchange/issue-archive/reflecting-on-the-past-and-future-of-evaluation/michael-scriven-on-the-differences-between-evaluation-and-social-science-research

Donaldson, S., Patton, M., Fetterman D., & Scriven, M. (2010). The 2009 Claremont debates: The promise and pitfalls of utilization-focused and empowerment evaluation. Journal of MultiDisciplinary Evaluation, 6(13). https://www.researchgate.net/publication/41391464_The_2009_Claremont_Debates_The_Promise_and_Pitfalls_of_Utilization-Focused_and_Empowerment_Evaluation

Encyclopedia.com. (2018, May 17). Evaluation research: Brief history. https://www.encyclopedia.com/social-sciences-and-law/sociology-and-social-reform/sociology-general-terms-and-concepts/evaluation-research

Hasan, A. A. (2021, January 17). Ex-post eval week: Are we serious about project sustainability and exit? American Evaluation Association AEA365. https://aea365.org/blog/ex-post-eval-week-are-we-serious-about-project-sustainability-and-exit-by-abu-ala-hasan/

Japan International Cooperation Agency. (n.d.). Ex-post evaluation (technical cooperation). https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/ex_post/index.html

Jones, A., & Jordan, E. (2020, October 19). Unpacking the drivers of WASH sustainability. USAID Global Waters. https://www.globalwaters.org/resources/blogs/unpacking-drivers-wash-sustainability

Lewis, S. (2016, January). Developing a timeline for exit strategies: Experiences from an Action Learning Set with the British Red Cross, EveryChild, Oxfam GB, Sightsavers and WWF-UK. INTRAC. https://www.intrac.org/wpcms/wp-content/uploads/2016/09/INTRAC-Praxis-Paper-31_Developing-a-timeline-for-exit-strategies.-Sarah-Lewis.pdf

Lopez, K. (2015, April 7). IEG blog series part II: Theory vs. practice at the World Bank. Valuing Voices. https://valuingvoices.com/ieg-blog-series-part-ii-theory-vs-practice-at-the-world-bank/

Morris, L., George, B., Gondwe, C., James, R., Mauney, R., & Tamang, D. D. (2021, June). Is there lasting change, five years after EveryChild’s exit? Lessons in designing programmes for lasting impact. INTRAC. https://www.intrac.org/wpcms/wp-content/uploads/2021/07/Praxis-Paper-13_EveryChild-exit.pdf

Negi, N. K., & Sohn, M. W. (2022). Sustainability after project completion: Evidence from the GEF. In J. I. Uitto & G. Batra (Eds.), Transformational change for people and the planet: Evaluating environment and development. Springer. https://doi.org/10.1007/978-3-030-78853-7_4

OECD. (2019). Applying evaluation criteria thoughtfully. [Chapter: Understanding the six criteria: Definitions, elements for analysis and key challenges]. OECD Publishing. https://www.oecd-ilibrary.org/sites/543e84ed-en/1/3/4/index.html?itemId=/content/publication/543e84ed-en&_csp_=535d2f2a848b7727d35502d7f36e4885&itemIGO=oecd&itemContentType=book#section-d1e4964

OECD/DAC. (2002). Evaluation and Aid Effectiveness No. 6 – Glossary of Key Terms in Evaluation and Results Based Management (in English, French and Spanish). https://read.oecd-ilibrary.org/development/evaluation-and-aid-effectiveness-no-6-glossary-of-key-terms-in-evaluation-and-results-based-management-in-english-french-and-spanish_9789264034921-en-fr#page37

Patton, M. Q. (2022, June 13). Why so many evaluation approaches: The short story version [Video]. YouTube. https://www.youtube.com/watch?v=6xY9jMUUorM.

Rogers, B. L., & Coates, J. (2015, December). Sustaining development: A synthesis of results from a four-country study of sustainability and exit strategies among development food assistance projects. Food and Nutrition Technical Assistance III Project (FANTA III) for USAID. https://pdf.usaid.gov/pdf_docs/PA00M1SX.pdf

Rowe, A. (2019). Sustainability-ready evaluation: A call to action. New Directions for Evaluation, 162. https://onlinelibrary.wiley.com/doi/abs/10.1002/ev.20365

Sustainable Governance Indicators. (n.d.). Evidence-based instruments. https://www.sgi-network.org/2020/Good_Governance/Executive_Capacity/Evidence-based_Instruments/Quality_of_Ex_Post_Evaluation

Sridharan, S., & Nakaima, A. (2010). Ten steps to making evaluation matter. Evaluation and Program Planning, 34(2), 135–146. https://doi.org/10.1016/j.evalprogplan.2010.09.003

UNESCO. (2017). Education for sustainable development goals: Learning objectives. https://unesdoc.unesco.org/ark:/48223/pf0000247444

USAID Mali. (2022, December). Ex-post evaluation of the USAID/Mali Out Of School Youth Project (PAJE-NIETA): Final evaluation report. https://pdf.usaid.gov/pdf_docs/PA00ZTBJ.pdf

USAID Uganda. (2017, October 11). Uganda case study summary report: Evaluation of sustained outcomes. https://pdf.usaid.gov/pdf_docs/PBAAJ314.pdf

Valuing Voices. (2020, December). Exit for sustainability checklists. https://valuingvoices.com/wp-content/uploads/2021/03/Exit-For-Sustainability-Checklists-Dec2020-2.pdf

Valuing Voices. (2012). Catalysts for ex-post learning. https://valuingvoices.com/catalysts-2/

Zivetz, L., Cekan J., & Robbins, K. (2017, May). Building the evidence base for post-project evaluation: Case study review and evaluability checklists. Valuing Voices. https://valuingvoices.com/wp-content/uploads/2013/11/The-case-for-post-project-evaluation-Valuing-Voices-Final-2017.pdf

[1] https://valuingvoices.com/

[2] https://valuingvoices.com/catalysts-2/

This chapter in published form can be accessed at: https://www.researchgate.net/publication/376497201_Fostering_Values-Driven_Sustainability_Through_an_Ex-Post_Capacities_Lens

Evaluating global aid projects ex-post closure: Evaluability… and how-to via Sustainability (and Resilience) Evaluation training materials


How do we evaluate projects ex-post, and are all projects evaluable after donors leave? How do we learn from project documents to ascertain likely markers of sustainability (hint: see the materials for the Theory of Sustainability and Sustainability Ratings) that we can verify? How do we design projects to make sustainability more likely (hint: implement pre-exit the drivers in the second image, below)? How do we evaluate resilience to climate change (hint: evaluate resilience characteristics)?

In 2017, Valuing Voices created an evaluability checklist for Michael Scriven’s foundation, as many projects were not implemented long enough, or were implemented too long ago, or have such weak monitoring and evaluation (M&E) data that they are very difficult to evaluate: https://valuingvoices.com/wp-content/uploads/2017/08/Valuing-Voices-Checklists.pdf. We have used it in multiple evaluations since. Now, in 2023, we are sharing work we have done with the Adaptation Fund, where we applied that evaluability checklist via a vetting process that eliminated over 90% of all projects done in the Fund’s early years, which alone is sobering but typical in our field, and which is leading to changes in how they monitor, retain information, and learn. See page 24 (officially p. 15) of this Phase 1 report: https://www.adaptation-fund.org/wp-content/uploads/2021/10/2021-09-12-AF-TERG_Ex-Post-Phase-1-Final-Report_Final.pd
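As a rough, hypothetical sketch of what such an evaluability vetting step might look like in practice (the real criteria are in the Valuing Voices checklist and the Phase 1 report linked above; the thresholds, field names, and example projects below are assumptions for illustration only):

```python
from dataclasses import dataclass

@dataclass
class Project:
    name: str
    years_implemented: float    # duration of implementation
    years_since_closure: float  # time elapsed since the project closed
    has_usable_me_data: bool    # endline/monitoring data retained and retrievable

def is_evaluable_ex_post(p: Project,
                         min_duration: float = 3.0,   # assumed: implemented long enough
                         max_age: float = 10.0,       # assumed: not closed too long ago
                         min_gap: float = 2.0) -> bool:  # assumed: enough time post-closure
    """Rough evaluability screen: long enough, not too old, with usable M&E data."""
    return (p.years_implemented >= min_duration
            and min_gap <= p.years_since_closure <= max_age
            and p.has_usable_me_data)

# Illustrative portfolio with made-up projects:
portfolio = [
    Project("Project A", 4.0, 3.0, True),
    Project("Project B", 1.5, 2.5, True),    # implemented too briefly
    Project("Project C", 5.0, 12.0, False),  # closed too long ago, weak data
]
evaluable = [p.name for p in portfolio if is_evaluable_ex_post(p)]
print(evaluable)  # -> ['Project A']
```

A screen of this kind only flags which closed projects are worth an ex-post; the substantive evaluability judgments remain those in the checklist itself.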

Evaluability of projects for ex-post at the Adaptation Fund


Ex-post evaluability at the Adaptation Fund

 

The Adaptation Fund has also committed to doing two ex-post sustainability and resilience evaluations per year, and to learning from remote ex-posts as well. The first two ex-post evaluations are found here: Samoa/UNDP on resilient infrastructure (https://www.adaptation-fund.org/wp-content/uploads/2023/01/Ex-Post-Evaluation-Samoa-final-ed-2.pdf) and Ecuador/WFP on food security (https://www.adaptation-fund.org/document/ex-post-evaluation-summary-ecuador/). So many lessons, from assumptions made to future design…

Not only are we evaluating projects ex-post, but we are also creating processes for others to follow, including this new Sustainability Framework. The left three columns project the likelihood of sustainability, which the right three columns verify. It includes the Valuing Voices ‘emerging outcomes’ from local efforts to sustain results after donors leave. SO exciting:

Ex-post Sustainability Framework


Also, just today, we published the training materials that we are using in ex-posts 3 and 4 in Argentina (Ministry of the Environment and the World Bank): https://www.adaptation-fund.org/document/training-material-for-ex-post-pilots/. They include videos for those who like to listen and watch us, as well as PowerPoint slides, PDFs, and Excel files of our training materials (including suggested methods) for those who like to read…

Take a look, use them, and tell us what you think and what you are learning! Thanks, Jindra Cekan (Sustainability) and Meg Spearman (Resilience) and the Adaptation Fund team!

Inter-American Development Bank (IDB) – where have your ex-post evaluations, and learning from them, gone?


A LinkedIn colleague, Gillian Marcelle, Ph.D., recently asked me about ex-posts by the Inter-American Development Bank (IDB), as more Caribbean accelerators/incubators were being planned without learning from previous, identical tech investments. Here is what I found, and if anyone knows more, please contact me, as it is not reassuring. Also, some were internal ‘self-evaluations’, some were desk reviews, and only a few involved going to the field to ask aid recipients about what lasted, which is typical for multilaterals (the ADB and the IBRD do the same). Given Valuing Voices’ focus on participants’ voices in results, there was an attempt to focus on those, but the report did not make it clear which were which, so highlights are presented below.

In October 2003, the IDB created an Ex Post Policy (EPP), which “mandated two new tasks to OVE: the review and validation of Project Completion Reports and the implementation of ex post project evaluations.” These were under the Board’s request for “a commitment to a ‘managing for results’ business model.” The 2004 ex-posts were seen as “the first year of the implementation of the EPP”; per the report, “all 16 evaluations can be considered part of the pilot and the findings presented in this report refer to the entire set of ex post evaluations.” Further, the general evaluative questions proposed by the EPP are, first, “the extent to which the development objectives of IDB-financed projects have been attained” and, second, “the efficiency with which those objectives have been attained.”

They spent over $300,000 unsuccessfully evaluating six of the projects. In part this was due to data quality: “six had an evaluation strategy identified in the approval stage, most had abandoned the strategy during execution prior to project closure and, with one exception which produced data that could be used to calculate a treatment effect, none had produced quality evaluative information…. No [Project Completion Report] PCR provided adequate information regarding the evolution of development outcomes expected from the project or an update with respect to the evaluation identified at the time of approval.” For the other six, they found that the expected results did not match the sustained results: some were better than expected, while more were worse. “A critical finding across all projects is the lack of correspondence between the reflexive estimates and the treatment effect estimates. In practically all cases, the estimates were different.” How sustainably “Improved” are “Lives,” as IDB’s logo touts?

 

 

In 2004, the IDB chose 16 projects to evaluate and dropped four for a variety of reasons. The remaining dozen projects evaluated ex-post covered land development, neighborhood improvement, and cash programming. There were data quality and comparability issues from the onset. On land [tenure] ‘regularization’: “Six of the projects mention ex post evaluations in loan proposals, but none have been completed to date. OVE was successful in retrofitting a subset of outcomes expected for three projects: an attrition rate of 50%.” The neighborhood improvements had positive and negative results, with ‘retrofitting’ needed regarding data. For the four evaluated, the overall conclusion was mixed: the projects led to “greater coverage of certain public services,” and in two cases, “this impact was more pronounced for the poorest segments included in the treated population.” Nonetheless, much more was unachieved. “Beyond this, very little else can be said. The impact on the objectives related to human capital formation and income were not demonstrated. In the case of health interventions, perhaps the intervention type most directly linked to sanitation services; there has been no demonstrated link between the interventions and outcomes, even for the poorest segments of the beneficiary population. There was also no consistent evidence showing an increase in variables related to housing values.”

Regarding cash programming, individual evaluations showed promise, but only after statistical analysis against a control group, something sorely lacking in most foreign aid evaluations. An IDB project in Panama "shows that in some cases the reflexive evaluation, in fact, understated the true program treatment effect. The development outcome of this project was the reduction in poverty. A reflexive evaluation (the gross effects) of the incidence of poverty suggested that not only was the project unsuccessful but that it actually contributed to worsening poverty; the opposite of its intent. However, a treatment effect evaluation (the net effect) that compared "similar municipalities" shows that municipalities benefiting from FIS funds had a significant decline in poverty relative to comparable municipalities that did not receive FIS financing; the project had clear positive development outcomes".
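To make that distinction concrete, here is a minimal sketch in Python, using entirely hypothetical poverty figures (not IDB or FIS data), of how a reflexive (before-after) estimate can mislead while a treatment-effect estimate that uses comparison municipalities does not:

```python
# Hypothetical illustration: reflexive vs. treatment-effect estimates.
# A reflexive estimate only compares treated municipalities before and after;
# a treatment-effect (difference-in-differences) estimate also subtracts the
# change observed in comparable, untreated municipalities.

# Poverty rates (%) before and after the project -- invented numbers.
treated_before, treated_after = 40.0, 43.0        # treated municipalities
comparison_before, comparison_after = 40.0, 48.0  # similar, untreated municipalities

# Reflexive ("gross") estimate: poverty appears to have worsened by 3 points.
reflexive_effect = treated_after - treated_before

# Treatment ("net") effect: relative to comparison municipalities, where
# poverty rose by 8 points, the treated group ended up 5 points better off.
treatment_effect = (treated_after - treated_before) - (comparison_after - comparison_before)

print(f"Reflexive estimate: {reflexive_effect:+.1f} points (looks like failure)")
print(f"Treatment-effect estimate: {treatment_effect:+.1f} points (relative improvement)")
```

This mirrors the Panama finding above: a project can look like a failure against its own baseline yet show clear positive development outcomes once comparable municipalities are factored in.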

The IDB staff consulted in 2005 about the results "questioned whether the analysis of closed projects that were not required to include the necessary outcomes and data at the time of approval was a cost-effective use of Bank resources", which may be why the Bank decided against doing more, in spite of many ex-post findings contradicting expected results. Astonishingly, since then, only a summary of one Jordanian ex-post from 2007 was found, and it is questionable whether it was an actual ex-post done after closure. At a minimum, one would expect the Bank to ensure that data quality improved and that planned evaluation strategies were actually carried out.

Finally, presumably a bank cares about return on investment. As a former investment banker, I would be concerned about the lack of learning, given the low cost of such learning relative to the discrepancies found between expected and actual sustainability. The six evaluations that were completed cost about $113K each, or roughly 0.001-0.21% of program value, a pittance compared to the millions in loan values. Given that more were not done, or at least not publicly shared, in the 18 years since: sustainability-aware donors, beware.
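For readers who want to reproduce that back-of-the-envelope argument, here is a minimal sketch (with hypothetical loan values, not actual IDB figures) of how small an ex-post evaluation is as a share of program value:

```python
# Minimal sketch (hypothetical figures, not IDB data): an ex-post evaluation's
# cost as a share of program value, i.e. the "return on learning" argument above.

def evaluation_cost_share(evaluation_cost: float, program_value: float) -> float:
    """Return evaluation cost as a percentage of program (loan) value."""
    return 100.0 * evaluation_cost / program_value

# Illustrative only: a $113,000 ex-post set against hypothetical loan sizes.
for loan_value in (55_000_000, 500_000_000, 5_000_000_000):
    share = evaluation_cost_share(113_000, loan_value)
    print(f"Loan ${loan_value:,}: ex-post = {share:.4f}% of value")
```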

Unlike this multilateral, I've been busy with two ex-post evaluations, which I hope to share in the coming months… Let me know your thoughts!

Sustainability of what, and how do we know? Measuring projects, programs, policies…

On my way to present at the European Evaluation Society's annual conference, I wanted to close the loop on the Nordic and Netherlands ex-post analysis. The reason is that we'll be discussing the intersection of different ways to evaluate 'sustainability' over the long and short term, and how we're transforming evaluation systems. The session on Friday morning is called "Long- And Short-Term Dilemmas In Sustainability Evaluations" (Cekan, Bodnar, Hermans, Meyer, and Patterson). We come from academia as professors, from consultancies to international organizations, from international/national non-profits, and from our European (Dutch, German, Czech), South African, and American governments. We'll discuss it as a 'fishbowl' of ideas.

The session's abstract adds the confounding factor of program versus project versus portfolio-wide evaluations of sustainability.

Details on our session are below; why I'm juxtaposing it with the Nordic and Netherlands ex-posts in detail comes next. As we note in our EES '22 session description, "One of the classic complications in sustainability is dealing with short-term – long-term dilemmas. Interventions take place in a local and operational setting, affecting the daily lives of stakeholders. Sustainability is at stake when short-term activities are compromising the long-term interests of these stakeholders and future generations, for instance, due to a focus on the achievement of shorter-term results rather than ensuring durable impacts for participants… Learning about progress towards the SDGs or the daunting task of keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels, for instance, requires more than nationally and internationally agreed indicator-systems, country monitoring, and reporting and good intentions."

But there are wider ambitions for most sustainability activities undertaken by a range of donors, policy actors, project implementers, and others: Sustainability “needs to span both human-social and natural-ecological systems’ time scales. Furthermore, long-term sustainability, in the face of climate change and SDGs, demands a dynamic view, with due attention for complexity, uncertainty, resilience, and systemic transformation pathways…. the need for a transformation of current evaluation systems – seeing them as nested or networked systems… Their focus may range from focused operational projects to the larger strategic programmes of which these projects are part, to again the larger policies that provide the context or drivers for these programmes. Analogue to these nested layers runs a time dimension, from the short-term projects (months to years), to multi-year programmes, to policies with outlooks of a decade or more.” 

When Preston did his research in 2020-21, which I oversaw, we focused on projects precisely because that is where we believe 'impact' happens in a way participants and partners can measure. Yet we found that many defined their parameters differently. Preston writes, "This paper focuses on what such research [on projects evaluated at least 2 years post-closure] yielded, not definitive findings of programs or multi-year country strategies that are funded for 20-30 years continuously, nor projects funded by country-level embassies which did not feature on the Ministry site. We focus on bilateral project evaluations, not multilateral funding of sectors. We also… received input that Sweden's EBA has a non-project [not ex-post] portfolio of 'country evaluations' which looked back over 10- or even 20-year time horizons."

So we present these compiled, detailed studies on the Netherlands, Norway, Finland, Sweden, and Denmark for your consideration. Can we arrive at a unified definition of 'sustainability', or imagine a unified 'sustainability evaluation' definition and scope? I hope so, and will let you know after EES this week! What do you think: is it possible?

"Promises made and promises unfulfilled: Focusing evaluations after COP26": a reblog from the journal Evaluation following 2021's Climate Conference

Reblogged from the journal Evaluation, Volume 28, Issue 1, pp. 7-35. Article first published online: January 24, 2022; issue published: January 1, 2022.

“Leading evaluation practitioners were asked about lessons from the recent 26th Conference of the Parties (COP26) for evaluation practice. Contributors emphasize the importance of evaluating equity between rich and poor countries and other forms of climate injustice. The role of the evaluation is questioned: what can evaluation be expected to do on its own and what requires collaboration across disciplines, professions and civil society – and across generations? Contributors discuss the implications of the post-Glasgow climate ‘pact’ for the continued relevance of evaluation. Should evaluators advocate for the marginalized and become activists on behalf of sustainability and climate justice – as well as advocates of evidence? Accountability-driven and evidence-based evaluation is needed to assess the effectiveness of investments in adaptation and mitigation. Causal pathways in different settings and ‘theories of no-change’ are needed to understand gaps between stakeholder promises and delivery. Evaluators should measure unintended consequences and what is often left unmeasured, and be sensitive to failure and unanticipated effects of funded actions. Evaluation timescales and units of analysis beyond particular programmes are needed to evaluate the complexities of climate change, sustainability and to take account of natural systems. The implications for evaluation commissioning and funding are discussed as well as the role of evaluation in programme-design and implementation.”

Here is my article on sustainability, measurement and reporting:

Like many evaluators reading this, I am not a climate specialist but an international political economist, a Czech-American. Both my countries have polluted more than our fair share. Maybe like you, I feel responsible for those who polluted less but suffer more. Professionally, I focus on the grassroots sustainability of projects through ex-post evaluations, including of projects funded by the Adaptation Fund, and consult on environmental, social, and governance 'impact'. I worry that aid impacts sustained through ingenious local efforts will not hold up to climate shocks for which our aid was not designed and for which funding is insufficient.

Where to focus? Knowing what aspect of evaluation interests you shapes which aspect of the COP26 juggernaut to examine. Results gaps between promises made and actual change accomplished are an evaluator's daily bread. Were I an evaluator of environmental processes such as deforestation, ocean acidification/biodiversity, or CO2 emissions, I could evaluate along the lines of the recent publication edited by Juha Uitto (2021), Evaluating Environment in International Development. Abel Gbala, an Ivoirien monitoring and evaluation expert, offers a range of roles evaluators take on, and two relevant to climate change are:

  1. Evaluator as 'judge' (following Scriven), to investigate and justify the value of an evaluand, supported by both empirical facts and probative reasoning;
  2. Evaluator as 'activist', as argued by Bitar (2019) and Montrosse-Moorhead et al. (2019: Chapter 3, 33), advocating for social justice and addressing the needs and interests of the vulnerable and disadvantaged.

Following the money and focusing on the centrality of justice and equity between rich and poorer/'developing' countries involves both judging and being an activist in sharing results. This includes measuring how well the Global North has helped the Global South deal with the inequity of adapting to, mitigating, or addressing the devastation of climate change disproportionately caused by the Global North over two centuries. Notably, only small proportions of all financing reach indigenous and local communities (see Rainforest Foundation Norway, 2021; USAID, 2021). Evaluating to whom funding goes is vital for sustainable results.

Many promises are unfulfilled

The 2015 Paris Agreement (United Nations Framework Convention on Climate Change, 2016) promised fewer climate-harming emissions, yet the 60 biggest banks have invested US$3.8 trillion in fossil fuels (Project Regeneration, 2020). Paris signatories promised US$100 billion in climate funding a year, but COP26 showed massive shortfalls. Not only has an insufficient US$55–US$80 billion a year been given since 2013 (Timperley, 2021), but in a recent Financial Times article by Hook and Kao (2021), Amar Bhattacharya of the Brookings Institution stated, 'In terms of real impact of climate finance, and efficacy across different donors, there has been no development impact or climate impact study done to date'. A German climate watchdog confirms a massive gap in whether and how the US$80 billion for 2019 has been spent, noting, 'The absence of a detailed, publicly available account of this financing . . . risks all sorts of omissions: donors mis-labeling their funding as "significant" [impactful], or money being misspent, or an under-estimation of the true volume of money required' (Subramanian, 2021). Evaluators and auditors are needed to confirm that funding was allocated, disbursed and had an impact on climate change needs.

Needs are tenfold more

India and African countries state they need US$1–US$1.3 trillion in finance by 2030 (Rathi and Chaudhary, 2021). This is not unreasonable, given that 'developing' countries 'are currently shouldering approximately $70 billion per year costs of adapting to climate change' themselves. The Global South also wants funds to be more evenly split between adaptation (now 25%), to help them deal with sea-level rise and extreme weather events, and mitigation (now 75%) (Pontecorvo, 2021). Why the imbalance towards mitigation? Because mitigation is remunerative to investors, companies, and banks, who offer loans for countries to switch to clean energy or sell 'carbon-offsets'. As noted by Timperley (2021), 'just $20 billion went to adaptation projects in 2019', versus UN-estimated needs of US$300 billion. It is also essential to measure the effectiveness of finance once it arrives and to help those in the climate field see how such investments' efficacy can be improved.

Where to go to measure costs and finance? One priority for evaluators is to know where to look for data on finance and costs for sustainability and adaptation. This is spread over many national and international databases and reports, and across private and public institutions. Burmeister et al. (2019) have a useful table that summarizes the many finance sources evaluators could use when trying to track actual expenditures and investments on adaptation.

Proof of promises is key

Oxfam's Climate Finance Shadow Report 2020 (Carty et al., 2020) helps judges and activists see that while donors reported giving US$59.5 billion in 2017 and 2018, 'the true value. . . may be as little as $19-22.5 billion per year once loan repayments, interest, and other forms of over-reporting are stripped out'. Eighty percent was given as loans, and a further 50 percent of this was non-concessional, requiring higher repayments from emerging countries. In short, our climate 'largesse' is increasing their indebtedness. Another watchdog looks at the recipient side: Climate Governance by Transparency International (2021) traces in-country corruption of the funds received. The International Financial Reporting Standards Foundation's International Sustainability Standards Board questions corporate 'greenwashing'. Other evaluations remind multilateral and bilateral donors not to claim what they cannot substantiate. Aid promises 'sustainable development'. Climate funds such as the GEF could be delivering, but Čekan/ová and Legro (2022) examined the GEF's 2019 report claim that 84 percent of projects were sustainable post-project. 'Can We Assume Sustained Impact? Verifying the Sustainability of Climate Change Mitigation Results' showed no proof of ex-post project fieldwork or research to substantiate it. Worse, 'in the absence of sufficient information regarding project sustainability, determining post-project greenhouse gas emission reductions is not possible, because these are dependent on the continuation of project benefits following project closure'.
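To illustrate why headline climate-finance figures shrink under scrutiny, here is a minimal sketch in Python, with invented figures and simplified grant-element assumptions (not Oxfam's actual methodology), of how counting loans at their grant-equivalent value rather than at face value deflates a reported total:

```python
# Hypothetical illustration of grant-equivalent accounting for climate finance.
# Reported totals count loans at face value; a grant-equivalent view counts
# only the concessional (gift-like) portion of each instrument.

reported_finance = [
    # (instrument, face value in US$ billions, assumed grant element)
    ("grants", 10.0, 1.00),                  # grants count in full
    ("concessional loans", 25.0, 0.45),      # assumed grant element of 45%
    ("non-concessional loans", 25.0, 0.10),  # assumed grant element of 10%
]

face_value_total = sum(face for _, face, _ in reported_finance)
grant_equivalent_total = sum(face * grant_element
                             for _, face, grant_element in reported_finance)

print(f"Headline (face value) total: US${face_value_total:.1f}bn")
print(f"Grant-equivalent total:      US${grant_equivalent_total:.1f}bn")
# With these invented parameters, a US$60bn headline shrinks to roughly US$24bn,
# the same order-of-magnitude deflation the Oxfam shadow report describes.
```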

It is vital to monitor and evaluate the gaps between promises made and actual change: the gap between the finance needed by developing/poorer countries and what is delivered; provable measurements of the impacts and effectiveness of the finance given; and the knock-on effects of support for climate action, including indebtedness. As evaluators, we need champions willing to listen, for no one has to listen to evaluators. But much as with the years of climate science from the IPCC, with perseverance, public interest, and our collective survival on the line, measurements increasingly matter and drive imperative change. Our planet, our institutions, and the many promises with fewer results need all of us."

I also encourage readers to see the other 13 authors' fascinating submissions in Evaluation post-COP26. Also, many thanks to Elliot Stern, the editor, for his support and for making this issue open-access for global learning.