I am quoting liberally and highlighting our work from the Adaptation Fund’s website where their commitment to learning from what lasts is clear. “Ex post evaluations are a key element of the AF-TERG FY21-FY23 strategy and work programme, originating from the request of the Adaptation Fund Board to develop post-implementation learning for Fund projects and programmes and provide accountability of results financed by the Fund. They intend to evaluate aspects of both sustainability of outcomes and climate resilience, and over time feed into ex-post-evaluation-informed adjustments within the Fund’s Monitoring Evaluation and Learning (MEL) processes.”
How do we define the path to sustainability in order to evaluate it? Here is a flowchart from our training:
There are four phases, from 0 to 3. Phase 0 Foundational Review: This work was preceded by months of background research, both on the evaluability of the Fund’s young portfolio (e.g., fewer than 20 of the 100 funded projects had been closed for at least three years, one of our selection criteria) and secondary research on evidence of ex-post sustainability evaluation in climate change/resilience across the Adaptation Fund’s sectors.
Phase 1 Framework and Pilots Shortlist: Our Phase 1 report from mid-2021 provided an overview of the first stage of ex-post evaluations, outlining methods and identifying a list of potential projects for ex-post evaluation pilots from the Fund’s 17 completed, evaluated projects. The framework presented in the report introduced possible methods to evaluate the sustainability of project outcomes, considering the characteristics, strengths, and weaknesses of the Fund portfolio. It also presented an analysis tool to assess climate resilience, bearing in mind that this area is pivotal to climate change adaptation yet has rarely been measured.
Vetting and pilot selection followed, with a revised design for evaluating sustained outcomes related to resilience to climate change. Key aspects were: 1) timing (3-5 years since closure, or projects at least 4 years long that closed within the last 5 years, with seasonality matching the final evaluation); 2) good quality of implementation and M&E, with measurable outputs and outcomes traceable to impact(s); and 3) safety to do fieldwork (e.g., COVID-19, civil peace).
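The shortlisting logic above can be sketched as a simple screening function. This is a minimal illustration only: the field names and the single 3-5 year timing path are my own shorthand for the criteria described, not an Adaptation Fund tool.

```python
from dataclasses import dataclass

@dataclass
class CandidateProject:
    """Hypothetical record of a completed project under consideration."""
    years_since_closure: float
    seasonality_matches_final_eval: bool
    good_implementation_and_me: bool  # measurable outputs/outcomes traceable to impact(s)
    fieldwork_safe: bool              # e.g., COVID-19 situation, civil peace

def eligible_for_ex_post(p: CandidateProject) -> bool:
    """Apply the three screening criteria; the alternative timing path
    (long projects closed within the last 5 years) is omitted for brevity."""
    timing_ok = 3 <= p.years_since_closure <= 5 and p.seasonality_matches_final_eval
    return timing_ok and p.good_implementation_and_me and p.fieldwork_safe
```

In practice such a filter would only produce a shortlist; the final pilot choice also involved qualitative vetting with national partners.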
We (my so-clever colleagues Meg Spearman and Dennis Bours) introduced a new resilience analysis tool that considers climate disturbances and the human and natural systems (and their nexus) affected by and affecting project outcomes. It covers five characteristics of resilience in outcomes (presence of feedback loops, operating at scale, plus being diverse, dynamic, and redundant) and the means/actions that support those outcomes. Resilience can be identified via a clear summary of the structures (S) and functions (F) that typify Resistance, Resilience, and Transformation, showing where a project is and where it is moving. This resistance-resilience-transformation (RRT) typology lets the overall project be mapped according to how its actions are designed to maintain or change existing structures and functions. It was integrated into the Adaptation Fund resilience evaluation approach.
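The RRT mapping can be caricatured as a tiny classifier. This is one plausible reading of the typology (maintain both S and F = Resistance; maintain functions while structures adjust = Resilience; change both = Transformation), offered as an illustration rather than the tool's actual decision rule.

```python
def classify_rrt(maintains_structures: bool, maintains_functions: bool) -> str:
    """Map a project's actions onto the resistance-resilience-transformation
    (RRT) continuum, under one simplified reading of the typology.
    Real projects fall along a continuum rather than into three clean bins."""
    if maintains_structures and maintains_functions:
        return "Resistance"       # actions preserve existing structures and functions
    if maintains_functions:
        return "Resilience"       # functions endure while structures adjust
    return "Transformation"       # both structures and functions change
```

A whole project would be mapped by weighing many actions, not a single pair of booleans; the sketch just makes the S/F logic concrete.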
Phase 2 Methods Testing and Ex-post Field-testing: Training of national evaluators and piloting of two ex-post evaluations per year includes selecting among these methods to evaluate sustainability ex post, plus the RRT and resilience measures above. The first ex post, of Samoa’s “Enhancing Resilience of Samoa’s Coastal Communities to Climate Change” (UNDP), takes place in December 2021 through qualitative evaluation of wall infrastructure. For the second, Ecuador’s “Enhancing resilience of communities to the adverse effects of climate change on food security, in Pichincha Province and the Jubones River basin” (WFP), training is complete and fieldwork should begin in January 2022, likely covering food security assets, with methods TBD.
Phase 3 Evaluations Continue, with MEL Capacity Building: Two more years of ex-post pilot evaluations (two per year), with lessons informing integration into the Adaptation Fund’s MEL. We are already learning lessons about rigor, knowledge management, and the unexpected benefits of returning years after closure, including indications of the sustainability and resilience of project assets, with much more learning to come.
Innovations respond to “the relative novelty of climate change adaptation portfolios and the limited body of work on ex post evaluation for adaptation”; the framework “presents possible methods that will be piloted in field-tested ex post evaluations in fiscal year 2022 (FY22).” This includes piloting the shockingly rare evaluation of oft-promised resilience. The update to the AF’s Board three months ago transparently outlined the shortlisting of five completed projects as potential candidates for the pilots, of which two were selected for ex post evaluations. It also outlined our process of co-creating the evaluations with national partners to prioritize their learning needs while building national capacity to assess the sustainability and resilience of project outcomes in the field going forward.
Also, training materials for the ex post pilots are being shared to foster country and industry learning, focusing on evaluating projects ex post for emerging sustainability and resilience, as well as on presenting and adapting methods to country and project realities.
The training had three sessions (which could not have happened without colleague Caroline’s expertise):
Part A: Understanding ex-post & resilience evaluations. Introduce and understand ex-post evaluations of sustainability and resilience, especially in the field of climate change adaptation
Part B: Discussing country-specific outcome priorities and co-creating learning with stakeholders. Discuss the project and its data more in-depth to understand and select what outcome(s) will be evaluated at ex-post
Part C: Developing country-specific methods and approaches. Discuss range of methods with the national evaluator and M&E experts to best evaluate the selected outcome(s) and impact(s)
Ex-post Eval Week: Exiting For Sustainability by Jindra Cekan
Reblogged from AEA: https://aea365.org/blog/ex-post-eval-week-exiting-for-sustainability-by-jindra-cekan-2/ January 22, 2021
Hello. My name is Jindra Cekan, and I am the Founder and Catalyst of Valuing Voices at Cekan Consulting LLC. Our research, evaluation and advocacy network has been working on post-project (ex-post) evaluations since 2013. I have loved giraffes for decades and fund conservation efforts (see pix).
Our planet is in trouble, as are millions of species, including these twiga giraffes and billions of Homo sapiens. Yet in global development we evaluate projects based on their sectoral results (e.g., economic, social, educational, human rights), with barely a glance at the natural systems on which they rest. IDEAS Prague featured Andy Rowe and Michael Quinn Patton, who showed that I too have been blind to this aspect of sustainability.
I have argued ad nauseam that the OECD’s definitions of projected sustainability and impact don’t give a hoot about sustaining lives and livelihoods. If we did, we would not just claim we do ‘sustainable development’ and invest in ‘Sustainable Development Goals’ but would go about proving how well results last, for how long, and by whom, after closeout.
After hearing Rowe, I added new elements to my Sustained Exit Checklists on how we must evaluate Risks to Sustainability and Resilience to Shocks, including the natural environment. I also added Adaptation to Implementation, reflecting feedback on how much implementation would need to change, due in part to climatic changes.
Yet new evaluation thinking by Rowe, Michael Quinn Patton, and Astrid Brouselle/Jim McDavid takes us a quantum leap beyond. We must ask how any intervention can be sustained without evaluating the context in which it operates. Is it resilient to environmental threats? Can participants adapt to shocks? Have we assessed and mitigated the environmental impacts of our interventions? As Professor Brouselle writes, we need to be “changing our way of thinking about interventions when designing and evaluating them… away from our many exploitation systems that lead to exhaustion of resources and extermination of many species.”
This 2020 new thinking includes ascertaining:
(Andy Rowe) Ecosystems of biotic natural capital and abiotic natural capital (from trees to minerals) with effects on health, education, public safety/ climate risk and community development
(Astrid Brouselle and Jim McDavid) Human systems that affect our interventions, including power relations, prosperity, and equity; we also need to make trade-offs between environment and development goals clear.
We have miles to go of systems and values to change. Please read this and let’s start sustaining NOW.
Rowe, A. (2019). Sustainability-ready evaluation: A call to action. In G. Julnes (Ed.), Evaluating Sustainability: Evaluative Support for Managing Processes in the Public Interest. New Directions for Evaluation, 162, 29-48.
This week, AEA365 is celebrating Ex-post Eval Week, during which blog authors share lessons from project exits and ex-post evaluations. I am grateful to the American Evaluation Association that we could share these resources.
So how are we to get there? A Sustainable Brands Conference this year gets us there by being clear about their own consumption, and USAID is no different. USAID Forward is putting its money where its keyboards are (so to speak), toward more sustainable local delivery, by directing a huge 30 percent of its funding to “local solutions” through procurement in coming years. This framework is to “support the ‘new model of development’ that USAID Administrator Rajiv Shah has touted, which entails a shift away from hiring U.S.-based development contractors and NGOs to implement projects, and toward channeling money through host-country governments and local organizations to build their capacity to do the work themselves and sustain programs after funding dries up.” I, and others, celebrate the investments this will enable local firms to make in their own capacity, in leading development!
Of course all sorts of safeguards are needed, and ideally US firms would be providing capacity development, but shouldn’t we have been doing this all along, to move toward transferring ‘development’ to the countries themselves?
Also vital to sustainable development is learning from what works and doing more of it. USAID is finally planning to incorporate more ex-post evaluations into its toolkit for evaluating sustainability! Two weeks ago, PPL/LER shared their great new policy document, “Local systems: A framework for supporting sustained development,” on how they can better incorporate local systems thinking into policy as well as DIME (Design, Implementation, Monitoring and Evaluation). Industry insider DevEx tells us that even though the agency plans to use ex-post evaluations to measure whether development projects are successful or not, these evaluations will not focus on “specific contractor performance” but instead consider the “types of approaches that contribute to more sustainable outcomes… to inform USAID’s country strategies and project design.” While PVO implementing partners will not [yet?] be required to do ex-post evaluations as part of their projects, having this door cracked open is exciting. Notably, it is a ‘back to the future’ moment: 30 years ago USAID led the development world in post-project evaluations, yet in the last 24 years it has done none (or at least not published any) except for the Food for Peace retrospective below, as I found in our Valuing Voices research of USAID’s Development Experience Clearinghouse.
There is far more to watch. In our view, the whole development industry needs to grapple with the perceived barrier that funding ends with projects (note: a trust could be set up to document post-project impact 1, 3, and 5 years later and retain results, much as 3ie does now for impact evaluations) and with the view that one cannot discern attributable project impact with a time-lag of several years. Yet even the Government Accountability Office is asking for longitudinal data; it reviewed USAID’s document and wants to see clear measures of success at Mission and HQ level, with different indicators of local institutional sustainability and impact four years on.
Why should we care? As Chelsea Clinton of the Clinton Global Initiative puts it, "you can't measure everything, but you can measure almost everything through quantitative or qualitative means, so that we know what we're disproportionately good at. And, candidly, what we're not so good at, so we can stop doing that."
Yes! Development should be about doing more of what works, sustainably, and less of what doesn’t. USAID’s Local Systems Framework found the best could also be free, as this Food For Peace evaluation shows:
Returning to Chelsea Clinton, I’ll conclude by stating something obvious. She "wants to see some evidence of why we're making decisions, as opposed to the anecdotes” which is what getting post-project evaluation data from our true clients, our participants, is all about. Clinton says this will transform CGI into a smart, accountable, and sustainable support system for philanthropic disrupters around the world. USAID is radical for me, today, with their Local Systems investments… my neighborhood disrupter.
Are you such a disrupter too? Who else is one whom we can celebrate together?
Pineapple, Apple: what differentiates Impact from Self-Sustainability Evaluation?
There is great news. Impact Evaluation is getting attention and being funded to do excellent research, such as by the International Initiative for Impact Evaluation (3ie) and by donors such as the World Bank, USAID, UKAid, and the Bill and Melinda Gates Foundation, in countries around the world. Better Evaluation tells us: "USAID, for example, uses the following definition: 'Impact evaluations measure the change in a development outcome that is attributable to a defined intervention; impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change.'"
William Savedoff of CGD reports in the Evaluation Gap newsletter that whole countries are setting up such evaluation institutes: "Germany's new independent evaluation institute for the country's development policies, based in Bonn, is a year old. DEval has a mandate that looks similar to Britain's Independent Commission for Aid Impact (discussed in a previous newsletter) because it will not only conduct its own evaluations but also help the Federal Parliament monitor the effectiveness of international assistance programs and policies. DEval's 2013-2015 work program is ambitious and wide-ranging, from specific studies of health programs in Rwanda to overviews of microfinance and studies regarding mitigation of climate change and aid for trade." There is even a huge compendium of impact evaluation databases.
There is definitely a key place for impact evaluations in analyzing which activities are likely to have the most statistically significant impact (i.e., change unlikely to be due to chance). One such study in Papua New Guinea found that SMS (mobile text) inclusion in teaching made a significant difference in student test scores compared to the non-participating 'control group' who did not get the SMS texts. Another study, the Tuungane I evaluation by a group of Columbia University scholars, showed clearly that an International Rescue Committee program on community-level reconstruction did not change participant behaviors. The study was as well designed as an RCT can be, and its conclusions are very convincing. But as the authors note, we don't actually know why the intervention failed. To find that out, we need the kind of thick, descriptive qualitative data that only a mixed-methods study can provide.
Economist Michael Kremer of Harvard says, “The vast majority of development projects are not subject to any evaluation of this type, but I’d argue the number should at least be greater than it is now.” Impact evaluations use randomized control trials, comparing the group that got project assistance to a similar group that didn’t in order to gauge the change. A recent article on treating poverty as a science experiment says that "nongovernmental organizations and governments have been slow to adopt the idea of testing programs to help the poor in this way. But proponents of randomization—“randomistas,” as they’re sometimes called—argue that many programs meant to help the poor are being implemented without sufficient evidence that they’re helping, or even not hurting." However we get there, we want to know the real (or at least likely) impact of our programming, helping us focus funds wisely.
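At its simplest, the RCT logic described above estimates impact as the difference in average outcomes between the treatment and control groups. A toy sketch, with invented test-score data for illustration only (real analyses would also test whether the difference is statistically significant):

```python
def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

# Invented test scores for illustration; not data from the PNG SMS study
treatment = [72, 68, 75, 80, 70]  # students who received SMS support
control = [65, 60, 70, 66, 64]    # randomly assigned comparison group

# Random assignment lets the control group's mean stand in for the
# counterfactual, so the mean difference estimates the causal impact
impact_estimate = mean(treatment) - mean(control)  # 73.0 - 65.0 = 8.0
```

The whole point of randomization is that the two groups differ, on average, only in receiving the intervention, which is what makes this simple subtraction interpretable as impact.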
Data gleaned from impact evaluations are excellent to have before design and during implementation. While impact evaluations are a thorough addition to the evaluation field, experts recommend they be done from the beginning of implementation. While they ask “Are impacts likely to be sustainable?”, “To what extent did the impacts match the needs of the intended beneficiaries?” and, importantly, “Did participants/key informants believe the intervention had made a difference?”, they focus only on possible sustainability, using indicators we expect to see at project end rather than tangible proof of the sustainability of activities and impacts that communities define themselves, which we actually return to measure 2-10 years later.
That is the role for something that has rarely been used in 30 years: post-project (ex-post) evaluations, looking at:
The resilience of expected impacts of the project 2, 5, 10 years after close-out
The communities’ and NGOs’ ability to sustain activities themselves
Positive and negative unintended impacts of the project, especially 2 years after, while still in clear living memory
Kinds of activities the community and NGOs felt were successes which could not be maintained without further funding
Lessons across projects on what was most resilient, i.e., what communities valued enough to continue themselves or NGOs valued enough to secure other funding for, as well as what was not resilient
Where is this systematically happening already? There are catalyst ex-post evaluation organizations such as ours, drawing on communities' wisdom. Here and there, there are other glimpses of ValuingVoices, mainly to inform current programming, such as these two interesting approaches:
Ned Breslin, CEO of Water For People, talks about “Rethinking Social Entrepreneurism: Moving from Bland Rhetoric to Impact (Assessment)”. His new water and sanitation program, Everyone Forever, does not focus on inputs and outputs, such as water provided or girls returning to school. Instead, it centers on attaining the ideal vision of what a community would look like with improved water and sanitation, and working to achieve that goal. Rather than working on fundraising only, Breslin wants to redefine success as a world in which everyone has access to clean water.
We need a combination. We need to know how good our programming is now through rigorous randomized control trials, and we need to ask communities and NGOs how sustainable the impacts are. Remember, 99% of all development projects worth hundreds of millions of dollars a year are not currently evaluated for long-term self-sustainability by their ultimate consumers, the communities they were designed to help.
We need an Institute of Self-Sustainable Evaluation and a Ministry of Sustainable Development in every emerging nation, funded by donors who support national learning to shape international assistance. We need a self-sustainability global database, mandatory to be referred to in all future project planning. We need to care enough about the well-being of our true client to listen, learn and act.
Development = A Jeep (motor optional)… Resilience? If within 5 years!
Imagine being given a lovely new Jeep. You get a driver (remember driving school?) to help you learn to steer it around the pothole-strewn, scantily lit roads. Eventually you take over the controls of the Jeep and hold the steering wheel directly, driving off-road, with the copilot praising your good driving and steering only to avoid catastrophe. You are told that one day the Jeep will be yours.
That day arrives. The development agency hands you the keys to the Jeep. You wave goodbye to them, return to the Jeep, and turn the key. Dead.
Looking under the hood, you realize the motor is gone. Checking the rest of the Jeep, you realize there is no fuel and the tires are flat. That is what development projects look like from the community's view after close-out. The local NGO to whom the project has been 'handed over' has scant financial or human resources to continue (no engine), and in the last few months' scramble to close out, the implementing agency put in few systems for communities to continue the programming without the local NGO's support or all the resources that had been poured in (no fuel). There is little to help you move the Jeep (even on flat tires) except your own feet, other than the capacity building learned early on, as it wasn't built to last on local materials. Sustainability isn't programmed into projects with set timelines and donor-set markers of success that mandate close-out.
So you own the Jeep but have little power to move, very much like the countless well-meant tractors given for development agriculture before you.
There are several glimmers of hope. What communities have is the human power that exists locally, fuelled by participation coupled with information transmission, such as WorkWithUs and MakingAllVoicesCount (based on the moral imperative that it's Their Development), as well as ALNAP's push to use evaluation for learning in international development.
Resilience could be the doorway to getting community-defined sustainable programming to break the cycle of recurrent emergencies that divert resources from long-term development. Imagine: we could ask citizens what will make them resilient! A rare, shining example is a USAID-funded Ethiopia project with a mandate to use participatory impact assessments (process monitoring plus participatory input to capture local perceptions of benefits) to learn from communities. A USAID Solicitation tells us "seventeen impact assessments on different program activities were undertaken to inform best practice and to develop guidelines and policies. A major impact was the development and adoption of Emergency Livestock Guidelines by the Ethiopian government. These were based on best practice assessments in many countries (including Kenya) and action research on different types of interventions. Emergency de-stocking–selling livestock early in a drought to preserve their price and leave more fodder and water for remaining animals — was found to be particularly effective, with a 40:1 benefit cost ratio. Emergency livestock vaccination campaigns, on the other hand, were found to have no impact on livestock mortality, and were dropped in favor of other health interventions including parasite control and de-stocking."
Excellent Valuing of pastoralist Voices! How are such locally-informed excellent processes and findings being widely shared and implemented? What do you think?
Unintended impacts – LWR and Gates Drought Resilience in Niger
Unintended development program impacts – how much do we know about what they are and how we could learn from them for future programming? How often do we even question our assumptions about what we meant the programming to bring and result in, versus what actually happened?
I had the privilege of consulting for Lutheran World Relief from 2005-2007 in Niger on a drought resilience and rehabilitation project. IIED's PLA Notes just published my write-up of this baseline and slightly ex-post (6 months after closeout) final evaluation, funded by the Bill and Melinda Gates Foundation. I led a team doing mixed-method interviews (that means tracking numbers to find out how many and how much impact we had, combined with listening to words explaining why things worked out this way, or not). There were numerous lessons learned from targeting sheep, wells, and animal fodder to 600 of the poorest women in 10 communities in northern Niger. Among them: LWR's programming did some real good, especially thanks to national staff's great knowledge of the herding communities.
The two main project goals had to do with whether the communities were more resilient against future drought and whether they were more sustainably food secure thanks to the assets (sheep, water, fodder), income (future sheep, sale of fodder), and training (management of sheep and maintenance of water points). The answers seemed to be generally yes. We found that women's share of household income increased from 5% to 25% in some households. This was due to the sheep, as well as to time savings used for income generation (a staggering 7-10 hours every other day saved from not having to fetch water far away or bring animals to drink).
We also found several completely unexpected benefits: resources plus time savings generated harmony and peace. Women reported far more inter-ethnic harmony thanks to collaboration across tribes during sheep management training and water sharing, and ethnic groups attended each other's marriages and baptisms. Households also were far more at peace, some women saying "our husbands don't beat us anymore," thanks to increased respect, cleanliness, and their ability to be home for their husbands, children, and mothers-in-law. Men reported that they sat with women by the project's end, and there was more intra-household collaboration. Were these planned at all? No! We need to return to learn what our work engenders, bring those lessons forward, and see whether they are replicated. What else has emerged in the years since the project ended? What was valued enough to be sustained by the communities themselves?
Finally, a fascinating flaw in our logic, worth considering for international development staff, was whether our participants shared the two large goals at all (drought resilience and sustained food security). Some women immediately sold the sheep to buy food, pay their children's school fees or their daughters' dowries, or even buy themselves beds or pots and pans. Some had their sheep sold by their husbands, who used them to buy other animals, pay for ceremonies, or cover other expenses. Spending assets on immediate needs is not at all illogical for a community that can feed itself only 4 months a year; for some households, pressing needs far outweighed the luxury of waiting to buffer seasonal food insecurity down the line. So while overall the project brought benefit (many said they didn't have to resort to worse survival strategies during the next hungry season), it illuminated that donors' goals may differ greatly from participants'. These people didn't sign a contract agreeing to abide by donor expectations, yet that is what the evaluation looked for. We need to learn not to assume, and instead explore what goals they have as their expectations of impact. Looking beyond the useful but narrow 'boxes' of logical frameworks of inputs-outputs-impacts can help us learn what communities really value and what they got from what we offered.
Valuing their voices is key. In what projects have you seen voices valued? What have you learned?
Jindra Cekan, Ph.D. has used participatory methods for 30 years to connect with participants, ranging from villagers in Africa, Central/ Latin America and the Balkans to policy makers and Ministers around the world for her international clients. Their voices have informed the new Sustained and Emerging Impacts Evaluation, other M&E, stakeholder analysis, strategic planning, knowledge management and organizational learning.