Presenting Lessons on (post-project) Sustained and Emerging Impact Evaluations from the U.S. AEA Conference
Dear readers, attached please find the Barking up a Better Tree: Lessons about SEIE (Sustained and Emerging Impact Evaluation) presentation we gave last week at the American Evaluation Association (AEA) conference in Atlanta, GA. I had the pleasure of co-presenting with Beatrice Lorge Rogers, PhD, Professor, Friedman School of Nutrition, Tufts University (of Food for Peace/Tufts Exit Strategy study fame); Patricia Rogers, PhD, Director, BetterEvaluation, and Professor, Australia and New Zealand School of Government (where we recently published guidance on SEIE); and Laurie Zivetz, PhD, International Development Consultant and Valuing Voices evaluator.
We integrated our presentations from Africa, Asia and Latin America into this fascinating overview:
1. Sustained and Emerging Impact Evaluation: global context
2. SEIE: definitions and methods
3. Case studies: findings from post-project evaluations
4. Designing an SEIE: considerations
5. Q&A — which fostered super comments, but since you couldn’t come, please tell us what you think and what questions you have…
There are amazing lessons to learn about design, implementation, and M&E from doing post-project evaluations. We have also grown to appreciate that sustainability can be tracked throughout the project cycle, not just during a post-project SEIE.
We’ll be building this into a white paper or a… (toolkit? webinar series? training? something else?). What’s your vote? (I know, it is US election season, so… :)).
What would you like to receive to support your learning about Sustained and Emerging Impact Evaluations? We look forward to hearing from you: Jindra@ValuingVoices.com
The full presentation is available here:
Cekan, J., Rogers, B. L., Rogers, P., & Zivetz, L. (2016, October 26). Barking Up a Better Tree: Lessons about SEIE (Sustained and Emerging Impact Evaluation). Retrieved from https://valuingvoices.com/wp-content/uploads/2016/11/Barking-up-a-Better-Tree-AEA-Oct-26-FINAL.pdf
Food and Nutrition Technical Assistance (FANTA). (n.d.). Effective Sustainability and Exit Strategies for USAID FFP Development Food Assistance Projects. Retrieved from https://www.fantaproject.org/research/exit-strategies-ffp
Zivetz, L., Cekan, J., & Robbins, K. (2017, May). Building the Evidence Base for Post-Project Evaluation: Case Study Review and Evaluability Checklists. Retrieved from https://valuingvoices.com/wp-content/uploads/2013/11/The-case-for-post-project-evaluation-Valuing-Voices-Final-2017.pdf
Are We Done Yet?
When are we off the hook, so to speak, for the well-being of the participants whom we said we'd make healthier, better fed, more educated, safer, and so on?
The United States Agency for International Development (USAID) is America’s main channel for international development aid. It is also an organization interested in learning from its programming, and numerous contracts support such work. Under one such contract, Food for Peace tasked FHI360/FANTA with reviewing the agency’s Title II development food aid from 2003 to 2009, covering 28 countries. This Second Food Aid and Food Security Assessment (FAFSA-2) Summary found that such programs can “reduce undernutrition in young children, improve health and nutrition outcomes, and increase access to income and food”, and it also found practices that did not work well.
While USAID has made enormous strides on monitoring and evaluation in the intervening six years (I was a consultant to USAID/PPL/LER in 2013-14), excellent recommendations that would support great, sustainable programs remain unfulfilled:
Recommendations #1 and #4: “USAID/FFP should develop an applied research agenda and sponsor studies that focus on the implementation of Title II programs in the field to better define what works and what does not…. [and] should select the review panel for new Title II applications… and give reviewers a ‘cheat sheet’ on interventions and approaches that USAID/FFP is and is not interested in funding because they work better or do not work as well, [and] provide this same information in the Request for Assistance” [Request for proposals].
Yes: all across our industry there is little learning from past evaluations to inform future design. Valuing Voices believes local participants and stakeholders must be consulted to tell us what (still) works and what they want more of, not only during implementation but long after. Their voices must inform great design, as it is their lives we go there to improve; they must be involved in the design of the original requests that non-profits respond to and fulfill. Further, the study found that only one-third of all evaluations were included in USAID’s database, and as Valuing Voices’ partner Sonjara has written in our blog, aid transparency requires data retention and access for learning to happen.
Recommendation #3: “USAID/FFP should include options for extensions of awards or separate follow-on awards to enable USAID/FFP to continue to support high-performing programs beyond five years and up to ten years… [as] longer implementation periods are associated with greater impact.”
This would address the “how much impact can we accomplish in 1, 3, or 5 years?” question that many of us in international non-profits ask ourselves. Finally, the graphic below is self-explanatory: USAID sees its role ending at close-out.
The crux lies in their honest statement: “It was beyond the scope and resources of the FAFSA-2 to explore in any depth the sustainability of Title II development programs after they ended.” While they state that there is merit in having impact while you intervene, such as “having a positive impact on the nutritional status of the first cohort of children is of immense benefit in its own right”, they go on to say that “ideally, one would like to see mothers continuing positive child feeding practices and workers continuing to deliver services long after programs end… [yet] whether the [maternal child health and nutrition] interventions are sustainable beyond one generation is unknown and would require research.” This is because funding is pre-programmed, fixed to end within set 1-, 3-, or 5-year increments, and no one goes back to learn how it all turned out. This is what most needs to change: the illusion that what happens after close-out is no longer our issue, and that the ‘positive impact’ we had while there is enough.
They are not alone. I think of NORAD, the Government of Norway's development arm, as very progressive, so I went to NORAD's website and searched for ‘ex-post’ (we do a lot of that at Valuing Voices). As with our World Bank blog on finding real ex-post evaluations, many, many things are labeled ‘ex-post’, including one actual evaluation in Palestine with fieldwork that asked participants, and a few that looked at institutional sustainability. Many of the 100+ ‘finds’ were actually documents recommending ex-posts, which is typical of our searches of other donors. I emailed NORAD to ask whether there were more that included participant voices, and they assured me they did do them. Maybe our problem is one of definitions and taxonomy again. Maybe we should call them post-project participant feedback?
Most of my colleagues would agree that the long-term sustainability of activities, leaving communities food secure and independent of aid, is a shared goal, yet one that short-term assistance promising huge impacts such as to ‘make communities food secure’ and ‘sustainably decrease malnutrition’ (common proposal goals) cannot realistically achieve. We need participant voices to teach us how well we served them. We need to return, learn “what works and what does not”, and Value Voices in true sustained partnership. We all look forward to being done.
 “Another major obstacle to transparency and learning from the Title II program experience was the fact that only one-third of the final evaluations were publicly available on the Development Experience Clearinghouse (DEC), despite the requirement that Awardees post them to the DEC…. [There was a lack of] cross-cutting studies or in-depth analyses of Title II evaluation results to advance organizational learning [and] much greater use could be made of the evaluation data for systematic reviews, meta-analyses, secondary analyses, and learning.”
Learning from the Past… for Future Sustainability
Heading up Food Security for Catholic Relief Services (CRS) was my first international development job, in 1995-1999, and I have watched this organization grow in its commitment to program quality and learning/knowledge management ever since. At the time I oversaw 17 of CRS' USAID/Food for Peace (FFP) programs. So I was delighted to find that CRS has not only done an ex-post evaluation and used the findings for programming (e.g. the effectiveness of investing in a particular sector, such as supporting girls’ education within a food security program) and for advocacy (e.g. evaluation lessons from Rwandan peace-building projects seven years after the genocide informed CRS’ evolving approach to peace and justice strategies), but that I get to celebrate FFP learning too. In addition to having consulted to USAID/PPL (Policy, Planning and Learning) and the FANTA project, both featured below, I went to Tufts University’s Fletcher School. Super to see great organizations learning about sustainability!
CRS’ 2007 guidance for project design and implementation (ProPack II) described the aim of ex-post evaluation/sustainable impact evaluation: “to determine which project interventions have been continued by project participants on their own [which] may contribute to future program design…. it is fair to say that NGOs rarely evaluate what remains following the withdrawal of project funding [which] is unfortunate [as] important lessons can be generated regarding factors that help to ensure project sustainability.”
An excellent 2004 Catholic Relief Services ex-post evaluation in Ethiopia was featured: “Looking at the Past for Better Programming: DAP I Ex-Post Assessment Report”. It assessed the sustainability of Agriculture/Natural Resource Management and Food-Assisted Child Survival/Community-Based Health Care programming, conducted as an internal evaluation by CRS staff and partners using document review and partner, government and community interviews. Results were mixed.
Some activities generated enough food and income that households could eat throughout the year and have some savings, making them more resilient against drought.
Almost 100% of cropland bunding and irrigation practices for improved crop production were still being applied, and buffered households during a subsequent drought.
Health practices had also continued (e.g. trained traditional birth attendants continued to provide services with high levels of enthusiasm and commitment, and health care-seeking behaviors had increased).
However, many other benefits and services had severely deteriorated:
Nearly all water committees had dissolved, and fee collection was irregular or had been discontinued
Many water schemes were not operational
The centrally managed [tree] nurseries had been abandoned (given the existing management capacity of communities and government).
CRS/Ethiopia and its partners came to see that:
“The potable water strategy had over-focused on the technical aspects (‘hardware’) while not paying enough attention to the community organizing dimensions and support by existing government services (‘software’).”
Even limited post-project follow-up by partners and government staff might have gone a long way towards mitigating the deterioration of project benefits and services.
What was terrific was that they “went on to use these findings and lessons learned from this ex-post evaluation to inform the design of similar projects in Ethiopia… while also raising awareness of these issues among partner staff”. The ex-post recommended increased planning for sustainability, setting up village management for the post-project period, and incentives for maintenance. Great learning, yet we have found few ex-posts at CRS or elsewhere. Our industry needs to explore questions such as those the evaluators posed: Was the lack of sustainability due to technical, institutional or financial faults in the programming? In other words, was the lack of self-sustainability due to the design/aim of the activity itself, or to how it was implemented?
In 2013, USAID’s Food For Peace commissioned fascinating research on exit strategies. Tufts University went to Bolivia, Honduras, India and Kenya, which were phasing out of Title II food aid, to look at how to “ensure that the benefits of interventions are sustained after they end, [as] there is little rigorous evidence on the effectiveness of different types of exit strategies used in USAID Office of Food for Peace Title II development food aid programs.” The research was to “assess the extent to which the programs’ impacts were maintained or improved and to help understand factors of success or failure in the specific exit strategies that were used.” They made the important discernment that the effectiveness of Title II programs depends on both short-term impact and long-term sustainability.
The FANTA project (contractor) made the following preliminary results available:
Impact assessment at exit does not consistently predict sustained impact two years later…. It can be misleading.
Many activities, practices, and impacts across sectors declined over the two years after exit. These declines are related to inadequate design and implementation of sustainability strategies and exit processes.
There are specific ways to increase the likelihood of sustainability. Sustaining service provision and beneficiary utilization of services and practices depends on three critical factors: resources, technical and management capacity, and motivation.
Withdrawal of free food rations, or of any other free input used as an incentive, jeopardizes sustainability unless substitute incentives are considered. For instance:
Withdrawal of food was a disincentive for participation in and provision of [child] growth monitoring…. Resources and health system linkages are needed to sustain health activities
Motivation, capacity and resources are all needed to maintain water systems
Agriculture and Natural Resource Management suffered greatly when resource incentives disappeared
Their main recommendations are that sustainability be built into the design from the beginning, that program cycles be longer, and that exit be gradual.
CRS found the same issue of incentives as a barrier, as well as issues of technical and institutional capacity, motivation, and management. We have much to learn… but at least we’ve started Valuing Voices, asking questions, and eventually designing for sustainability!
Walking in our participants' shoes: Doing Development Differently
You and I like to make informed decisions. We go to restaurants recommended by Yelp or Facebook friends, refer to Consumer Reports' rankings before we buy appliances, and read Amazon reviews before we purchase items, or at least ask friends what works for them. I just bought a fridge and looked at Energy Star ratings to buy one that was efficient and climate-friendly. Would you buy an expensive appliance without market research reassuring you of its likely success?
Yet that is what we ask millions of development participants to do every day. In international development, community participants don't have such luxuries as knowing which projects worked well before, or whether success in terms of health, income or education is replicable using a given model. They invest their time and resources blindly, hoping for a good return.
Doing Development Differently is a very encouraging initiative by the UK's Overseas Development Institute (ODI) which focuses on learning from bottom-up, country-led development. Leni Wild tells us, via a Malawi Country Scorecard example, that NGO-led innovation during implementation is key, as is creating coalitions of shared responsibility to meet participant needs. Natalia Adler shares a Nicaraguan Human-Centered Design effort in which policy makers literally walked through the lives of participants, learning as participant-observers and supporters, not omniscient experts.
Our projects and policies are but one piece of participants' lives, yet an important input if designed for appropriate impact and sustainability. But do we know if we all mean the same thing? Years ago I asked what 'food security' meant in several Malian communities. In addition to expected answers ('being able to feed myself and my family'), I got answers such as 'many children' (both a result of having food and a means of generating sustained food security once those children are adults and employed) and 'children's schooling' (having enough surplus to send children to school year-round).
Not only do we need to understand what impact looks like to our participants, and start putting in place systems for them to track it and report back to us, but also what sustainability means to them. What can they self-sustain? What can we do more skillfully in future project design, learning from what worked best?
* Imagine what project impacts we could learn about if we returned 2-5 years after projects closed out to see what remarkable unintended impacts were created, as in Niger.
* Imagine how self-sustainable project activities could be if we designed future projects based on past project successes (tracking which activities communities around the world were most able to replicate after projects left).
* Imagine being the non-profit able to claim that the majority of its activities were self-sustained by communities 10 years later, and receiving the best star ratings from them!
* Imagine being a Minister of Development in Africa, Asia or Latin America able to vet incoming projects based on their likelihood of achieving Development Star results.
Imagine using national evaluators and building national systems of online, IATI-compliant documentation of what a country's own people consider the most sustainable impacts of projects, as well as what the missing pieces of the 'development' puzzle are. Listening could teach us a lot more than our logical framework (logframe) expectations of impact from our (donors') view, especially what communities imagine projects will lead to. Even farther out, imagine if together we calculated projects' economic benefits, the return on investment to participants (crudely put, the 'bang for the buck') of our projects in their view, as well as our efficacy in terms of resource use… that scary thought.
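As a crude, purely illustrative sketch of that 'bang for the buck' idea (the numbers and the participant_roi helper are hypothetical, not drawn from any real project), a participant-view return on investment could be computed as net benefit per dollar spent:

```python
def participant_roi(benefits: float, cost: float) -> float:
    """Net benefit per dollar spent, from the participants' point of view.

    Hypothetical illustration only: a real calculation would need discounting,
    attribution, and participants' own valuation of sustained benefits.
    """
    return (benefits - cost) / cost

# If communities value a project's sustained benefits at $120,000
# and the project cost $100,000 to deliver:
print(participant_roi(120_000, 100_000))  # 0.2, i.e. 20 cents of net benefit per dollar
```

The hard part, of course, is not the arithmetic but having participants themselves place the value on what was sustained.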
Doing Development Differently is as exciting as initiatives like USAID Forward, which supports country-led development, e.g. starting to channel funds directly to local implementers, and under which falls USAID’s CLA (Collaborating, Learning and Adapting) approach. Another great example is UK DFID’s BRIDGE, which incorporates strategic learning and adaptation into projects (adapting during implementation, rather than the quite fixed straitjacket most projects are tied into on signing agreements or contracts).
For we need to put ourselves out of a job: training local NGOs to supplant us, supporting capacity and creating systems in Ministries to take over all but our funding, and especially, listening to those who know best, the communities whose lives we aim to improve. Where are you seeing success brewing?
What’s likely to ‘stand’ after we go? A new consideration in project design and evaluation
This spring I had the opportunity not only to evaluate a food security project but also to use the knowledge gleaned for the follow-on project design. This Ethiopian Red Cross Society (ERCS) project, “Building Resilient Community: Integrated Food Security Project to Build the Capacity of Dedba, Dergajen & Shibta Vulnerable People to Food Insecurity” (with Federation and Swedish Red Cross support), targeted 2,259 households in Dedba, Dergajen and Shibta through the provision of crossbreed cows, ox fattening, sheep/goats, beehives and poultry, as well as water and agriculture/seedling inputs for environmental resilience. ERCS had been working with the Ethiopian government to provide credit for these food security inputs to households in Tigray, to be repaid in cash over time. During this evaluation, we met with 168 respondents (8% of total project participants).
Not only were we looking for impacts on food consumption (which were very good) and on income (good), we also probed the self-sustainability of activities. My evaluation team and I asked 52 of these participants more in-depth questions on income and self-sustainability preferences, using participatory methods to learn what they felt they could most sustain themselves after they repaid the credit and the project moved on to new participants and communities.
We also asked them to rank which input provided the greatest source of income. The largest incomes (above 30,000 birr, or $1,500) were earned from dairy and ox fattening, while a range of dairy, oxen, shoats and beehives provided over half of our respondents (40 people) with smaller amounts, between 1,000 and 10,000 birr ($50 to $500).
And even though 87% of total loans were for ox fattening and dairy cows (and beehives), which brought in far more income, while only 11% of loans were for sheep/goats (shoats) and 2% for poultry, the self-sustainability feedback was clear. In the chart below, poultry and shoats (and to a lesser degree, ox fattening) were what men and women felt they could self-sustain. In descending order, the vast majority of participants prioritized these activities:
To learn more about how we discussed Ethiopian participants' self-monitoring, see our blog.
So how can such a listening and learning approach feed program success and sustainability? We need to sit with communities to discuss the project’s objectives during design, and to manage our own and our donors’ impact expectations:
1) If raising income in the short term is the goal, the project would have offered only dairy and ox fattening to the communities, as those incomes gained the most. Note that fewer took this risk, as the credit for these assets was costly.
2) If they took a longer view, investing in what communities felt they could self-sustain, then poultry and sheep/goats were the activities to promote. This is because more people (especially women, who preferred poultry 15:1 and shoats 2:1 compared to men) could afford these smaller amounts of credit, as well as the feed to sustain the animals.
3) In order to learn about true impacts, we must return after project close to confirm the extent to which income increases continued, as well as the degree to which communities were truly able to self-sustain the activities the project enabled them to launch. How do our goals fit with the communities’?
What is important is seeing community actors, our participants, as the experts. It is their lives and livelihoods, and not one of us in international development lives there; they do.
What are your questions and thoughts? Have you seen such tradeoffs? We long to know…
[*NB: There were other inputs (water, health, natural resource conservation) which are separate from this discussion.]
Reposted from Feedback Labs: http://feedbacklabs.org/see-how-it-turned-out-feedback-loops-for-implementation-and-sustainability/
When I met Aminata in a central Malian village, she asked me whether I was with the people with the yellow trucks or the white trucks. That was her way of differentiating between development projects. I explained to her that I was doing (PhD) research, with neither. She asked me whether they (the trucks) would come back. I couldn’t answer, and therein lies the problem. “Country-led development” begins with communities being involved at every stage of a project. Ongoing community input during project design, implementation, and monitoring is needed for impact, local community ownership and sustainability. Developing “feedback loops” that facilitate two-way communication is key to building cultures of collaboration, adaptation and learning into international development programs. Valuing Voices sees data as another resource for delivering development, beyond serving the needs of donors and international non-profits.
The distance between our intentions and our reality
Too often, data is extracted from communities by development organizations in order to evaluate how well they fulfilled the project, rather than communities evaluating how well international development projects supported community needs. The best projects co-design interventions and monitoring with communities, yet too often communities have no mechanism to learn how their feedback during implementation or evaluations led to real changes. Such feedback leads to community buy-in, as there is proof that their voices matter and that they are co-driving development. Beyond submitting a formal report in a PDF to a donor, the development field has consistently failed to retain the data from projects once they close, much less leave it in-country, and rarely returns afterwards for follow-up evaluations. This further reinforces the idea that these communities are being exploited for resources or development experimentation, with no thought to long-term capacity, learning or sustainability.
Even farther from this vision of collaborative partnership in project conceptualization, design, and monitoring is what happens once these short- and long-term projects are closed out. Due to budget restrictions and bureaucratic habits, too often the task of sustaining these projects is handed over to local partners without funding, staff or data continuity. Cursed with “pilot-itis”, development initiatives too often lose sight of achieving scalability and sustainability. ‘Sustainability’ is often incorrectly defined only as whether the specific project got follow-on funding, rather than by asking communities what activities they were able to continue long after the project ended. Communities are asked for such feedback only 1% of the time, missing the great learning opportunities to be had from returning to assess these same projects 1-10 years later to see what was expected, what was unexpected, and what could be sustained. We therefore never truly learn what had the greatest prospect for replicability and scaling elsewhere.
How do we really show that we Value Voices?
At Valuing Voices, we believe that community members are our true clients. We have identified two kinds of feedback loops that are needed to make international development far more effective.
Country-Led Project Implementation Feedback Loops: Valuing Voices wants to create standardized methodologies and data collection processes that can be integrated into most international development projects to create feedback loops that continue working long after projects close. Valuing Voices believes key elements to developing collaborative feedback loops are:
Identification of different feedback loop methodologies. Based on what is appropriate for the population, Valuing Voices identifies different methodologies to create these loops through a mix of traditional and digital tools. This means explicitly targeting the building, capturing and sharing of feedback during a specific project, to test and document different methodologies and create standard processes and infrastructure. This includes participatory methodologies and studies such as Empowerment Evaluation and “Who Counts? The Power of Participatory Statistics”, as well as ways of evaluating qualitative storytelling. We must, of course, protect our respondents as well.
Use of a franchise approach to replicate and scale. The identified methodologies can be taught to local and in-country evaluators and development experts who can then “sell” those services to governments, local NGOs, international donors, and the private sector. Valuing Voices is a catalyst for this country-based franchise approach which strengthens national evaluation, capture and learning. It also allows for feedback loops to exist within a program as local evaluators provide feedback on impact and lessons for improvement. This leads to local empowerment, sustainability and aid transparency.
Use of digital tools: capture once, share forever. Following evaluations at the local level, Valuing Voices then rolls up national evaluations by sector, analyzes them for what is most sustained, and shares that learning around the world, influencing project design, implementation, funding and empowerment. In addition to generating feedback loops, this valuable feedback is entered into a curated database of findings for comparison across sectors such as agriculture, livelihoods, credit, natural resources, health and education. By using structured data analysis, we can compare feedback data and actual behavior. We will look at whether we are capturing the right associated metadata (date, location, project) to contextualize feedback and bring forth lessons our partners can learn from, analyzing similarities to improve program quality, organizational learning (of all partners) and sustainability prospects.
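As a minimal sketch of what capturing that associated metadata might look like (the field names and sample entries below are hypothetical, not Valuing Voices' actual schema), each piece of community feedback could be tagged with project, sector, location and date so that sustainability findings can later be compared across sectors:

```python
from dataclasses import dataclass
from collections import defaultdict

@dataclass
class Feedback:
    """One piece of community feedback, with metadata for cross-project comparison."""
    project: str      # project identifier
    sector: str       # e.g. agriculture, water, health, education
    location: str     # village or district where collected
    date: str         # ISO date of collection
    sustained: bool   # did the community report the activity continued post-close?
    comment: str      # the community voice itself

def sustained_rate_by_sector(entries):
    """Share of feedback entries per sector reporting a sustained activity."""
    totals, kept = defaultdict(int), defaultdict(int)
    for e in entries:
        totals[e.sector] += 1
        kept[e.sector] += e.sustained
    return {s: kept[s] / totals[s] for s in totals}

# Hypothetical sample entries, echoing the kinds of findings discussed above
entries = [
    Feedback("DAP-I", "water", "Dedba", "2016-05-01", False, "Water committee dissolved"),
    Feedback("DAP-I", "health", "Dedba", "2016-05-01", True, "Birth attendants still active"),
    Feedback("DAP-I", "health", "Shibta", "2016-05-02", True, "Care-seeking continued"),
]
print(sustained_rate_by_sector(entries))
```

Tagging every entry this way is what makes later roll-ups by sector, country or year possible without going back to re-collect data.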
Post-Project evaluation Feedback Loops: The eagerly awaited Local Systems: A Framework for Supporting Sustainable Development was published by USAID in May:
“More ‘ex-post’ evaluations — which are meant to determine the impact of a project after it is completed, sometimes years later… are needed to design and implement projects.”
This is great news, as in the last 24 years, USAID has not published a single post-project evaluation, and the World Bank has only published one. This could indicate that none were written, or none were seen as ‘publishable’.
Post-project (ex-post) evaluations should look at:
The resilience of expected impacts of the project two, five, and ten years after close-out;
The communities’ and NGOs’ ability to self-sustain those activities;
Positive and negative unintended impacts of the project, both immediately and in the long term;
Which activities the community and NGOs felt were successes but could not be maintained without further funding;
Lessons for other projects on which interventions were most resilient, i.e. which ones communities valued enough to continue themselves or NGOs valued enough to obtain additional funding for, as well as what was not resilient.
These evaluations are rarer than snow in Sri Lanka. Feedback loops post-implementation are key to understanding the sustainability of projects and to improve transparency, efficacy and learning. Valuing Voices has a handful of examples of ex-post (post-project) evaluation done by organizations such as Mercy Corps and Plan International as well as bilateral JICA, but shockingly 99.9% of all development projects just don’t go back and check.
Voices to be valued
These are just a few examples of areas where we can value the voices of our communities, be more effective and impactful with our efforts, and show our respect for communities’ input.
The time is ripe for growth, for these voices to be heard more loudly, telling us what they want development to be. Join us in advocating for funding for such feedback, join us as partners in our field sites, as partners in ICT database creation, as voices for communities.
Communities want their voices heard. Sustained development depends on it.
Jindra Cekan, PhD is the founder of Valuing Voices at Cekan Consulting LLC. She has worked in international development for 25 years in participatory design, monitoring & evaluation and knowledge management in over 20 countries. Her PhD was in Mali, “Listening to One’s Clients” and she consults to non-profit, public (USAID), foundation and private sectors.
Siobhan Green, MA is the founder of Sonjara, Inc, and a member of ValuingVoices. She has worked in international development since 1992, and specializes in ICTs for development, knowledge management, and technology for monitoring and evaluation. Her master’s thesis was on “The Internet in Africa: Policy Perspectives and Approaches” in 1997, and she works with USAID and other USG clients, non-profits, and for-profit partners.