by valuingvoicesjin | Jan 20, 2016 | Accountability, Aid effectiveness, ex-post evaluation, Exit strategies, Food for Peace (FFP), Participants, partners, Rural Development, self-sustainability, Sustainability, Sustainable development, USAID
Learning about Sustainability and Exit Strategies
from USAID’s Food Assistance Projects
USAID overall, and Food for Peace (FFP) specifically, have become far more progressive under the Obama Administration and Administrator Rajiv Shah, with a much greater focus on accountability and results. Those unfamiliar with USAID's Food For Peace should know it has been a large channel of international assistance for over 60 years and is not a small funding instrument. For 2016 alone, it has proposed spending $1.75 billion to feed 47 million people through humanitarian and development programs implemented by non-profits, for-profits and the UN's World Food Program [1]. Given this scale of resources, it is highly surprising that while many documents in FFP's archive call for post-project evaluation and there are a handful of desk reviews, it has commissioned only two evaluations with new fieldwork in the last 30 years: the four-country study discussed here and a recently published one on Uganda (see our Catalysts page). The excellent 2015 synthesis report of the four-country study, by Rogers and Coates, is presented below [2].
Commissioned by USAID, Tufts University and FHI360 have done a remarkably thorough two-to-three-year post-project evaluation of four (Title II) food-assisted programs comprising 12 projects in Bolivia, Honduras, India, and Kenya that closed out in 2009. The methodologies used are clearly outlined (itself a boon to our fledgling field), as are limitations and comments on context, findings and recommendations. It was no small feat to compare activities across four countries and so many sectors (some of which were supported by provision of US food aid resources, others with in-kind or cash inputs): maternal and child health and nutrition; water and sanitation; agriculture, livestock, and rural income generation; natural resource management; school feeding; and micro-savings and loans. The study also covered many implementers, from CARE, ADRA and Save the Children to World Vision, CRS and Food for the Hungry.
In this document, Dina Esposito, the Director of FFP states “We commissioned this report with the objective of determining what factors enhanced the likelihood of sustained project benefits, in order to improve our guidance for future food assistance development projects.… FFP development projects are designed to reduce the long-term need for food assistance by strengthening the capacity of developing societies to ensure access to nutritious food for their most vulnerable communities and individuals, especially women and children. The study team looked at 12 FFP development projects across four countries and asked not only what was achieved by each project’s end?, but also, what of those achievements remained one year after project close-out? and two years after? This rigorous, retrospective approach is not widely done, but is essential if we are to understand the true impacts of our investments. To be effective, development projects must result in changes that last beyond the duration of the project themselves.”
Process and findings:
The researchers compared baseline, midline and endline evaluations and exit strategy documents to new mixed-method data collection. There were four main findings:
1) Impact vs. Sustainability trade-offs: Evidence of project success at the time of exit (as assessed by impact indicators) did not necessarily imply sustained benefit over time. Just because projects were deemed successful at exit does not mean those benefits continued after closeout. “Moreover, the study found that focusing exclusively on demonstrating impact at exit may jeopardize investment in longer-term sustainability.” Valuing Voices found the same in Ethiopia in research done in 2013 [3].
2) Preconditions to successful sustainability: In addition to an ongoing source of resources, good technical and managerial capacity, and sustained motivation of participants and partners, linkages to governmental organizations and/or other entities were key to continuity and sustainability of outcomes and new impacts. “No project in this study achieved sustainability without [the first] three of them in place before the project ended,” and linkages between community partners and the public/private sector were critical for handover (Figure 1, below). Further, a gradual transition from project-supported activities to independent operation was important for sustainability. “Sustainability was more likely when projects withdrew gradually, allowing community-based organizations to develop the capacity to operate independently.”

[Figure 1: key factors for sustained outcomes, from the synthesis report [2]]
3) Free resources can threaten sustainability unless they are replaced, and there is no one-size-fits-all resourcing model:
Using incentives has costs. “Free supplementary food in maternal and child health and nutrition projects or free marketing services in agriculture projects created expectations in many projects that could not be sustained once resources were withdrawn”. Valuing Voices found the same in research in Niger (report imminent). Other financing options, such as free health care or fee-for-service models, remain unsystematically studied with respect to how they foster sustainability in different sectors.
4) External factors (climate, economy) can affect sustainability: The operating context and exogenous shocks (e.g., economic, legal and climatic) also affected the sustainability of project benefits, positively or negatively.
Most tellingly, the authors warned that “sustainability plans cannot be based on the hope that activities and benefits will continue in the absence of the key factors identified in this study.” Throughout the report and in the pending country-specific studies, they outlined the assumptions projects made about sustainability in order to exit and close out, which were disproved to varying degrees, such as:
- Community health workers would continue to provide services even without remuneration
- Households could continue to access nutritious food from their own (increased) production or purchases, and would have the time and know-how to prepare such food
- Farmers would pay for inputs with profits from increased production and commercialization, and could meet the quantity and quality requirements of long-term contracts
- Community members would recognize the tangible benefit of natural resource management activities and would be motivated to continue them without further inputs or remuneration
- Water committees would have sufficient administrative capacity and resources to manage their budgets effectively
- Community-based organizations had strong institutional capacity
- Partner organizations would continue to provide teacher training
- Government would have the resources and commitment to support future needs
The country studies with detailed findings are still forthcoming, but these examples illustrate the range of sustainability. There were some very well-sustained positive results in Food Production (India, by area) and Child Health Growth Monitoring (Bolivia, by consortium implementer) between baseline or endline and follow-up 2-3 years later:

[Figure: sustained food production results, India [2]]

[Figure: sustained child growth monitoring results, Bolivia [2]]
As well as some far more mixed or negative results in examples across all the Water and Sanitation projects:

[Figure: mixed water and sanitation results across the four countries [2]]
And far less stellar results in Maternal Child Health’s Community Health Workers (Kenya):

[Figure: community health worker results, Kenya [2]]
The authors recommended not only ensuring that resources, capacity, motivation and linkages are in place before exiting, but also institutionalizing sustainable approaches throughout project design and evaluation, including in solicitations and applications, project assessments, project management and knowledge management. They also recommended not only phasing down gradually toward exit but also extending more such evaluations beyond the five years of implementation, assessing impacts as long as 10 years afterward. This requires some sizeable revisions to how development is done at Food For Peace.
All of these findings and recommendations are near and dear to those of us at Valuing Voices. We strongly commend Food For Peace and ask for many more such studies, for unless we know what worked best and why, how do we know what to design next, together with our partners and participants, for real sustainability?
Sources:
[1] InterAction. (2015). Choose to Invest 2016: Food For Peace Title II. Retrieved from https://web.archive.org/web/20150307160559/https://www.interaction.org/choose-to-invest-2016/food-for-peace-title-II
[2] Rogers, B. L., & Coates, J. (2015, December). Sustaining Development: A Synthesis of Results from a Four-Country Study of Sustainability and Exit Strategies among Development Food Assistance Projects. Retrieved from https://www.fantaproject.org/research/exit-strategies-ffp
[3] Cekan, J., PhD. (2014, April 7). Evaluation of ERCS/Tigray’s “Building Resilient Community: Integrated Food Security Project to Build the Capacity of Dedba, Dergajen & Shibta Vulnerable People to Food Insecurity”. Retrieved from http://adore.ifrc.org/Download.aspx?FileId=147802&.pdf
by Jindra Cekan | Sep 24, 2015 | Accountability, Brookings Institute, Center for Global Development, Evidence-based policy, ex-post evaluation, Exit strategies, foreign aid, IEG, Millennium Development Goals, Millennium Challenge Corporation, Participants, post-project evaluation, self-sustainability, Sustainable development, USAID, USDA, World Bank
Sustainable Development Goals and Foreign Aid–
How Sustainable and Accountable to Whom?
Jindra Cekan, PhD of ValuingVoices
World leaders will paint New York City red next week at the UN Summit adopting the new post-2015 development agenda. The agreed plans set 17 new ‘Sustainable’ Development Goals (SDGs) to be achieved by 2030. These are successors to the Millennium Development Goals (MDGs).
While many of us have heard of them, how many of us know whether we met them, and what are the prospects for the Sustainable Development Goals to do as well or better? According to Bill Gates, the MDGs were “the best idea for focusing the world on fighting global poverty that I have ever seen.” The Brookings Institution goes on to praise the eight MDGs for aiming high, setting targets such as halving world poverty, reducing child mortality, achieving universal primary education, and promoting gender equality and women's empowerment [1]. Donor countries pledged three times more than they had until that time (raising the percentage of gross national income devoted to international development assistance from 0.2% to 0.7%, not huge amounts but laudable). From 2000 to 2015 extreme poverty did fall by half (although some argue China and Asia were well on their way before this aid came), and in some countries (Senegal, Cambodia) child deaths fell by half. Global health improved via huge coalitions on immunization and HIV/AIDS. Yet while poverty dropped and health improved, the hunger, environment and sanitation targets were not met, for instance, and there are still 850 million people hungry worldwide (11% of all people) [2]. Still, the gains far outweigh the losses. The new Sustainable Development Goals are to be achieved in 15 years, by 2030. The UN and member nations will track a remarkable 169 indicators, “monitoring progress towards the SDGs at the local, national, regional, and global levels…” [3]

Overall, one would feel rather tickled by these results — not 100% but still amazing given global disparities. Nancy Birdsall of the Center for Global Development thinks measurement is not the goal: “Growing global interconnectedness means that the problems the world faces, that hold back development, are increasingly shared… we’re making a promise to ourselves that we are one world, one planet, one society, one people, who look out for each other…” [4]
But I’m a fan of measurable results, I must admit. One would logically think that our international development projects funded by the U.S. Agency for International Development (USAID), the U.S. Department of Agriculture (USDA), the Millennium Challenge Corporation and others could outline how they caused the good results that some MDGs showed. USAID’s website links its work to the MDGs clearly: “in September 2010, President Obama called for the elevation of development as a key pillar of America’s national security and foreign policy. This set forth a vision of an empowered and robust U.S. Agency for International Development that could lead the world in solving the greatest development challenges of our time and, ultimately, meet the goal of ending extreme poverty in the next generation” [5]. It goes on to talk about work to “Promote sustainable development through high-impact partnerships and local solutions“ [5]. USDA’s Foreign Agricultural Service states that its “non-emergency food aid programs help meet recipients’ nutritional needs and also support agricultural development and education. These food assistance programs, combined with trade capacity building efforts, support long-term economic development” [6]. Finally, MCC states it is “committed to delivering results throughout the entire lifecycle of its investments. From before investments begin to their completion and beyond… MCC’s continuum of results is designed to foster learning and accountability” [7].
Maybe. We don’t actually know because 99% of the time we never return to projects after they end to learn how sustainable they actually were. We could be fostering super-sustainability. Or not.
For international development works on 1-5 year programming cycles. Multi-million dollar requests for proposals are designed and sent out by these funders to non-profit or for-profit implementers. Projects are awarded to one or more organizations, quite rigorously monitored, and most have very good results. Then they end. Since 2000, the US Agency for International Development has spent $280 billion on country-to-country development and humanitarian aid projects as well as funding multilateral aid, and in spite of much work evaluating the final impact of projects at the end, it never goes back [8]. The EU has spent a staggering $1.4 trillion [9]. USAID has funded one evaluation in the last 30 years that has gone back to see what communities and partners could sustain, and that is about to be published. A handful of international non-profits have taken matters into their own hands and funded such studies privately. The EU’s track record is even more dismal, with policies being proposed but not implemented [10]. The World Bank, which has funded over 12,000 projects, has an independent evaluation arm, the Independent Evaluation Group. It returned after projects closed out to evaluate results only 33 times, and we found only three of those evaluations systematically talked to project participants about what was sustained.
The bottom line is: how do we know anything we’ve done in international development or the SDGs is sustainable unless we go back to see? What amazing or awful results must we learn from for future design? If we do not return, are we really accountable to our taxpayers and our real clients: the participants and the recipient countries themselves?
The UN has pledged to have an SDG report card to “measure progress towards sustainable development and help ensure the accountability of all stakeholders for achieving the SDGs….[and a] Global Partnership for Sustainable Development Data, to help drive the Data Revolution….by using data we can ensure accountability for the policies that are implemented to reduce global and local inequities” [3]. I completely agree that having citizen-generated data at the local, national, regional, and global levels is very important “to fill gaps in our knowledge, establish global norms and standards and…help countries develop robust national strategies for data development.” And as the World Bank IEG’s Caroline Heider states, measuring them is complex (e.g., agriculture is affected by climate change, and measuring changes across sectors is hard) but worthwhile [11].
While SDG data will tell us which donor-funded activities and policies work, very few in international development know how sustainably our programming works for our ultimate clients, our participants and partners. And the price needn’t be high: a recent post-project evaluation we did cost under $120,000, a pittance given that the project cost over $30 million and reached 500,000 people. We found clear (mostly successful) lessons. USAID has, after 30 years, funded one post-project evaluation that also has clear, cost-effective lessons (forthcoming). Really, in this era of cost-effectiveness, don’t we want data on what worked best (note to self: do that more) and what worked least (note to self: stop doing that)?
Learning what participants and partners could self-sustain after we left is actually all we should care about. They want to get beyond aid. Shouldn’t we know if we are getting them there? Self-sustainability of outcomes is a clear indicator of good Return on Investment of our resources and expertise and their time, effort and expertise. It shows us we want to put ourselves out of a job, having built country-led development that really has a future in-country with their own resources.
Two steps are:
1) Donors should add funding equivalent to 1% of program value, for five years after closeout, for all projects over $10 million, to support local capacity-building of NGOs and national partners to take over implementation and to evaluate lessons about sustainability outcomes across sectors (a simple arithmetic sketch of one reading of this set-aside follows below).
2) A cross-donor fund for country-led analysis of such learning, plus lessons on what capacity needs to be built in-country to take over programming. This needs support from regionally-based knowledge repositories and learning centers in Africa, Asia, etc. Online and physical centers could house implementer reports and evaluations, and analyze and share lessons learned across sectors and countries from post-project evaluations of projects that closed out 2-7 years ago, to inform future design.
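To make the arithmetic of step 1 concrete, here is a minimal Python sketch of one possible reading of that set-aside. The post does not specify whether the 1% is per year or in total, so this version assumes a one-time 1% of program value spread evenly over the five post-closeout years; the function name and parameters are ours, purely for illustration.

```python
def sustainability_set_aside(program_value_usd: float,
                             threshold_usd: float = 10_000_000,
                             rate: float = 0.01,
                             years: int = 5) -> float:
    """Annual post-closeout sustainability funding for one project (hypothetical formula)."""
    if program_value_usd < threshold_usd:
        return 0.0  # the proposed set-aside applies only to projects over $10 million
    return program_value_usd * rate / years

# Example: a $30 million project would carry $60,000 per year for five years
# ($300,000 in total) for local capacity-building and post-project evaluation.
print(sustainability_set_aside(30_000_000))  # 60000.0
```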
Now that is accountability. Let’s advocate for sustainability funding, data and learning now.
What are your suggestions? How can we improve sustainability?
Sources:
[1] McArthur, J. (2013, February 21). Own the Goals: What the Millennium Development Goals Have Accomplished. Retrieved from https://www.brookings.edu/articles/own-the-goals-what-the-millennium-development-goals-have-accomplished/
[2] End Poverty 2015 Millennium Campaign. (n.d.). MDG 1: Eradicate Extreme Poverty and Hunger. Retrieved from https://www.endpoverty2015.org/mdg-success-stories/mdg-1-end-hunger/
[3] Sharma, S. (2015, August 20). From Aspirations to Reality: How to Effectively Measure the Sustainable Development Goals. Retrieved from https://www.huffpost.com/entry/measuring-the-peoples-age_b_7999640
[4] Mirchandani, R. (2015, September 22). Does It Matter If We Don’t Achieve the SDGs? A New Podcast with Nancy Birdsall and Michael Elliott. Retrieved from https://www.cgdev.org/blog/does-it-matter-if-we-dont-achieve-sdgs-podcast-nancy-birdsall-and-michael-elliott
[5] USAID. (n.d.). USAID Forward. Retrieved from https://www.usaid.gov/usaidforward/usaid-forward-2014-archive
[6] United States Department of Agriculture (USDA). (n.d.). Food Assistance. Retrieved from https://www.fas.usda.gov/topics/food-assistance
[7] Millennium Challenge Corporation. (n.d.). Our Impact. Retrieved 2016, from https://web.archive.org/web/20160325135258/https://www.mcc.gov/our-impact
[8] Cekan, J. (2015, March 13). When Funders Move On. Retrieved from https://ssir.org/articles/entry/when_funders_move_on
[9] Global Issues. (n.d.). Foreign Aid for Development Assistance: Foreign Aid Numbers in Charts and Graphs. Retrieved from https://www.globalissues.org/print/article/35#globalissues-org
[10] Florio, M. (2009, November/December). Sixth European Conference on Evaluation of Cohesion Policy: Getting Incentives Right — Do We Need Ex Post CBA? Retrieved from https://ec.europa.eu/regional_policy/archive/conferences/evaluation2009/abstracts/florio.doc
[11] Heider, C. (2015, September 15). Evaluation Beyond 2015: Implications of the SDGs for Evaluation. Retrieved from https://ieg.worldbankgroup.org/blog/evaluation-beyond-2015-implications-sdgs-evaluation
by Jindra Cekan | Aug 28, 2015 | Accountability, Aid effectiveness, Beneficiaries, Center for Global Development, Charity Navigator, Effective Philanthropy, Evaluation, Hewlett Foundation, International aid, Making All Voices Count, OECD, ONE, OXFAM, post-project evaluation, self-sustainability, stakeholders, Sustainable development, USAID
Altruistic Accountability… for Sustainability
Many of us in international development feel a sense of responsibility for others to be well, and for our work to improve their lives as well as to be done in good stewardship of aid resources, optimizing their impact. As Matthieu Ricard writes, "Altruism is a benevolent state of mind. To be altruistic is to be concerned about the fate of all those around us and to wish them well. This should be done together with the determination to act for their benefit. Valuing others is the main state of mind that leads to altruism." We also feel a responsibility to our international aid donors and taxpayers. We who implement, monitor and evaluate projects work to ensure that the altruism of aid is responsible to both donors and recipients.
Altruism appears most vividly when implementers issue appeals after disasters, with millions donated as a result, but development workers are also unsung heroes. Organizations such as Charity Navigator, ONE and the Center for Global Development report on how well US organizations spend funds and track donor-country policy accountability. Thoughtful donor studies, such as the French Development Agency's OECD study, report on the power of the AidWatch and Reality of Aid initiatives in Europe for their taxpayers.
But who is pushing for our donors' accountability to the participants in those countries themselves? While USAID funds many program evaluations, some of which “identify promising, effective…strategies and to conduct needs assessments and other research to guide program decisions”, they always come at project end, rather than looking at the sustainability of outcomes and impacts, and they focus on Congressional and domestic listeners. This is no small funding and no small audience. The US Department of State/USAID FY13 Summary report states that in fiscal year 2013, USAID had $23.8 billion to disburse, over $12 billion of it for programming. While total beneficiary (participant) numbers were not provided, emergency food assistance alone used $981 million for nearly 21.6 million people in 25 countries.
So who is a watchdog for what results? OXFAM may excellently highlight opportunities for better programming. 3ie does many studies looking at projected impact and conducts systematic reviews (but only three were post-project). Challenges such as Making All Voices Count may fund channels for country nationals to hold their own governments responsible, but can in-country project participants ever demand sustainable results from anyone but their own governments? Herein lies the crux of the issue. Unless governments demand it (unlikely in ‘free’ aid), only pressure from donor-country nationals (you? we?) can push for changes.
At the core of the Valuing Voices mission is advocacy for altruistic accountability for the sustainability of projects, with country ownership at all levels. For us, this involves valuing, and also giving voice to, those supported by, and tasked with carrying out, ‘sustainable projects’. Unless we know how sustainable our development projects have been, we have only temporarily helped those in greatest need. This means looking beyond whether funding continued to whether the benefits of an activity, or even entire local NGOs tasked with sustaining it, actually continued after funding was withdrawn. Unless we strive to learn, in the views of the participants themselves, what continued to work best or what failed to be continued after projects left, we can let down the very people who have entrusted us with hopes of a self-sustainable future of well-being. Unless we listen to project staff and local partners to learn what program staff felt they did right or wrong, and what national partners felt they were supported to keep doing right, we minimize the success of future projects. While increasing numbers of organizations such as the Hewlett Foundation fund work to “increase the responsiveness of governments to their citizens’ needs. We do this by working to make governments more transparent and accountable,” the long-term effectiveness of our donor development assistance is not yet visible.
The OECD guidelines on corporate accountability and transparency are illuminating. Adapting them from the State-Corporate relationship to a Non-profit-State one is interesting. For how well have we considered who ‘owns’ these development projects in practical terms, from inception onward? Our donors? Implementing agencies? Local partners and communities?
OECD Guidelines on Corporate Governance of State-Owned Enterprises
1: The State Acting as an Owner
2: Equitable Treatment and Relations with Shareholders
3: Ensuring an Effective Legal and Regulatory Framework for State-Owned Enterprises
4: Transparency and Disclosure
How well do we design projects along these lines to do this successfully? Not terrifically:
- Too often ‘stakeholders’ are not consulted at the very inception of the proposal design, only at design or implementation.
- Too often our work is aimed at making only our ‘clients’, our donors, happy with our results, rather than the country nationals who are tasked with self-sustaining them.
- Too often handover is done at the 11th hour, rather than transferring responsibilities throughout implementation or building local capacity so that those taking over can be the projects’ true owners.
But it is coming, through changing societal trends. On the data-access front, USAID (and, differently, other European donors) has promised to modernize diplomacy and development by 2017 by “increas[ing] the number and effectiveness of communication and collaboration tools that leverage interactive digital platforms to improve direct engagement with both domestic and foreign publics. This will include increasing the number of publicly available data sets and ensuring that USAID-funded evaluations are published online, expanding publicly available foreign assistance data, increasing the number of repeat users of International Information.” Now to generate and add self-sustainability data to inform future projects!
Second, on the human-nature front, our basic nature, according to Ricard, lends itself to altruism. “Let's assume that the majority of us are basically good people who are willing to build a better world. In that case, we can do so together thanks to altruism. If we have more consideration for others, we will promote a more caring economy, and we will promote harmony in society and remedy inequalities.” Let’s get going…
by Jindra Cekan | May 29, 2015 | Aid effectiveness, Community Driven Development, Donor Driven Development, Evidence-based policy, impact evaluation, International aid, local capacity building, Local Participants, Open Data, self-sustainability, Sustainable development
It's not just Me, it's We
Many of us want to be of service. That's why we go into international development, government, and many other fields. We hope our words and deeds help make others' lives better.
For 25 years I've written proposals, designed and evaluated projects, knowing that while I could not live in-country due to my family constraints, I could get resources there and help us learn how well they are used. I became a consultant so I could raise my kids without being on the road 60% of the time, one who promotes national consultants so that African, Asian, Latin American and European experts evaluate their own projects. I put myself into the shoes of our participants and realized any local person my age wants to leave behind a better, more sustainably viable livelihood for her family, so I looked to see what was most sustained and how we knew it. I took my love of participatory approaches of listening to and learning from the end-users and founded Valuing Voices to promote learning from projects whose activities were most self-sustained.
Yet this is not enough. I am one person with only my views (however great I think they are :). Many of us have great views and knowledge about how best to promote sustainable development. For the state of things today seems to me to be this: too often our donors have limited funds, for a limited time, with goals they limit, because they can only assure success by holding the outcome and funding reins so tightly that none of us is fostering self-sustainable development, which takes time and faith in one's participants. I have found that the lack of post-project evaluation (see ValuingVoices.com blogs such as this one on causes and conditions being ripe for sustainability) is a symptom, but doing such evaluations also provides a huge opportunity to design projects well, learning from what communities were able to sustain themselves, why and how it worked, and how we can do it well again. For instance, from my fieldwork I have realized that questions such as ‘sustainable by whom, for how long’ are ones I never asked, and I don’t think others have good ways to go about them (yet)… unless you have ideas!
How can we foster aid effectiveness, effective philanthropy, community-driven development, community-driven and NGO-led impact, and effective policy? It takes many of us – giraffes, ostriches, wildebeest, gazelles, each with our own expertise.
This takes Time to Listen, respect for local capacities (Doing Development Differently) and an openness to step out of the limelight of 'we saved you' to ask "how can we best work together for a sustainable world?". This takes you, me, WE. One way is to join together in a LinkedIn Group, Sustainable Solutions for Excellent Impact, where we can discuss how we can best design, implement, evaluate, fund, promote (etc.!) projects that are programmatically, financially, institutionally and environmentally sustainable. Please join us!
by Kelsey Lopez | Apr 7, 2015 | Beneficiaries, Estonia, Evaluation, ex-post evaluation, IEG, Local Participants, Niger, Nigeria, Participants, Participation, post-project evaluation, PPAR, self-sustainability, Sustainability, Sustainable development, Valuing Voices, World Bank
IEG Blog Series Part II: Theory vs. Practice at the World Bank

In Part I of this blog series, I described my research process for identifying the level to which the World Bank (WB) is conducting participatory post project sustainability evaluations for its many international development projects. Through extensive research and analysis of the WB’s IEG database, Valuing Voices concluded that there is a very loosely defined taxonomy for ex-post project evaluation at the WB, making it difficult to identify a consistent standard of evaluation methodology for sustainability impact assessments.
In particular, we were concerned with identifying examples of direct beneficiary involvement in evaluating long-term sustainability outcomes, for instance by surveying or interviewing participants to determine which project objectives were self-sustained…and which were not. Unfortunately, it is quite rare for development organizations to conduct ex-post evaluations that involve all levels of project participants to contribute to long-term information feedback loops. However, there was one document type in the IEG database that gave us at Valuing Voices some room for optimism: Project Performance Assessment Reports (PPARs). PPARs are defined by the IEG as documents that are,
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department, synonymous with IEG]. To Prepare PPARs, staff examines project files and other documents, interview operation staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries” [1].
The key takeaway from this definition is that these reports supplement desk studies (ICRs) with new fieldwork data provided, in part, by the participants themselves. The IEG database lists hundreds of PPAR documents, but I focused on only the 33 documents that came up when I queried “post-project”.
Here are a few commonalities to note about the 33 PPARs I studied:
- They are all recent documents – the oldest was published in 2004, and the most recent in 2014.
- The original projects that are assessed in the PPARs were finalized anywhere from 2-10+ years before the PPAR was written, making them true ex-posts
- They all claimed to involve mission site visits and communication with key project stakeholders, but they did not all claim to involve beneficiaries explicitly
Although the WB/IEG mentions in its definition of a PPAR that beneficiary participation takes place in “most” of the ex-post missions back to the project site, Valuing Voices was curious to know whether there is a standard protocol for the level of participant involvement, the methods of data collection, and ultimately, the overall quality of the new fieldwork data collected to inform PPARs. For this data quality analysis, Valuing Voices identified these key criteria (a minimal coding sketch follows the list below):
- Overall summary of evaluation methods
- Who was involved, specifically? Was there direct beneficiary participation? What were the research methods/procedures used?
- What was the level of sustainability (termed Risk to Development Outcome* after 2006) established by the PPAR?
- Was this different from the level of sustainability as projected by the preceding ICR report?
- Were participants involved via interviews? (Yes/No)
- If yes, were they semi-structured (open-ended questions allowing for greater variety/detail of qualitative data) or quantitative surveys
- How many beneficiaries were interviewed/surveyed?
- What % of total impacted beneficiary population was this number?
- Was there a control group used? (Yes/No)
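To make these criteria concrete, here is a minimal Python coding sketch; the class, field names, category labels and sample records are ours for illustration only, not the actual coding instrument or data from the reports.

```python
from dataclasses import dataclass
from typing import Optional
from collections import Counter

@dataclass
class PPARRecord:
    """One PPAR coded against the criteria above (hypothetical field names)."""
    report_id: str
    beneficiaries_interviewed: bool      # any direct participant interviews?
    interview_style: Optional[str]       # "semi-structured", "survey", or None if not described
    n_interviewed: Optional[int]         # number of participants, if reported
    pct_of_population: Optional[float]   # share of impacted population, if reported
    sustainability_rating: str           # e.g. "Likely" (post-2006: Risk to Development Outcome)
    icr_rating: Optional[str]            # rating projected by the preceding ICR
    control_group: bool

def participation_category(record: PPARRecord) -> str:
    """Bucket a record the way the findings below group the 33 PPARs."""
    if record.beneficiaries_interviewed and record.interview_style is not None:
        return "detailed participant methods"
    if record.beneficiaries_interviewed:
        return "participants mentioned, no methods given"
    return "no direct participant input"

# Invented placeholder records, only to show the tallying mechanics.
coded = [
    PPARRecord("A", True, "semi-structured", 120, 10.0, "Moderate", "Likely", False),
    PPARRecord("B", True, None, None, None, "Likely", "Likely", False),
    PPARRecord("C", False, None, None, None, "Likely", "Likely", False),
]

counts = Counter(participation_category(r) for r in coded)
for category, n in counts.items():
    print(f"{category}: {n}/{len(coded)} ({100 * n / len(coded):.0f}%)")
```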
Despite our initial optimism, we determined that the quality of the data provided in these PPARs was highly variable, and overall quite low. A summary of the findings is as follows:
1. Rarely were ‘beneficiaries’ interviewed
- Only 15% of the PPARs (5) gave details about the interview methodologies, and of these only 3% (1) described in detail how many participants were consulted, what they said and how they were interviewed (Nigeria 2014 [2]).
- 54% of the reports (18) mentioned beneficiary input in the data collected during the post-project mission, but gave no specific information on the number of participants involved; their voices were not cited, nor was any information included on the methodologies used. The vast majority only vaguely referenced the findings of the post-project mission rather than data collection specifics. A typical example of this type of report is Estonia 2004 [1].
- 30% of the PPARs (10) involved no direct participant/beneficiary participation in the evaluation process at all, with these missions including only stakeholders such as project staff, local government, NGOs, donors, consultants, etc. A typical example of this type of report is Niger 2005 [3].
These percentages are illustrated in Figure 1, below, which gives a visual breakdown of the number of reports that involved direct participant consultation with detailed methodologies provided (5), the number of reports where stakeholders were broadly consulted but no specific methodologies were provided (18), and the number of reports where no participants were directly involved in the evaluation process (10).
[Figure 1: number of PPARs by level of participant consultation]
2. Sustainability of project outcomes was unclear
- In 54% of cases, there was some change in the level of sustainability from the original level predicted in the ICR (which precedes and informs the PPAR) to the level established in the PPAR. Ironically, 22 of the 33 cases were classified as Likely, Highly Likely or Significantly Likely to be sustainable, yet participants were not asked for their input.
- So on what basis was sustainability judged? Of the three cases with high participant consultation, the Nigerian project’s sustainability prospects (where 10% of participants were asked for feedback) were rated only moderate, while India (also 10% feedback) and Kenya (14-20%) were both classified as likely to be sustainable.
Along the Y axis of Figure 2, below, is the spectrum of sustainability rankings observed in the PPARs, ranging from “Negligible to Low” up to “High”. For each of the projects analyzed (there are 60 total projects accounted for in this graph, as some PPARs covered up to four individual projects in one report), the graph illustrates how many projects consulted participants, and how many failed to do so, for each evaluation outcome. As we can see, the majority of cases that were determined to be highly or significantly sustainable either did not consult participants directly or consulted stakeholders only broadly, with limited community input represented in the evaluation. These are interesting findings, because although a lot of supposed sustainability is being reported, very few cases actually involved community participants in a meaningful way (to our knowledge, based on the lack of community consultation discussed in the reports). Unless these evaluations take place at the grassroots level, engaging the participants in a conversation about the true self-sustainability outcomes of projects, you cannot really know how sustainable a project is by talking only with donors, consultants, governments, etc. Are the right voices really being represented in this evaluation process? *Note: the “Sustainability” ranking was retitled “Risk to Development Outcome” in 2006.
[Figure 2: sustainability rankings by level of participant consultation]
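For readers who want to reproduce this kind of breakdown, here is a rough sketch of how a Figure 2-style crosstab could be assembled from coded records like the hypothetical PPARRecord above; the (rating, participation) pairs below are invented placeholders, not the study's data.

```python
from collections import defaultdict

# Invented (sustainability rating, participation category) pairs, for illustration only.
coded_projects = [
    ("High", "no direct participant input"),
    ("Significant", "participants mentioned, no methods given"),
    ("Moderate", "detailed participant methods"),
    ("Negligible to Low", "no direct participant input"),
]

# Rows = sustainability ranking, columns = level of participant consultation.
crosstab = defaultdict(lambda: defaultdict(int))
for rating, participation in coded_projects:
    crosstab[rating][participation] += 1

for rating, row in sorted(crosstab.items()):
    print(rating, dict(row))
```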
While projects were deemed sustainable, this was based on very little ‘beneficiary’ input. The significance of this information is simple: not enough is being done to ensure beneficiary participation in ALL STAGES of the development process, especially in the post-project time frame, even by prominent development institutions like the WB/IEG. While we commend the Bank for currently emphasizing citizen engagement via beneficiary feedback, this still seems to be more of a guiding theory than a habitual practice [4]. Although all 33 documents I analyzed claimed there was “key stakeholder” or beneficiary participation, the reality is that no consistent procedural standard for eliciting such engagement could be identified.
Furthermore, the lack of specific details elaborating upon interview/survey methods, the number of participants involved, the discovery of any unintended outcomes, etc. creates a critical information void. As a free and public resource, the IEG database should not only be considered an important internal tool for the WB to catalog its numerous projects throughout time, but it is also an essential external tool for members of greater civil society who wish to benefit from the Bank’s extensive collection of resources – to learn from WB experiences and inform industry-wide best practices.
For this reason, Valuing Voices implores the World Bank to step up its game and establish itself as a leader in post-project evaluation learning, not just in theory but also in practice. While these 33 PPARs represent just a small sample of the over 12,000 projects the WB has implemented since its inception, Valuing Voices hopes to see much more ex-post project evaluation happening in the future through IEG. Today we are seeing a decisive shift in the development world towards valuing sustainable outcomes over short-term fixes, towards informing future projects based on long-term data collection and learning, and towards community participation in all stages of the development process…
If one thing is certain, it is that global emphasis on sustainable development will not be going away anytime soon…but are we doing enough to ensure it?
Sources:
[1] World Bank OED. (2004, June 28). Project Performance Assessment Report: Republic of Estonia, Agriculture Project. Retrieved from http://documents.worldbank.org/curated/en/173891468752061273/pdf/295610EE.pdf
[2] World Bank OED. (2014, June 26). Project Performance Assessment Report: Nigeria, Second National Fadama Development Project. Retrieved from https://ieg.worldbankgroup.org/sites/default/files/Data/reports/Nigeria_Fadama2_PPAR_889580PPAR0P060IC0disclosed07070140_0.pdf
[3] World Bank OED. (2005, April 15). Project Performance Assessment Report: Niger, Energy Project. Retrieved from http://documents.worldbank.org/curated/en/899681468291380590/pdf/32149.pdf
[4] World Bank. (n.d.). Citizen Engagement: Incorporating Beneficiary Feedback in all projects by FY 18. Retrieved 2015, from https://web.archive.org/web/20150102233948/http://pdu.worldbank.org/sites/pdu2/en/about/PDU/EngageCitizens
by Kelsey Lopez | Mar 2, 2015 | Accountability, Beneficiaries, Evaluation, Evidence-based policy, ex-post evaluation, Feedback loops, impact evaluation, Local Participants, Participants, Participation, post-project evaluation, Results, self-sustainability, Sustainability, Sustainable development, Transparency, Uncategorized, Valuing Voices, World Bank
Pick a term, any term…but stick to it!
Valuing Voices is interested in identifying learning leaders in international development that are using participatory post-project evaluation methods to learn about the sustainability of their development projects. These organizations not only believe they need to see the sustained impact of their projects by learning from what has worked and what hasn’t in the past, but also that participants are the most knowledgeable about such impacts. So how do they define sustainability? This is determined by asking questions such as the following: were project goals self-sustained by the ‘beneficiary’ communities that implemented these projects? By our VV definition, self-sustainability can only be determined by going back to the project site, 2-5 years after project closeout, to speak directly with the community about the long-term intended/unintended impacts.
Naturally, we turned to the World Bank (WB) – the world’s prominent development institution – to see if this powerhouse of development, both in terms of annual monetary investment and global breadth of influence, has effectively involved local communities in the evaluation of sustainable (or unsustainable) outcomes. Specifically, my research was focused on identifying the degree to which participatory post-project evaluation was happening at the WB.
A fantastic blog* regarding participatory evaluation methods at the WB emphasizes the WB’s stated desire to improve development effectiveness by “ensuring all views are considered in participatory evaluation,” particularly through its community driven development projects. As Heider points out,
“The World Bank Group wants to improve its development effectiveness by, among others things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.”
Wow! Though these methods are clearly well intentioned, there seems to be a flaw in the terminology. The IEG says, “[Community driven development projects] are based on beneficiary participation from design through implementation, which make them a good example of citizen-centered assessment techniques in evaluation,” …however, this fails to recognize the importance of planning for community-driven post-project sustainability evaluations, to be conducted by the organization in order to collect valuable data concerning the long-term intended/unintended impacts of development work.
With the intention of identifying evidence of the above-mentioned mode of evaluation at the WB, my research process involved analyzing the resources provided by the WB’s Independent Evaluation Group (IEG) database of evaluations. As the accountability branch of the World Bank Group, the IEG works to gather institution-wide knowledge about the outcomes of the WB’s finished projects. Its mission statement is as follows:
“The goals of evaluation are to learn from experience, to provide an objective basis for assessing the results of the Bank Group’s work, and to provide accountability in the achievement of its objectives. It also improves Bank Group work by identifying and disseminating the lessons learned from experience and by framing recommendations drawn from evaluation findings.”
Another important function of the IEG database is to provide information for the public and external development organizations to access and learn from; this wealth of data and information about the World Bank’s findings is freely accessible online.
When searching for evidence of post-project learning, I was surprised to find that the taxonomy varied greatly; e.g., projects I was looking for could be found under ‘post-project’, ‘post project’, ‘ex-post’ or ‘ex post’. What was also unclear was any specific category under which these could be found, including a definition of what exactly is required in an IEG ex-post impact evaluation. According to the IEG, there are 13 major evaluation categories, which it describes in more detail on its website. I was expecting to find an explicit category dedicated to post-project sustainability, but instead this type of evaluation was included under Project Level Evaluations (which include PPARs and ICRs [Implementation Completion Reports]) and Impact Evaluations.
This made it difficult to determine a clear procedural standard for documents reporting sustainability outcomes and other important data for the entire WB.
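One way around such inconsistency is simply to query every spelling variant and de-duplicate the hits by document ID. Here is a minimal sketch of that idea; the search_ieg function is only a stand-in for the manual queries actually run against the IEG site (there is no such public API that we are aware of), and the document IDs are invented.

```python
def search_ieg(term):
    """Stand-in for querying the IEG database; returns document IDs matching a term."""
    fake_index = {
        "post-project": {"doc-101", "doc-102"},
        "post project": {"doc-102", "doc-103"},
        "ex-post": {"doc-104"},
        "ex post": {"doc-104", "doc-105"},
    }
    return fake_index.get(term, set())

VARIANTS = ["post-project", "post project", "ex-post", "ex post"]

# Union the hits across all spelling variants, then de-duplicate by document ID.
unique_docs = set()
for term in VARIANTS:
    unique_docs |= search_ieg(term)

print(f"{len(unique_docs)} unique documents across {len(VARIANTS)} term variants")
```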
I began my research process by simply querying a few key terms in the database. In the first step of my research, which will be elaborated upon in this blog series, I attempted to identify evidence of ex-post sustainability evaluation at the IEG by searching for the term “post-project” in the database, which yielded 73 results when using a hyphen and 953 results without one. The inconsistency in the number of results depending on the use of a hyphen was itself telling, but in order to narrow the search parameters and conduct a manageable content analysis of the documents, I chose to break down the 73 hyphenated results by document type to determine whether there were any examples of primary fieldwork research. In these documents, the term “post-project” was not used in the title or referenced in the executive summary as the specific aim of the evaluation, but rather used loosely to define the ex-post time frame. Figure 1 illustrates the breakdown of document types found in the sample of 73 documents that came up when I searched for the key term “post-project”:

Figure 1: Breakdown by Document Type out of Total 73 Results when searching post-project
As the chart suggests, many of the documents (56% – which accounts for all of the pie chart slices except Project Level Evaluations) were purely desk studies – evaluating WB programs and the overall effectiveness of organization policies. These desk studies draw data from existing reports, such as those published at project closeout, without supplementing past data with new fieldwork research.
Out of the nine categories, the only document type that showed evidence of any follow-up evaluations was the Project Performance Assessment Report (PPAR), defined by the IEG as documents that are…
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department]. To prepare PPARs, OED staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries. The PPAR thereby seeks to validate and augment the information provided in the ICR, as well as examine issues of special interest to broader OED studies.”
Bingo. This is what we’re looking for. The PPARs accounted for 32 out of the 73 results, or a total of 44%. As I examined the methodology used to conduct PPARs, I found that in the 32 cases that came up when I searched for “post-project”, after Bank funds were “fully dispersed to a project” and resources were withdrawn, the IEG sent a post-project mission back into the field to collaborate on new M&E with local stakeholders and beneficiaries. The IEG gathered new data through the use of field surveys or interviews to determine project effectiveness.
Based on these findings, I conducted a supplementary search of the term “ex post”, which yielded 672 results. From this search, 11 documents were categorized by the IEG as “Impact Evaluations”, of which 3 showed evidence of talking with participants to evaluate for sustainability outcomes. In follow-up blogs in this series I will elaborate upon the significance of these additional findings and go into greater detail regarding the quality of the data in these 32 PPARs, but here are a few key takeaways from this preliminary research:
- Taxonomy and definition of ex-post is missing. After committing approximately 15-20 hours of research time to this content analysis, it is clear that navigating the IEG database to search for methodology standards to evaluate for sustainability is a more complicated process than it should be for such a prominent learning institution. The vague taxonomy used to categorize post-project/ex-post evaluation by the WB limits the functionality of this resource as a public archive dedicated to informing the sustainability of development projects the World Bank has funded.
- Despite affirmative evidence of participatory community involvement in the post-project evaluation of WB projects, not all PPARs in the IEG database demonstrated a uniform level of ‘beneficiary’ participation. In most cases, it was unclear how many community members impacted by the project were really involved in the ex-post process, which made it difficult to determine even a general range of the number of participants involved in post-project activity at the WB.
- Although PPARs report findings based, in part, on post-project missions (as indicated in the preface of the reports), the specific methods/structure of the processes were not described, and oftentimes the participants were not explicitly referenced in the reports. (More detailed analysis on this topic to come in Blog Series Part 2!)
- These surprisingly inconsistent approaches make it difficult to compare results across this evaluation type, as there is no precise status quo.
Finally, the World Bank, which has funded 12,000 projects since its inception, should have far more than 73 post-project/ ex-post evaluations…but maybe I’m just quibbling with terms.
Stay tuned for PART II of this series, coming soon!