Whose responsibility is it to sustain project activities?
Billions of dollars are pumped into development activities in developing countries all over the world. Communities that get involved in these projects have a clear objective: to have their lives improved in the sectors the projects target. Whether this is also the main objective of the development partners is less clear. What is clear is that development partners focus more on numbers than on getting people to participate.
We note that the majority of these projects are designed to last between two and five years. Delays occasioned by poor planning or other unforeseen factors eat into the implementation time, to the extent that in some projects it takes one to two years to get a program running, so the planned implementation time is reduced. Baseline, midline and endline studies are conducted to document changes that may have occurred within the program's life, and in most cases they happen shortly after the program has started or just before it ends; some baselines are even conducted after programs have started.
Considering the reduced implementation time, and the fact that concrete behavior-change results take much longer to emerge, questions arise as to whether the changes reported during implementation are solid enough to be sustained. There is also a difference between measuring what can be called artificial changes (activities that community members adopt as a short-term trial in their excitement, but do not find useful afterward) and long-lasting changes that community members adopt because they become a useful part of their lives.
Almost all projects have logical frameworks (logframes) that show how project activities will be implemented, and to some extent there are also exit strategies for closing out the project. Long-term, this can be an illusion. In most cases donors and implementers assume that communities will adopt the activities being implemented within a specified period of time, so projects close down at the end of that period assuming things will continue, but with no proof. Valuing Voices has done projected sustainability work in Ethiopia which points to possible differences between what donors expect to be sustained and what communities are able to sustain.
The big questions remain: whose responsibility is it to ensure that whatever has been adopted is continued? Whose responsibility is it to sustain project activities after implementation ends? It is silently assumed that communities can take up this responsibility, so a key question is: what guarantees are there that this is possible and is happening? Project sustainability should not be seen as the community's responsibility alone, but rather as a responsibility shared by all those involved in program activities. Sustainability studies should be planned for and executed in the same breath as baselines, midlines, endlines and, in rare cases, real-time impact assessments. We must do sustainability studies, as they offer a realistic additional opportunity to learn about actual community development after projects end. Communities should not be left alone with it.
Sustainable Development Goals and Foreign Aid–
How Sustainable and Accountable to Whom?
Jindra Cekan, PhD of ValuingVoices
World leaders will paint New York City red next week at the UN Summit adopting the new post-2015 development agenda. The agreed plans set 17 new ‘Sustainable’ Development Goals (SDGs) to be achieved by 2030. These are successors to the Millennium Development Goals (MDGs).
While many of us have heard of them, how many of us know whether we met them, and what are the prospects for the Sustainable Development Goals to do as well or better? According to Bill Gates, the MDGs were “the best idea for focusing the world on fighting global poverty that I have ever seen.” The Brookings Institution goes on to praise the eight MDG targets for aiming high: halving world poverty, reducing child mortality, achieving universal primary education, promoting gender equality and empowering women, etc. Donor countries pledged three times more than they had until that time (raising the percentage of gross national income devoted to international development assistance from 0.2% to 0.7%; not huge amounts, but laudable). From 2000 to 2015 extreme poverty did fall by half (although some argue China and Asia were well on their way before this aid came) and in some countries (Senegal, Cambodia) child deaths fell by half. Global health improved via huge coalitions on immunization and HIV/AIDS. Yet while poverty dropped and health improved, the hunger, environment and sanitation targets were not met, for instance, and there are still 850 million hungry people worldwide (11% of all people). Yet gains far outweigh losses. The new Sustainable Development Goals are to be achieved in 15 years, by 2030. The UN and member nations will track a remarkable 169 indicators, “monitoring progress towards the SDGs at the local, national, regional, and global levels…”
Overall, one would feel rather tickled by these results — not 100% but still amazing given global disparities. Nancy Birdsall of the Center for Global Development thinks measurement is not the goal: “Growing global interconnectedness means that the problems the world faces, that hold back development, are increasingly shared… we’re making a promise to ourselves that we are one world, one planet, one society, one people, who look out for each other…” 
But I’m a fan of measurable results, I must admit. One would logically think that our international development projects funded by the U.S. Agency for International Development (USAID), the U.S. Department of Agriculture (USDA), the Millennium Challenge Corporation (MCC) and others could outline how they caused the good results that some MDGs showed. USAID’s website links its work to the MDGs clearly: “in September 2010, President Obama called for the elevation of development as a key pillar of America’s national security and foreign policy. This set forth a vision of an empowered and robust U.S. Agency for International Development that could lead the world in solving the greatest development challenges of our time and, ultimately, meet the goal of ending extreme poverty in the next generation”. It goes on to talk about work to “Promote sustainable development through high-impact partnerships and local solutions”. USDA’s Foreign Agricultural Service states that its “non-emergency food aid programs help meet recipients’ nutritional needs and also support agricultural development and education. These food assistance programs, combined with trade capacity building efforts, support long-term economic development”. Finally, MCC states it is “committed to delivering results throughout the entire lifecycle of its investments. From before investments begin to their completion and beyond… MCC’s continuum of results is designed to foster learning and accountability”.
Maybe. We don’t actually know because 99% of the time we never return to projects after they end to learn how sustainable they actually were. We could be fostering super-sustainability. Or not.
For international development works on 1–5 year programming cycles. Multi-million dollar project requests for proposals are designed and sent out by these funders to non-profit or for-profit implementers. These are awarded to one or more organizations, quite rigorously monitored, and most have very good results. Then they end. Since 2000, the US Agency for International Development has spent $280 billion on country-to-country development and humanitarian aid projects as well as funding multilateral aid, and in spite of much work evaluating the final impact of projects at the end, they never go back. The EU has spent a staggering $1.4 trillion. USAID has funded one evaluation in the last 30 years that has gone back to see what communities and partners could sustain, and it is about to be published. A handful of international non-profits have taken matters into their own hands and funded such studies privately. The EU’s track record is even more dismal, with policies being proposed but not carried out. The World Bank, which has funded over 12,000 projects, has an independent evaluation arm, the Independent Evaluation Group. It returned after projects closed out to evaluate results only 33 times, and we found that only three of those evaluations systematically talked to project participants about what was sustained.
The bottom line is: how do we know anything we’ve done in international development or the SDGs is sustainable unless we go back to see? What amazing or awful results must we learn from for future design? If we do not return, are we really accountable to our taxpayers and our real clients: the participants and the recipient countries themselves?
The UN has pledged an SDG report card to “measure progress towards sustainable development and help ensure the accountability of all stakeholders for achieving the SDGs….[and a] Global Partnership for Sustainable Development Data, to help drive the Data Revolution….by using data we can ensure accountability for the policies that are implemented to reduce global and local inequities”. I completely agree that having citizen-generated data at the local, national, regional, and global levels is so very important “to fill gaps in our knowledge, establish global norms and standards and…help countries develop robust national strategies for data development.” And as the World Bank IEG’s Caroline Heider states, measuring the SDGs is complex (e.g. agriculture is affected by climate change, and measuring changes across sectors is hard) but worthwhile.
While SDG data will tell us which donor-funded activities and policies work, very few in international development know how sustainably our programming works for our ultimate clients, our participants and partners. And the price needn’t be high: a recent post-project evaluation we did cost under $120,000, a pittance given that the project cost over $30 million and reached 500,000 people. We found clear (mostly successful) lessons. USAID has, after 30 years, funded one post-project evaluation, which also has clear, cost-effective lessons (forthcoming). Really, in this era of cost-effectiveness, don’t we want data on what worked best (note to self: do that more) and what worked least (note to self: stop doing that)?
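To put that price in perspective, a back-of-the-envelope illustration using the figures just cited:

$$\frac{\$120{,}000}{\$30{,}000{,}000} = 0.4\%\ \text{of project cost};\qquad \frac{\$120{,}000}{500{,}000\ \text{participants}} = \$0.24\ \text{per participant reached.}$$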
Learning what participants and partners could self-sustain after we left is actually all we should care about. They want to get beyond aid. Shouldn’t we know if we are getting them there? Self-sustainability of outcomes is a clear indicator of good return on investment: ours of resources and expertise, theirs of time, effort and expertise. It shows that we want to put ourselves out of a job, having built country-led development that really has a future in-country with their own resources.
Two steps are:
1) Donors add funding equivalent to 1% of program value for five years after closeout, for all projects over $10 million, to support local capacity-building of NGOs and national partners to take over implementation, plus to evaluate lessons across different sectors’ sustainability outcomes (see the illustrative math after these steps).
2) A cross-donor fund for country-led analysis of such learning plus lessons for what capacity needs to be built in-country to take over programming. This needs support from regionally-based knowledge repositories and learning centers in Africa, Asia, etc. Online and tangible centers could house both implementer reports/ evaluations and analyze/ share lessons learned across sectors and countries from post-project evaluations for projects that closed out 2-7 years ago for future design.
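To make step 1 concrete, here is one illustrative reading of the proposal (assuming the 1% is allocated each year, for a hypothetical project at the $10 million threshold):

$$1\% \times \$10{,}000{,}000 = \$100{,}000\ \text{per year};\qquad 5 \times \$100{,}000 = \$500{,}000\ \text{(5\% of the original budget).}$$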
Now that is accountability. Let’s advocate for sustainability funding, data and learning now.
What are your suggestions? How can we improve sustainability?
 McArthur, J. (2013, February 21). Own the Goals: What the Millennium Development Goals Have Accomplished. Retrieved from https://www.brookings.edu/articles/own-the-goals-what-the-millennium-development-goals-have-accomplished/
 End Poverty 2015 Millennium Campaign. (n.d.). MDG 1: Eradicate Extreme Poverty and Hunger. Retrieved from https://www.endpoverty2015.org/mdg-success-stories/mdg-1-end-hunger/
 Sharma, S. (2015, August 20). From Aspirations to Reality: How to Effectively Measure the Sustainable Development Goals. Retrieved from https://www.huffpost.com/entry/measuring-the-peoples-age_b_7999640
 Mirchandani, R. (2015, September 22). Does It Matter If We Don’t Achieve the SDGs? A New Podcast with Nancy Birdsall and Michael Elliott. Retrieved from https://www.cgdev.org/blog/does-it-matter-if-we-dont-achieve-sdgs-podcast-nancy-birdsall-and-michael-elliott
 USAID. (n.d.). USAID Forward. Retrieved from https://www.usaid.gov/usaidforward/usaid-forward-2014-archive
 United States Department of Agriculture (USDA). (n.d.). Food Assistance. Retrieved from https://www.fas.usda.gov/topics/food-assistance
 Millennium Challenge Corporation. (n.d.). Our Impact. Retrieved 2016, from https://web.archive.org/web/20160325135258/https://www.mcc.gov/our-impact
 Cekan, J. (2015, March 13). When Funders Move On. Retrieved from https://ssir.org/articles/entry/when_funders_move_on
 Global Issues. (n.d.). Foreign Aid for Development Assistance: Foreign Aid Numbers in Charts and Graphs. Retrieved from https://www.globalissues.org/print/article/35#globalissues-org
 Florio, M. (2009, November/December). Sixth European Conference on Evaluation of Cohesion Policy: Getting Incentives Right — Do We Need Ex Post CBA? Retrieved from https://ec.europa.eu/regional_policy/archive/conferences/evaluation2009/abstracts/florio.doc
 Heider, C. (2015, September 15). Evaluation Beyond 2015: Implications of the SDGs for Evaluation. Retrieved from https://ieg.worldbankgroup.org/blog/evaluation-beyond-2015-implications-sdgs-evaluation
Face our fears! Learn from failure…
Global Giving has a nice example of getting participant feedback on the success/failure of a project its fundraising funded. The organization failed, which was sorrowful for the players in West Africa and funders worldwide (I, too, am a soccer mom). Yes, it failed. It is hard to read those words, but all involved faced it, and they learned from it. Thanks to a spirited discussion on the Pelican Platform for Evidence-based Learning about a plethora of unintended impacts, I learned about an organization committed to learning from failures: Admitting Failure.
Maybe some of you have heard of FailFests, such as the one in Raleigh, N.C., US this year that answered “Why are we doing this?” with “We’re on a mission to erase the stigma around failure. The more we talk about it, the more we can learn from it. Failure doesn’t have to be fatal!” You may know that Engineers Without Borders Canada has been an early proponent of the idea that “openly acknowledging failure is often a catalyst for innovation that takes our work from good to great.” Such examples are heartening, especially for participatory sustainability evaluation.
In 2014 I met a fellow evaluator at the American Evaluation Association Conference, where I was presenting on Post-Project Sustainability Evaluation. They told me their organization was waiting for a really successful project to end so they could evaluate its sustainability. Another evaluator told me of a post-project evaluation that was hidden away, unpublished, after the findings were negative. And that’s where the problem lies: our international development industry’s neurosis about presenting anything unsuccessful.
No wonder so few projects have been evaluated post-project, as we fear:
* What if activities and outcomes aren’t sustained (note: often we didn’t design them to be, opting instead for quicker wins achievable only with large inflows of funds and technical help)?
* What if our funders find out that funds had limited impact or partners had no means to continue good programming?
* What if we had entirely unintended impacts that favored some over others (well beyond our expected logical frameworks and Theories of Change)?
Hallelujah 🙂 Why can’t we see this as good news that propels us to improve our projects by listening to what worked? To learn not to do what didn’t work again?! To design in locally sustainable ways now that we have learned?
* What if we learn that our resources and empowerment led them to succeed on their own terms in ways we couldn’t imagine, far exceeding the planned impacts we had expected?
* What if we find that unexpected outcomes showcased ways groups within communities stood on their feet, making development work for them on their terms?
Would we design, implement, monitor, and evaluate projects differently? We’ll see. Organizations such as USAID’s Food for Peace have funded a four-country study on exit strategies (forthcoming 2015) and are looking at both success and failure. Catholic Relief Services hired Valuing Voices to do a sustainability evaluation in Africa. Others may follow…
Post-project sustainability evaluations expose us to the full range of lessons. Let’s be brave and ask, learn and innovate, making aid and philanthropy more effective and learning from failure for success!
IEG Blog Series Part II: Theory vs. Practice at the World Bank
In Part I of this blog series, I described my research process for identifying the extent to which the World Bank (WB) is conducting participatory post-project sustainability evaluations for its many international development projects. Through extensive research and analysis of the WB’s IEG database, Valuing Voices concluded that there is a very loosely defined taxonomy for ex-post project evaluation at the WB, making it difficult to identify a consistent standard of evaluation methodology for sustainability impact assessments.
In particular, we were concerned with identifying examples of direct beneficiary involvement in evaluating long-term sustainability outcomes, for instance by surveying/interviewing participants to determine which project objectives were self-sustained…and which were not. Unfortunately, it is quite rare for development organizations to conduct ex-post evaluations that involve all levels of project participants to contribute to long-term information feedback loops. However, there was one document type in the IEG database that gave us at Valuing Voices some room for optimism: Project Performance Assessment Reports (PPARs). PPARs are defined by the IEG as documents that are:
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department, synonymous with IEG]. To prepare PPARs, staff examine project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries.”
The key takeaway from this definition is that these reports supplement desk studies (ICRs) with new fieldwork data provided, in part, by the participants themselves. The IEG database lists hundreds of PPAR documents, but I focused on only the 33 documents that came up when I queried “post-project”.
Here are a few commonalities to note about the 33 PPARs I studied:
- They are all recent documents – the oldest was published in 2004, and the most recent in 2014.
- The original projects assessed in the PPARs were finalized anywhere from 2 to 10+ years before the PPAR was written, making them true ex-posts.
- They all claimed to involve mission site visits and communication with key project stakeholders, but they did not all claim to involve beneficiaries explicitly
Although the WB/IEG mentions in its definition of a PPAR that beneficiary participation takes place in “most” of the ex-post missions back to the project site, Valuing Voices was curious to know whether there is a standard protocol for the level of participant involvement, the methods of data collection, and ultimately the overall quality of the new fieldwork data collected to inform PPARs. For this data quality analysis, Valuing Voices identified these key criteria (a minimal sketch of how such a rubric could be recorded follows the list):
- Overall summary of evaluation methods
- Who was involved, specifically? Was there direct beneficiary participation? What were the research methods/procedures used?
- What was the level of sustainability (termed Risk to Development Outcome* after 2006) established by the PPAR?
- Was this different from the level of sustainability as projected by the preceding ICR report?
- Were participants involved via interviews? (Yes/No)
- If yes, were they semi-structured interviews (open-ended questions allowing for greater variety/detail of qualitative data) or quantitative surveys?
- How many beneficiaries were interviewed/surveyed?
- What % of total impacted beneficiary population was this number?
- Was there a control group used? (Yes/No)
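As flagged above, here is a minimal sketch of how such a coding rubric could be recorded, one record per PPAR. This is our illustration only; the field names are hypothetical, not an instrument the IEG or the World Bank publishes.

```python
# Illustrative coding sheet for the PPAR content analysis described above.
# Field names are hypothetical, not the actual Valuing Voices instrument.
from dataclasses import dataclass
from typing import Optional

@dataclass
class PPARRecord:
    country: str
    year: int
    methods_summary: str                # overall summary of evaluation methods
    beneficiaries_interviewed: bool     # direct participant input? (Yes/No)
    interview_type: Optional[str]       # "semi-structured" or "survey", if any
    n_interviewed: Optional[int]        # how many beneficiaries were interviewed
    pct_of_population: Optional[float]  # % of total impacted population
    ppar_rating: str                    # "Risk to Development Outcome" after 2006
    icr_rating: str                     # rating projected by the preceding ICR
    control_group: bool                 # was a control group used? (Yes/No)

records: list[PPARRecord] = []  # one entry per PPAR reviewed (33 in our case)

# Example tallies of the kind reported in the findings below:
with_input = sum(r.beneficiaries_interviewed for r in records)
rating_changed = sum(r.ppar_rating != r.icr_rating for r in records)
```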
Despite our initial optimism, we determined that the quality of the data provided in these PPARs was highly variable, and overall quite low. A summary of the findings is as follows:
1. Rarely were ‘beneficiaries’ interviewed
- Only 15% of the PPARs (5) gave details about the interview methodologies, and of these only 3% of the PPARs (1) described in detail how many participants were consulted, what they said and how they were interviewed (Nigeria 2014).
- 54% of the reports (18) mentioned beneficiary input in the data collected in the post-project mission, but gave no specific information on the number of participants involved; neither were their voices cited, nor was any information included on the methodologies used. The vast majority only vaguely referenced the findings of the post-project mission rather than data collection specifics. A typical example of this type of report is Estonia 2004.
- 30% of the PPARs (10) involved no direct participant/beneficiary participation in the evaluation process at all, with these missions only including stakeholders such as project staff, local government, NGOs, donors, consultants, etc. A typical example of this type of report is Niger 2005.
These percentages are illustrated in Figure 1, below, which gives a visual breakdown of the number of reports that involved direct participant consultation with detailed methodologies provided (5), the number of reports where stakeholders were broadly consulted but no specific methodologies were provided (18), and the number of reports where no participants were directly involved in the evaluation process (10).
2. Sustainability of project outcomes was unclear
- In 54% of cases, there was some change in the level of sustainability from the original level predicted in the ICR (which precedes and informs the PPAR) to the level established in the PPAR. Ironically, of the 33 cases, 22 were classified as Likely, Highly Likely or Significantly Likely to be sustainable, yet participants were not asked for their input.
- So on what basis was sustainability judged? Of the three cases with high participant consultation, the Nigerian project’s sustainability prospects (where 10% of participants were asked for feedback) were rated only moderate, while India (also 10%) and Kenya (14–20%) were both classified as likely to be sustainable.
Along the Y axis of Figure 2, below, is the spectrum of sustainability rankings observed in the PPARs, ranging from “Negligible to Low” up to “High”. For each of the projects analyzed (there are 60 total projects accounted for in this graph, as some PPARs covered up to 4 individual projects in one report), the graph illustrates how many projects consulted participants, and how many failed to do so, for each evaluation outcome. As we can see, the majority of cases determined to be highly or significantly sustainable either did not consult participants directly or only consulted stakeholders broadly, with limited community input represented in the evaluation. These are interesting findings, because although a lot of sustainability is being reported, very few cases actually involved the community participants in a meaningful way (to our knowledge, based on the lack of community consultation discussed in the reports). Unless these evaluations take place at the grassroots level, engaging participants in a conversation about the true self-sustainability outcomes of projects, you cannot really know how sustainable a project is by talking only with donors, consultants, governments, etc. Are the right voices really being represented in this evaluation process?
*Note: the “Sustainability” ranking was retitled “Risk to Development Outcomes” in 2006.
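For readers who want to see how a Figure 2-style view falls out of a coding sheet like the sketch above, here is a small cross-tabulation example, assuming pandas is available. The rows shown are dummy placeholders, not our actual coding results.

```python
# Illustrative cross-tab behind a Figure 2-style chart: sustainability rating
# vs. level of participant consultation, one row per project.
import pandas as pd

# Dummy placeholder rows only, NOT the real coding results.
projects = pd.DataFrame(
    {
        "rating": ["High", "Significant", "Negligible to Low"],
        "consultation": ["stakeholders only", "none", "direct participants"],
    }
)

# Counts of projects per rating/consultation combination.
print(pd.crosstab(projects["rating"], projects["consultation"]))
```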
While projects were deemed sustainable, this was based on very little ‘beneficiary’ input. The significance of this information is simple: not enough is being done to ensure beneficiary participation in ALL STAGES of the development process, especially in the post-project time frame, even by prominent development institutions like the WB/IEG. While we commend the Bank for currently emphasizing citizen engagement via beneficiary feedback, this still seems to be more of a guiding theory than a habitual practice. Although all 33 documents I analyzed claimed there was “key stakeholder” or beneficiary participation, the reality is that no consistent procedural standard for eliciting such engagement could be identified.
Furthermore, the lack of specific details elaborating upon interview/survey methods, the number of participants involved, the discovery of any unintended outcomes, etc. creates a critical information void. As a free and public resource, the IEG database should not only be considered an important internal tool for the WB to catalog its numerous projects throughout time, but it is also an essential external tool for members of greater civil society who wish to benefit from the Bank’s extensive collection of resources – to learn from WB experiences and inform industry-wide best practices.
For this reason, Valuing Voices implores the World Bank to step up its game and establish itself as a leader in post-project evaluation learning, not just in theory but also in practice. While these 33 PPARs represent just a small sample of the over 12,000 projects the WB has implemented since its inception, Valuing Voices hopes to see much more ex-post project evaluation happening in the future through IEG. Today we are seeing a decisive shift in the development world towards valuing sustainable outcomes over short-term fixes, towards informing future projects based on long-term data collection and learning, and towards community participation in all stages of the development process…
If one thing is certain, it is that global emphasis on sustainable development will not be going away anytime soon…but are we doing enough to ensure it?
 World Bank OED. (2004, June 28). Project Performance Assessment Report: Republic of Estonia, Agriculture Project. Retrieved from http://documents.worldbank.org/curated/en/173891468752061273/pdf/295610EE.pdf
 World Bank OED. (2014, June 26). Project Performance Assessment Report: Nigeria, Second National Fadama Development Project. Retrieved from https://ieg.worldbankgroup.org/sites/default/files/Data/reports/Nigeria_Fadama2_PPAR_889580PPAR0P060IC0disclosed07070140_0.pdf
 World Bank OED. (2005, April 15). Project Performance Assessment Report: Niger, Energy Project. Retrieved from http://documents.worldbank.org/curated/en/899681468291380590/pdf/32149.pdf
 World Bank. (n.d.). Citizen Engagement: Incorporating Beneficiary Feedback in all projects by FY 18. Retrieved 2015, from https://web.archive.org/web/20150102233948/http://pdu.worldbank.org/sites/pdu2/en/about/PDU/EngageCitizens
A Missing Piece In Local Ownership: Evaluation
(Reblogged from http://www.interaction.org/blog/missing-piece-local-ownership-evaluation, by Grino and Levine)
Ten years ago, ownership was established as a key principle of aid effectiveness. Although understanding of ownership has evolved since then – most significantly, as something that involves not just governments but all parts of society – today the focus is not on whether ownership is important but on how we can move ownership from principle to practice. To date, these conversations have primarily concerned how to make ownership a reality in program design and implementation. InterAction supports these efforts, but believes they need to go one step further. As we argue in our new briefing paper, the local ownership agenda must extend to all parts of the program cycle – from design all the way through evaluation.
Including those meant to benefit from international assistance (we use the term “participants”) in deciding what should be done and how it should be done is critically important for effectiveness and sustainability. Organizations, and some governments, also increasingly recognize the value of hearing directly from participants and citizens about how well something is being done. This can be seen in the growing use of feedback mechanisms and the establishment of initiatives promoting social accountability. Including participants in evaluation decision making is just as important. Particularly when participants have lacked ownership at other stages of an intervention, evaluation serves as a last opportunity for them to weigh in.
Despite the widespread acceptance of the principle of local ownership, evaluations continue to predominantly respond to the demands of donors, focusing on how funds are spent and the degree to which the results donors or implementers value are achieved. By only taking into consideration the values and interests of some stakeholders (primarily donors and external actors) in evaluations, organizations are missing a critical perspective on an intervention’s results: the views of the very people the intervention was intended to assist.
When participants are involved in evaluation, more often than not they serve as data sources, and perhaps as data collectors. Very rarely do we find examples of participants involved in deciding the questions an evaluation will ask, determining the criteria that will be used for judging an intervention’s success, interpreting results, or shaping recommendations based on evaluation findings.
A concern frequently raised about including participants in evaluation decision making is that their clear stakes in evaluation outcomes and potentially their lack of evaluation capacity could lead to biased and unreliable results. Yet it is important to acknowledge that everyone involved in an evaluation has values, interests, and capacities that affect how they approach an evaluation. Including participants’ voices adds a greater diversity of perspectives to an evaluation and the interpretation of findings, thus reducing bias.
We recognize that the road to local ownership in evaluation is just that: a road, not something that can be achieved instantly or that is possible in all cases. For that reason, we recommend that organizations take an incremental approach to pursuing local ownership in evaluation, focusing on the critical steps that can be taken along the way to increase the role of participants in evaluation processes.
As organizations seek to increase participants’ ownership in evaluation, they must consider:
Who to include as co-owners in an evaluation;
In which aspects of an evaluation participants need to be involved (we provide a list of possible evaluation activities related to designing the evaluation, collecting and analyzing data, determining findings and recommendations, and disseminating and using evaluation results); and
The nature of participants’ involvement (with the goal of moving from informing or consulting participants to including participants as partners in evaluation decision making).
Getting to local ownership in evaluation requires making progress on all three fronts.
Ultimately, all actors along the aid chain – from donors to international NGOs to local partners – must believe in the value of including participants as co-owners in evaluation. Once in place, this commitment must be complemented by investing in staff’s capacity to effectively involve participants in evaluation decision making, and in strengthening participants’ own capacity to engage. As in any other participatory process, participants must also trust that their input will indeed influence policies and practice. Including participants in this way is another way to signal that we truly view them as partners, rather than beneficiaries.
By Laia Grino, Senior Manager for Transparency Accountability and Results, and Carlisle Levine, Ph.D., Senior Advisor, Evaluation (Consultant)
Pick a term, any term…but stick to it!
Valuing Voices is interested in identifying learning leaders in international development that use participatory post-project evaluation methods to learn about the sustainability of their development projects. These organizations believe not only that they need to see the sustained impact of their projects by learning from what has worked and what hasn’t in the past, but also that participants are the most knowledgeable about such impacts. So how do they define sustainability? By asking questions such as: were project goals self-sustained by the ‘beneficiary’ communities that implemented them? By our Valuing Voices definition, self-sustainability can only be determined by going back to the project site, 2–5 years after project closeout, to speak directly with the community about the long-term intended/unintended impacts.
Naturally, we turned to the World Bank (WB) – the world’s most prominent development institution – to see whether this powerhouse of development, both in terms of annual monetary investment and global breadth of influence, has effectively involved local communities in the evaluation of sustainable (or unsustainable) outcomes. Specifically, my research focused on identifying the degree to which participatory post-project evaluation is happening at the WB.
A fantastic blog* regarding participatory evaluation methods at the WB emphasizes the WB’s stated desire to improve development effectiveness by “ensuring all views are considered in participatory evaluation,” particularly through its community driven development projects. As Heider points out,
“The World Bank Group wants to improve its development effectiveness by, among others things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.”
Wow! Though these methods are clearly well intentioned, there seems to be a flaw in the terminology. The IEG says, “[Community driven development projects] are based on beneficiary participation from design through implementation, which make them a good example of citizen-centered assessment techniques in evaluation,” …however, this fails to recognize the importance of planning for community-driven post-project sustainability evaluations, to be conducted by the organization in order to collect valuable data concerning the long-term intended/unintended impacts of development work.
With the intention of identifying evidence of the above-mentioned mode of evaluation at the WB, my research process involved analyzing the resources provided by the WB’s Independent Evaluation Group (IEG) database of evaluations. As the accountability branch of the World Bank Group, the IEG works to gather institution-wide knowledge about the outcomes of the WB’s finished projects. Its mission statement is as follows:
“The goals of evaluation are to learn from experience, to provide an objective basis for assessing the results of the Bank Group’s work, and to provide accountability in the achievement of its objectives. It also improves Bank Group work by identifying and disseminating the lessons learned from experience and by framing recommendations drawn from evaluation findings.”
Another important function of the IEG database is to provide information for the public and external development organizations to access and learn from; this wealth of data and information about the World Bank’s findings is freely accessible online.
When searching for evidence of post-project learning, I was surprised to find that the taxonomy varied greatly; e.g. projects I was looking for could be found under ‘post-project’, ‘post project’, ‘ex-post’ or ‘ex post’. It was also unclear under which specific category these could be found, including a definition of what exactly is required in an IEG ex-post impact evaluation. According to the IEG, there are 13 major evaluation categories, which are described in more detail here. I was expecting to find an explicit category dedicated to post-project sustainability, but instead this type of evaluation was included under Project Level Evaluations (which include PPARs and ICRs [Implementation Completion Reports]) and Impact Evaluations.
This made it difficult to determine a clear procedural standard for documents reporting sustainability outcomes and other important data for the entire WB.
I began my research process by simply querying a few key terms in the database. In the first step of my research, elaborated upon here in Part I of this blog series, I attempted to identify evidence of ex-post sustainability evaluation at the IEG by searching for the term “post-project”, which yielded 73 results with a hyphen and 953 results without. The inconsistency in the number of results depending on the use of a hyphen was interesting in itself, but in order to narrow the search parameters to a manageable content analysis of the documents, I chose to break down the 73 hyphenated results by document type to determine whether there were any examples of primary fieldwork research. In these documents, the term “post-project” was not used in the title or referenced in the executive summary as the specific aim of the evaluation, but rather was used loosely to define the ex-post time frame. Figure 1 illustrates the breakdown of document types found in the sample of 73 documents:
Figure 1: Breakdown by document type of the 73 results when searching “post-project”
As the chart suggests, many of the documents (56%, accounting for all of the pie chart slices except Project Level Evaluations) were purely desk studies, evaluating WB programs and the overall effectiveness of organization policies. These desk studies draw data from existing reports, such as those published at project closeout, without supplementing past data with new fieldwork research.
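For anyone wishing to replicate this kind of breakdown, here is a minimal sketch of the tally behind Figure 1, assuming the search results have been exported with a document-type field. The file and field names are hypothetical.

```python
# Illustrative sketch of the Figure 1 tally: count the 73 "post-project"
# search results by IEG document type and compute each type's share.
# The CSV export and its column name are hypothetical.
import csv
from collections import Counter

with open("ieg_post-project_results.csv", newline="") as f:
    doc_types = Counter(row["document_type"] for row in csv.DictReader(f))

total = sum(doc_types.values())  # should be 73 for this search
for doc_type, count in doc_types.most_common():
    print(f"{doc_type}: {count} ({count / total:.0%})")
# e.g., PPARs: 32 (44%); the remaining types together are desk studies (~56%).
```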
Out of the 9 categories, the only document type that showed evidence of any follow-up evaluation was the Project Performance Assessment Report (PPAR), defined by the IEG as documents that are…
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department]. To prepare PPARs, OED staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries. The PPAR thereby seeks to validate and augment the information provided in the ICR, as well as examine issues of special interest to broader OED studies.”
Bingo. This is what we’re looking for. The PPARs accounted for 32 out of the 73 results, a total of 44%. As I examined the methodology used to conduct PPARs, I found that in the 32 cases that came up when I searched for “post-project”, after Bank funds were fully disbursed to a project and resources were withdrawn, the IEG sent a post-project mission back into the field to collaborate on new M&E with local stakeholders and beneficiaries. The IEG gathered new data through field surveys or interviews to determine project effectiveness.
Based on these findings, I conducted a supplementary search of the term “ex post”, which yielded 672 results. From this search, 11 documents were categorized by the IEG as “Impact Evaluations”, of which 3 showed evidence of talking with participants to evaluate for sustainability outcomes. In follow-up blogs in this series I will elaborate upon the significance of these additional findings and go into greater detail regarding the quality of the data in these 32 PPARs, but here are a few key takeaways from this preliminary research:
Taxonomy and definition of ex-post are missing. After committing approximately 15–20 hours of research time to this content analysis, it is clear that navigating the IEG database for methodology standards for evaluating sustainability is a more complicated process than it should be for such a prominent learning institution. The vague taxonomy used to categorize post-project/ex-post evaluation by the WB limits the functionality of this resource as a public archive dedicated to documenting the sustainability of development projects the World Bank has funded.
Despite affirmative evidence of participatory community involvement in the post-project evaluation of WB projects, not all PPARs in the IEG database demonstrated a uniform level of ‘beneficiary’ participation. In most cases, it was unclear how many community members impacted by the project were really involved in the ex-post process, which made it difficult to determine even a general range of the number of participants involved in post-project activity at the WB.
Although PPARs report findings based, in part, on post-project missions (as indicated in the preface of the reports), the specific methods/structure of the processes were not described, and oftentimes the participants were not explicitly referenced in the reports. (More detailed analysis on this topic to come in Blog Series Part 2!)
These surprisingly inconsistent approaches make it difficult to compare results across this evaluation type, as there is no precise status quo.
Finally, the World Bank, which has funded 12,000 projects since its inception, should have far more than 73 post-project/ex-post evaluations…but maybe I’m just quibbling over terms.
Stay tuned for PART II of this series, coming soon!