Longing to do an Ex-post Sustainability Evaluation? How to support this work…
Just back from Niger, where Catholic Relief Services, Rutere Kagendo and I are doing a fascinating post-project evaluation. Fresh on my mind is the commitment we all had to make to get this ground-breaking research going. Here is the full report, but there are three kinds of conditions we found integral to success: client–Valuing Voices match; project and site selection; and resources.
Client – Valuing Voices match:
The study needs to be appreciated as innovative, adding to the program quality and learning of the organization so funding is provided and there is in-house interest in the findings;
The local office needs to allocate staff and technical time to support the study technically and logistically (see below);
Shared clarity is needed among all involved that such a study looks for self-sustained activities and outcomes. While lessons emerge about the quality of implementation, its focus is what participants and their country partners could continue themselves after project close-out and the withdrawal of resources. It can also include lessons about what the local non-profit and national stakeholders are doing to support community success (or not), and unexpected outcomes. Our clients need an openness to seeing honestly what was not sustained and to exploring why;
While Valuing Voices provides expertise from its review of the handful of post-project and exit evaluations that exist, the client must be interested in sharing findings and advocating to donors to fund more of these studies;
Disseminating findings internally wherever there is a possibility of learning from this evaluation to support similar current implementation; the research can also yield lessons for similar projects and for country nationals such as Ministries;
Prioritizing local capacity – Valuing Voices believes in using regional M&E capacity to do the work; where possible we partner with regional evaluators while also building capacity within our client's staff to carry out such work;
Sharing and discussing findings locally: Valuing Voices believes knowledge learned needs to be shared in immediate feedback loops. We present: a) to each village after each site’s research; b) to local key partners and representatives from each village at the end of the qualitative Rapid Rural Appraisal findings; c) to the non-profit in-country at the end of qualitative research and d) internationally to headquarters at the end of the combined analysis of the qualitative and quantitative research with findings in the final report.
Project and Site Selection:
The non-profit’s project has been closed out at least two years and no more than seven years ago (for recall);
No other NGO has done very similar work in the region in the intervening years;
The region selected is representative of the project as a whole (e.g. agro-ecological zones, economic/ livelihood/ health, educational or other sectoral criteria);
Research areas are secure and safe (e.g. from civil unrest, severe drought/ floods, epidemics, to the degree possible);
Timing does not interfere with urgent priorities of those involved with the study (e.g. livelihoods are not jeopardized in communities, holidays are respected, other technical work is not disrupted).
Resources (Time, Material and Project Expertise):
Time: the research is qualitative followed by quantitative, coming to 80-90 days of research overall (roughly 5 weeks of fieldwork in teams of 4-10, plus analysis, report-writing and presentation);
Project and evaluation documents are available to inform and contextualize approach including activities, outcomes and projected impacts;
Data: key to the fieldwork are village and participant lists from pre-closeout days so participants can be interviewed both during a Rapid Rural Appraisal and a follow-on household survey;
Internal/external sectoral staff and at least one past project staff are part of the team to inform and ‘ground truth’ research;
Logistics support is provided by the client, from vehicle/driver and lodging support in the field to materials such as mobile phones, flipcharts, photocopying and cash advances;
A consultant or staff prepares the sites before the research teams come, e.g. to confirm communities are willing to be visited (each visit will be 2-5 days) and to identify participants and partners still there;
Partners familiar with the closed project can be identified so they can be interviewed by the research team;
Local language expertise is needed, e.g. translator to local language, as well as data entry personnel afterwards.
While Valuing Voices provides the technical lead experts and statistical back office analysis, including sampling and rigorous analysis, senior non-profit staff are needed in-country for contextualization and input on preliminary findings, as well as senior technical staff to review the final product;
A home for findings, and taking them on the road: good knowledge management is needed for data retention so that the findings have a sustainable ‘home’, be that infographics and print copies distributed to villages and partners, or online repositories that are language-accessible both nationally and to foreign donors. Webinars, conference presentations, etc. are needed to optimize the learning via sustainability dissemination campaigns.
All of this is needed for the research to be of the best quality and yield the richest results. Exciting learning is to be had not only from what communities and supporters could sustain, but also from what they exceeded or dropped! Consider commissioning one to see how post-project sustainability research can improve your current implementation, future design, and long-term self-sustainability!
IEG Blog Series Part II: Theory vs. Practice at the World Bank
In Part I of this blog series, I described my research process for identifying the extent to which the World Bank (WB) conducts participatory post-project sustainability evaluations for its many international development projects. Through extensive research and analysis of the WB’s IEG database, Valuing Voices concluded that the WB’s taxonomy for ex-post project evaluation is very loosely defined, making it difficult to identify a consistent standard of evaluation methodology for sustainability impact assessments.
In particular, we were concerned with identifying examples of direct beneficiary involvement in evaluating long-term sustainability outcomes, for instance by surveying/interviewing participants to determine which project objectives were self-sustained…and which were not. Unfortunately, it is quite rare for development organizations to conduct ex-post evaluations that involve all levels of project participants in long-term information feedback loops. However, there was one document type in the IEG database that gave us at Valuing Voices some room for optimism: Project Performance Assessment Reports (PPARs). PPARs are defined by the IEG as documents that are,
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department, synonymous with IEG]. To prepare PPARs, staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries”.
The key takeaway from this definition is that these reports supplement desk studies (ICRs) with new fieldwork data provided, in part, by the participants themselves. The IEG database lists hundreds of PPAR documents, but I focused on only the 33 documents that came up when I queried “post-project”.
Here are a few commonalities to note about the 33 PPARs I studied:
- They are all recent documents – the oldest was published in 2004 and the most recent in 2014.
- The original projects assessed in the PPARs were finalized anywhere from 2-10+ years before the PPAR was written, making them true ex-posts.
- They all claimed to involve mission site visits and communication with key project stakeholders, but they did not all claim to involve beneficiaries explicitly
Although the WB/IEG mentions that beneficiary participation takes place in “most” of the ex-post missions back to the project site in its definition of a PPAR, Valuing Voices was curious to know if there is a standard protocol for the level of participant involvement, the methods of data collection, and ultimately, the overall quality of the new fieldwork data collected to inform PPARs. For this data quality analysis, Valuing Voices identified these key criteria:
- Overall summary of evaluation methods
- Who was involved, specifically? Was there direct beneficiary participation? What were the research methods/procedures used?
- What was the level of sustainability (termed Risk to Development Outcome* after 2006) established by the PPAR?
- Was this different from the level of sustainability as projected by the preceding ICR report?
- Were participants involved via interviews? (Yes/No)
- If yes, were they semi-structured (open-ended questions allowing for greater variety/detail of qualitative data) or quantitative surveys
- How many beneficiaries were interviewed/surveyed?
- What % of total impacted beneficiary population was this number?
- Was there a control group used? (Yes/No)
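For readers who track such reviews programmatically, the checklist above could be captured as a simple record, one per PPAR. This is a hypothetical sketch: the field names are ours, not the IEG's.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PPARReview:
    """One reviewed PPAR, scored against the criteria above (hypothetical schema)."""
    methods_summary: str                          # overall summary of evaluation methods
    direct_beneficiary_participation: bool        # were beneficiaries directly involved?
    sustainability_rating: str                    # "Risk to Development Outcome" after 2006
    differs_from_icr_projection: bool             # did the PPAR rating differ from the ICR's?
    participants_interviewed: bool
    interview_style: Optional[str] = None         # "semi-structured" or "quantitative survey"
    n_beneficiaries_interviewed: Optional[int] = None
    pct_of_beneficiary_population: Optional[float] = None
    control_group_used: bool = False
```

A structure like this makes it trivial to tally, for example, how many reports interviewed participants at all, or how ratings shifted from ICR to PPAR.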
Despite our initial optimism, we determined that the quality of the data provided in these PPARs was highly variable, and overall quite low. A summary of the findings is as follows:
1. Rarely were ‘beneficiaries’ interviewed
- Only 15% of the PPARs (5) gave details about the interview methodologies, and of these only 3% (1) described in detail how many participants were consulted, what they said and how they were interviewed (Nigeria 2014).
- 54% of the reports (18) mentioned beneficiary input in data collected during the post-project mission, but gave no specific information on the number of participants involved, did not cite their voices, and included no information on the methodologies used. The vast majority only vaguely referenced the findings of the post-project mission rather than data-collection specifics. A typical example of this type of report is Estonia 2004.
- 30% of the PPARs (10) actually involved no direct participant/beneficiary participation in the evaluation process, with these missions only including stakeholders such as project staff, local government, NGOs, donors, consultants, etc. A typical example of this type of report is Niger 2005.
These percentages are illustrated in Figure 1, below, which gives a visual breakdown of the number of reports that involved direct participant consultation with detailed methodologies provided (5), the number of reports where stakeholders were broadly consulted but no specific methodologies were provided (18), and the number of reports where no participants were directly involved in the evaluation process (10).
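The three-way split described above can be reproduced in a few lines of code. This is an illustrative sketch using the counts reported in this post; rounding may differ by a point from the percentages quoted.

```python
from collections import Counter

# Counts taken from the review of 33 PPARs described above.
ppar_breakdown = Counter({
    "detailed participant consultation": 5,
    "broad stakeholder mention, no methodology given": 18,
    "no direct participant involvement": 10,
})

total = sum(ppar_breakdown.values())
assert total == 33

for category, count in ppar_breakdown.items():
    print(f"{category}: {count}/{total} ({100 * count / total:.0f}%)")
```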
2. Sustainability of project outcomes was unclear
- In 54% of cases, there was some change in the level of sustainability from the level originally predicted in the ICR (which precedes and informs the PPAR) to the level established in the PPAR. Ironically, of the 33 cases, 22 were classified as Likely, Highly Likely or Significantly Likely to be sustainable, yet participants were not asked for their input.
- So on what basis was sustainability judged? Of the three cases with high participant consultation, the Nigerian project’s sustainability prospects (where they asked 10% of participants for feedback) were only moderate, while India (also 10% feedback) and Kenya (14-20%) were both classified as likely to be sustainable.
Along the Y axis of Figure 2, below, is the spectrum of sustainability rankings observed in the PPARs, ranging from “Negligible to Low” up to “High”. For each of the projects analyzed (there are 60 total projects accounted for in this graph, as some PPARs covered up to 4 individual projects in one report), the graph illustrates how many projects consulted participants and how many failed to do so, for each evaluation outcome. As we can see, the majority of cases determined to be highly or significantly sustainable either did not consult participants directly or consulted stakeholders only broadly, with limited community input represented in the evaluation. These are interesting findings: although a great deal of sustainability is supposedly being reported, very few cases actually involved community participants in a meaningful way (to our knowledge, based on the lack of community consultation discussed in the reports). Yet unless these evaluations take place at the grassroots level, engaging participants in a conversation about the true self-sustainability outcomes of projects, you cannot really know how sustainable a project is by talking only with donors, consultants, governments, etc. Are the right voices really being represented in this evaluation process?
*Note: the “Sustainability” ranking was retitled “Risk to Development Outcomes” in 2006.
While projects were deemed sustainable, this is based on very little ‘beneficiary’ input. The significance of this information is simple: not enough is being done to ensure beneficiary participation in ALL STAGES of the development process, especially in the post-project time frame, even by prominent development institutions like the WB/IEG. While we commend the Bank for currently emphasizing citizen engagement via beneficiary feedback, this still seems to be more a guiding theory than a habitualized practice. Although all 33 documents I analyzed claimed there was “key stakeholder” or beneficiary participation, the reality is that no consistent procedural standard for eliciting such engagement could be identified.
Furthermore, the lack of specific details elaborating upon interview/survey methods, the number of participants involved, the discovery of any unintended outcomes, etc. creates a critical information void. As a free and public resource, the IEG database should not only be considered an important internal tool for the WB to catalog its numerous projects throughout time, but it is also an essential external tool for members of greater civil society who wish to benefit from the Bank’s extensive collection of resources – to learn from WB experiences and inform industry-wide best practices.
For this reason, Valuing Voices implores the World Bank to step up its game and establish itself as a leader in post-project evaluation learning, not just in theory but also in practice. While these 33 PPARs represent just a small sample of the over 12,000 projects the WB has implemented since its inception, Valuing Voices hopes to see much more ex-post project evaluation happening in the future through IEG. Today we are seeing a decisive shift in the development world towards valuing sustainable outcomes over short-term fixes, towards informing future projects based on long-term data collection and learning, and towards community participation in all stages of the development process…
If one thing is certain, it is that global emphasis on sustainable development will not be going away anytime soon…but are we doing enough to ensure it?
 World Bank OED. (2004, June 28). Project Performance Assessment Report: Republic of Estonia, Agriculture Project. Retrieved from http://documents.worldbank.org/curated/en/173891468752061273/pdf/295610EE.pdf
 World Bank OED. (2014, June 26). Project Performance Assessment Report: Nigeria, Second National Fadama Development Project. Retrieved from https://ieg.worldbankgroup.org/sites/default/files/Data/reports/Nigeria_Fadama2_PPAR_889580PPAR0P060IC0disclosed07070140_0.pdf
 World Bank OED. (2005, April 15). Project Performance Assessment Report: Niger, Energy Project. Retrieved from http://documents.worldbank.org/curated/en/899681468291380590/pdf/32149.pdf
 World Bank. (n.d.). Citizen Engagement: Incorporating Beneficiary Feedback in all projects by FY 18. Retrieved 2015, from https://web.archive.org/web/20150102233948/http://pdu.worldbank.org/sites/pdu2/en/about/PDU/EngageCitizens
Are We Done Yet?
When are we off the hook, so to speak, for the well-being of the participants whom we said we'd make healthier, better fed, more educated, safer, etc?
The United States Agency for International Development (USAID) is the main channel for American international development aid. It is also an organization interested in learning from its programming, and numerous contracts support such work. Under one such contract, USAID’s Office of Food for Peace tasked FHI360/FANTA with reviewing the agency’s Title II development food aid from 2003-2009, covering 28 countries. This Second Food Aid and Food Security Assessment (FAFSA-2) Summary found that such programs can “reduce undernutrition in young children, improve health and nutrition outcomes, and increase access to income and food”, and also identified practices that did not work well.
While USAID has made enormous strides in monitoring and evaluation in the intervening six years (I was a consultant to USAID/PPL/LER in 2013-14), excellent recommendations that would support great, sustainable programs remain unfulfilled:
Recommendations #1, 4 “USAID/FFP should develop an applied research agenda and sponsor studies that focus on the implementation of Title II programs in the field to better define what works and what does not…. [and] should select the review panel for new Title II applications… and give reviewers a ‘cheat sheet’ on interventions and approaches that USAID/FFP is and is not interested in funding because they work better or do not work as well, [and] provide this same information in the Request for Assistance” [Request for proposals].
Yes: all across our industry there is little learning from past evaluations for future design. Valuing Voices believes local participants and stakeholders need to be consulted to tell us what (still) works and what they want more of, not only during implementation but long after. Their voices must inform great design, as it is their lives we go there to improve; they must be involved in shaping the original requests that non-profits respond to and fulfill. Further, the study found that only one-third of all evaluations were included in USAID’s database, and as Valuing Voices’ partner Sonjara has written in our blog, aid transparency requires data retention and access for learning to happen.
Recommendation #3 “USAID/FFP should include options for extensions of awards or separate follow-on awards to enable USAID/FFP to continue to support high-performing programs beyond five years and up to ten years… [as] longer implementation periods are associated with greater impact.”
This would address the ‘how much impact can we accomplish in 1, 3, 5 years?’ question that many of us in international non-profits ask ourselves. Finally, the graphic below is self-explanatory – USAID sees its role ending at close-out.
The crux lies in their honest statement: "It was beyond the scope and resources of the FAFSA-2 to explore in any depth the sustainability of Title II development programs after they ended." While they state that there is merit in having impact while you intervene, such as "having a positive impact on the nutritional status of the first cohort of children is of immense benefit in its own right", they go on to say that "ideally, one would like to see mothers continuing positive child feeding practices and workers continuing to deliver services long after programs end… [yet] whether the [maternal child health and nutrition] interventions are sustainable beyond one generation is unknown and would require research." This is because funding is pre-programmed, fixed to end within set 1, 3, 5 year increments, and no one goes back to learn how it all turned out. This is what most needs to change, this illusion that what happens after closeout is no longer our issue, that the ‘positive impact’ we had while there is enough.
They are not alone. I think of NORAD, the Government of Norway’s development arm, as very progressive, so I went to NORAD’s website and searched for ‘ex-post’ (we do a lot of that at Valuing Voices). As with our World Bank blog on finding real ex-post evaluations, many, many things are labeled ‘ex-post’: there was one actual evaluation in Palestine, with fieldwork that asked participants, and a few that looked at institutional sustainability, but many of the 100+ ‘finds’ were actually documents recommending ex-posts; this is typical of our searches of other donors. I emailed NORAD to ask whether there were more with participant voices; they assured me they did them. Maybe our problem is one of definitions and taxonomy again. Maybe we should call them post-project participant feedback?
Most of my colleagues would agree that the sustainability of activities aimed at making communities food secure in the long term, independent of aid, is a shared goal, yet one that short-term assistance aiming at huge impacts such as ‘making communities food secure’ and ‘sustainably decreasing malnutrition’ (common proposal goals) cannot realistically achieve. We need participant voices to teach us how well we served them. We need to return, learn “what works and what does not”, and Value Voices in true sustained partnership. We all look forward to being done.
 “Another major obstacle to transparency and learning from the Title II program experience was the fact that only one-third of the final evaluations were publicly available on the Development Experience Clearinghouse (DEC), despite the requirement that Awardees post them to the DEC…. [There was a lack of] cross-cutting studies or in-depth analyses of Title II evaluation results to advance organizational learning [and] much greater use could be made of the evaluation data for systematic reviews, meta-analyses, secondary analyses, and learning.”
A Missing Piece In Local Ownership: Evaluation
(Reblog from http://www.interaction.org/blog/missing-piece-local-ownership-evaluation Grino and Levine)
Ten years ago, ownership was established as a key principle of aid effectiveness. Although understanding of ownership has evolved since then – most significantly, as something that involves not just governments but all parts of society – today the focus is not on whether ownership is important but on how we can move ownership from principle to practice. To date, these conversations have primarily concerned how to make ownership a reality in program design and implementation. InterAction supports these efforts, but believes they need to go one step further. As we argue in our new briefing paper, the local ownership agenda must extend to all parts of the program cycle – from design all the way through evaluation.
Including those meant to benefit from international assistance (we use the term “participants”) in deciding what should be done and how it should be done is critically important for effectiveness and sustainability. Organizations, and some governments, also increasingly recognize the value of hearing directly from participants and citizens about how well something is being done. This can be seen in the growing use of feedback mechanisms and the establishment of initiatives promoting social accountability. Including participants in evaluation decision making is just as important. Particularly when participants have lacked ownership at other stages of an intervention, evaluation serves as a last opportunity for them to weigh in.
Despite the widespread acceptance of the principle of local ownership, evaluations continue to predominantly respond to the demands of donors, focusing on how funds are spent and the degree to which the results donors or implementers value are achieved. By only taking into consideration the values and interests of some stakeholders (primarily donors and external actors) in evaluations, organizations are missing a critical perspective on an intervention’s results: the views of the very people the intervention was intended to assist.
When participants are involved in evaluation, more often than not they serve as data sources, and perhaps as data collectors. Very rarely do we find examples of participants involved in deciding the questions an evaluation will ask, determining the criteria that will be used for judging an intervention’s success, interpreting results, or shaping recommendations based on evaluation findings.
A concern frequently raised about including participants in evaluation decision making is that their clear stakes in evaluation outcomes and potentially their lack of evaluation capacity could lead to biased and unreliable results. Yet it is important to acknowledge that everyone involved in an evaluation has values, interests, and capacities that affect how they approach an evaluation. Including participants’ voices adds a greater diversity of perspectives to an evaluation and the interpretation of findings, thus reducing bias.
We recognize that the road to local ownership in evaluation is just that: a road, not something that can be achieved instantly or that is possible in all cases. For that reason, we recommend that organizations take an incremental approach to pursuing local ownership in evaluation, focusing on the critical steps that can be taken along the way to increase the role of participants in evaluation processes.
As organizations seek to increase participants’ ownership in evaluation, they must consider:
Who to include as co-owners in an evaluation;
In which aspects of an evaluation participants need to be involved (we provide a list of possible evaluation activities related to designing the evaluation, collecting and analyzing data, determining findings and recommendations, and disseminating and using evaluation results); and
The nature of participants’ involvement (with the goal of moving from informing or consulting participants to including participants as partners in evaluation decision making).
Getting to local ownership in evaluation requires making progress on all three fronts.
Ultimately, all actors along the aid chain – from donors to international NGOs to local partners – must believe in the value of including participants as co-owners in evaluation. Once in place, this commitment must be complemented by investing in staff’s capacity to effectively involve participants in evaluation decision making, and in strengthening participants’ own capacity to engage. As in any other participatory process, participants must also trust that their input will indeed influence policies and practice. Including participants in this way is another way to signal that we truly view them as partners, rather than beneficiaries.
By Laia Grino, Senior Manager for Transparency Accountability and Results, and Carlisle Levine, Ph.D., Senior Advisor, Evaluation (Consultant)
Pick a term, any term…but stick to it!
Valuing Voices is interested in identifying learning leaders in international development that use participatory post-project evaluation methods to learn about the sustainability of their development projects. These organizations believe not only that they need to see the sustained impact of their projects by learning from what has and has not worked in the past, but also that participants are the most knowledgeable about such impacts. So how do they define sustainability? By asking questions such as: were project goals self-sustained by the ‘beneficiary’ communities that implemented these projects? By our VV definition, self-sustainability can only be determined by going back to the project site, 2-5 years after project closeout, to speak directly with the community about the long-term intended/unintended impacts.
Naturally, we turned to the World Bank (WB) – the world’s prominent development institution – to see if this powerhouse of development, both in terms of annual monetary investment and global breadth of influence, has effectively involved local communities in the evaluation of sustainable (or unsustainable) outcomes. Specifically, my research was focused on identifying the degree to which participatory post-project evaluation was happening at the WB.
A fantastic blog* regarding participatory evaluation methods at the WB emphasizes the WB’s stated desire to improve development effectiveness by “ensuring all views are considered in participatory evaluation,” particularly through its community driven development projects. As Heider points out,
“The World Bank Group wants to improve its development effectiveness by, among others things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.”
Wow! Though these methods are clearly well intentioned, there seems to be a flaw in the terminology. The IEG says, “[Community driven development projects] are based on beneficiary participation from design through implementation, which make them a good example of citizen-centered assessment techniques in evaluation,” …however, this fails to recognize the importance of planning for community-driven post-project sustainability evaluations, to be conducted by the organization in order to collect valuable data concerning the long-term intended/unintended impacts of development work.
With the intention of identifying evidence of the above-mentioned mode of evaluation at the WB, my research process involved analyzing the resources provided by the WB’s Independent Evaluation Group (IEG) database of evaluations. As the accountability branch of the World Bank Group, the IEG works to gather institution-wide knowledge about the outcomes of the WB’s finished projects. Its mission statement is as follows:
“The goals of evaluation are to learn from experience, to provide an objective basis for assessing the results of the Bank Group’s work, and to provide accountability in the achievement of its objectives. It also improves Bank Group work by identifying and disseminating the lessons learned from experience and by framing recommendations drawn from evaluation findings.”
Another important function of the IEG database is to provide information for the public and external development organizations to access and learn from; this wealth of data and information about the World Bank’s findings is freely accessible online.
When searching for evidence of post-project learning, I was surprised to find that the taxonomy varied greatly: the projects I was looking for could be found under ‘post-project’, ‘post project’, ‘ex-post’ or ‘ex post’. Also unclear was any specific category under which these could be found, or a definition of what exactly an IEG ex-post impact evaluation requires. According to the IEG, there are 13 major evaluation categories, which are described in more detail here. I was expecting to find an explicit category dedicated to post-project sustainability, but instead this type of evaluation was included under Project Level Evaluations (which include PPARs and ICRs [Implementation Completion Reports]) and Impact Evaluations.
This made it difficult to determine a clear procedural standard for documents reporting sustainability outcomes and other important data for the entire WB.
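The taxonomy problem is easy to illustrate: a search tool that collapsed the hyphen/space variants before matching would return one consistent result set. A minimal sketch of such a normalizer (our own hypothetical helper, not an IEG feature):

```python
import re

# Collapse the taxonomy variants found in the database ('post-project',
# 'post project', 'ex-post', 'ex post') into canonical tokens before
# matching, so a single query finds all spellings.
VARIANTS = {
    r"\bpost[-\s]?project\b": "post-project",
    r"\bex[-\s]?post\b": "ex-post",
}

def normalize(text: str) -> str:
    out = text.lower()
    for pattern, canonical in VARIANTS.items():
        out = re.sub(pattern, canonical, out)
    return out

print(normalize("Ex post review of a post project mission"))
# → ex-post review of a post-project mission
```

Applied to both queries and document text, this would have made the 73-vs-953 hyphen discrepancy described below impossible.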
I began my research by simply querying a few key terms in the database. In the first step, elaborated upon in this blog series, I attempted to identify evidence of ex-post sustainability evaluation at the IEG by searching the database for the term “post-project”, which yielded 73 results with a hyphen and 953 results without one. The inconsistency in the number of results depending on the hyphen was interesting in itself, but to narrow the search parameters to a manageable content analysis, I chose to break down the 73 results by document type to determine whether there were any examples of primary fieldwork research. In these documents, the term “post-project” was not used in the title or referenced in the executive summary as the specific aim of the evaluation, but rather used loosely to define the ex-post time frame. Figure 1 illustrates the breakdown of document types in the sample of 73 documents returned by the key term “post-project”:
Figure 1: Breakdown by document type of the 73 results for “post-project”
As the chart suggests, most of the documents (56%, accounting for every pie chart slice except Project Level Evaluations) were purely desk studies evaluating WB programs and the overall effectiveness of organizational policies. These desk studies draw data from existing reports, such as those published at project closeout, without supplementing past data with new fieldwork.
Out of the 9 categories, the only document type that showed evidence of any follow-up evaluation was the Project Performance Assessment Report (PPAR), defined by the IEG as a document that is…
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department]. To prepare PPARs, OED staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries. The PPAR thereby seeks to validate and augment the information provided in the ICR, as well as examine issues of special interest to broader OED studies.”
Bingo. This is what we’re looking for. PPARs accounted for 32 of the 73 results, or 44%. As I examined the methodology used to conduct these 32 PPARs, I found that after Bank funds were “fully dispersed to a project” and resources were withdrawn, the IEG sent a post-project mission back into the field to collaborate on new M&E with local stakeholders and beneficiaries, gathering new data through field surveys or interviews to determine project effectiveness.
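The document-type shares above are simple proportions of the 73-result sample; a quick back-of-envelope check (using only the counts quoted in this post) confirms the 44% / 56% split:

```python
# Sanity check of the shares reported above, using the blog's own counts
# (not queried from the IEG database itself).
total_results = 73   # hits for the hyphenated search "post-project"
ppars = 32           # Project Performance Assessment Reports among them

ppar_share = round(ppars / total_results * 100)                    # PPARs with fieldwork
desk_share = round((total_results - ppars) / total_results * 100)  # pure desk studies

print(f"PPARs: {ppar_share}%, desk studies: {desk_share}%")
```

This matches the figures in the text: roughly 44% PPARs and 56% desk studies.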
Based on these findings, I conducted a supplementary search for the term “ex post”, which yielded 672 results. Of these, 11 documents were categorized by the IEG as “Impact Evaluations”, and 3 of those showed evidence of talking with participants to evaluate sustainability outcomes. In follow-up blogs in this series I will elaborate on the significance of these additional findings and look more closely at the quality of the data in the 32 PPARs, but here are a few key takeaways from this preliminary research:
A taxonomy and definition of ex-post is missing. After committing roughly 15-20 hours of research time to this content analysis, it is clear that navigating the IEG database for methodology standards on evaluating sustainability is a more complicated process than it should be at such a prominent learning institution. The vague taxonomy used to categorize post-project/ex-post evaluation limits the database’s usefulness as a public archive documenting the sustainability of development projects the World Bank has funded.
Despite affirmative evidence of participatory community involvement in the post-project evaluation of WB projects, PPARs in the IEG database did not demonstrate a uniform level of ‘beneficiary’ participation. In most cases it was unclear how many community members affected by a project were actually involved in the ex-post process, which made it difficult to establish even a general range for the number of participants in post-project activity at the WB.
Although PPARs report findings based, in part, on post-project missions (as indicated in the preface of the reports), the specific methods/structure of the processes were not described, and oftentimes the participants were not explicitly referenced in the reports. (More detailed analysis on this topic to come in Blog Series Part 2!)
These surprisingly inconsistent approaches make it difficult to compare results across this evaluation type, as there is no common standard.
Finally, the World Bank, which has funded 12,000 projects since its inception, should have far more than 73 post-project/ ex-post evaluations…but maybe I’m just quibbling with terms.
Stay tuned for PART II of this series, coming soon!
Making money through microenterprise: is this a way to sustainable livelihoods? PACT’s Nepalese Lessons
Many Americans are steeped in the belief that we must ‘pull ourselves up by our bootstraps’, that hard work, and especially faith in small business, is the way to success. This is one of the many reasons why microfinance so appeals to donors as an investment. Does it work?
The US NGO umbrella InterAction posted some “Aid Works” global results, including that “the percentage of USAID-funded microfinance institutions that achieved financial sustainability jumped from 38% in 2000 to 76% in 2012.” Yet the model has had numerous detractors, who question its sustainability and whether it truly gives women control over resources and empowerment.
What does one ex-post evaluation we have on hand tell us? PACT’s USAID-funded WORTH program in Nepal focused on women ending poverty through business, banking and literacy/bookkeeping. The project, implemented between 1999 and 2001, worked with 240 local NGOs to reach 125,000 women in 6,000 economic groups across Nepal’s southern Terai (in 2001 a Maoist insurgency left the groups on their own). By then, 1,500 of these groups, led by the women themselves (35,000-strong), had received training to become informal-sector Village Banks. Working with local NGOs made it possible to reach 100,000 women within a few months, thanks to the NGOs’ presence and connections in the communities. The collaboration worked well because PACT and the NGOs shared a belief that dependency is not empowering. As the report says, “WORTH groups and banks were explicitly envisaged as more than just microfinance providers; they were seen as organizations that would build up women as agents of change and development in their communities”.
In 2006, PACT and the Nepalese Valley Research Group went back to assess the sustainability of the banks, the extent of income retained by the women, and any effects on community development and broader issues such as domestic abuse. They visited 272 Banks from a random sample of 450 drawn from seven of the 21 WORTH districts. Remarkably, they found even more banks functioning than before: 288 (16 more) were thriving, and, wow, WORTH women had spawned another 400 groups on their own. Participant interviews were conducted with members and management, with women who had left their Banks, and with members of groups that had dissolved, plus a ‘control group’ of poor, non-WORTH women in Village Bank communities.
Was it a universal success? Almost. The bar chart below shows what impacts the management committees felt the Village Banks had had on members: most were better off, some far better off, and some the same. This held true for both the original Village Bank members and the new bank members.
The SEEP network reviewed WORTH’s ex-post evaluation and highlighted five key findings:
- Wealth creation: A Village Bank today holds average total assets of over Rs. 211,000, or $3,100, more than three times its holdings in 2001. Each woman member of WORTH now has an average equity stake of $116 in her Village Bank.
- Sustainability: Approximately two-thirds (64 percent) of the original 1,536 Village Banks are still active eight and a half years after the program began and five to six years after all WORTH-related support ended. That means there are nearly 1,000 surviving groups with approximately 25,000 members.
- Replication: A quarter of the existing WORTH groups have helped start an estimated 425 new groups involving another 11,000 women, with neither external assistance nor prompting from WORTH itself. If all these groups are currently operating, more Village Bankers are conducting business in Nepal today than when formal WORTH programming ended in 2001. The report also noted that 63% of Village Bank members derived their income from agriculture/sale of food versus 17% from commerce/retail trade, with the rest in miscellaneous trades. Over 40% of participants said they borrowed to pay for education and health costs, and another 20% to pay off other loans and for festivals (e.g. births, deaths).
- Literacy: 97 percent of respondents reported that literacy is “very important” to their lives; 83 percent reported that because of WORTH they are able to send more of their children to school.
- Domestic disputes and violence: Two-thirds of groups reported that members bring their personal or family problems to the group for advice or help. Of these, three-quarters reported helping members deal with issues of domestic disputes and related problems. Forty-three percent of women said that their degree of freedom from domestic violence has changed because of their membership in a WORTH group. One in 10 reported that WORTH has actually helped “change her life” because of its impact on domestic violence.
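The sustainability and wealth figures above hang together arithmetically. A rough back-of-envelope sketch (using only the numbers quoted from the SEEP review; the per-bank sizes are my own inference, not from the report):

```python
# Consistency check of the SEEP figures quoted above.
original_banks = 1536      # Village Banks formed by 2001
surviving_share = 0.64     # ~two-thirds still active at the ex-post

surviving = original_banks * surviving_share   # ~983, i.e. "nearly 1,000"
members_per_bank = 25000 / surviving           # ~25 members per surviving bank

# $116 average equity per member times ~25 members per bank lands in the
# same ballpark as the reported ~$3,100 average total assets per bank.
implied_assets = 116 * members_per_bank

print(round(surviving), round(members_per_bank), round(implied_assets))
```

The implied assets come out slightly below $3,100, which makes sense if total assets include retained group earnings beyond member equity.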
The report outlines other impacts, including self-help actions: two-thirds of groups were engaged in community action, and three-quarters said the group had done something to help others in the community. Speaking of community, it is notable that the self-selecting women came primarily from wealthier groups (60%), with 15% from the middle class and only 20% from the most disadvantaged castes. Frankly, this is not so surprising, as those most willing to take on risk are rarely the poorest, at least at first; 67% of the very poor later wanted to join such a bank (once the risk was shown to be acceptable relative to income).
The study’s author asks “Yet for all this documented success, WORTH and other savings-led microfinance programs remain among the best kept secret in the world of international development and poverty alleviation. Although together such programs reach some two million poor people, they go almost unnoticed by the $20 billion credit-led microfinance industry… The empowered women in this study—like WORTH women elsewhere in Asia and Africa— have proved themselves equipped to lead a new generation of entrepreneurs who can take WORTH [onward] through a model of social franchising now being pilot-tested [which is] as creative and potentially groundbreaking as is WORTH…WORTH has the potential to become an “international movement that supports women’s efforts to lift themselves, their families, and their communities out of poverty”.
So why aren’t we learning from such projects and scaling them up everywhere? PACT is. They have reached 365,000 women in 14 countries, including Myanmar, Cambodia, Colombia, Swaziland, DRC and Ethiopia, with Nigeria and Malawi starting this year. Coca-Cola awarded $400,000 to PACT in 2013 to replicate WORTH in Vietnam with 2,400 women. Who else is replicating this model? It is not clear from the many excellent microenterprise sites I visited, though one source tells me that the Mastercard Foundation and Aga Khan are looking into wider replication as well. Let’s track their results and ask participants!
 Bateman, M. (2011, September 20). Microcredit doesn’t work – it’s now official. Retrieved from https://opinion.bdnews24.com/2011/09/20/microcredit-doesn%E2%80%99t-work-%E2%80%93-it%E2%80%99s-now-official/
 Vaessan, J., Rivas, A., & Duvendack, M. (2014, November). The Effects of Microcredit on Women’s Control Over Household Spending in Developing Countries: A Systematic Review and Meta-analysis. Retrieved from https://www.findevgateway.org/paper/2014/11/effects-microcredit-womens-control-over-household-spending-developing-countries
 Mayoux, L. (2008, June). Women Ending Poverty: The WORTH Program in Nepal – Empowerment through Literacy, Banking and Business 1999-2007. Retrieved from https://www.findevgateway.org/case-study/2008/06/women-ending-poverty-worth-program-nepal-empowerment-through-literacy-banking
 PACT. (n.d.). WORTH. Retrieved 2015, from https://web.archive.org/web/20141106013639/http://www.pactworld.org/worth
 PACT. (2013, August 13). The Coca-Cola Foundation awards $400,000 grant to Pact. Retrieved from https://www.pactworld.org/article/coca-cola-foundation-awards-400000-grant-pact