Pick a term, any term…but stick to it!
Valuing Voices is interested in identifying learning leaders in international development who use participatory post-project evaluation methods to learn about the sustainability of their development projects. These organizations believe not only that they need to see the sustained impact of their projects by learning from what has and hasn't worked in the past, but also that participants are the most knowledgeable about such impacts. So how do they define sustainability? By asking questions such as: were project goals self-sustained by the 'beneficiary' communities that implemented these projects? By our VV definition, self-sustainability can only be determined by returning to the project site, 2-5 years after project closeout, to speak directly with the community about the long-term intended and unintended impacts.
Naturally, we turned to the World Bank (WB), the world's most prominent development institution, to see whether this powerhouse of development – in both annual monetary investment and global breadth of influence – has effectively involved local communities in evaluating sustainable (or unsustainable) outcomes. Specifically, my research focused on identifying the degree to which participatory post-project evaluation was happening at the WB.
A fantastic blog* on participatory evaluation methods at the WB emphasizes the WB's stated desire to improve development effectiveness by "ensuring all views are considered in participatory evaluation," particularly through its community-driven development projects. As Heider points out,
"The World Bank Group wants to improve its development effectiveness by, among other things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries."
Wow! Though these methods are clearly well intentioned, there seems to be a flaw in the terminology. The IEG says, "[Community driven development projects] are based on beneficiary participation from design through implementation, which make them a good example of citizen-centered assessment techniques in evaluation." However, participation that stops at implementation fails to recognize the importance of planning for community-driven post-project sustainability evaluations, conducted by the organization after closeout, to collect valuable data on the long-term intended and unintended impacts of development work.
To identify evidence of this mode of evaluation at the WB, my research process involved analyzing the resources in the database of evaluations maintained by the WB's Independent Evaluation Group (IEG). As the accountability branch of the World Bank Group, the IEG works to gather institution-wide knowledge about the outcomes of the WB's completed projects. Its mission statement is as follows:
“The goals of evaluation are to learn from experience, to provide an objective basis for assessing the results of the Bank Group’s work, and to provide accountability in the achievement of its objectives. It also improves Bank Group work by identifying and disseminating the lessons learned from experience and by framing recommendations drawn from evaluation findings.”
Another important function of the IEG database is to provide information for the public and external development organizations to access and learn from; this wealth of data and information about the World Bank’s findings is freely accessible online.
When searching for evidence of post-project learning, I was surprised to find that the taxonomy varied greatly: the projects I was looking for could be found under 'post-project', 'post project', 'ex-post', or 'ex post'. It was also unclear which specific category these fell under, and there was no definition of what exactly an IEG ex-post impact evaluation requires. According to the IEG, there are 13 major evaluation categories, which are described in more detail here. I expected to find an explicit category dedicated to post-project sustainability; instead, this type of evaluation was folded into Project Level Evaluations (which include PPARs and ICRs [Implementation Completion Reports]) and Impact Evaluations.
This made it difficult to determine a clear, WB-wide procedural standard for documents reporting sustainability outcomes and other important data.
I began my research process by querying a few key terms in the database. In the first step of my research, which will be elaborated upon in Part I of this blog series, I attempted to identify evidence of ex-post sustainability evaluation at the IEG by searching for the term "post-project", which yielded 73 results with a hyphen and 953 without. The inconsistency in the number of results depending on the hyphen was interesting in itself, but to narrow the search parameters to a manageable content analysis, I chose to break down the 73 hyphenated results by document type to determine whether any contained primary fieldwork research. In these documents, the term "post-project" was not used in the title or referenced in the executive summary as the specific aim of the evaluation; rather, it loosely defined the ex-post time frame. Figure 1 illustrates the breakdown of document types in this sample of 73 documents:
As the chart suggests, many of the documents (56% – all of the pie-chart slices except Project Level Evaluations) were purely desk studies, evaluating WB programs and the overall effectiveness of organizational policies. These desk studies draw data from existing reports, such as those published at project closeout, without supplementing past data with new fieldwork research.
Out of the 9 categories, the only document type that showed evidence of any follow-up evaluation was the Project Performance Assessment Report (PPAR), defined by the IEG as a document that is…
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department]. To prepare PPARs, OED staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries. The PPAR thereby seeks to validate and augment the information provided in the ICR, as well as examine issues of special interest to broader OED studies.”
Bingo. This is what we're looking for. PPARs accounted for 32 of the 73 results, or 44%. As I examined the methodology used to conduct PPARs, I found that in the 32 cases returned by my "post-project" search, after Bank funds were fully disbursed to a project and resources were withdrawn, the IEG sent a post-project mission back into the field to collaborate on new M&E with local stakeholders and beneficiaries. The IEG gathered new data through field surveys or interviews to determine project effectiveness.
Based on these findings, I conducted a supplementary search for the term "ex post", which yielded 672 results. Of these, 11 documents were categorized by the IEG as "Impact Evaluations", of which 3 showed evidence of talking with participants to evaluate sustainability outcomes. In follow-up blogs in this series I will elaborate on the significance of these additional findings and examine in greater detail the quality of the data in the 32 PPARs, but here are a few key takeaways from this preliminary research:
Taxonomy and definition of ex-post are missing. After committing approximately 15-20 hours of research time to this content analysis, it is clear that navigating the IEG database for methodological standards for sustainability evaluation is a more complicated process than it should be at such a prominent learning institution. The vague taxonomy used to categorize post-project/ex-post evaluation limits the functionality of this resource as a public archive dedicated to documenting the sustainability of the development projects the World Bank has funded.
Despite affirmative evidence of participatory community involvement in the post-project evaluation of WB projects, not all PPARs in the IEG database demonstrated a uniform level of 'beneficiary' participation. In most cases it was unclear how many community members affected by a project were actually involved in the ex-post process, which made it difficult to determine even a general range of the number of participants in post-project activity at the WB.
Although PPARs report findings based, in part, on post-project missions (as indicated in the preface of the reports), the specific methods and structure of those missions were not described, and often the participants were not explicitly referenced in the reports. (More detailed analysis on this topic to come in Blog Series Part 2!)
These surprisingly inconsistent approaches make it difficult to compare results across this evaluation type, as there is no precise standard to measure against.
Finally, the World Bank, which has funded 12,000 projects since its inception, should have far more than 73 post-project/ex-post evaluations… but maybe I'm just quibbling with terms.
Stay tuned for PART II of this series, coming soon!