Can’t wait to learn from post-project sustainability evaluations? If not, why not?


A colleague who has been promoting ex-post sustainability evaluations in her organization questioned my claim that doing them has “benefits” for future programming. It was an “untested assumption that there will be sufficient, strong enough evidence to apply to future programming…  [and] the need to have evidence to cite for future work is not pressing enough.”

If you are on aid’s receiving end, what you care about is that good results are sustained and you are able to live better, longer. You might want to show others evidence of what was sustained, rather than only what worked while external investments were flowing but faltered once they stopped. Absolutely, aid donors need evidence that something designed, funded, implemented, and monitored & evaluated showed good results, but we assume our results will be sustained after we have closed out and moved on. How well have we done? Let’s see.

At the American Evaluation Association meetings this month, several post-project evaluations were presented. Some came from Valuing Voices research, some from Social Impact, PLAN, and World Vision, and some from others [1].


Results 3-5 years post close-out were, shall we say, unexpected, from CRS Madagascar and Honduras.


While there were some successes, including Niger [5] and Burkina Faso, where MCC/PLAN found that three years post-project “BRIGHT still had a significant positive impact—6.0 percentage points for children between ages 6 and 22—on self-reported enrollment. The impacts are smaller than estimated impacts on enrollment at 7 and 3 years after the start of the program,” they were rare [6].

If we don’t wonder why things worked or didn’t, and don’t return to find out whether results lasted and what to do (or not do) again, we often continue to do very similar programming elsewhere, again assuming great results. How can we close our eyes, forgo post-project Sustained and Emerging Impacts Evaluations (SEIEs), and pass up the chance to see, learn, and do better? How can we continue to do very similar water/sanitation, health, food security, and education programs and projects (with potentially similar results) and call ourselves sustainable development professionals? Shouldn’t we always ask not how effective our aid is while it’s there, but how effective it is after it’s gone?

If you want more data, see a presentation we did at USAID. What do you think?




[1] Cekan, J., PhD. (2014, April 7). Evaluation of ERCS/Tigray’s “Building Resilient Community: Integrated Food Security Project to Build the Capacity of Dedba, Dergajen & Shibta Vulnerable People to Food Insecurity”. Retrieved from

[2] Madagascar Rural Access To New Opportunities For Health And Prosperity (RANO-HP) Ex-Post Evaluation. (2017, June 1). USAID. Retrieved from

[3] The World Bank. (2014, June 26). Project Performance Assessment Report, Nigeria: Second National Fadama Development Project. Retrieved from

[4] Rogers, B. L., Sanchez, L., & Fierstein, J. (n.d.). Exit Strategies Study: Honduras. Retrieved from

[5] Cekan, J., PhD, Kagendo, R., & Towns, A. (2016). Participation by All: The Keys to Sustainability of a CRS Food Security Project in Niger. Retrieved from

[6] Davis, M., Ingwersen, N., & Kazianga, H. (2016, August 29). Ten-Year Impacts of Burkina Faso’s BRIGHT Program. Retrieved from


IEG Blog Series Part II: Theory vs. Practice at the World Bank


In Part I of this blog series, I described my research process for identifying the extent to which the World Bank (WB) conducts participatory post-project sustainability evaluations for its many international development projects. Through extensive research and analysis of the WB’s IEG database, Valuing Voices concluded that the WB has a very loosely defined taxonomy for ex-post project evaluation, making it difficult to identify a consistent standard of evaluation methodology for sustainability impact assessments.

In particular, we were concerned with identifying examples of direct beneficiary involvement in evaluating long-term sustainability outcomes, for instance by surveying/interviewing participants to determine which project objectives were self-sustained…and which were not. Unfortunately, it is quite rare for development organizations to conduct ex-post evaluations that involve all levels of project participants in contributing to long-term information feedback loops. However, there was one document type in the IEG database that gave us at Valuing Voices some room for optimism: Project Performance Assessment Reports (PPARs). PPARs are defined by the IEG as documents that are,

“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department, synonymous with IEG]. To Prepare PPARs, staff examines project files and other documents, interview operation staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries” [1].

The key takeaway from this definition is that these reports supplement desk studies (ICRs) with new fieldwork data provided, in part, by the participants themselves. The IEG database lists hundreds of PPAR documents, but I focused on only the 33 documents that came up when I queried “post-project”.

Here are a few commonalities to note about the 33 PPARs I studied:

  • They are all recent documents – the oldest was published in 2004, and the most recent in 2014.
  • The original projects assessed in the PPARs were finalized anywhere from 2 to 10+ years before the PPAR was written, making them true ex-posts.
  • They all claimed to involve mission site visits and communication with key project stakeholders, but they did not all claim to involve beneficiaries explicitly.


Although the WB/IEG’s definition of a PPAR mentions that beneficiary participation takes place in “most” of the ex-post missions back to the project site, Valuing Voices was curious to know whether there is a standard protocol for the level of participant involvement, the methods of data collection, and, ultimately, the overall quality of the new fieldwork data collected to inform PPARs. For this data quality analysis, Valuing Voices identified these key criteria:

  • Overall summary of evaluation methods
  • Who was involved, specifically? Was there direct beneficiary participation? What were the research methods/procedures used?
  • What was the level of sustainability (termed Risk to Development Outcome* after 2006) established by the PPAR?
  • Was this different from the level of sustainability as projected by the preceding ICR report?
  • Were participants involved via interviews? (Yes/No)
  • If yes, were they semi-structured (open-ended questions allowing for greater variety/detail of qualitative data) or quantitative surveys?
  • How many beneficiaries were interviewed/surveyed?
  • What % of total impacted beneficiary population was this number?
  • Was there a control group used? (Yes/No)

Despite our initial optimism, we determined that the quality of the data provided in these PPARs was highly variable, and overall quite low. A summary of the findings is as follows:


1. Rarely were ‘beneficiaries’ interviewed

  • Only 15% of the PPARs (5) gave details about the interview methodologies, and of these only 3% of the PPARs (1) described in detail how many participants were consulted, what they said, and how they were interviewed (Nigeria 2014 [2]).
  • 54% of the reports (18) mentioned beneficiary input in data collected in the post-project mission, but gave no specific information on the number of participants involved; nor were their voices cited, nor was any information included on the methodologies used. The vast majority only vaguely referenced the findings of the post-project mission, rather than data collection specifics. A typical example of this type of report is Estonia 2004 [1].
  • 30% of the PPARs (10) involved no direct participant/beneficiary participation in the evaluation process, with these missions only including stakeholders such as project staff, local government, NGOs, donors, consultants, etc. A typical example of this type of report is Niger 2005 [3].

These percentages are illustrated in Figure 1, below, which gives a visual breakdown of the number of reports that involved direct participant consultation with detailed methodologies provided (5), the number of reports where stakeholders were broadly consulted but no specific methodologies were provided (18), and the number of reports where no participants were directly involved in the evaluation process (10).
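The arithmetic behind these percentages can be tallied in a quick sketch (a minimal illustration only; the category labels are my own shorthand, while the counts are the 5/18/10 split of the 33 PPARs reported above, and the truncated percentages sum to 99%):

```python
# Minimal sketch of the PPAR breakdown described above.
# Category labels are shorthand; counts come from the 33 reports discussed.
from collections import Counter

reports = Counter({
    "detailed participant consultation": 5,
    "broad stakeholder mention, no methods given": 18,
    "no direct participant involvement": 10,
})

total = sum(reports.values())  # 33 PPARs in all

for category, count in reports.items():
    share = count * 100 // total  # truncated percentage, as reported in the text
    print(f"{category}: {count}/{total} ({share}%)")
```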


Figure 1


2. Sustainability of project outcomes was unclear

  • In 54% of cases, there was some change in the level of sustainability from the original level predicted in the ICR (which precedes and informs the PPAR) to the level established in the PPAR. Ironically, of the 33 cases, 22 were classified as Likely, Highly Likely, or Significantly Likely to be sustainable, yet participants were not asked for their input.
  • So on what basis was sustainability judged? Of the three cases with high participant consultation, the Nigerian project (where 10% of participants were asked for feedback) had only moderate sustainability prospects, while India (also 10% feedback) and Kenya (14-20%) were both classified as likely to be sustainable.

Along the Y axis of Figure 2, below, is the spectrum of sustainability rankings observed in the PPARs, which range from “Negligible to Low” up to “High”. For each of the projects analyzed (there are 60 total projects accounted for in this graph, as some of the PPARs covered up to 4 individual projects in one report), the graph illustrates how many projects consulted participants, and how many failed to do so, for each evaluation outcome. As we can see, the majority of cases determined to be highly or significantly sustainable either did not consult participants directly or only consulted stakeholders broadly, with limited community input represented in the evaluation. These are interesting findings: although a great deal of sustainability is supposedly being reported, very few cases actually involved the community participants in a meaningful way (to our knowledge, based on the lack of community consultation discussed in the reports). Yet unless these evaluations take place at the grassroots level, engaging participants in a conversation about the true self-sustainability outcomes of projects, you cannot really know how sustainable a project is by talking only with donors, consultants, governments, etc. Are the right voices really being represented in this evaluation process?

*Note: the “Sustainability” ranking was retitled “Risk to Development Outcomes” in 2006.


Figure 2


While projects were deemed sustainable, this was based on very little ‘beneficiary’ input. The significance of this information is simple: not enough is being done to ensure beneficiary participation in ALL STAGES of the development process, especially in the post-project time frame, even by prominent development institutions like the WB/IEG. While we commend the Bank for currently emphasizing citizen engagement via beneficiary feedback, this still seems to be more of a guiding theory than a habitualized practice [4]. Although all 33 documents I analyzed claimed there was “key stakeholder” or beneficiary participation, the reality is that no consistent procedural standard for eliciting such engagement could be identified.

Furthermore, the lack of specific details elaborating upon interview/survey methods, the number of participants involved, the discovery of any unintended outcomes, etc. creates a critical information void. As a free and public resource, the IEG database should not only be considered an important internal tool for the WB to catalog its numerous projects over time, but also an essential external tool for members of greater civil society who wish to benefit from the Bank’s extensive collection of resources – to learn from WB experiences and inform industry-wide best practices.

For this reason, Valuing Voices implores the World Bank to step up its game and establish itself as a leader in post-project evaluation learning, not just in theory but also in practice. While these 33 PPARs represent just a small sample of the over 12,000 projects the WB has implemented since its inception, Valuing Voices hopes to see much more ex-post project evaluation happening in the future through IEG. Today we are seeing a decisive shift in the development world towards valuing sustainable outcomes over short-term fixes, towards informing future projects based on long-term data collection and learning, and towards community participation in all stages of the development process…


If one thing is certain, it is that global emphasis on sustainable development will not be going away anytime soon…but are we doing enough to ensure it?



[1] World Bank OED. (2004, June 28). Project Performance Assessment Report: Republic of Estonia, Agriculture Project. Retrieved from

[2] World Bank OED. (2014, June 26). Project Performance Assessment Report: Nigeria, Second National Fadama Development Project. Retrieved from

[3] World Bank OED. (2005, April 15). Project Performance Assessment Report: Niger, Energy Project. Retrieved from

[4] World Bank. (n.d.). Citizen Engagement: Incorporating Beneficiary Feedback in all projects by FY 18. Retrieved 2015, from


Youth Series Part III: The Role of ICT4D (Information and Communications Technology for Development) in Empowering Youth

Youth Series Part I is here: Factors Hindering Youth Participation in Development

Youth Series Part II is here: How Technology Enables Youth Participation

For international multilateral organizations funding billions of dollars annually for a wide range of initiatives aimed at improving socioeconomic conditions in developing countries, the challenge is to ensure a youth-centric focus within the programs they fund. According to the Pan-African Youth Union, youth empowerment is defined as “a structured process where young people gain the ability and authority to make real economic, social and political decisions. [They] believe this is the process that builds capacity to implement change, in young people, for use in their own lives, their communities, and in their society, by acting on issues that they define as important.”

A key takeaway is that problems and solutions are best addressed when they are self-defined. The problem most adamantly professed revolves around employment and educational opportunities for the many youth who feel they are not adequately prepared for the demands of the modern labor market. As a result, we propose that international development organizations fill the institutional void that exists in many developing countries by focusing their programming on problems such as poverty, unemployment, and gaps in education, using what has also been identified as an empowering tool in the modern era: technology. The most comprehensive solution that involves all of these aspects is greater Information and Communications Technology (ICT) skills training, listening, and using this medium as one method for combating the development challenges youth face in today’s world.


By orienting many youth development initiatives towards ICT skills, there is no doubt that developing-country youth will have much greater advantages to propel their capacity to be prosperous members of society. This is because “equitable access to information, knowledge (or know-how) and education is one of the most vital principles in the emerging global knowledge economy. ICTs are practical tools in narrowing knowledge gaps between countries, regions and also people by providing new frontiers in the areas of information exchange, intellectual freedom and online education.” Additionally, “in the knowledge era continuous education and training is the only way for job security, especially if the education and training is in ICT-related skills.” The role of international development organizations should be to enable this type of progressive skills training, not just for such access but also as a means of listening to our clients through mobile, Facebook, Yammer, Twitter, and other applications. Also, by improving access to ICT education programs for youth cohorts, they become more competitive in a global market that increasingly demands workers with advanced ICT skills. These programs must tackle the “widening digital divide” between developed and developing countries to ensure a more sustainable and balanced development scheme.

To this point, the Executive Secretary of the United Nations Economic Commission for Africa (UNECA), Mr. Abdoulie Janneh, gave a statement at the 2011 African Press Organization (APO) forum themed ‘Accelerating Youth Empowerment for Sustainable Development’, which highlighted the fact that human capital is key in facilitating growth, and that with greater education and training African youth can contribute more to development and growth for the continent. Nonetheless, he could not go without saying that “several commitments, policies and programmes on youth education and employment have been prioritized at national, sub-regional and global levels to improve the livelihoods of young people in Africa. However, these initiatives have yet to translate into the desired outcomes. Thus, concerted and innovative efforts are still required especially at a time when the youth population continues to increase.” Again we see the trend that current policies have thus far failed to provide the circumstances necessary for youth empowerment to be realized in many African countries, which means development is happening too slowly for the millions of African youths who could be contributing invaluable skills to their societies – if only they had the means – and if we were listening and funding their priorities!

An example of a good ICT4D training program is the Youth Empowerment Program (YEP) in Nigeria, a two-year program implemented by the International Youth Foundation (IYF) and Microsoft, “to improve the employability of disadvantaged African youth in Nigeria between ages 16 to 35. The program, with support from Microsoft, worked with LEAP Africa and local partners to provide demand-driven training in information and communications technology (ICT), life skills, entrepreneurship and employment services.” Over two years, the program addressed the inadequacy of technical skills and the lack of labor market information among Nigerian youth by providing training to “improve the employability prospects of 2,500 young people throughout the country,” aiming to place 70% of program participants in jobs, internships, self-employment, or community service opportunities with greater capacity in education and training. Six months after project completion, the project was evaluated by interviewing a sample follow-up cohort of 69 participants:

·      “All together, 55% of the respondents were employed, self-employed, participated in an internship or community service, or continued their studies after the training.” (This number is thought to be low, primarily because of the few employment opportunities in Bauchi, where the follow-up participants were from. This is typical in many cities where demand far outstrips employment opportunities)

·      “Over 78% of the respondents in the sample follow-up cohort confirmed that the ICT training had improved their employment prospects.” They indicated that this was because “ICT skills are important selection criteria in the job market,” and there was also a significant increase in the follow-up cohort’s use of computers.

Unfortunately, there was no data on whether the employment obtained actually used these new ICT skills; more data is needed to compare those trained versus untrained regarding employment that uses these skills, and how much more ‘development’ was fostered by such trainings. Yet given our dependence on technology, technical illiteracy seems a logical barrier. IYF has identified eight high-growth sectors for ICT-enabled youth employment, in fields such as “Banking and Financial Services, Telecommunications, Information Technology, Oil and Gas, Education and Training, Media, Marketing and Advertising, Hospitality and Tourism, and Healthcare Services.”

The Arab Spring movements have proven that power in numbers and influence aided by the technological spread of ideas will not allow the youth cohort to be left behind in the push for development. Rather, they are demanding to be heard, and they are calling for greater capacity to be major contributors in their development goals. By funding ICT training programs that would allow youth to address the institutional weaknesses that hinder their demographic, international development organizations could find that the solution lies in shifting the goals of development towards sustainability – a sustainability that necessitates the empowerment of youth. By funding such training, youth can be heard, employed, and inform the development agendas for their countries.


We Value their Voices, and yours. What else is missing?