What can we learn from Ex-Post (Post-Project) Evaluations?

In trying to learn more about sustainable development solutions, the first place to look for information is ex-post evaluations, also commonly called post-project evaluations, which are conducted either by development organizations themselves or by independent external evaluators. Unlike final project evaluations, which are completed at a project’s conclusion to assess whether it achieved its intended goals, an ex-post evaluation is conducted in the years after a project’s official end date – perhaps one, three, or five years after the fact. An ex-post evaluation is a highly valuable tool for determining not just how successful a development project was after resources and international funding were withdrawn, but also the long-term sustainability of its outcomes for the community members who were being ‘developed’.

Given the seemingly obvious value of ex-post evaluations for understanding both positive and negative development practice, I was surprised by how hard it was to actually find any. Some organizations are diligent about conducting post-project evaluations and documenting the results for future reference, notably the Japan International Cooperation Agency (JICA), which maintains an extensive, searchable database of its ex-post evaluations. However, this is certainly not the norm (yet), or if organizations are conducting ex-post evaluations, they are not making the results widely available to the public. My research process included search terms such as “ex-post evaluations by international development organizations,” “post-project evaluations,” and “impact evaluations.” Using these generic search terms, I was only moderately successful in finding helpful evaluations for my research, which suggests the need for information about development outcomes to be more readily accessible to the public.

I also found that some organizations had completed these evaluations, but they were at times too vague to yield much useful information. Of the roughly 10-15 evaluations I found, only about seven were clear and organized enough to include in my table of summaries. (My search was limited to projects conducted predominantly at the community level, rather than at the municipal or state level.) The variable quality of these evaluations undermines their usefulness – if an ex-post evaluation is in an unsearchable format or doesn’t follow a fairly standardized structure, how can it efficiently inform future projects? It would also be much easier for project coordinators to learn from past projects, and even from other organizations, if a more accessible and methodical database existed to make searching for ex-post evaluations simple. Despite these challenges, I have included five different evaluations from my preliminary research whose results I was able to compare for a better understanding of how to achieve sustainable project outcomes. The framework used for analyzing these evaluations considered:

  • The sector of the development project (e.g. food security, poverty reduction, agricultural development);
  • the implementing organization and the evaluating organization (if it was different);
  • the dates and gap between the project and the ex-post evaluation;
  • the project objectives;
  • specific ex-post evaluation methods;
  • the positive/sustainable outcomes;
  • the negative/unsustainable outcomes;
  • the transfer to authorities;
  • the amount of money invested overall;
  • and the level of local participation.
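The framework above lends itself to the kind of standardized, searchable record argued for earlier. Below is a minimal sketch of what one row of such a database might look like; the field names and the example values (including the dates) are my own illustrative choices, not drawn from any organization’s actual schema:

```python
from dataclasses import dataclass, field

@dataclass
class ExPostRecord:
    """One row of an ex-post evaluation summary table (illustrative schema)."""
    sector: str                       # e.g. food security, poverty reduction
    implementer: str                  # implementing organization
    evaluator: str                    # evaluating organization (may equal implementer)
    project_end_year: int
    evaluation_year: int
    objectives: list = field(default_factory=list)
    methods: list = field(default_factory=list)
    sustainable_outcomes: list = field(default_factory=list)
    unsustainable_outcomes: list = field(default_factory=list)
    transferred_to_authorities: bool = False
    total_investment_usd: float = 0.0
    local_participation: str = "unknown"  # e.g. none / consulted / active

    @property
    def evaluation_gap_years(self) -> int:
        """Years between project close and the ex-post evaluation."""
        return self.evaluation_year - self.project_end_year

# Example with hypothetical dates, for illustration only:
record = ExPostRecord(
    sector="food security",
    implementer="Mercy Corps",
    evaluator="The Improve Group",
    project_end_year=2010,
    evaluation_year=2012,
)
print(record.evaluation_gap_years)  # → 2
```

Even a simple shared structure like this would let coordinators filter past evaluations by sector, evaluation gap, or level of local participation.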

The five evaluations analyzed include:

  • JICA’s Project for Irrigation Farming Improvement in Ethiopia [1];
  • JICA’s Agricultural Development Project in the Kambia District of Sierra Leone [2];
  • Mercy Corps’ MILK Program in Niger [3];
  • GVC OLNUS’s project in the Argentine Puna [4];
  • IFAD’s Small-Scale Agricultural Development Project in Mauritius [5].

For a full summary of these evaluations, please see the Ex-Post Evaluations Summary Table. Below are brief synopses of the most pertinent information for the above framework of analysis; the table provides better context for the conclusions.

Here are the key findings from the ex-post evaluations that I found most significant:

  • Over 18 million USD was spent across the five projects combined, but most evaluations did not explicitly state how many people or households the individual projects reached. An exception is the project in Mauritius, which reported reaching around 3,500 people. Without understanding the scale of a program, it is difficult to compare projects directly to one another.
  • Mercy Corps’ MILK Project in Niger was inclusive and participatory in its ex-post evaluation process, which resulted in hard data that can easily be analyzed, compared, and learned from in future projects. In addition, this evaluation utilized a unique pictorial tool developed specifically to include all project participants in the feedback loop, despite widespread illiteracy, so that every individual had the opportunity to provide insight on project impacts.
  • JICA’s Ethiopian agricultural development program involved community participation in the project from the earliest planning phases, with 100% of members reporting that they had “participated” or “actively participated” in the process. This resulted in feelings of greater personal ownership of the project, and heightened local understanding of their responsibilities.
  • Evaluations that included direct community feedback in their analyses were by far the most helpful for determining sustainability. By contrast, JICA’s Agricultural Development Project in the Kambia District of Sierra Leone made no mention of local-level involvement at any stage of project planning, implementation, or evaluation, which could help explain why the project only “somewhat” achieved its objectives.
  • Projects with flexible agendas, willing to change with the evolving needs of the population during the planning/implementation phases, are viewed positively by the developing community and achieve more successful outcomes. This willingness to adapt characterized GVC OLNUS’s project in the Argentine Puna. Considering the true, up-to-date needs of the community allowed for greater local participation, which strengthened local autonomy (and thus sustainability).
  • None of the project evaluations provided a breakdown of how effectively the budget was allocated. The JICA projects broke the overall budget down into equipment and local costs; however, although some evaluations noted who provided certain funding, none mentioned whether parts of the budget were used inefficiently. We believe it would be helpful to report not just how much money was invested in a project, but also how much of that budget either prompted direct growth or failed to produce an effective outcome.
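To see why scale and budget breakdowns matter for comparison, consider a quick cost-per-person calculation. Only the combined ~18 million USD figure and the ~3,500 people reached in Mauritius come from the evaluations; the Mauritius budget share below is a hypothetical placeholder, since no evaluation broke this out:

```python
# Illustrative only: compare cost per person reached across projects.
total_invested_usd = 18_000_000       # combined across five projects (approximate)
mauritius_budget_usd = 3_600_000      # HYPOTHETICAL share, for illustration only
mauritius_people_reached = 3_500      # reported in the Mauritius evaluation

cost_per_person = mauritius_budget_usd / mauritius_people_reached
print(f"~${cost_per_person:,.0f} per person reached")  # → ~$1,029 per person reached
```

If every evaluation reported both its budget and its reach, ratios like this could be computed and compared across sectors and organizations.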

Local community members are often referred to as ‘beneficiaries’ in the development process, yet they are the very people whom governments, NGOs, and multilateral organizations are trying to empower through their various socioeconomic development missions. So, when we need to understand what worked in a project – and, as importantly, what didn’t – it is the voices of the community that need to be heard. A lot of great work is being done in international development, but it is clear from this initial research that ex-post evaluations are essential to determining project sustainability, and that projects proposing community-level development must also take the time to directly involve those community members in their own evaluation process. This feedback loop has the power to inform and influence future projects, while also creating the opportunity to actually listen to what participants (not beneficiaries) can sustain for themselves to achieve a better life.

Where have you found feedback loops that work? What excellent programming can you share?



[1] Nishimaki, R., Kunihiro, H., & Tahashi, S. (2008, July 8). Evaluation Result Summary: The Project for Irrigation Farming Improvement. Retrieved from https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/term/africa/c8h0vm000001rp75-att/ethiopia_2008_01.pdf

[2] Kumagai, M., Otsuka, M., & Sakagami, J. (2009, September 26). Evaluation Result Summary: The Agricultural Development Project in Kambia in the Republic of Sierra Leone. Retrieved from https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/term/africa/c8h0vm000001rp75-att/ethiopia_2008_01.pdf

[3] The Improve Group. (2012, December). Post Project Evaluation of Mercy Corps’ MILK Program in Niger: Examining Contributions to Resilience. Retrieved from https://www.yumpu.com/en/document/read/35930718/niger-milk-post-project-evaluation-final-report-mercy-corps

[4] Proatec SRL. (2013, March). Ex Post Evaluation of Projects Managed by NGOs in Argentina. Retrieved from https://www.oecd.org/derec/italy/Evalutation-of-Projects-Managed-by-NGOs-in-Argentina.pdf

[5] International Fund for Agricultural Development (IFAD). (1997, August). Small-Scale Agricultural Development Project – Ex-post Evaluation. Retrieved from https://www.ifad.org/en/web/ioe/evaluation/asset/39828071


How do we define Sustainability?

Sustainability is a key outcome against which a project’s success is assessed, and one of the five OECD DAC Criteria for Evaluating Development Assistance, alongside relevance, effectiveness, efficiency, and impact. However, it is important to understand how the term sustainability is defined more specifically. Traditionally, the evaluation of project sustainability has been linked primarily to the financial viability of the project into the future. Will there be an extension of donor funding to continue development activities, or an operational budget for maintaining new technologies? Will there be sustained resources to pay for continued staffing and training programs? While the financial sustainability of a project is an important factor, Valuing Voices takes a different approach that focuses primarily on the capacity of the community, rather than the donors, to achieve long-term program success. The Valuing Voices definition is:


Sustainability – assessed by looking at what communities can maintain themselves three, five, or ten years after a development program is completed. The focus is not on the financial sustainability of donor-funded activities, unless that is what communities request.


With this definition in mind, evaluations should focus more heavily on obtaining community feedback about which aspects of the project they believe had the most significant outcomes. For a program to positively impact the community in which it was implemented, it is crucial that local community members themselves have the ability to sustain the activities that benefited them the most, ideally with their heightened economic capacities. Feedback from implementing organizations and donors is important as well, but without considering the opinions of the clients in developing societies, what do we really know? A few components of the evaluation process are essential for the Valuing Voices model of sustainability to be accomplished:


1. Ensure full participation of community participants and local partners at all stages of project design and implementation;

2. Plan for sustainability from the start, and ensure that any knowledge resources generated by the project are designed to be accessible to communities;

3. Measure the sustainability of project outcomes and impacts from two to five years after the project ends;

4. Build and fund local evaluation capacity;

5. Create feedback loops where learning is shared among and between participants, implementing partners, governments, and other stakeholders at the local, national, regional, and global levels;

6. Use technology to effectively capture data in a standardized format and facilitate feedback loops.


This is the central idea of the Valuing Voices mission, and now we’d like to hear feedback from YOU about what sustainability really means in the development context. Do you agree with this framework of analysis, or do you think there are other ways to define project sustainability? Does this definition change in different contexts? Do you believe that the six components above capture the key elements for evaluating the sustainability of a project, or should other components be added? Let us know in the comments below so we can engage in an open discussion about sustainability – because everyone’s perspective (and voice) should be valued!

Mercy Corps – early leader in evaluating sustainability… and what donors are funding

Mercy Corps shared their work on post-project sustainability early on, inspiring me that it was possible. As they put it, "clearly, a sustained ability for collective problem solving offers the best path to lasting improvement in people's lives and, for donors, the best return on investment." This came from two conflict-resolution projects in Kyrgyzstan, Tajikistan, and Uzbekistan: the Peaceful Communities Initiative ($6.5 million, 2002-2007) and the Community Action Investment Program ($11.8 million, 2002-2005). These two projects were complex community mobilization programs with aims "to empower communities to work together in a participatory manner to address the infrastructure and social needs [while] developing sustainable skills in problem solving, consensus-building and accountability. The process also empowers communities to begin to identify and utilize existing resources within the communities and not to depend only on external assistance."

So what happened? Their 2007 sustainability report randomly sampled 55% of the communities, interviewing youth leaders and community action group members, and found promising results:

According to the evaluation:

* 93% of surveyed projects were still being actively used by the community after the programs closed.

* 73% of community action group (CAG) members felt it was still easier to approach local government at least one year after the programs ended, and 68% witnessed local government becoming more involved in community activities after the programs than before. Participants and partners had implemented almost 100 infrastructure projects by themselves, independent of donor funds.

* 72% of youth reported that they continue to use at least one skill learned during the programs. Those cited most often include teamwork and communication, as well as practical skills such as sewing, construction, roofing, journalism, and cooking.

* 57% of the communities studied continue to use one or more of the decision-making practices promoted during the program.

* 42% of CAG members, representing 35 of the 51 communities, reported that the community had worked collectively on new projects or repairs to existing infrastructure.

* In total, 40% of general community members interviewed reported that youth had initiated community activities since January 2007, and 68% of these community members recognized that some or all of those activities had not taken place prior to Mercy Corps’ program.

This appears excellent! They interviewed youth and key local stakeholders (CAG members), used country nationals to evaluate (both wonderful, as it is their country), and distilled best practices in community mobilization into accessible reports shared within the international non-profit world via the web. Quibbling only a little: while the report states that it "identified factors that influence sustainability, through both positive examples and non-sustained projects and practices," our ability to learn from what didn’t work was limited. Mercy Corps discussed what didn’t work only by recommending what to do, rather than discussing the extent to which specific activities simply didn’t work. While focusing on the positive is the best path forward, ideally Mercy Corps would also have shared what failed, possibly why, and whether they had seen this elsewhere (thereby suggesting such activities may not be promising to replicate). Further, it would be valuable to gauge roughly what percentage of the program value these successes represented versus those that did not work well. Such a cost-effectiveness ratio would benefit our industry.
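The cost-effectiveness ratio suggested above could be very simple in form: what share of program spend went to activities later found to be sustained? All activity names and figures below are invented for illustration, since the report does not break spending down this way:

```python
# Hypothetical sketch of a sustained-spend ratio. Every activity name and
# dollar figure here is invented for illustration; the report provides no
# per-activity spend or sustainability verdicts.
activities = {
    "water infrastructure":  {"spend_usd": 4_000_000, "sustained": True},
    "youth skills training": {"spend_usd": 2_500_000, "sustained": True},
    "revolving loan fund":   {"spend_usd": 1_500_000, "sustained": False},
}

sustained_spend = sum(a["spend_usd"] for a in activities.values() if a["sustained"])
total_spend = sum(a["spend_usd"] for a in activities.values())
ratio = sustained_spend / total_spend
print(f"{ratio:.0%} of spend went to activities still functioning")  # → 81%
```

Even a rough version of this metric, reported consistently across ex-post evaluations, would let donors compare the return on investment of different program designs.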

Overall, such successful national-level capacity building and learning about program effectiveness is terrific, and new focus may lead funding to follow. Two major foundations have recently expressed interest in supporting national capacity building and empowerment (Rockefeller) and cost-effectiveness (Bill and Melinda Gates), respectively.

More specifically, Nancy Macpherson of Rockefeller Foundation states the Foundation is "committed to evaluation practices that are rigorous, innovative, inclusive of stakeholders’ voices, and appropriate to the contexts in which the foundation works." This is done by "integrating the views of developing-region evaluators" as well as:

* "strengthening developing country evaluation practice and ownership of results…
* developing innovative methods and approaches to evaluation and learning…
* the empowerment of people; and
* the effectiveness of development interventions by national governments and international partners and, increasingly, by non-state actors—foundations, philanthropists, and agencies that promote investing for impact."

The Bill and Melinda Gates Foundation mentions cost-effectiveness:

* "When evidence is needed to fill a knowledge gap or evaluate a significant policy decision. Evaluation can help to resolve uncertainty and determine the relative cost-effectiveness of different interventions, models, or approaches" (Gates) and

* "Both quantitative and qualitative data are relevant in evaluating processes, operations, cost effectiveness, key stakeholders’ perceptions, and enabling contextual factors" (Gates)

USAID seems not to have done any ex-post evaluations since a single one on a Philippines loan in 1980, whereas parts of the EU seem to be doing many more such evaluations – in agriculture and rural development (2002-06), industrial technologies (2009-11), and, as recently as 2013, ICT. Given the size of its portfolio of development assistance – $20.4 billion in development and humanitarian programs in fiscal 2014 – learning about sustained impact seems imperative from a return on investment (ROI) perspective.

Supporting national capacity and evaluating programs' return on investment is pivotal. In Mercy Corps/USAID's funding case alone, this comes to over $18 million. But Mercy Corps paid for this evaluation themselves, from private funds. If we are fostering country-led and eventually country-financed development, countries need to know how much such investments get them – as we do.

Does anyone know of such great program learning that begins to teach about return on investment? Can you share?