Whose responsibility is it to sustain project activities?
Billions of dollars are pumped into development activities in developing countries all over the world. Communities that get involved in these projects have a clear objective: to have their lives improved in the sectors the projects target. Whether this is also the main objective of the development partners is less clear. What is clear is that development partners focus more on numbers than on getting people to participate.
We note that the majority of these projects are designed to last between two and five years. Delays occasioned by poor planning or other unforeseen factors eat into the implementation time, to the extent that in some projects it takes one to two years to get a program running. This means that the planned implementation time is reduced. Baseline, midline and endline studies are conducted to inform changes that may have occurred within the program's life, and in most cases they happen shortly after the program has started or just before it ends. In fact, some baselines are conducted after programs have started.
Considering the reduced implementation time and the fact that it takes much longer to achieve concrete behavior-change results, questions emerge as to whether the changes reported during implementation are solid enough to be sustained. There is also a difference between measuring what can be referred to as artificial changes (activities that community members adopt as a short-term trial in their excitement, but don't find useful afterward) and long-lasting changes that community members adopt because they are a useful part and parcel of their lives.
Almost all projects have logical frameworks (logframes) that show how project activities will be implemented, and to some extent there are also exit strategies for closing out the project. In the long term, this can be an illusion. In most cases donors and implementers assume that communities will adopt the activities being implemented within a specified period of time, and so projects close down at the end of that period assuming things will continue, but with no proof. Valuing Voices has done projected sustainability work in Ethiopia which points to possible differences between what donors expect to be sustained and what communities are able to sustain.
The big questions remain: whose responsibility is it to ensure that whatever has been adopted is continued? Whose responsibility is it to sustain project activities after project implementation? It is silently assumed that communities can take up this responsibility, and a key question is what guarantees there are that this is possible and is happening. Project sustainability should not be seen as a community-alone responsibility, but rather as a responsibility shared by all those involved in program activities. Sustainability studies should be planned for and executed in the same breath as baselines, midlines, endlines and, in the rare cases where they happen, real-time impact assessments. We must do sustainability studies, as they provide an additional, realistic opportunity to learn about actual community development after project implementation. Communities should not be left alone with it.
Pineapple, Apple – what differentiates Impact from Self-Sustainability Evaluation?
There is great news. Impact evaluation is getting attention and being funded to do excellent research – for example by the International Initiative for Impact Evaluation (3ie), and by donors such as the World Bank, USAID, UKAid and the Bill and Melinda Gates Foundation in countries around the world. Better Evaluation tells us that "USAID, for example, uses the following definition: 'Impact evaluations measure the change in a development outcome that is attributable to a defined intervention; impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change.'"
William Savedoff of CGD reports in the Evaluation Gap newsletter that whole countries are setting up such evaluation institutes: "Germany's new independent evaluation institute for the country's development policies, based in Bonn, is a year old. DEval has a mandate that looks similar to Britain's Independent Commission for Aid Impact (discussed in a previous newsletter) because it will not only conduct its own evaluations but also help the Federal Parliament monitor the effectiveness of international assistance programs and policies. DEval's 2013-2015 work program is ambitious and wide-ranging – from specific studies of health programs in Rwanda to overviews of microfinance and studies regarding mitigation of climate change and aid for trade." There is even a huge compendium of impact evaluation databases.
There is definitely a key place for impact evaluations in analyzing which activities are likely to have the most statistically significant impact (i.e., change that is very unlikely to be due to chance). One such study in Papua New Guinea found that including SMS (mobile text) in teaching made a significant difference in student test scores compared to the non-participating 'control group' who did not get the SMS texts. Another study, the Tuungane I evaluation by a group of Columbia University scholars, showed clearly that an International Rescue Committee program on community-level reconstruction did not change participant behaviors. The study was as well designed as an RCT can be, and its conclusions are very convincing. But as the authors note, we don't actually know why the intervention failed. To find that out, we need the kind of thick descriptive qualitative data that only a mixed-methods study can provide.
Economist Michael Kremer of Harvard says: "The vast majority of development projects are not subject to any evaluation of this type, but I'd argue the number should at least be greater than it is now." Impact evaluations use 'randomized controlled trials', comparing the group that got project assistance to a similar group that didn't in order to gauge the change. A recent article on treating poverty as a science experiment says: "nongovernmental organizations and governments have been slow to adopt the idea of testing programs to help the poor in this way. But proponents of randomization—'randomistas,' as they're sometimes called—argue that many programs meant to help the poor are being implemented without sufficient evidence that they're helping, or even not hurting." However we get there, we want to know the real (or at least likely) impact of our programming, helping us focus funds wisely.
Data gleaned from impact evaluations is excellent information to have before design and during implementation. While impact evaluations are a thorough addition to the evaluation field, experts recommend they be done from the beginning of implementation. They ask "Are impacts likely to be sustainable?", "To what extent did the impacts match the needs of the intended beneficiaries?" and, importantly, "Did participants/key informants believe the intervention had made a difference?" Yet they focus only on possible sustainability, using indicators we expect to see at project end, rather than on tangible proof of the sustainability of the activities and impacts that communities define themselves, which we actually return to measure 2-10 years later.
That is the role for something that has rarely been used in 30 years: post-project (ex-post) evaluations looking at:
- The resilience of expected impacts of the project 2, 5 or 10 years after close-out;
- which activities the communities and NGOs are able to sustain themselves;
- positive and negative unintended impacts of the project, especially 2 years after, while still in clear living memory;
- the kinds of activities the community and NGOs felt were successes but which could not be maintained without further funding;
- and lessons across projects on what was most resilient – what communities valued enough to do themselves, or NGOs valued enough to get other funding for – as well as what was not resilient.
Where is this systematically happening already? There are our catalyst ex-post evaluation organizations, drawing on communities' wisdom. Here and there, there are other glimpses of ValuingVoices, mainly used to inform current programming, such as these two interesting approaches:
Vijayendra Rao describes how a social observatory approach to monitoring and evaluation in India’s self-help groups leads to “Learning by Doing”– drawing on material from the book Localizing Development: Does Participation Work? The examples show how groups are creating faster feedback loops with more useful information by incorporating approaches commonly used in impact evaluations. Rao writes: “The aim is to balance long-term learning with quick turnaround studies that can inform everyday decision-making.”
Ned Breslin, CEO of Water For People, talks about "Rethinking Social Entrepreneurism: Moving from Bland Rhetoric to Impact (Assessment)". His new water and sanitation program, Everyone Forever, does not focus on inputs and outputs, such as water provided or girls returning to school. Instead it centers on attaining the ideal vision of what a community would look like with improved water and sanitation, and working to achieve that goal. Rather than focusing on fundraising alone, Breslin wants to redefine success as a world in which everyone has access to clean water.
We need a combination. We need to know how good our programming is now through rigorous randomized control trials, and we need to ask communities and NGOs how sustainable the impacts are. Remember, 99% of all development projects worth hundreds of millions of dollars a year are not currently evaluated for long-term self-sustainability by their ultimate consumers, the communities they were designed to help.
We need an Institute of Self-Sustainable Evaluation and a Ministry of Sustainable Development in every emerging nation, funded by donors who support national learning to shape international assistance. We need a self-sustainability global database, mandatory to be referred to in all future project planning. We need to care enough about the well-being of our true client to listen, learn and act.
Catalyst organizations are those whose focus is on implementing programs with community level involvement during projects and local feedback loops to inform post-project evaluations for impact self-sustainability. An excellent example of this is Partners for Democratic Change, whose stated mission is, “to build sustainable capacity to advance civil society and a culture of change and conflict management worldwide,” focusing on initiating democratic practices through an approach called Sustainable Impact Investing. The goal of this approach is to foster the capacity of in-country organizations to “deliver systematic change,” with a focus on development that, “is bottom-up, locally-led rather than foreign-led, based on the belief that change comes from sustainable efforts led by local people, organizations and institutions invested in their own long-term future.”
To implement this progressive and participatory vision for sustainable development, Partners founded 22 Centers for Change and Conflict Management between 1989 and 2011, initially in Eastern and Central Europe. They later expanded to other regions struggling with democratic sociopolitical change. Partners conducted its own ex-post evaluation, synthesizing the results of 55 case studies of positive, significant outcomes. The takeaway was three main sustainability lessons:
“The importance of investing in local partners and building their capacity to promote democratic change;
the most pressing development challenges facing the world need to be addressed in a participatory manner with the input and shared commitment of government, businesses and civil society, which requires local leaders with sophisticated skills in change and conflict management;
and finally, the work of social entrepreneurs to make a difference in their own countries is strengthened and legitimated by technical and relational support from an international network of like-minded professionals facing similar challenges.”
With these objectives in mind, the greatest positive outcome was observed in almost 90% of the 55 stories: development and participation of civil society is most commonly achieved through "education, training, mentoring, coaching, partnerships and coalition building, organizational development and capacity building, and creating an enabling environment that supports civil society development, such as passing NGO laws." Further, in 80% of cases there was advancement of good governance through the participation of civil society working with government on the issues listed above, specifically free and fair elections, human rights protection, etc. Another 50% of the cases increased access to justice and the managing and resolving of disputes/conflicts, thereby strengthening civil society, and about 40% of the stories focused on promoting inclusive societies, improving majority-minority relations, and increasing leadership capacity for women and youth as agents of social change. Overall, Partners' efforts resulted in substantial impacts. Since 1991, the Centers have trained around 15,000 mediators and worked directly with more than 300,000 participants, benefitting an estimated 17.5 million people – and these are considered conservative estimates. Of the 22 Centers established, 18 still exist today – a survival rate of 82%.
Yet the Centers still faced challenges, most notably in, “institutionalizing the processes they used to achieve results so that impact can be maximized and sustainable.” While the Centers effectively managed to implement collaborative and participatory methods to attain these successful outcomes, without the ability to institutionalize these processes in local communities and government institutions, the likelihood of sustainability is threatened.
Herein lies the importance of valuing local voices and participation: it is clear that successful development initiatives depend on working with the community rather than on behalf of the community. Collaborative efforts between local participants and the international organizations that aim to enhance socioeconomic development in their communities result in both farther-reaching and more sustainable outcomes than projects that ignore local feedback. Partners also does a great job of bridging the objectives of building organizational capacity to sustain programming while also ValuingVoices of participants regarding how that capacity will benefit them. There is an obvious need throughout the development community to follow the good example set by Partners for Democratic Change in order to promote greater participation on the path to sustainable development.
What can we learn from Ex-Post Evaluations?
In trying to learn more about sustainable development solutions, the first place to look for information is ex-post evaluations, also commonly called post-project evaluations, which are conducted either by development organizations themselves or by independent external evaluators. Unlike final project evaluations, which are completed at the time of a project's conclusion to assess whether or not it has achieved its intended goals, an ex-post evaluation is conducted in the years after a project's official end date – perhaps one, three, or five years after the fact. An ex-post evaluation is a highly valuable tool for determining not just how successful a development project may have been after resources and international funding were withdrawn, but also the long-term sustainability of its outcomes for the community members who were being 'developed'.
Given the seemingly obvious necessity of ex-post evaluations for gaining a better understanding of both positive and negative development practice, I was surprised by how hard it was to actually find any. Some organizations are diligent about conducting post-project evaluations and documenting the results for future reference – notably the Japan International Cooperation Agency (JICA), which has an extensive, searchable database of its ex-post evaluations. However, this is certainly not the norm (yet), or if organizations are conducting ex-post evaluations, they are not making the information widely available to the public. My research process included search terms such as "ex-post evaluations by international development organizations", "post-project evaluations", and "impact evaluations." Using these generic search terms, I was only moderately successful in finding helpful evaluations for my research, which suggests the need for more readily accessible public information about development outcomes.
We also found that some organizations had completed these evaluations, but they were at times too vague to yield much useful information. Of the 10-15 evaluations found so far, only around 7 were clear and organized enough to include in my table of summaries. (My search was limited to projects conducted predominantly at the community level, rather than at the municipal or state level.) The variable quality of these evaluations undermines their usefulness – if an ex-post evaluation is in an unsearchable format or doesn't follow a fairly standardized organization, how can it inform future projects efficiently? Additionally, it would be much easier for project coordinators to learn from past projects, and even from other organizations, if there existed a more accessible and methodical database that made searching for ex-post evaluations simple. Despite these challenges, I have included five different evaluations from my preliminary research, whose results I was able to compare for a better understanding of how to achieve sustainable project outcomes. The framework used for analyzing these evaluations considered:
- The sector of the development project (i.e. food security, poverty reduction, agricultural development);
- the implementing organization and the evaluating organization (if it was different);
- the dates and gap between the project and the ex-post evaluation;
- the project objectives;
- specific ex-post evaluation methods;
- the positive/sustainable outcomes;
- the negative/unsustainable outcomes;
- the transfer to authorities;
- the amount of money invested overall;
- and the level of local participation.
The five evaluations analyzed are those listed in the references at the end of this post.
For a full summary of these evaluations, please see the Ex-Post Evaluations Summary Table. Here are brief synopses of the most pertinent information for the above framework of analysis; the table provides fuller context for our conclusions.
Here are the key findings from the various ex-post evaluations that we found to be most significant:
- Over USD 18 million was spent on the five projects combined, but most evaluations did not explicitly state how many people/households were impacted by the individual projects. An exception is the project in Mauritius, which reported reaching around 3,500 people. Without understanding the scale of a program, it is difficult to compare projects directly to one another.
- Mercy Corps’ MILK Project in Niger was inclusive and participatory in its ex-post evaluation process, which resulted in hard data that can easily be analyzed, compared, and learned from in future projects. In addition, this evaluation utilized a unique pictorial tool developed specifically to include all project participants in the feedback loop, despite widespread illiteracy, so that every individual had the opportunity to provide their insight on project impacts.
- JICA’s Ethiopian agricultural development program involved community participation in the project from the earliest planning phases, with 100% of members reporting that they had “participated” or “actively participated” in the process. This resulted in feelings of greater personal ownership of the project, and heightened local understanding of their responsibilities.
- Evaluations that included direct community feedback in their analyses were by far the most helpful for determining sustainability. For instance, in JICA’s Agricultural Development Project in the Kambia District of Sierra Leone, there was no mention of local-level involvement at any stage of project planning, implementation, or evaluation, which could have influenced why the project only “somewhat” achieved its objectives.
- Projects with flexible agendas, willing to change with the changing needs of the population during the planning/implementation phases, are viewed positively by the developing community and achieve more successful outcomes. This willingness to adapt characterized the GVC ONLUS project in the Argentine Puna. Considering the true, up-to-date needs of the community allowed for greater local participation, which enabled the strengthening of local autonomy (and thus, sustainability).
- None of the project evaluations provided a breakdown of how successful budget allocation was. The JICA projects broke the overall budget down into equipment and local costs; however, while some evaluations noted who provided certain funding, none mentioned whether parts of the budget were used inefficiently. We believe it would be helpful to report not just how much money was invested in a project, but also how much of that budget either prompted direct growth or failed to produce an effective outcome.
Local community members are often referred to as ‘beneficiaries’ in the development process, yet they are the ones whom governments, NGOs, and multilateral organizations are trying to empower through their various socioeconomic development missions. So when we need to understand what worked in a project – and, as importantly, what didn’t – it is the voices of the community that need to be heard. A lot of great work is being done in international development, but it is clear from this initial research that ex-post evaluations are essential to determining project sustainability, and that projects proposing community-level development must also take the time to directly involve community members in their own evaluation process. This feedback loop has the power to inform and influence future projects, while also creating the opportunity to actually listen to what participants (not beneficiaries) can sustain for themselves to achieve a better life.
Where have you found feedback loops that work? What excellent programming can you share?
 Nishimaki, R., Kunihiro, H., & Tahashi, S. (2008, July 8). Evaluation Result Summary: The Project for Irrigation Farming Improvement. Retrieved from https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/term/africa/c8h0vm000001rp75-att/ethiopia_2008_01.pdf
 Kumagai, M., Otsuka, M., & Sakagami, J. (2009, September 26). Evaluation Result Summary: The Agricultural Development Project in Kambia in the Republic of Sierra Leone. Retrieved from https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/term/africa/c8h0vm000001rp75-att/ethiopia_2008_01.pdf
 The Improve Group. (2012, December). Post Project Evaluation of Mercy Corps’ MILK Program in Niger: Examining Contributions to Resilience. Retrieved from https://www.yumpu.com/en/document/read/35930718/niger-milk-post-project-evaluation-final-report-mercy-corps
 Proatec SRL. (2013, March). Ex Post Evaluation of Projects Managed by NGOs in Argentina. Retrieved from https://www.oecd.org/derec/italy/Evalutation-of-Projects-Managed-by-NGOs-in-Argentina.pdf
 International Fund for Agricultural Development (IFAD). (1997, August). Small-Scale Agricultural Development Project – Ex-post Evaluation. Retrieved from https://www.ifad.org/en/web/ioe/evaluation/asset/39828071