Sustainable Development Goals (SDGs): Funding and Accountability for Sustainable Projects?
What are the Sustainable Development Goals? In 2015, the United Nations adopted the new post-2015 development agenda. The proposals – to be achieved by 2030 – set 17 new ‘sustainable’ development goals (SDGs) and 169 targets. Some, like Oxfam, see the SDGs as a tool for country budgeting and prioritization as well as for international fundraising. They cite that “government revenue currently funds 77% of spending…aligned with government priorities, balanced between investment and recurrent and [easier] to implement than donor-funded spending…” National investments are vital, but how much has the world used the SDGs to target investments and foster sustainable results?
Using results data such as the sectoral SDG indicators, countries can also ensure accountability for the policies implemented to reduce global and local inequities – but we must learn from the data. Over halfway to the goal, data is being collected, and while monitoring is robust in countries that have built up their M&E systems, other countries are faltering. “A recent report by Paris21 found even highly developed countries are still not able to report more than 40-50% of the SDG indicators” and “only 44% of SDG indicators have sufficient data for proper global and regional monitoring”. Further, there is very little evaluation or transparent accountability. Some of the data illuminate what we vitally need to know for better programming. SDG data brings the good news that Western and Asian countries did better than most of the world in 2015-19… but there is a lot of missing data, while other data show staggering inequities such as these:
- In Vietnam, a child from an ethnic minority group is three and a half times more likely to die in his or her first five years than a child born into the majority Kinh, or Viet, ethnic group.
- In the United States, a black woman is four times more likely to die in childbirth than a white woman.
So are we using the SDG data to better target funding and improve design? This is the kind of evaluative learning (or at least sharing by those who are doing it :)) that is missing. As my colleague and friend Sanjeev Sridharan writes on Rethinking Evaluation, “As a field we need to more clearly understand evaluation’s role in addressing inequities and promoting inclusion” including “Promoting a Culture of Learning for Evaluation – these include focus on utilization and integration of evaluation into policy and programs.” How well learning is being integrated is unknown.
As a big-picture update on the progress of the Sustainable Development Goals (SDGs) in 2021, with only nine years left to the goal: it’s not looking good. The scorecards show COVID-19 has slowed down or wiped out many achievements, with 100 million people pushed into extreme poverty, according to the IMF. Pre-Covid, our blog on sectoral SDG statistics on health, poverty, hunger, and climate was already showing very mixed results and a lack of mutual accountability.
The private sector is increasingly pushed to fund more of such development costs, with only marginal success, as public-sector expenditures are squeezed. Yet the G20 estimates that $2.5 TRILLION is needed every year to meet the SDGs. As we have seen at Impact Guild, the push to incentivize private commitments is faltering. “To ensure its sustainability, the private sector has specific interests in securing long-term production along commodity supply chains, while reducing their environmental and social impacts and mitigating risks… The long-term economic impacts of funding projects that support the sustainability agenda are, thus, clearly understood. However, additional capital needs to flow into areas that address the risks appropriately. For example, much remains to be done to factor climate change as a risk variable into emerging markets that face the largest financing gap in achieving the SDGs.” Further, if decreased funding trends continue, by 2030 at minimum 400 million people will still live on less than $1.25 a day, around 650 million people will be undernourished, and nearly 1 billion people will be without energy access. So we are not meeting the SDGs, they are being derailed by COVID in places, and we have not begun to cost out the need to address climate change and its effects on global development… so now what?
To ensure that giving everyone a fair chance in life is more than just a slogan, accountability is crucial. This should include a commitment from world leaders to report on progress on “leaving no one behind” in the SDG follow-up and review framework established for the post-2015 agenda, and for the private sector to loudly track their investments across the SDGs. For as the Center for American Progress wrote, money and results are key: we must “measure success in terms of outcomes for people, rather than in inputs—such as the amount of money spent on a project—as well as in terms of national or global outcomes” and “policymakers at the global level and in each country should task a support team of researchers with undertaking an analysis of each commitment.”
A further concern: we measure the statistics periodically and see funding allocated to SDG priorities, but there are few causal links drawn between the intensity of investment in any SDG goal and sustained results. To what degree are the donations and investments in the SDGs linked to improvements? Without measuring causality or attribution, it could simply be a case of “a rising tide lifts all boats” as economies improve – or the reverse, as when Covid-related economic decline wiped out 20 years of development gains, as Bill Gates noted last year. We need proof that trillions of dollars of international “sustainable development” programs have any sustained impact beyond the years of intervention.
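The attribution problem raised above can be made concrete with a difference-in-differences comparison: net out the general economic tide by comparing the change among project participants against the change among a comparable non-participant group over the same period. A minimal sketch, using entirely hypothetical numbers (not real SDG data):

```python
# Difference-in-differences sketch -- all figures are hypothetical.
# An outcome (e.g., a poverty rate, in %) measured before and after a
# project, for participants and a comparable non-participant group.
participants = {"before": 42.0, "after": 30.0}  # fell 12 points
comparison = {"before": 40.0, "after": 34.0}    # fell 6 points anyway

change_participants = participants["after"] - participants["before"]
change_comparison = comparison["after"] - comparison["before"]

# The comparison group's change approximates the "rising tide": what would
# likely have happened without the project. The remainder is the change
# plausibly attributable to the intervention itself.
attributable = change_participants - change_comparison

print(f"attributable change: {attributable:+.1f} percentage points")
```

The design assumes the two groups would have followed parallel trends absent the project; where that assumption fails, even this simple attribution breaks down.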
We must do more evaluation, learning from SDG data to better target investments, and conduct ex-post sustainability evaluations to see what was most sustained, impactful, and relevant. Donors should raise more funds to meet needs and consider funding only what could be sustained locally. Given the still-uncounted demands on global development funding and the multiple crises pushing more of the world into distress, we can no longer hope or wait for a global mobilization of trillions. Let’s focus now.
Pineapple, Apple – What Differentiates Impact from Self-Sustainability Evaluation?
There is great news. Impact evaluation is getting attention and being funded to do excellent research, such as by the International Initiative for Impact Evaluation (3ie), supported by donors such as the World Bank, USAID, UKAid, and the Bill and Melinda Gates Foundation in countries around the world. BetterEvaluation tells us that USAID, for example, uses the following definition: “Impact evaluations measure the change in a development outcome that is attributable to a defined intervention; impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change.”
William Savedoff of CGD reports in the Evaluation Gap newsletter that whole countries are setting up such evaluation institutes: "Germany's new independent evaluation institute for the country's development policies, based in Bonn, is a year old. DEval has a mandate that looks similar to Britain's Independent Commission for Aid Impact (discussed in a previous newsletter) because it will not only conduct its own evaluations but also help the Federal Parliament monitor the effectiveness of international assistance programs and policies. DEval's 2013-2015 work program is ambitious and wide – ranging from specific studies of health programs in Rwanda to overviews of microfinance and studies regarding mitigation of climate change and aid for trade." There is even a huge compendium of impact evaluation databases.
There is definitely a key place for impact evaluations in analyzing which activities are likely to have the most statistically significant impact (i.e., change unlikely to be due to chance alone). One such study in Papua New Guinea found that including SMS (mobile text) in teaching made a significant difference in student test scores compared to the non-participating ‘control group’ who did not get the SMS texts. Another study, the Tuungane I evaluation by a group of Columbia University scholars, showed clearly that an International Rescue Committee program on community-level reconstruction did not change participant behaviors. The study was as well designed as an RCT can be, and its conclusions are very convincing. But as the authors note, we don't actually know why the intervention failed. To find that out, we need the kind of thick, descriptive qualitative data that only a mixed-methods study can provide.
Economist Michael Kremer of Harvard says, “The vast majority of development projects are not subject to any evaluation of this type, but I’d argue the number should at least be greater than it is now.” Impact evaluations use ‘randomized controlled trials’, comparing the group that got project assistance to a similar group that didn't, to gauge the change. A recent article on treating poverty as a science experiment says “nongovernmental organizations and governments have been slow to adopt the idea of testing programs to help the poor in this way. But proponents of randomization—‘randomistas,’ as they’re sometimes called—argue that many programs meant to help the poor are being implemented without sufficient evidence that they’re helping, or even not hurting.” However we get there, we want to know the real (or at least likely) impact of our programming, helping us focus funds wisely.
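The RCT logic described above – compare the assisted group to a similar group that was not assisted – reduces, at its simplest, to a difference in mean outcomes judged against its standard error. A toy sketch with invented test scores (not data from the Papua New Guinea study or any other cited work):

```python
from math import sqrt
from statistics import mean, stdev

# Invented, illustrative test scores -- not from any cited study.
treatment = [72, 68, 75, 80, 71, 77, 74, 69]  # e.g., received SMS lessons
control = [65, 63, 70, 66, 68, 64, 69, 62]    # randomly assigned not to

# Estimated average treatment effect: the difference in group means.
effect = mean(treatment) - mean(control)

# Standard error of that difference (Welch-style, unequal variances).
se = sqrt(stdev(treatment) ** 2 / len(treatment)
          + stdev(control) ** 2 / len(control))

# An effect much larger than roughly two standard errors is what
# "statistically significant" means in practice: unlikely to be chance.
print(f"effect: {effect:.2f} points, SE: {se:.2f}")
```

Randomization is what licenses this comparison: because assignment to the two groups is random, any sizable gap in means is attributable to the assistance rather than to pre-existing differences.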
Data gleaned from impact evaluations is excellent information to have before design and during implementation. While impact evaluations are a rigorous addition to the evaluation field, experts recommend they be designed in from the beginning of implementation. They ask “Are impacts likely to be sustainable?”, “To what extent did the impacts match the needs of the intended beneficiaries?” and, importantly, “Did participants/key informants believe the intervention had made a difference?” Yet they focus only on possible sustainability, using indicators we expect to see at project end, rather than tangible proof of the sustainability of the activities and impacts that communities define themselves – proof we actually return to measure 2-10 years later.
That is the role for something that has rarely been used in 30 years – post-project (ex-post) evaluations looking at:
- The resilience of expected impacts of the project 2, 5, and 10 years after close-out;
- The communities’ and NGOs’ ability to sustain activities themselves;
- Positive and negative unintended impacts of the project, especially 2 years after, while still in clear living memory;
- Kinds of activities the community and NGOs felt were successes but which could not be maintained without further funding;
- Lessons across projects on what was most resilient – what communities valued enough to continue themselves or NGOs valued enough to seek other funding for – as well as what was not resilient.
Where is this already happening systematically? There are a few catalyst ex-post evaluation organizations drawing on communities' wisdom. Here and there, there are other glimpses of ValuingVoices-style learning, mainly used to inform current programming, such as these two interesting approaches:
Vijayendra Rao describes how a social observatory approach to monitoring and evaluation in India’s self-help groups leads to “Learning by Doing”, drawing on material from the book Localizing Development: Does Participation Work? The examples show how groups are creating faster feedback loops with more useful information by incorporating approaches commonly used in impact evaluations. Rao writes: “The aim is to balance long-term learning with quick turnaround studies that can inform everyday decision-making.”
Ned Breslin, CEO of Water For People, talks about “Rethinking Social Entrepreneurism: Moving from Bland Rhetoric to Impact (Assessment)”. His new water and sanitation program, Everyone Forever, does not focus on inputs and outputs, such as water provided or girls returning to school. Instead it centers on attaining the ideal vision of what a community would look like with improved water and sanitation, and working to achieve that goal. Rather than working on fundraising only, Breslin wants to redefine the meaning of success as a world in which everyone has access to clean water.
We need a combination. We need to know how good our programming is now, through rigorous randomized controlled trials, and we need to ask communities and NGOs how sustainable the impacts are. Remember: 99% of all development projects, worth hundreds of millions of dollars a year, are not currently evaluated for long-term self-sustainability by their ultimate consumers, the communities they were designed to help.
We need an Institute of Self-Sustainable Evaluation and a Ministry of Sustainable Development in every emerging nation, funded by donors who support national learning to shape international assistance. We need a self-sustainability global database, mandatory to be referred to in all future project planning. We need to care enough about the well-being of our true client to listen, learn and act.
What can we learn from Ex-Post Evaluations?
In trying to learn more about sustainable development solutions, the first place to look for information is in ex-post evaluations, also commonly called post-project evaluations, which are conducted either by development organizations themselves or by independent external evaluators. Unlike final project evaluations, which are completed at the time of a project’s conclusion to assess whether or not it has achieved its intended goals, an ex-post evaluation is conducted in the years after a project’s official end date – maybe one, three, or five years after the fact. An ex-post evaluation is a highly valuable tool for determining not just how successful a development project may have been after resources and international funding were withdrawn, but also the long-term sustainability of the outcomes for the community members who were being ‘developed’.
Given the seemingly obvious necessity of ex-post evaluations for gaining a better understanding of both positive and negative development practice, I was surprised by how hard it was to actually find any. Some organizations are diligent about conducting post-project evaluations and documenting the results for future reference, notably the Japan International Cooperation Agency (JICA), which has an extensive searchable database of its ex-post evaluations. However, this is certainly not the norm (yet), or if organizations are conducting ex-post evaluations they are not making the information widely available to the public. My research process included search terms such as “ex-post evaluations by international development organizations”, “post-project evaluations”, and “impact evaluations.” Using these generic search terms, I was only moderately successful in finding helpful evaluations for my research, which suggests the need for more readily accessible public information about development outcomes.
We also found that some organizations had completed these evaluations, but they were at times too vague to yield much useful information. Out of the roughly 10-15 evaluations we found, only around 7 were clear and organized enough to include in my table of summaries. (My search was limited to projects conducted predominantly at the community level, rather than the municipal or state level.) The variable quality of these evaluations undermines their usefulness – if an ex-post evaluation is in an unsearchable format or doesn’t follow a fairly standardized organization, how can it inform future projects efficiently? Additionally, it would be much easier for project coordinators to learn from past projects, and even from other organizations, if there existed a more accessible and methodical database making the search for ex-post evaluations simple. Despite these challenges, I have included five different evaluations from my preliminary research, whose results I was able to compare for a better understanding of how to achieve sustainable project outcomes. The framework used for analyzing these evaluations considered:
- The sector of the development project (e.g., food security, poverty reduction, agricultural development);
- the implementing organization and the evaluating organization (if it was different);
- the dates and gap between the project and the ex-post evaluation;
- the project objectives;
- specific ex-post evaluation methods;
- the positive/sustainable outcomes;
- the negative/unsustainable outcomes;
- the transfer to authorities;
- the amount of money invested overall;
- and the level of local participation.
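One way to make a framework like the one above usable across organizations – and to address the missing “accessible and methodical database” – would be to store each ex-post evaluation as a standardized record. A hypothetical sketch (the field names are mine, not any existing standard):

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ExPostEvaluation:
    """One entry in a hypothetical searchable ex-post evaluation database."""
    sector: str                  # e.g., "food security", "agricultural development"
    implementer: str
    evaluator: str               # may be the same as the implementer
    project_end_year: int
    evaluation_year: int
    objectives: str
    methods: list[str] = field(default_factory=list)
    sustained_outcomes: list[str] = field(default_factory=list)
    unsustained_outcomes: list[str] = field(default_factory=list)
    handover_to_authorities: Optional[str] = None
    total_investment_usd: Optional[float] = None
    local_participation: Optional[str] = None  # e.g., "high", "none reported"

    @property
    def years_after_closeout(self) -> int:
        """The gap between project close-out and the ex-post evaluation."""
        return self.evaluation_year - self.project_end_year
```

Records like this could then be filtered by sector, by the close-out-to-evaluation gap, or by level of local participation, making cross-project comparison far easier than parsing scattered PDFs.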
The five evaluations analyzed are those listed in the references at the end of this post.
For a full summary of these evaluations, please see the Ex-Post Evaluations Summary Table. Here are brief synopses of the most pertinent information for the above framework of analysis; the table provides fuller context for our conclusions.
Here are the key findings from the various ex-post evaluations that we found to be most significant:
- Over USD 18 million was spent on the five projects combined, but most projects did not explicitly enumerate how many people or households were impacted. An exception is the project in Mauritius, which reported reaching around 3,500 people. Without understanding a program’s scale, it is difficult to compare projects directly to one another.
- Mercy Corps’ MILK Project in Niger was inclusive and participatory in its ex-post evaluation process, which resulted in hard data that can easily be analyzed, compared, and learned from in future projects. In addition, this evaluation utilized a unique pictorial tool developed specifically to include all project participants in the feedback loop, despite widespread illiteracy, so that every individual had the opportunity to provide their insight on project impacts.
- JICA’s Ethiopian agricultural development program involved community participation in the project from the earliest planning phases, with 100% of members reporting that they had “participated” or “actively participated” in the process. This resulted in feelings of greater personal ownership of the project, and heightened local understanding of their responsibilities.
- Evaluations that included direct community feedback in their analyses were by far the most helpful when trying to determine sustainability. For instance, in JICA’s Agricultural Development Project in the Kambia District of Sierra Leone, there was no mention of local-level involvement at any stage of project planning, implementation, or evaluation, which could explain why the project only “somewhat” achieved its objectives.
- Projects with flexible agendas, willing to change with the evolving needs of the population during the planning and implementation phases, are viewed positively by the developing community and achieve more successful outcomes. This willingness to adapt characterized the GVC OLNUS project in the Argentine Puna. Considering the true, up-to-date needs of the community allowed for greater local participation, which strengthened local autonomy (and thus sustainability).
- None of the project evaluations provided a breakdown of how well the budget was allocated. The JICA projects broke the overall budget into equipment and local costs; however, despite some evaluations noting who provided certain funding, none mentioned whether parts of the budget were used inefficiently. We believe it would be helpful to include not just how much money was invested in a project, but also how much of that budget either prompted direct growth or failed to produce an effective outcome.
Local community members are often referred to as ‘beneficiaries’ in the development process, yet they are the very people whom governments, NGOs, and multilateral organizations are trying to empower through their various socioeconomic development missions. So when we need to understand what worked in a project, and as importantly what didn’t, it is the voices of the community that need to be heard. A lot of great work is being done in international development, but it is clear from this initial research that ex-post evaluations are essential to determining project sustainability, and that projects proposing community-level development must also take the time to directly involve community members in their own evaluation process. This feedback loop has the power to inform and influence future projects, while also creating the opportunity to actually listen to what participants (not beneficiaries) can sustain for themselves to achieve a better life.
Where have you found feedback loops that work? What excellent programming can you share?
 Nishimaki, R., Kunihiro, H., & Tahashi, S. (2008, July 8). Evaluation Result Summary: The Project for Irrigation Farming Improvement. Retrieved from https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/term/africa/c8h0vm000001rp75-att/ethiopia_2008_01.pdf
 Kumagai, M., Otsuka, M., & Sakagami, J. (2009, September 26). Evaluation Result Summary: The Agricultural Development Project in Kambia in the Republic of Sierra Leone. Retrieved from https://www.jica.go.jp/english/our_work/evaluation/tech_and_grant/project/term/africa/c8h0vm000001rp75-att/ethiopia_2008_01.pdf
 The Improve Group. (2012, December). Post Project Evaluation of Mercy Corps’ MILK Program in Niger: Examining Contributions to Resilience. Retrieved from https://www.yumpu.com/en/document/read/35930718/niger-milk-post-project-evaluation-final-report-mercy-corps
 Proatec SRL. (2013, March). Ex Post Evaluation of Projects Managed by NGOs in Argentina. Retrieved from https://www.oecd.org/derec/italy/Evalutation-of-Projects-Managed-by-NGOs-in-Argentina.pdf
 International Fund for Agricultural Development (IFAD). (1997, August). Small-Scale Agricultural Development Project – Ex-post Evaluation. Retrieved from https://www.ifad.org/en/web/ioe/evaluation/asset/39828071