Sustainable Development Goals and Foreign Aid– How Sustainable and Accountable to Whom?  Reposting Blog from LinkedIn Pulse




Jindra Cekan, PhD of ValuingVoices

World leaders will paint New York City red next week at the UN Summit adopting the new post-2015 development agenda.  The agreed plans set 17 new ‘Sustainable’ Development Goals (SDGs) to be achieved by 2030. These are successors to the Millennium Development Goals (MDGs).

While many of us have heard of them, how many of us know whether we met them, and what are the prospects for the Sustainable Development Goals to do as well or better? According to Bill Gates, the MDGs were “the best idea for focusing the world on fighting global poverty that I have ever seen.” The Brookings Institution goes on to praise the eight MDGs for aiming high, setting targets such as halving world poverty, reducing child mortality, achieving universal primary education, and promoting gender equality and women’s empowerment [1]. Donor countries pledged three times more than they had until that time (raising the percentage of gross national income devoted to international development assistance from 0.2% to 0.7%, not huge amounts but laudable). From 2000 to 2015, extreme poverty did fall by half (although some argue China and Asia were well on their way before this aid arrived), and in some countries (Senegal, Cambodia) child deaths fell by half. Global health improved via huge coalitions on immunization and HIV/AIDS. Yet while poverty dropped and health improved, the hunger, environment, and sanitation targets were not met; 850 million people are still hungry worldwide (11% of all people) [2]. Still, the gains far outweigh the losses. The new Sustainable Development Goals are to be achieved in 15 years, by 2030. The UN and member nations will track a remarkable 169 targets, monitoring progress towards the SDGs at the local, national, regional, and global levels [3].




Overall, one would feel rather tickled by these results — not 100% but still amazing given global disparities. Nancy Birdsall of the Center for Global Development thinks measurement is not the goal: “Growing global interconnectedness means that the problems the world faces, that hold back development, are increasingly shared… we’re making a promise to ourselves that we are one world, one planet, one society, one people, who look out for each other…” [4]

But I’m a fan of measurable results, I must admit. One would logically think that our international development projects funded by the U.S. Agency for International Development (USAID), the U.S. Department of Agriculture (USDA), the Millennium Challenge Corporation (MCC), and others could outline how they caused the good results that some MDGs showed. USAID’s website links its work to the MDGs clearly: “in September 2010, President Obama called for the elevation of development as a key pillar of America’s national security and foreign policy. This set forth a vision of an empowered and robust U.S. Agency for International Development that could lead the world in solving the greatest development challenges of our time and, ultimately, meet the goal of ending extreme poverty in the next generation” [5]. It goes on to talk about work to “Promote sustainable development through high-impact partnerships and local solutions” [5]. USDA’s Foreign Agricultural Service states that its “non-emergency food aid programs help meet recipients’ nutritional needs and also support agricultural development and education. These food assistance programs, combined with trade capacity building efforts, support long-term economic development” [6]. Finally, MCC states it is “committed to delivering results throughout the entire lifecycle of its investments. From before investments begin to their completion and beyond… MCC’s continuum of results is designed to foster learning and accountability” [7].

Maybe. We don’t actually know because 99% of the time we never return to projects after they end to learn how sustainable they actually were. We could be fostering super-sustainability. Or not.

International development programming works on 1-5 year cycles. Multi-million dollar requests for proposals are designed and sent out by these funders to non-profit or for-profit implementers. Projects are awarded to one or more organizations, quite rigorously monitored, and most have very good results. Then they end. Since 2000, USAID has spent $280 billion on country-to-country development and humanitarian aid projects as well as funding multilateral aid, and in spite of much work evaluating projects’ final impact at the end, funders almost never go back [8]. The EU has spent a staggering $1.4 trillion [9]. In the last 30 years, USAID has funded exactly one evaluation that went back to see what communities and partners could sustain, and it is about to be published. A handful of international non-profits have taken matters into their own hands and funded such studies privately. The EU’s track record is even more dismal, with policies proposed but never implemented [10]. The World Bank, which has funded over 12,000 projects, has an independent evaluation arm, the Independent Evaluation Group (IEG). It returned after projects closed out to evaluate results only 33 times, and we found that only three of those evaluations systematically talked to project participants about what was sustained.

The bottom line is: how do we know anything we’ve done in international development, or towards the SDGs, is sustainable unless we go back to see? What amazing or awful results are we missing for future design? If we do not return, are we really accountable to our taxpayers and our real clients: the participants and the recipient countries themselves?

The UN has pledged to have an SDG report card to “measure progress towards sustainable development and help ensure the accountability of all stakeholders for achieving the SDGs….[and a] Global Partnership for Sustainable Development Data, to help drive the Data Revolution….by using data we can ensure accountability for the policies that are implemented to reduce global and local inequities” [3]. I completely agree that having citizen-generated data at the local, national, regional, and global levels is so very important “to fill gaps in our knowledge, establish global norms and standards and…help countries develop robust national strategies for data development.” And as the World Bank IEG’s Caroline Heider states, measuring the SDGs is complex (e.g., agriculture is affected by climate change, and measuring changes across sectors is hard) but worthwhile [11].

While SDG data will tell us which donor-funded activities and policies work, very few in international development know how sustainable our programming is for our ultimate clients, our participants and partners. And the price needn’t be high: a recent post-project evaluation we did cost under $120,000, a pittance given that the project cost over $30 million and reached 500,000 people. We found clear (mostly successful) lessons. USAID has, after 30 years, funded one post-project evaluation, which also has clear, cost-effective lessons (forthcoming). Really, in this era of cost-effectiveness, don’t we want data on what worked best (note to self: do more of that) and what worked least (note to self: stop doing that)?
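To put that “pittance” in perspective, here is the back-of-the-envelope arithmetic, using only the figures quoted above:

```python
# Figures from the post: an ex-post evaluation costing under $120,000
# for a project that cost over $30 million and reached 500,000 people.
eval_cost = 120_000
project_cost = 30_000_000
people_reached = 500_000

share = eval_cost / project_cost          # evaluation as a fraction of project cost
cost_per_person = eval_cost / people_reached

print(f"Evaluation cost: {share:.1%} of the project budget")
print(f"That is ${cost_per_person:.2f} per participant reached")
```

In other words, learning what was sustained cost well under half a percent of what the project itself cost.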

Learning what participants and partners could self-sustain after we left is actually all we should care about. They want to get beyond aid; shouldn’t we know if we are getting them there? Self-sustainability of outcomes is a clear indicator of a good return on investment of our resources and expertise and of their time, effort, and expertise. It shows that we want to put ourselves out of a job by building country-led development that really has a future in-country, run on a country’s own resources.

Two concrete first steps:

1) Donors should add funding equivalent to 1% of program value, for five years after closeout, to all projects over $10 million, to support local capacity-building of NGOs and national partners to take over implementation and to evaluate lessons across different sectors’ sustainability outcomes.

2) A cross-donor fund for country-led analysis of such learning, plus lessons on what capacity needs to be built in-country to take over programming. This needs support from regionally-based knowledge repositories and learning centers in Africa, Asia, and elsewhere. Online and physical centers could house implementer reports and evaluations, and analyze and share lessons learned across sectors and countries from post-project evaluations of projects that closed out 2-7 years ago, to inform future design.
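As a rough illustration of what Step 1 could mean in practice, here is a minimal sketch. The reading of the proposal (1% of total program value, spread evenly over the five post-closeout years, and only for projects over the $10 million threshold) is my assumption; the function name is hypothetical:

```python
def sustainability_set_aside(program_value, pct=1.0, years=5, threshold=10_000_000):
    """Sketch of the proposed post-closeout fund: pct% of program value,
    disbursed evenly over `years`, for projects above `threshold` dollars."""
    if program_value <= threshold:
        return {"total": 0.0, "annual": 0.0}
    total = program_value * pct / 100
    return {"total": total, "annual": total / years}

# A $30 million project would set aside $300,000 in total, $60,000 per year.
print(sustainability_set_aside(30_000_000))
```

Even at this scale, the set-aside is small relative to project budgets, which is the point: sustained learning need not be expensive.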

Now that is accountability. Let’s advocate for sustainability funding, data and learning now.

What are your suggestions? How can we improve sustainability?




[1] McArthur, J. (2013, February 21). Own the Goals: What the Millennium Development Goals Have Accomplished. Retrieved from

[2] End Poverty 2015 Millennium Campaign. (n.d.). MDG 1: Eradicate Extreme Poverty and Hunger. Retrieved from

[3] Sharma, S. (2015, August 20). From Aspirations to Reality: How to Effectively Measure the Sustainable Development Goals. Retrieved from

[4] Mirchandani, R. (2015, September 22). Does It Matter If We Don’t Achieve the SDGs? A New Podcast with Nancy Birdsall and Michael Elliott. Retrieved from

[5] USAID. (n.d.). USAID Forward. Retrieved from

[6] United States Department of Agriculture (USDA). (n.d.). Food Assistance. Retrieved from

[7] Millennium Challenge Corporation. (n.d.). Our Impact. Retrieved 2016, from

[8] Cekan, J. (2015, March 13). When Funders Move On. Retrieved from

[9] Global Issues. (n.d.). Foreign Aid for Development Assistance: Foreign Aid Numbers in Charts and Graphs. Retrieved from

[10] Florio, M. (2009, November/December). Sixth European Conference on Evaluation of Cohesion Policy: Getting Incentives Right — Do We Need Ex Post CBA? Retrieved from

[11] Heider, C. (2015, September 15). Evaluation Beyond 2015: Implications of the SDGs for Evaluation. Retrieved from


IEG Blog Series Part II: Theory vs. Practice at the World Bank






In Part I of this blog series, I described my research process for identifying the extent to which the World Bank (WB) conducts participatory post-project sustainability evaluations for its many international development projects. Through extensive research and analysis of the WB’s IEG database, Valuing Voices concluded that the taxonomy for ex-post project evaluation at the WB is very loosely defined, making it difficult to identify a consistent standard of evaluation methodology for sustainability impact assessments.

In particular, we were concerned with identifying examples of direct beneficiary involvement in evaluating long-term sustainability outcomes, for instance by surveying or interviewing participants to determine which project objectives were self-sustained…and which were not. Unfortunately, it is quite rare for development organizations to conduct ex-post evaluations that involve all levels of project participants in long-term information feedback loops. However, one document type in the IEG database gave us at Valuing Voices some room for optimism: Project Performance Assessment Reports (PPARs). PPARs are defined by the IEG as documents that are,

“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department, synonymous with IEG]. To Prepare PPARs, staff examines project files and other documents, interview operation staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries” [1].

The key takeaway from this definition is that these reports supplement desk studies (ICRs) with new fieldwork data provided, in part, by the participants themselves. The IEG database lists hundreds of PPAR documents, but I focused on the 33 that came up when I queried “post-project”.

Here are a few commonalities to note about the 33 PPARs I studied:

  • They are all recent documents: the oldest was published in 2004, the most recent in 2014.
  • The original projects assessed in the PPARs were finalized anywhere from 2 to 10+ years before the PPAR was written, making them true ex-posts.
  • They all claimed to involve mission site visits and communication with key project stakeholders, but not all claimed to involve beneficiaries explicitly.


Although the WB/IEG’s definition of a PPAR mentions that beneficiary participation takes place in “most” of the ex-post missions back to the project site, Valuing Voices was curious to know whether there is a standard protocol for the level of participant involvement, the methods of data collection, and, ultimately, the overall quality of the new fieldwork data collected to inform PPARs. For this data quality analysis, Valuing Voices identified these key criteria:

  • Overall summary of evaluation methods
  • Who was involved, specifically? Was there direct beneficiary participation? What were the research methods/procedures used?
  • What was the level of sustainability (termed Risk to Development Outcome* after 2006) established by the PPAR?
  • Was this different from the level of sustainability as projected by the preceding ICR report?
  • Were participants involved via interviews? (Yes/No)
  • If yes, were they semi-structured (open-ended questions allowing for greater variety/detail of qualitative data) or quantitative surveys?
  • How many beneficiaries were interviewed/surveyed?
  • What % of total impacted beneficiary population was this number?
  • Was there a control group used? (Yes/No)

Despite our initial optimism, we determined that the quality of the data provided in these PPARs was highly variable, and overall quite low. A summary of the findings is as follows:


1. Rarely were ‘beneficiaries’ interviewed

  • Only 15% of the PPARs (5) gave details about the interview methodologies, and only 3% (1) described in detail how many participants were consulted, what they said, and how they were interviewed (Nigeria 2014 [2]).
  • 54% of the reports (18) mentioned beneficiary input in the data collected on the post-project mission, but gave no specific information on the number of participants involved, cited none of their voices, and included no information on the methodologies used. The vast majority only vaguely referenced the findings of the post-project mission rather than data collection specifics. A typical example of this type of report is Estonia 2004 [1].
  • 30% of the PPARs (10) involved no direct participant/beneficiary participation in the evaluation process at all, with these missions including only stakeholders such as project staff, local government, NGOs, donors, and consultants. A typical example of this type of report is Niger 2005 [3].

These percentages are illustrated in Figure 1, below, which gives a visual breakdown of the number of reports that involved direct participant consultation with detailed methodologies provided (5), the number of reports where stakeholders were broadly consulted but no specific methodologies were provided (18), and the number of reports where no participants were directly involved in the evaluation process (10).


Figure 1
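The breakdown of the 33 reports can be reproduced in a few lines (an illustrative tally; the counts are those reported above, and the percentages are rounded down, matching the 15/54/30 figures in the text):

```python
from math import floor

# Counts of the 33 PPARs by level of participant involvement (from the text)
ppar_counts = {
    "detailed interview methodology described": 5,
    "beneficiary input mentioned, no specifics": 18,
    "no direct participant involvement": 10,
}
total = sum(ppar_counts.values())  # 33

for category, n in ppar_counts.items():
    # floor() reproduces the rounded-down percentages used in the post
    print(f"{category}: {n}/{total} = {floor(100 * n / total)}%")
```

The three buckets account for every report in the sample, which is why the percentages sum to 99 rather than 100.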


2. Sustainability of project outcomes was unclear

  • In 54% of cases, there was some change in the level of sustainability from the original level predicted in the ICR (which precedes and informs the PPAR) to the level established in the PPAR. Ironically, 22 of the 33 cases were classified as Likely, Highly Likely, or Significantly Likely to be sustainable, yet participants were not asked for their input.
  • So on what basis was sustainability judged? Of the three cases with high participant consultation, the Nigerian project (where 10% of participants were asked for feedback) had only moderate sustainability prospects, while India (also 10% feedback) and Kenya (14-20%) were both classified as likely to be sustainable.

Along the Y axis of Figure 2, below, is the spectrum of sustainability rankings observed in the PPARs, ranging from “Negligible to Low” up to “High”. For each of the projects analyzed (there are 60 in total in this graph, as some PPARs covered up to four individual projects in one report), the graph shows how many projects consulted participants, and how many failed to do so, for each evaluation outcome. As we can see, the majority of cases determined to be highly or significantly sustainable either did not consult participants directly or consulted stakeholders only broadly, with limited community input represented in the evaluation. These are interesting findings: although a lot of sustainability is supposedly being reported, very few cases actually involved the community participants in a meaningful way (to our knowledge, based on the lack of community consultation discussed in the reports). Unless these evaluations take place at the grassroots level, engaging participants in a conversation about the true self-sustainability outcomes of projects, you cannot really know how sustainable a project is by talking only with donors, consultants, governments, etc. Are the right voices really being represented in this evaluation process? *Note: the “Sustainability” ranking was retitled “Risk to Development Outcomes” in 2006.


Figure 2


While projects were deemed sustainable, this was based on very little ‘beneficiary’ input. The significance of this information is simple: not enough is being done to ensure beneficiary participation in ALL STAGES of the development process, especially in the post-project time frame, even by prominent development institutions like the WB/IEG. While we commend the Bank for currently emphasizing citizen engagement via beneficiary feedback, this still seems to be more a guiding theory than a habitual practice [4]. Although all 33 documents I analyzed claimed there was “key stakeholder” or beneficiary participation, no consistent procedural standard for eliciting such engagement could be identified.

Furthermore, the lack of specific details on interview/survey methods, the number of participants involved, the discovery of any unintended outcomes, etc. creates a critical information void. As a free and public resource, the IEG database should be considered not only an important internal tool for the WB to catalog its numerous projects over time, but also an essential external tool for members of greater civil society who wish to benefit from the Bank’s extensive collection of resources: to learn from WB experiences and inform industry-wide best practices.

For this reason, Valuing Voices implores the World Bank to step up its game and establish itself as a leader in post-project evaluation learning, not just in theory but also in practice. While these 33 PPARs represent just a small sample of the over 12,000 projects the WB has implemented since its inception, Valuing Voices hopes to see much more ex-post project evaluation happening in the future through IEG. Today we are seeing a decisive shift in the development world towards valuing sustainable outcomes over short-term fixes, towards informing future projects based on long-term data collection and learning, and towards community participation in all stages of the development process…


If one thing is certain, it is that global emphasis on sustainable development will not be going away anytime soon…but are we doing enough to ensure it?



[1] World Bank OED. (2004, June 28). Project Performance Assessment Report: Republic of Estonia, Agriculture Project. Retrieved from

[2] World Bank OED. (2014, June 26). Project Performance Assessment Report: Nigeria, Second National Fadama Development Project. Retrieved from

[3] World Bank OED. (2005, April 15). Project Performance Assessment Report: Niger, Energy Project. Retrieved from

[4] World Bank. (n.d.). Citizen Engagement: Incorporating Beneficiary Feedback in all projects by FY 18. Retrieved 2015, from