Grow the 0.002% of all global development projects that are evaluated ex-post closure for sustainability

It seems like ‘fake news’ that after decades of global development so few evaluations have peered back in time to see what was sustained. While I was consulting to the Policy, Planning and Learning Bureau at USAID, I asked the head of its M&E department, Cindy Clapp-Wincek, who does ex-post sustainability evaluation, as I knew USAID had done some in the 1980s. She answered, ‘No one, there are no incentives to do it.’ (She later became our advisor.)

Disbelieving, I did a year of secondary keyword research before devoting my professional consulting life to advocating for and doing ex-post evaluations of sustained outcomes and impacts. I searched USAID, OECD, and other bilateral and later multilateral donors’ databases and found thousands of studies, most of which were inaccurately named ‘ex-post’ or ‘post-closure’ studies. Some of the roughly 1,000 projects I reviewed in USAID and OECD databases that came up under ‘ex-post’, ‘ex post’, or ‘post closure’ were final evaluations that were slightly delayed; a few were evaluations done at least one year after closure but were desk studies without interviews. Surprisingly, the vast majority of final evaluations found were ones that only recommended an ex-post evaluation several years later to confirm projected sustainability.

In 2016 at the American Evaluation Association conference, a group of us did a presentation. In it, I cited these statistics from the first year of Valuing Voices’ research:

  • Of 900+ “ex-post”, “ex post”, and “post closure” documents in USAID’s DEC database, only 12 were actual post-project evaluations with fieldwork done in the previous 20 years
  • Of 12,000 World Bank projects, only 33 post-project evaluations asked ‘stakeholders’ for input, and only 3 clearly showed they talked to participants
  • In 2010, the Asian Development Bank conducted 491 desk reviews of completed projects and returned to the field for 18 post-project evaluations that included participant voices; it has done only this one study
  • We found no evaluations of aid projects’ sustainability conducted by recipient governments

Twelve years of research, advocacy, and fieldwork later, the ‘catalysts’ database on Valuing Voices now highlights 92 fieldwork-informed ex-post evaluations by 40 organizations that returned to the field to ask participants and project partners what was sustained.

How many ex-post project-closure evaluations have been done? About 0.002% of all projects. The 0.002% statistic looks at just public foreign development aid from 1960 onward (not even counting private funding such as foundations or gifts to organizations, which isn’t tracked in any publicly available database). Aggregated OECD aid statistics (excluding private funding, for which only recent data exist) come to $5.6 trillion over 62 years by 2022 (thanks to Rebecca Regan-Sachs for the updated numbers).

I then estimated 3,000 actual ex-posts: 2,500 JICA projects plus almost 500 others that I have either found by searching databases across the spectrum of governments and multilaterals (almost 100 in our catalysts database) or assume were done in the 1980s-2000s by donors like USAID and the World Bank (roughly 400 more).

Without a huge research team it is impossible to aggregate data on the total number of projects by all donors, so I extrapolated from one year (2022) of project activity disbursements for Mali on the www.foreignassistance.gov page. In my 35 years of experience, Mali, where I did my doctoral research, typifies the average USAID aid recipient. It had 382 projects going in 2022. I rounded up to 400 projects x 70 years (since 1960, when OECD data began) x 100 recipient countries for just one donor (of the 150 possible recipient countries, to be conservative), which comes to 2.8 million projects. Multiplying by 39 OECD donor countries (even though most have far less to give than the US) yields roughly 109 million publicly funded aid projects disbursing $5.6 trillion since 1960. While final evaluations are industry standard, an estimated 0.002% of those 109 million projects were evaluated ex-post with data from local participants and partners.
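For readers who want to retrace the back-of-envelope arithmetic above, here is a minimal sketch in Python. All inputs are the rough assumptions stated in the text (rounded Mali project counts, years, countries, donors, and the estimated 3,000 ex-posts), not measured data; the result lands in the low thousandths of a percent, the order of magnitude cited above.

```python
# Back-of-envelope extrapolation, using only the rough assumptions stated in the text.
projects_per_country_per_year = 400   # rounded up from Mali's 382 activities in 2022
years = 70                            # assumed span since 1960
recipient_countries = 100             # conservative subset of ~150 recipient countries
donor_countries = 39                  # OECD donor countries

total_projects = (projects_per_country_per_year * years
                  * recipient_countries * donor_countries)  # ~109 million projects

estimated_ex_posts = 3_000            # ~2,500 JICA + ~500 others (estimate)

share = estimated_ex_posts / total_projects
print(f"Estimated total aid projects since 1960: {total_projects:,}")
print(f"Estimated share evaluated ex-post with participants: {share:.4%}")
# Prints roughly 0.0027%, i.e. a few thousandths of one percent.
```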

This became Valuing Voices’ focus. We created an open-access database for learning and conducted our own evaluations: my team and I identified 92 ex-posts that returned to ask locals what lasted, what didn’t, why, and what emerged from their own efforts. We also created evaluability checklists and a new evaluation type, the Sustained and Emerging Impacts Evaluation, which examines not just what donors put in place to last, but also what outcomes emerged from local efforts to sustain results with more limited resources, partnerships, capacities, and local ownership/motivation (the four drivers Rogers and Coates identified for USAID’s food security exit study in 2015). We have done 15 ex-posts for 9 clients since 2006 and shared Adaptation Fund ex-post training materials in 2023.

 

Yet the public assumes we know our development is sustainable. The 2015 ‘Sustainable Development Goals’ focused aid on 17 themes and were expected to generate $12 trillion more in annual spending on SDG sectors beyond the $21 trillion already being invested each year. Nonetheless, a recent UN report states that there is now a $4 trillion annual financing gap to achieve the SDGs. All this funding goes to projects currently being implemented, not to evaluating what was sustained from past projects that have already closed. Such learning about what succeeded or failed, or what emerged from local efforts to keep activities and results going, is pivotal to improving current and future programming, yet it is almost wholly missing from the dialogue; I know, I asked multiple SDG evaluation experts.

 

Why do we return to learn so rarely? There are many reasons, the most prosaic among them being administrative.

  • When aid funds are spent over 2-10 years, projects are closed, evaluated at the end, and ‘handed over’ to national governments, and no additional funding exists to return after closure to learn ‘ex-post’.
  • Next is the push to keep improving lives through implementation, which means low rates of overhead are allocated to M&E and learning during implementation, much less after closure.
  • Another is the assumption that ‘old’ projects differ greatly from new ones, but there are few differences. After all, there are only so many ways to grow food, feed the malnourished, or educate children; evaluating ‘old’ projects can teach ‘new’ ones.
  • A last major one, which Valuing Voices’ 12 years of research suggests may be the largest: fear of admitting failure. Valuing Voices’ 2016 blog highlighted many Lessons about Funding, Assumptions and Fears (Part 3). One US aid lobbyist told me in 2017 that I must not share this lack of learning about sustained impacts because it could imperil US aid funding; I told her I had to tell people because lives were at stake.
  • Overall, there is much to learn; most ex-post evaluations show mixed results. None show 100% sustainability, and while most show 30-60% sustainability, none are 0% sustained either. If we don’t learn now to replicate what worked and stop what didn’t, future programming will be just as flawed, and successes, especially brilliant locally designed outcomes that emerged ex-post, such as Niger’s local funding of redesigned health incentives, will remain hidden.

 

Occasionally donors invest in sets of ex-post learning evaluations, such as USAID’s seven ‘Global Waters’ water/sanitation evaluations, linked to the E3 Bureau taking sustainability as a strategic goal. Yet the overall findings of USAID’s own staff synthesis of these ex-posts, the Drivers of WASH study, were chilling: while 25 million people gained access to drinking water and 18 million to basic sanitation, ‘they have largely not endured.’ But the good news in such research is that the donor learned that infrastructure fails when spare parts are not accessible and maintenance is not funded or performed, which can be planned for and addressed during implementation by investing in resources and partnerships. They learned that relying on volunteers is unreliable and that management needs to be bolstered, which can lead some implementation funding to be focused on capacities and local ownership. We can plan better for sustainability by learning from ex-post and exit studies (see Valuing Voices’ checklists in this 2023 article on Fostering Values-Driven Sustainability).

 

And since 2019, three climate funds, the Adaptation Fund, the Global Environment Facility (GEF), and the Climate Investment Funds (CIF), have turned to ex-post evaluations to look at sustainability, longer-term resilience, and even transformation, given that environmental shocks may take years to affect project sites. The Adaptation Fund has done four ex-posts, with more to come in 2024/25, and the CIF is beginning now. The GEF has done a Post-Completion Assessment Pilot for the Yellow Sea Region. Hopeful!

The Contentious Power of Evaluations, Guest Blog by Hanneke de Bode

THE CONTENTIOUS POWER OF EVALUATIONS

or why sustainable results are so hard to come by….

 

A while ago, I reacted to a discussion among development aid/cooperation evaluators about why there are so few NGO evaluations available. It transpired that many people do not even know what such evaluations often look like, which is why I wrote a kind of Common Denominator Report: it covers only small evaluations, and it is self-explanatory in the sense that one understands why they are rarely published. The version below is slightly different from the original.

 

The most important elements of a standard evaluation report for NGOs and their donors: about twenty days of work; a budget of about 20,000 Euros (VAT included).

 

In reality, the work takes at least twice as much time as calculated and will still be incomplete/ quick-and-dirty, because it cannot decently be done within the proposed framework of conditions while answering all 87 or so questions that normally figure in the ToR.

 

EXECUTIVE SUMMARY

The main issues in the project/ programme, the main findings, the main conclusions, and the main recommendations, presented in a positive and stimulating way (the standard request from the Comms and Fundraising departments) and pointing the way to the sunny uplands. This summary is written after a management response to the draft report has been ‘shared with you’. The management response normally says:

  • this is too superficial (even if you explain that it could not be done better, given the constraints);
  • this is incomplete (even if you didn’t receive the information you needed);
  • this is not what we asked (even if you had an agreement about the deliverables);
  • you have not understood us (even if your informants do not agree among themselves and contradict each other);
  • you have not used the right documents (even if this is what they gave you);
  • you have got the numbers wrong; the situation has changed in the meantime (even if they were in your docs);
  • your reasoning is wrong (meaning we don’t like it);
  • the respondents to the survey(s)/ the interviews were the wrong ones (even if the evaluand suggested them);
  • we have already detected these issues ourselves, so there is no need to put them in the report (meaning don’t be so negative).

 

BACKGROUND

Who the commissioning organisation is, what they do, who the evaluand is, what the main questions for the evaluators were, who got selected to do this work, and how they understood the questions and the work in general.

 

METHODOLOGY

In the Terms of Reference for the evaluation, many commissioners already state how they want the evaluation done. This list is almost invariably forced on the evaluators, reducing them from independent status to ‘hired help’ from a temp agency:

  • briefings by Director and SMT [Senior Management Team] members for scoping and better understanding;
  • desk research leading to notes about facts/ salient issues/ questions for clarification;
  • survey(s) among a wider stakeholder population;
  • 20-40 interviews with internal/ external stakeholders;
  • analysis of data/ information;
  • recommendations;
  • processing feedback on the draft report.

 

DELIVERABLES

In the Terms of Reference, many commissioners already state which deliverable they want and in what form:

  • survey(s);
  • interviews;
  • round table/ discussion of findings and conclusions;
  • draft report;
  • final report;
  • presentation to/ discussion with selected stakeholders.

 

PROJECT/ PROGRAMME OVERVIEW

 

Many commissioners send evaluators enormous folders with countless documents, often amounting to over 3,000 pages of uncurated text of unclear status (re authors, purpose, date, audience), more or less touching upon the facts the evaluators are on a mission to find. This happens even when the evaluators give them a short list of the most relevant docs (such as the grant proposal/ project plan with budget, time and staff calculations, work plans, intermediate reports, intermediate assessments, and contact lists). Processing them leads to the following result:

 

According to one/ some of the many documents that were provided:

  • the organisation’s vision is that everybody should have everything freely and without effort;
  • the organisation’s mission is to work towards having part of everything to not everybody, in selected areas;
  • the project’s/ programme’s ToC indicates that if wishes were horses, poor men would ride;
  • the project’s/ programme’s duration was four/ five years;
  • the project’s/ programme’s goal/ aim/ objective was to provide selected parts of not everything to selected parts of not everybody, to make sure the competent authorities would support the cause and enshrine the provisions in law, that the beneficiaries would enjoy the intended benefits, understand how to maintain them and teach others to get, enjoy and amplify them, that the media would report favourably on the efforts in all countries/ regions/ cities/ villages concerned, and that the project/ programme would be able to sustain itself and have a long afterlife;
  • the project’s/ programme’s instruments were fundraising and/ or service provision and/ or advocacy;
  • the project/ programme had some kind of work/ implementation plan.

 

FINDINGS/ ANALYSIS

 

This is where practice meets theory. It normally ends up in the report like this:

 

Due to a variety of causes:

  • unexpectedly slow administrative procedures;
  • funds being late in arriving;
  • bigger than expected pushback and/ or less cooperation than hoped for from authorities- competitors- other NGOs- local stakeholders;
  • sudden changes in project/ programme governance and/ or management;
  • incomplete and/ or incoherent project/ programme design;
  • incomplete planning of project/ programme activities;
  • social unrest and/ or armed conflicts;
  • Covid;

 

The project/ programme had a late/ slow/ rocky start. Furthermore, the project/ programme was hampered by:

  • partial implementation because of a misunderstanding of the Theory of Change which few employees know about/ have seen/ understand, design and/ or planning flaws and/ or financing flaws and/ or moved goalposts and/ or mission drift and/ or personal preferences and/ or opportunism;
  • a limited mandate and insufficient authority for the project’s/ programme’s management;
  • high attrition among and/ or unavailability of key staff;
  • a lack of complementary advocacy and lobbying work;
  • patchy financial reporting and/ or divergent formats for reporting to different donors taking time and concentration away;
  • absent/ insufficient monitoring and documenting of progress;
  • little or no adjusting because of absent or ignored monitoring results/ rigid donor requirements;
  • limited possibilities of stakeholder engagement with birds/ rivers/ forests/ children/ rape survivors/ people in occupied territories/ murdered people/ people dependent on NGO jobs & cash etc;
  • internal tensions and conflicting interests;
  • neglected internal/ external communications;
  • un/ pleasant working culture/ lack of trust/ intimidation/ coercion/ culture of being nice and uncritical/ favouritism;
  • the inaccessibility of conflict areas;

 

Although these issues had already been flagged up in:

  • the evaluation of the project’s/ programme’s first phase;
  • the midterm review;
  • the project’s/ programme’s Steering Committee meetings;
  • the project’s/ programme’s Advisory Board meetings;
  • the project’s/ programme’s Management Team meetings;

 

Very little change seems to have been introduced by the project managers/ has been detected by the evaluators.

 

In terms of the OECD/ DAC criteria, the evaluators have found the following:

  • relevance – the idea is nice, but does it cut the mustard?/ others do this too/ better;
  • coherence – so so, see above;
  • efficiency – so so, see above;
  • effectiveness – so so, see above;
  • impact – we see a bit here and there, sometimes unexpected positive/ negative results too, but will the positives last? It is too soon to tell, but see above;
  • sustainability – unclear/ limited/ no plans so far.

 

OVERALL CONCLUSION

 

If an organisation is (almost) the only one in its field, or if the cause is still a worthy cause, as evaluators you don’t want the painful parts of your assessments to reach adversaries. This also explains the vague language in many reports and why overall conclusions are often phrased as:

 

However, the obstacles mentioned above were cleverly navigated by the knowledgeable and committed project/ programme staff in such a way that in the end, the project/ programme can be said to have achieved its goal/ aim/ objective to a considerable extent.

 

Galileo: “Eppur si muove” = “And yet it moves”

 

 

RECOMMENDATIONS

 

Most NGO commissioners make drawing up a list of recommendations compulsory. Although there is a discussion within the evaluation community about evaluators’ competence to do precisely that, many issues found in this type of evaluation have organisational, not content, origins. The corresponding recommendations are rarely rocket science and could be formulated by most people with basic organisational insight or a bit of public service or governance experience. Where content is concerned, many evaluators are selected for their thematic experience and expertise, so it is not necessarily wrong to make suggestions.

They often look like this:

 

Project/ programme governance

  • limit the number of different bodies and make remit/ decision making power explicit;
  • have real progress reports;
  • have real meetings with a real agenda, real documents, real minutes, real decisions, and real follow-up;
  • adjust;

Project/ programme management

  • review and streamline/ rationalise structure to reflect strategy and work programme;
  • give project/ programme leaders real decision making and budgetary authority;
  • have real progress meetings with a real agenda, real minutes, real decisions, and real follow-up;
  • implement what you decide, but monitor if it makes sense;
  • adjust;

Organisational management

  • consult staff on recommendations/ have learning sessions;
  • draft implementation plan for recommendations;
  • carry them out;

Processes and Procedures

  • get staff agreement on them;
  • commit them to paper;
  • stick to them – but not rigidly;

Obviously, if we don’t get organisational structure and functioning, programme or project design, implementation, monitoring, evaluation, and learning right, there is scant hope for the longer-term sustainability of the results that we should all be aiming for.

Sustainability of what and how do we know? Measuring projects, programs, policies…

On my way to present at the European Evaluation Society’s annual conference, I wanted to close the loop on the Nordic and Netherlands ex-post analysis. The reason is that we’ll be discussing the intersection of different ways to evaluate ‘sustainability’ over the long and short term, and how we’re transforming evaluation systems. The session on Friday morning is called “Long- And Short-Term Dilemmas In Sustainability Evaluations” (Cekan, Bodnar, Hermans, Meyer, and Patterson). We come from academia as professors, consultancies to international organizations, international/ national non-profits, and our European (Dutch, German, Czech), South African, and American governments. We’ll discuss it as a ‘fishbowl’ of ideas.

The session’s abstract adds the confounding factor of program versus project versus portfolio-wide evaluations of sustainability.

Details on our session are below, and why I’m juxtaposing it with the Nordic and Netherlands ex-posts in detail comes next. As we note in our EES ’22 session description, “One of the classic complications in sustainability is dealing with short-term – long-term dilemmas. Interventions take place in a local and operational setting, affecting the daily lives of stakeholders. Sustainability is at stake when short-term activities are compromising the long-term interests of these stakeholders and future generations, for instance, due to a focus on the achievement of shorter-term results rather than ensuring durable impacts for participants… Learning about progress towards the SDGs or the daunting task of keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels, for instance, requires more than nationally and internationally agreed indicator-systems, country monitoring, and reporting and good intentions.”

But there are wider ambitions for most sustainability activities undertaken by a range of donors, policy actors, project implementers, and others: Sustainability “needs to span both human-social and natural-ecological systems’ time scales. Furthermore, long-term sustainability, in the face of climate change and SDGs, demands a dynamic view, with due attention for complexity, uncertainty, resilience, and systemic transformation pathways…. the need for a transformation of current evaluation systems – seeing them as nested or networked systems… Their focus may range from focused operational projects to the larger strategic programmes of which these projects are part, to again the larger policies that provide the context or drivers for these programmes. Analogue to these nested layers runs a time dimension, from the short-term projects (months to years), to multi-year programmes, to policies with outlooks of a decade or more.” 

When Preston did his research in 2020-21, which I oversaw, we focused on projects precisely because that is where we believe ‘impact’ happens in a way measurable by participants and partners. Yet we found that many defined their parameters differently. Preston writes, “This paper focuses on what such research [on projects evaluated at least 2 years post-closure] yielded, not definitive findings of programs or multi-year country strategies that are funded for 20-30 years continuously, nor projects funded by country-level embassies which did not feature on the Ministry site. We focus on bilateral project evaluations, not multilateral funding of sectors. We also… received input that Sweden’s EBA has a non-project [not ex-post] portfolio of ‘country evaluations’ which looked back over 10 or even 20-year time horizons.”

So we present these compiled, detailed studies on the Netherlands, Norway, Finland, Sweden, and Denmark for your consideration. Can we arrive at a unified definition of ‘sustainability’ or imagine a unified ‘sustainability evaluation’ definition and scope? I hope so; I will let you know after EES this week! What do you think, is it possible?

Aid providers: More puzzle pieces, including unexpected outcomes; ours is not the whole picture

When we did our first ex-post evaluation/ delayed final evaluation in 2006 in Niger for Lutheran World Relief (LWR), funded by the Bill & Melinda Gates Foundation (pg 75 on), we found all sorts of unexpected/ unintended outcomes and impacts that far outweighed the original aid’s expectations. The project measured success by how livelihoods were rebuilt: restocking drought-ravaged sheep herds and rehabilitating water points for them. While the LWR aid was beautifully based on ‘habbanaye,’ a pastoralist practice of lending or giving small-stock offspring to poorer family members (expanded here to passing on animals to poorer community members), and results showed a majority of the poor benefitted, our respondents showed the picture was far more nuanced. We aid providers (and our expectations) are only a part of any ‘impact’, which needs to be defined by the communities themselves:

  • While many families benefitted from the sheep, enabling some young boys to shepherd several at a time, it turns out the poorest chosen by the communities were not necessarily those who had lost small livestock during the drought but were, in fact, the ultra-poor who had never had them. Therefore, proof of successful post-drought ‘restocking’ left out these poorest, who were helped the most, as they were… unexpected;
  • Holding onto the donated sheep was not as important an indicator for some: one woman told us that selling her aid-received sheep to buy her daughter’s dowry to marry a wealthier husband was a far better investment for their financial future than the sheep would have been. Our main measure of success was not nuanced enough;
  • The provision of water through rehabilitating and building wells in the five villages was a vital resource. Women reported they saved 8 hours every two days by having potable water in their villages. Before then, they spent three hours walking each way to the far-off well and waited two hours to fetch 50 liters of water, which they head-carried back. With the well in place, they generated household income through weaving mats, cooking food for sale, etc., which amounted to as much as a 20% increase in household income – a boon! Also, having both time and water access enabled them to bathe themselves and their children, make their husbands lunch, and make their mothers-in-law tea, which led to far more household harmony. No ‘impact’ measures outside of livestock and water were included or could be added;
  • Finally, the resulting show-stopper: in the last village where we interviewed women participants, they said that the groups of fellow recipients were a boon for community solidarity across ethnic groups. In their meetings, thanks to the sheep, water, and collective moral support, they said the conversations turned from conflict to collaboration, and best of all, women reported, “our husbands don’t beat us anymore.” Peace among ethnic groups, within and between households was completely unexpected.

Such a highly unexpected outcome would fall under #2 and #4 of the Netherlands study below. Unfortunately, the Foundation seemed less interested in these unexpected but stellar results. Yet at the same time I have empathy for their position, as so many of us in global development want to help solve problems and demand proof that we have… so we can leave and help others, equally deserving. Taking complexity into account, seeing lives in a wider context where our aid can be helping differently or even harming, makes garnering more aid hard.

As the brilliant Time to Listen series by CDA showed, aid intervenes in people’s lives in complex ways, and we need to listen to our participants and partners, who always share more complex views than our reports can honor. Here are a few of the hundreds of quotes from 6,000 interviews, these from the Solomon Islands:

Some appreciate the aid as it is given:

“People in my village are very grateful for the road because now with trucks coming into our village, the women can now take their vegetables to the market. Before, the tomatoes just rotted in the gardens. Tomatoes go bad quickly and despite our attempts in the past to take them to the market to sell, we always lost.” Woman from East Malaita

But for others there are great caveats:

“Donors should send their officers to Solomon Islands to implement activities in urban and rural areas. This will help them understand the difficulties we often face with people, environment, culture, geography, etc. ‘no expectem evri ting bae stret’ [Don’t expect everything to go right].” Man, Honiara

“They have their own charters, sometimes we might want to go another way but they don’t want to touch that. So sometimes there is some conflict there; some projects are not really what we would like to address – because the donors only want to do one component, and not another, because it is sensitive, or because they want quick results and to get out.” Government official, Honiara

“What changes have I noticed since independence? Whatever development you see here is due to individual struggles. No single aid program is sustainable. NGOs are created by donors and are comfortable with who they know. NGOs eat up the bulk of help intended for the communities. NGOs become international employers. They do their own thing in our province. Most projects have no impact. I want to say stop all aid except for education and health. If international assistance concentrates on quality education and health, the educated and healthy people will take care of themselves.” Government official, Auki, Malaita

“The most important impacts of aid people do not think about – they are not listed, not planned, they are remote, but these are the longest lasting. Often they are the opposite of the stated objectives. So remote, unintended, unexpected impacts are very often more important and more lasting and more dramatic than the short term intended, measured ones.” Aid consultant, Honiara

How widespread is our myopic focus on our intended results? A recent study by the IOB, the Netherlands Ministry of Foreign Affairs’ evaluation department, found unintended effects were an evaluator’s blind spot: across 664 evaluations over 20 years, “The ‘text miners’ found that only 1 in 6 IOB evaluation documents pay attention to unintended effects.” This dearth of attention to everything else happening in projects obscures 10 types of effects at micro, meso, macro, and multiple levels, from negative effects, such as food aid’s price effects on local food producers or nationalist backlash to Afghan projects, to positive effects (which they found in 40% of projects), such as a newly built harbor happening to expand beach tourism as well.

But if we don’t look for such effects, we don’t know the true impact of our aid programming. We also don’t honor the breadth of people’s lives, seeing them only as narrow ‘aid beneficiaries’ (ugh), not even honoring them with the term ‘participants’, much less ‘partners’ in their own development.

In our ex-post work, we find a wide array of ways ownership and implementation of activities continue after donors leave, without additional resources or with different resources, capacities, and partnerships. Here are examples of emerging outcomes and impacts from a different Niger project, and one from Nepal:

1. Partnerships: Half of the members of the all-women Village Banks reported helping one another deal with domestic disputes and violence (Pact/Nepal)

2. Capacities: Trained local women charged fees to sell course materials onward (Pact/Nepal)

3. Ownership: Participants valued clinic-based birthing and sustained it by introducing locally created social punishments and incentives (CRS/Niger)

4. Resources: New Ministry funding was reallocated to sustain [health] investments, and private traders generated large crop purchases and contracts (CRS/Niger)

The assets and capacities we bring to help people and their country systems help only a sliver of their lives, and often in unexpected ways that, sadly, we aid donors and implementers don’t seem interested in. There are other puzzle pieces to add…

Let us not forget, as a Sustainable Development Goals Evaluation colleague said in 2017 on a call:

I am sure you, my dear colleagues, have reams of similar findings from your fieldwork. Please share yours!

Upcoming Jan Webinar: Lessons from Nordic / the Netherlands’ ex-post project evaluations: 14 Jan 2021

In June-August 2020, Preston Stewart, our Valuing Voices intern, searched through government databases of the four Nordic countries – Norway, Finland, Sweden, and Denmark – and the Netherlands to identify ex-post evaluations of government-sponsored projects. The findings and recommendations for action are detailed in a four-part white paper series, beginning with a paper called The Search.

We would like to invite you to a presentation of our findings, and a discussion of what can be learned about:

1) the search process,

2) how ex-post evaluations are defined and categorized,

3) what was done well by each country’s ex-posts,

4) sustainability-related findings and lessons, and

5) how M&E experts in each country can improve ex-post evaluation practices.

One big finding is that there were only 32 evaluations that seemed to be ex-post project evaluations, and only half of them actually took place at least 2 years after project closure.

 

We have many more lessons: about conflicting definitions; that ex-post evaluation is not the norm in the evaluation processes of the five governments; that development programs could, if committed to ex-post evaluation, learn about sustainability by engaging with the findings of many more such evaluations; and that, to increase accountability to the public and for transparent learning, ex-post evaluations should be shared in public, easy-to-access online repositories.

Join us!

REGISTER: https://www.eventbrite.com/e/webinar-lessons-from-nordic-the-netherlands-ex-post-project-evaluations-tickets-132856426147 

Your ticket purchase entitles you to the webinar, its meeting recording, associated documents, and online Sustainability Network membership for resources and discussion. Payment on a sliding, pay-as-you-can scale.

Interview repost: Why Measure & Evaluate Corporate Sustainability Projects?

A CSR firm in Central Europe asked me to talk about sustainability, evaluation, and Corporate Social Responsibility. We had a terrific interview – please click through to: https://www.besmarthead.com/cs/media/post/smartheadtalks-2-jindra-cekan-eng

 

 

The topics we discussed include:

* Sustainability in CSR involves value creation through benefits-creation for both companies and consumers

* Projects feeding into the United Nations Sustainable Development Goals may not be sustained over the long-term and we must return to evaluate what could be sustained and what emerged ex-post

* Scale measurable and meaningful impact investment through the transformation of development nonprofits’ programming + approach

* Three sustainability activities that companies anywhere in the world can start doing right away

And Very Happy Holidays to all… may our lives and world be more sustainable in 2021….