by Jindra Cekan | Jun 27, 2022 | Aid effectiveness, Evaluation, Evaluation Findings, Learning, NGOs, OECD, OECD DAC, Sustainability, Sustainable development, TOR
THE CONTENTIOUS POWER OF EVALUATIONS
or why sustainable results are so hard to come by….
A while ago, I reacted to a discussion among development aid/cooperation evaluators about why so few NGO evaluations are available. It transpired that many people do not even know what such evaluations typically look like, which is why I wrote a kind of Common Denominator Report: it covers only small evaluations, and it is self-explanatory in the sense that one understands why such reports are rarely published. The version below is slightly different from the original.
The most important elements of a standard evaluation report for NGOs and their donors; about twenty days of work; a budget of about 20,000 euros (VAT included).
In reality, the work takes at least twice as much time as budgeted and will still be incomplete and quick-and-dirty, because it cannot decently be done within the proposed framework of conditions while also answering all 87 or so questions that normally figure in the ToR.
EXECUTIVE SUMMARY
The main issues in the project/ programme, the main findings, the main conclusions, and the main recommendations, presented in a positive and stimulating way (the standard request from the Comms and Fundraising departments) and pointing the way to the sunny uplands. This summary is written after a management response to the draft report has been ‘shared with you’. The management response normally says:
- this is too superficial (even if you explain that it could not be done better, given the constraints);
- this is incomplete (even if you didn’t receive the information you needed);
- this is not what we asked (even if you had an agreement about the deliverables);
- you have not understood us (even if your informants do not agree among themselves and contradict each other);
- you have not used the right documents (even if this is what they gave you);
- you have got the numbers wrong; the situation has changed in the meantime (even if they were in your docs);
- your reasoning is wrong (meaning we don’t like it);
- the respondents to the survey(s)/ the interviews were the wrong ones (even if the evaluand suggested them);
- we have already detected these issues ourselves, so there is no need to put them in the report (meaning don’t be so negative).
BACKGROUND
Who the commissioning organisation is, what they do, who the evaluand is, what the main questions for the evaluators were, who got selected to do this work, and how they understood the questions and the work in general.
METHODOLOGY
In the Terms of Reference for the evaluation, many commissioners already state how they want the evaluation done. This list is almost invariably forced on the evaluators, reducing them from independent professionals to ‘hired help’ from a temp agency:
- briefings by Director and SMT [Senior Management Team] members for scoping and better understanding;
- desk research leading to notes about facts/ salient issues/ questions for clarification;
- survey(s) among a wider stakeholder population;
- 20-40 interviews with internal/ external stakeholders;
- analysis of data/ information;
- recommendations;
- processing feedback on the draft report.
DELIVERABLES
In the Terms of Reference, many commissioners already state which deliverables they want and in what form:
- survey(s);
- interviews;
- round table/ discussion of findings and conclusions;
- draft report;
- final report;
- presentation to/ discussion with selected stakeholders.
PROJECT/ PROGRAMME OVERVIEW
Many commissioners send evaluators enormous folders with countless documents, often amounting to over 3,000 pages of uncurated text of unclear status (authors, purpose, date, audience) that more or less touches upon the facts the evaluators are on a mission to find. This happens even when the evaluators give them a short list of the most relevant documents (such as the grant proposal/ project plan with budget, time and staff calculations, work plans, intermediate reports, intermediate assessments, and contact lists). Processing them leads to the following result:
According to one/ some of the many documents that were provided:
- the organisation’s vision is that everybody should have everything freely and without effort;
- the organisation’s mission is to work towards providing part of everything to not everybody, in selected areas;
- the project’s/ programme’s ToC indicates that if wishes were horses, poor men would ride;
- the project’s/ programme’s duration was four/ five years;
- the project’s/ programme’s goal/ aim/ objective was to provide selected parts of not everything to selected parts of not everybody, to make sure the competent authorities would support the cause and enshrine the provisions in law, that the beneficiaries would enjoy the intended benefits, understand how to maintain them, and teach others to get, enjoy, and amplify them, that the media would report favourably on the efforts in all countries/ regions/ cities/ villages concerned, and that the project/ programme would be able to sustain itself and have a long afterlife;
- the project’s/ programme’s instruments were fundraising and/ or service provision and/ or advocacy;
- the project/ programme had some kind of work/ implementation plan.
FINDINGS/ ANALYSIS
This is where practice meets theory. It normally ends up in the report like this:
Due to a variety of causes:
- unexpectedly slow administrative procedures;
- funds being late in arriving;
- bigger than expected pushback and/ or less cooperation than hoped for from authorities/ competitors/ other NGOs/ local stakeholders;
- sudden changes in project/ programme governance and/ or management;
- incomplete and/ or incoherent project/ programme design;
- incomplete planning of project/ programme activities;
- social unrest and/ or armed conflicts;
- Covid;
The project/ programme had a late/ slow/ rocky start. Furthermore, the project/ programme was hampered by:
- partial implementation because of a misunderstanding of the Theory of Change which few employees know about/ have seen/ understand, design and/ or planning flaws and/ or financing flaws and/ or moved goalposts and/ or mission drift and/ or personal preferences and/ or opportunism;
- a limited mandate and insufficient authority for the project’s/ programme’s management;
- high attrition among and/ or unavailability of key staff;
- a lack of complementary advocacy and lobbying work;
- patchy financial reporting and/ or divergent formats for reporting to different donors taking time and concentration away;
- absent/ insufficient monitoring and documenting of progress;
- little or no adjusting because of absent or ignored monitoring results/ rigid donor requirements;
- limited possibilities of stakeholder engagement with birds/ rivers/ forests/ children/ rape survivors/ people in occupied territories/ murdered people/ people dependent on NGO jobs & cash etc;
- internal tensions and conflicting interests;
- neglected internal/ external communications;
- un/ pleasant working culture/ lack of trust/ intimidation/ coercion/ culture of being nice and uncritical/ favouritism;
- the inaccessibility of conflict areas.
Although these issues had already been flagged up in:
- the evaluation of the project’s/ programme’s first phase;
- the midterm review;
- the project’s/ programme’s Steering Committee meetings;
- the project’s/ programme’s Advisory Board meetings;
- the project’s/ programme’s Management Team meetings;
very little change seems to have been introduced by the project managers or been detected by the evaluators.
In terms of the OECD/ DAC criteria, the evaluators have found the following:
- relevance – the idea is nice, but does it cut the mustard? Others do this too, and better;
- coherence – so so, see above;
- efficiency – so so, see above;
- effectiveness – so so, see above;
- impact – we see a bit here and there, sometimes unexpected positive/ negative results too, but will the positives last? It is too soon to tell, but see above;
- sustainability – unclear/ limited/ no plans so far.
OVERALL CONCLUSION
If an organisation is (almost) the only one in its field, or if the cause is still a worthy cause, as evaluators you don’t want the painful parts of your assessments to reach adversaries. This also explains the vague language in many reports and why overall conclusions are often phrased as:
However, the obstacles mentioned above were cleverly navigated by the knowledgeable and committed project/ programme staff in such a way that in the end, the project/ programme can be said to have achieved its goal/ aim/ objective to a considerable extent.

Galileo: “Eppur si muove” = “And yet it moves”
RECOMMENDATIONS
Most NGO commissioners make drawing up a list of recommendations compulsory. Although there is a discussion within the evaluation community about whether evaluators are competent to do precisely that, many issues found in this type of evaluation have organisational, not content, origins. The corresponding recommendations are rarely rocket science and could be formulated by most people with basic organisational insight or a bit of public service or governance experience. Where content is concerned, many evaluators are selected because of their thematic experience and expertise, so it is not necessarily wrong for them to make suggestions.
They often look like this:
Project/ programme governance
- limit the number of different bodies and make remit/ decision making power explicit;
- have real progress reports;
- have real meetings with a real agenda, real documents, real minutes, real decisions, and real follow-up;
- adjust;
Project/ programme management
- review and streamline/ rationalise structure to reflect strategy and work programme;
- give project/ programme leaders real decision making and budgetary authority;
- have real progress meetings with a real agenda, real minutes, real decisions, and real follow-up;
- implement what you decide, but monitor if it makes sense;
- adjust;
Organisational management
- consult staff on recommendations/ have learning sessions;
- draft implementation plan for recommendations;
- carry them out;
Processes and Procedures
- get staff agreement on them;
- commit them to paper;
- stick to them – but not rigidly;
Obviously, if we don’t get organisational structure and functioning, programme or project design, implementation, monitoring, evaluation, and learning right, there is scant hope for the longer-term sustainability of the results that we should all be aiming for.
by Jindra Cekan | Jun 6, 2022 | Accountability, Evaluation, ex-post evaluation, programs, Project design, SDGs, Sustainability, Sustainable development
On my way to present at the European Evaluation Society’s annual conference, I wanted to close the loop on the Nordic and Netherlands ex-post analysis. The reason is that we’ll be discussing the intersection of different ways to evaluate ‘sustainability’ over the long and short term, and how we’re transforming evaluation systems. The session on Friday morning is called “Long- And Short-Term Dilemmas In Sustainability Evaluations” (Cekan, Bodnar, Hermans, Meyer, and Patterson). We come from academia (as professors), consultancies serving international organizations, international/ national non-profits, and European (Dutch, German, Czech), South African, and American governments. We’ll discuss it as a ‘fishbowl’ of ideas.
The session’s abstract adds the confounding factor of program versus project versus portfolio-wide evaluations, all around sustainability.
Details on our session are below; why I’m juxtaposing it with the Nordic and Netherlands ex-posts in detail comes next. As we note in our EES ’22 session description, “One of the classic complications in sustainability is dealing with short-term – long-term dilemmas. Interventions take place in a local and operational setting, affecting the daily lives of stakeholders. Sustainability is at stake when short-term activities are compromising the long-term interests of these stakeholders and future generations, for instance, due to a focus on the achievement of shorter-term results rather than ensuring durable impacts for participants… Learning about progress towards the SDGs or the daunting task of keeping a global temperature rise this century well below 2 degrees Celsius above pre-industrial levels, for instance, requires more than nationally and internationally agreed indicator-systems, country monitoring, and reporting and good intentions.”
But there are wider ambitions for most sustainability activities undertaken by a range of donors, policy actors, project implementers, and others: Sustainability “needs to span both human-social and natural-ecological systems’ time scales. Furthermore, long-term sustainability, in the face of climate change and SDGs, demands a dynamic view, with due attention for complexity, uncertainty, resilience, and systemic transformation pathways…. the need for a transformation of current evaluation systems – seeing them as nested or networked systems… Their focus may range from focused operational projects to the larger strategic programmes of which these projects are part, to again the larger policies that provide the context or drivers for these programmes. Analogue to these nested layers runs a time dimension, from the short-term projects (months to years), to multi-year programmes, to policies with outlooks of a decade or more.”
When Preston did his research in 2020-21, which I oversaw, we focused on projects precisely because that is where we believe ‘impact’ happens in a way that participants and partners can measure. Yet we found that many defined their parameters differently. Preston writes, “This paper focuses on what such research [on projects evaluated at least 2 years post-closure] yielded, not definitive findings of programs or multi-year country strategies that are funded for 20-30 years continuously, nor projects funded by country-level embassies which did not feature on the Ministry site. We focus on bilateral project evaluations, not multilateral funding of sectors. We also …received input that Sweden’s EBA has a non-project [not ex-post] portfolio of ‘country evaluations’ which looked back over 10- or even 20-year time horizons…
So we present these compiled, detailed studies on the Netherlands, Norway, Finland, Sweden, and Denmark for your consideration. Can we arrive at a unified definition of ‘sustainability’, or imagine a unified ‘sustainability evaluation’ definition and scope? I hope so, and will let you know after EES this week! What do you think: is it possible?
by Jindra Cekan | Sep 24, 2021 | Aid effectiveness, Bill and Melinda Gates Foundation, Donors, emerging outcomes, Evaluation, ex-post evaluation, Impact, Lutheran World Relief, Nepal, Niger, Sustainability, Sustainable development
Aid providers: More puzzle pieces, including unexpected outcomes; ours is not the whole picture
When we did our first ex-post evaluation/ delayed final evaluation in 2006 in Niger for Lutheran World Relief (LWR), funded by the Bill & Melinda Gates Foundation (p. 75 onward), we found all sorts of unexpected/ unintended outcomes and impacts that far outweighed the original aid’s expectations. The project measured success by how far livelihoods were rebuilt after the drought had ravaged sheep herds, and by the water points provided for the animals. While the LWR aid was beautifully based on ‘habbanaye,’ a pastoralist practice of lending or giving small-stock offspring to poorer family members (and was expanded to passing on animals to poorer community members), and results showed a majority of the poor benefitted, our respondents showed the picture was far more nuanced. We aid providers (and our expectations) are only a part of any ‘impact’, which needs to be defined by the communities themselves:
- While many families benefitted from the sheep, enabling some young boys to shepherd several at a time, it turns out the poorest chosen by the communities were not necessarily those who had lost small livestock during the drought but were, in fact, the ultra-poor who had never had them. Therefore, proof of successful post-drought ‘restocking’ left out these poorest, who were helped the most, as they were… unexpected;
- Holding onto the donated sheep was not as important an indicator for some: one woman told us that selling her aid-received sheep to buy her daughter a dowry to marry a wealthier husband was a far better investment in their financial future than the sheep would have been. Our main measure of success was not nuanced enough;
- The provision of water through well rehabilitation and construction in the five villages was a vital resource. Women reported they saved 8 hours every two days by having potable water in their villages. Before then, they spent three hours walking each way to the far-off well and waited two hours to fetch 50 litres of water, which they head-carried back. With the well in place, they generated household income through weaving mats, cooking food for sale, etc., which amounted to an increase of as much as 20% in household income – a boon! Also, having both time and water access enabled them to bathe themselves and their children, make their husbands lunch, and make their mothers-in-law tea, which led to far more household harmony. No ‘impact’ measures outside of livestock and water were included or could be added;
- Finally, the resulting show-stopper: in the last village where we interviewed women participants, they said that the groups of fellow recipients were a boon for community solidarity across ethnic groups. In their meetings, thanks to the sheep, water, and collective moral support, they said the conversations turned from conflict to collaboration, and best of all, women reported, “our husbands don’t beat us anymore.” Peace among ethnic groups, and within and between households, was completely unexpected.
Such a highly unexpected outcome would fall under #2 and #4 of the Netherlands study below. Unfortunately, the Foundation seemed less interested in these unexpected but stellar results. Yet at the same time I have empathy for their position, as so many of us in global development want to help solve problems and demand proof that we have… so we can leave and help others, equally deserving. Taking complexity into account, and seeing lives in a wider context where our aid may be helping in different ways or even harming, makes garnering more aid hard.
As the brilliant Time to Listen series by CDA showed, aid intervenes in people’s lives in complex ways, and we need to listen to our participants and partners, who always share more complex views than our reports can honor. Here are a few of the hundreds of quotes from 6,000 interviews, these from the Solomon Islands:
Some appreciate the aid as it is given:
“People in my village are very grateful for the road because now with trucks coming into our village, the women can now take their vegetables to the market. Before, the tomatoes just rotted in the gardens. Tomatoes go bad quickly and despite our attempts in the past to take them to the market to sell, we always lost.” Woman from East Malaita
But for others there are great caveats:
“Donors should send their officers to Solomon Islands to implement activities in urban and rural areas. This will help them understand the difficulties we often face with people, environment, culture, geography, etc. ‘no expectem evri ting bae stret’ [Don’t expect everything to go right].” Man, Honiara
“They have their own charters, sometimes we might want to go another way but they don’t want to touch that. So sometimes there is some conflict there; some projects are not really what we would like to address – because the donors only want to do one component, and not another, because it is sensitive, or because they want quick results and to get out.” Government official, Honiara
“What changes have I noticed since independence? Whatever development you see here is due to individual struggles. No single aid program is sustainable. NGOs are created by donors and are comfortable with who they know. NGOs eat up the bulk of help intended for the communities. NGOs become international employers. They do their own thing in our province. Most projects have no impact. I want to say stop all aid except for education and health. If international assistance concentrates on quality education and health, the educated and healthy people will take care of themselves.” Government official, Auki, Malaita
“The most important impacts of aid people do not think about – they are not listed, not planned, they are remote, but these are the longest lasting. Often they are the opposite of the stated objectives. So remote, unintended, unexpected impacts are very often more important and more lasting and more dramatic than the short term intended, measured ones.” Aid consultant, Honiara
How widespread is our myopic focus on our intended results? A recent Netherlands Foreign Aid IOB study found unintended effects were an evaluator’s blind spot: across 664 evaluations over 20 years, “The ‘text miners’ found that only 1 in 6 IOB evaluation documents pay attention to unintended effects.” Looking at all the other things happening in projects, the study identified ten types of effects at micro, meso, macro, and multiple levels, from negative effects, such as the price effects of food aid on local food producers or nationalist backlash against Afghan projects, to positive effects (which they found in 40% of projects), such as a harbor that also happened to expand beach tourism.

But if we don’t look for such effects, we don’t know the true impact of our aid programming. We also don’t honor the breadth of people’s lives, treating them instead as narrow ‘aid beneficiaries’ (ugh), not even honoring them with the term ‘participants’, much less ‘partners’ in their own development.
In our ex-post work, we find a wide array of ways in which activities are owned and implemented after donors leave, without additional resources or with different resources, capacities, and partnerships. Here are examples of emerging outcomes and impacts from a different Niger project and one from Nepal:
1. Partnerships & Ownership: Half of the members of the all-women Village Banks reported helping one another deal with domestic disputes and violence. (Pact/Nepal)
2. Capacities: Trained local women charged rates to sell course materials onward (Pact/Nepal)
3. Ownership: Participants valued clinic-based birthing and sustained it by introducing locally-created social punishments and incentives (CRS/Niger)
4. Resources: New Ministry funding reallocated to sustain [health] investments, and private traders generated large crop purchases and contracts (CRS/Niger)
The assets and capacities we bring to help people and their country systems touch only a sliver of their lives, and often in unexpected ways in which, sadly, we aid donors and implementers don’t seem interested. There are other puzzle pieces to add…
Let us not forget, as a Sustainable Development Goals Evaluation colleague said in 2017 on a call:

I am sure you, my dear colleagues, have reams of similar findings from your fieldwork. Please share yours!
by Jindra Cekan | Jan 9, 2021 | Denmark, Evaluation, ex-post evaluation, Finland, impact evaluation, Netherlands, Nordics, Norway, Sustainability, Sustainable development, Sweden, Valuing Voices
Upcoming Webinar: Lessons from Nordic / the Netherlands’ ex-post project evaluations: 14 Jan 2021
In June-August 2020, Preston Stewart, our Valuing Voices intern, searched the government databases of the four Nordic countries – Norway, Finland, Sweden, and Denmark – and the Netherlands to identify ex-post evaluations of government-sponsored projects. The findings and recommendations for action are detailed in a four-part white paper series, beginning with a paper called The Search.
We would like to invite you to a presentation of our findings, and a discussion of what can be learned about:
1) the search process,
2) how ex-post evaluations are defined and categorized,
3) what was done well by each country’s ex-posts,
4) sustainability-related findings and lessons, and
5) how M&E experts in each country can improve ex-post evaluation practices.
One big finding is that there were only 32 evaluations that appeared to be ex-post project evaluations, and only half of them were actually conducted at least two years after project closure.

We have many more lessons: definitions conflict; ex-post evaluation is not the norm in the evaluation processes of the five governments; development programs could, if committed to ex-post evaluation, learn about sustainability by engaging with the findings of many more such evaluations; and, to increase accountability to the public and enable transparent learning, ex-post evaluations should be shared in public, easy-to-access online repositories.
Join us!
REGISTER: https://www.eventbrite.com/e/webinar-lessons-from-nordic-the-netherlands-ex-post-project-evaluations-tickets-132856426147
Your ticket purchase entitles you to the webinar, its meeting recording, associated documents, and online Sustainability Network membership for resources and discussion. Payment on a sliding, pay-as-you-can scale.
by Jindra Cekan | Dec 18, 2020 | CSR, Evaluation, Sustainability, Sustainable development
Interview repost: Why Measure & Evaluate Corporate Sustainability Projects?
A CSR firm in Central Europe asked me to talk about sustainability, evaluation, and Corporate Social Responsibility. We had a terrific interview – please click thru to: https://www.besmarthead.com/cs/media/post/smartheadtalks-2-jindra-cekan-eng
The topics we discussed include:
* Sustainability in CSR involves value creation through creating benefits for both companies and consumers
* Projects feeding into the United Nations Sustainable Development Goals may not be sustained over the long term, and we must return to evaluate what could be sustained and what emerged ex-post
* Scaling measurable and meaningful impact investment through the transformation of development nonprofits’ programming and approach
* Three sustainability activities, recommended for any company anywhere in the world, that companies can start doing right away
And Very Happy Holidays to all… may our lives and world be more sustainable in 2021….
by Jindra Cekan | Dec 10, 2020 | Catholic Relief Services (CRS), Credit, Evaluation, ex-post evaluation, microcredit, Microenterprise, Sustainable development
Reblog: ITAD/CRS “Lessons from an ex-post evaluation – and why we should do more of them”
Reposted from: https://www.itad.com/article/lessons-from-an-ex-post-evaluation-and-why-we-should-do-more-of-them/
We’ll be sharing learning from the evaluation alongside CRS colleagues at the Savings Led Working Group session on Members Day of the SEEP Annual Conference – so pop by if you would like to learn more.
What is an ex-post evaluation?
Ex-post evaluations are (by definition) done after the project has closed. There is no hard and fast rule on exactly when an ex-post evaluation should be done, but as the aim of an ex-post evaluation is to assess the sustainability of results and impacts, some time will usually need to have passed to make this assessment.
A little bit about EFI
EFI was a Mastercard Foundation-funded program in Burkina Faso, Senegal, Zambia and Uganda whose core goal was to ensure that vulnerable households experienced greater financial inclusion. Within EFI, Private Service Providers (PSPs) formed and facilitated savings groups using CRS’ Savings and Internal Lending Communities (SILC) methodology, with the SILC groups responsible for paying the PSP a small fee for the services that they provide.
This payment is intended to improve sustainability by incentivising the groups’ facilitators to form and train new groups, as well as providing continued support to existing groups, beyond the end of the project.
A little bit about the evaluation
So, if the aim of the PSP model is sustainability, you need an evaluation that can test this! Evaluation at the end of project implementation can assess indications of results that might be sustained into the future. However, if you wait until some time has passed after activities have ended, then there is much clearer evidence on which activities and results are ongoing – and how likely these are to continue. Uganda was also a great test case for the evaluation because CRS hadn’t provided any follow-on support.
Our evaluation set out to assess the extent to which the EFI-trained PSPs and their SILC groups were still functioning 19 months after the programme ended and the extent to which the PSP model had contributed to the sustainability of activities and results.
What the ex-post evaluation found
We found a handful of findings that were only possible because it was an ex-post evaluation:
- There were 56% more reported groups among the sampled PSPs at the time of data collection than there were at the end of the project.
- Half of the PSP networks established within the sample are still functioning (to some extent).
- PSPs continued to receive remuneration for the work that they did, 19 months after project closure. However, there were inconsistencies in frequency and scale of remuneration, as well as variation in strategies to sensitize communities on the need to pay.
This only covers a fraction of the findings but we were able to conclude that the PSP model appeared to be highly sustainable. The evaluation also found that there were challenges to sustainability which could be addressed in future delivery of the PSP model. Significantly, the PSP model was designed with sustainability in mind – and this evaluation provides good evidence that PSPs were still operating 19 months after the end of the project.
What made the evaluation possible
We get it. It isn’t always easy to do ex-post evaluations. Evaluations are usually included in donor-implementer contracts, which end shortly after the project ends, leaving implementers without the resources to go back and evaluate 18 months later. This often results in a lack of funding and an absence of project staff. This is also combined with new projects starting up, obscuring opportunities for project-specific findings and learning as it’s not possible to attribute results to a specific project.
In many ways, we were lucky. Itad implements the Mastercard Foundation’s Savings Learning Lab, a six-year initiative that supports learning among the Foundation’s savings sector portfolio programmes – including EFI. EFI closed in the Learning Lab’s second year and, with support from the Foundation and enthusiasm from CRS, we set aside some resource to continue this learning post-project. So, we had funding!
We also worked with incredibly motivated ex-EFI, CRS staff who made time to actively engage in the evaluation process and facilitate links to the PSP network, PSPs and SILC group members. So, we had the people!
And, no-one had implemented a similar PSP model in supported districts of Uganda since the end of EFI. So, we were also able to attribute!
Why we should strive to do more ex-post evaluations
These challenges, and the recognition that it isn’t always easy, don’t mean it is not possible. And with projects like EFI, where sustainability was central to the model, we would say it’s essential to assess whether the programme worked and how the model can be improved.
Unfortunately, practitioners and evaluators can shout all we like, but the onus is on funders. We need funders to carve out dedicated resource for ex-post evaluations. This is even more important for programmes that have the development of replicable and sustainable models at their core. For some projects, this can be anticipated – and planned for – at the project design stage. Other projects may show promise for learning on sustainability, unexpectedly, during implementation. Dedicated funding pots or call-down contracts for ex-post evaluations are just a couple of ways donors might be able to resource ex-post evaluations when there is a clear need for additional learning on the sustainability of project results.
This learning should lead to better decision-making, more effective use of donor funds and, ultimately, more sustainable outcomes for beneficiaries.
Other Findings:
Some of the other findings of this report on Financial Inclusion are:
RESOURCES: “Finding 1.iii. PSPs continue to receive remuneration for the work that they do; however, there are inconsistencies in frequency and scale of remuneration, as well as variety in strategies to sensitize communities on the need to pay.”
CAPACITIES: “Finding 3.ii. All networks included a core function of “collaboration, information-sharing and problem-solving”; however, networks were not sufficiently supported or incentivized to fulfill complex functions, such as PSP quality assurance or consumer protection, and their coverage area and late implementation limited the continued functioning of networks.”
PARTNERSHIP: “Finding 2.i. Only four of the 24 groups are clearly linked with other stakeholders and two were supported by EFI to create these linkages.”
Consider doing one!