Evaluating global aid projects ex-post (after closure): Evaluability, and how-to via Sustainability (and Resilience) Evaluation training materials:
How do we evaluate projects ex-post, and are all projects evaluable after donors leave? How do we learn from project documents to ascertain likely markers of sustainability (hint: see materials for Theory of Sustainability and Sustainability Ratings) that we can verify? How do we design projects to make sustainability more likely (hint: implement pre-exit the drivers in the second image, below)? How to consider evaluating resilience to climate change (hint: evaluate resilience characteristics)?
Not only are we evaluating projects ex-post, but we are also creating processes for others to follow, including this new Sustainability Framework. The left three columns project the likelihood of sustainability, which the right three columns verify. It includes the Valuing Voices ‘emerging outcomes’ from local efforts to sustain results after donors leave. SO exciting:
Ex-post Sustainability Framework
Also, just today, we published the training materials we are using in ex-posts 3 and 4 in Argentina (Ministry of the Environment and the World Bank): https://www.adaptation-fund.org/document/training-material-for-ex-post-pilots/. They include videos for those who like to listen and watch us, as well as PowerPoint slides/ PDFs and Excels of our training materials (including suggested methods) for those who like to read…
Take a look, use, and tell us what you think and what you are learning! Thanks, Jindra Cekan (Sustainability) and Meg Spearman (Resilience) and the Adaptation Fund team!
or why sustainable results are so hard to come by….
A while ago, I reacted to a discussion among development aid/cooperation evaluators about why so few NGO evaluations are publicly available. It transpired that many people do not even know what they typically look like, which is why I wrote a kind of Common Denominator Report: it covers only small evaluations, and it is self-explanatory in the sense that one understands why they are rarely published. The version below is slightly different from the original.
Most important elements of a standard evaluation report for NGOs and their donors; about twenty days of work; about 20,000 Euros budget (VAT included).
In reality, the work takes at least twice as much time as calculated and will still be incomplete and quick-and-dirty, because it cannot decently be done within the proposed framework of conditions while answering all 87 or so questions that normally figure in the ToR.
The main issues in the project/ programme, the main findings, the main conclusions, and the main recommendations, presented in a positive and stimulating way (the standard request from the Comms and Fundraising departments) and pointing the way to the sunny uplands. This summary is written after a management response to the draft report has been ‘shared with you’. The management response normally says:
this is too superficial (even if you explain that it could not be done better, given the constraints);
this is incomplete (even if you didn’t receive the information you needed);
this is not what we asked (even if you had an agreement about the deliverables);
you have not understood us (even if your informants do not agree among themselves and contradict each other);
you have not used the right documents (even if this is what they gave you)
you have got the numbers wrong; the situation has changed in the meantime (even if they were in your docs);
your reasoning is wrong (meaning we don’t like it);
the respondents to the survey(s)/ the interviews were the wrong ones (even if the evaluand suggested them);
we have already detected these issues ourselves, so there is no need to put them in the report (meaning don’t be so negative).
Who the commissioning organisation is, what they do, who the evaluand is, what the main questions for the evaluators were, who got selected to do this work, and how they understood the questions and the work in general.
In the Terms of Reference for the evaluation, many commissioners already state how they want an evaluation done. This list is almost invariably forced on the evaluators, thereby reducing them from having independent status to being the ‘hired help’ from a Temp Agency:
briefings by Director and SMT [Senior Management Team] members for scoping and better understanding;
desk research leading to notes about facts/ salient issues/ questions for clarification;
survey(s) among a wider stakeholder population;
20-40 interviews with internal/ external stakeholders;
analysis of data/ information;
processing feedback on the draft report.
In the Terms of Reference, many commissioners already state which deliverable they want and in what form:
round table/ discussion of findings and conclusions;
presentation to/ discussion with selected stakeholders.
PROJECT/ PROGRAMME OVERVIEW
Many commissioners send evaluators enormous folders with countless documents, often amounting to over 3000 pages of uncurated text with often unclear status (re authors, purpose, date, audience) and more or less touching upon the facts the evaluators are on a mission to find. This happens even when the evaluators give them a short list with the most relevant docs (such as grant proposal/ project plan with budget, time and staff calculations, work plans, intermediate reports, intermediate assessments, and contact lists). Processing them leads to the following result:
According to one/ some of the many documents that were provided:
the organisation’s vision is that everybody should have everything freely and without effort;
the organisation’s mission is to work towards providing part of everything to not everybody, in selected areas;
the project’s/ programme’s ToC indicates that if wishes were horses, poor men would ride;
the project’s/ programme’s duration was four/ five years;
the project’s/ programme’s goal/ aim/ objective was to provide selected parts of not everything to selected parts of not everybody; to make sure the competent authorities would support the cause and enshrine the provisions in law; that the beneficiaries would enjoy the intended benefits, understand how to maintain them, and teach others to get, enjoy and amplify them; that the media would report favourably on the efforts, in all countries/ regions/ cities/ villages concerned; and that the project/ programme would be able to sustain itself and have a long afterlife;
the project’s/ programme’s instruments were fundraising and/ or service provision and/ or advocacy;
the project/ programme had some kind of work/ implementation plan.
This is where practice meets theory. It normally ends up in the report like this:
Due to a variety of causes:
unexpectedly slow administrative procedures;
funds being late in arriving;
bigger than expected pushback and/ or less cooperation than hoped for from authorities- competitors- other NGOs- local stakeholders;
sudden changes in project/ programme governance and/ or management;
incomplete and/ or incoherent project/ programme design;
incomplete planning of project/ programme activities;
social unrest and/ or armed conflicts;
the project/ programme had a late/ slow/ rocky start. Furthermore, the project/ programme was hampered by:
partial implementation because of a misunderstanding of the Theory of Change which few employees know about/ have seen/ understand, design and/ or planning flaws and/ or financing flaws and/ or moved goalposts and/ or mission drift and/ or personal preferences and/ or opportunism;
a limited mandate and insufficient authority for the project’s/ programme’s management;
high attrition among and/ or unavailability of key staff;
a lack of complementary advocacy and lobbying work;
patchy financial reporting and/ or divergent formats for reporting to different donors taking time and concentration away;
absent/ insufficient monitoring and documenting of progress;
little or no adjusting because of absent or ignored monitoring results/ rigid donor requirements;
limited possibilities of stakeholder engagement with birds/ rivers/ forests/ children/ rape survivors/ people in occupied territories/ murdered people/ people dependent on NGO jobs & cash etc;
internal tensions and conflicting interests;
neglected internal/ external communications;
unpleasant working culture/ lack of trust/ intimidation/ coercion/ culture of being nice and uncritical/ favouritism;
the inaccessibility of conflict areas;
Although these issues had already been flagged up in:
the evaluation of the project’s/ programme’s first phase;
the midterm review;
the project’s/ programme’s Steering Committee meetings;
the project’s/ programme’s Advisory Board meetings;
the project’s/ programme’s Management Team meetings;
very little change seems to have been introduced by the project managers/ has been detected by the evaluators.
In terms of the OECD/ DAC criteria, the evaluators have found the following:
relevance – the idea is nice, but does it cut the mustard?/ others do this too/ better;
coherence – so so, see above;
efficiency – so so, see above;
effectiveness – so so, see above;
impact – we see a bit here and there, sometimes unexpected positive/ negative results too, but will the positives last? It is too soon to tell, but see above;
sustainability – unclear/ limited/ no plans so far.
If an organisation is (almost) the only one in its field, or if the cause is still a worthy cause, as evaluators you don’t want the painful parts of your assessments to reach adversaries. This also explains the vague language in many reports and why overall conclusions are often phrased as:
However, the obstacles mentioned above were cleverly navigated by the knowledgeable and committed project/ programme staff in such a way that in the end, the project/ programme can be said to have achieved its goal/ aim/ objective to a considerable extent.
Galileo: “Eppur si muove” = “And yet it moves”
Most NGO commissioners make drawing up a list of recommendations compulsory. Although there is a discussion within the evaluation community about evaluators’ competence to do precisely that, many issues found in this type of evaluation have organisational, not content, origins. The corresponding recommendations are rarely rocket science and could be formulated by most people with basic organisational insights or a bit of public service or governance experience. Where content is concerned, many evaluators are selected because of their thematic experience and expertise, so it is not necessarily wrong to make suggestions.
They often look like this:
Project/ programme governance
limit the number of different bodies and make remit/ decision making power explicit;
have real progress reports;
have real meetings with a real agenda, real documents, real minutes, real decisions, and real follow-up;
Project/ programme management
review and streamline/ rationalise structure to reflect strategy and work programme;
give project/ programme leaders real decision making and budgetary authority;
have real progress meetings with a real agenda, real minutes, real decisions, and real follow-up;
implement what you decide, but monitor if it makes sense;
consult staff on recommendations/ have learning sessions;
draft implementation plan for recommendations;
carry them out;
Processes and Procedures
get staff agreement on them;
commit them to paper;
stick to them – but not rigidly;
Obviously, if we don’t get organisational structure and functioning, programme or project design, implementation, monitoring, evaluation, and learning right, there is scant hope for the longer-term sustainability of the results that we should all be aiming for.
Ex-post Eval Week: Measuring sustainability post-program –go in and stay for the learning! By Holta Trandafili
Reblog from AEA: https://aea365.org/blog/ex-post-eval-week-measuring-sustainability-post-program-go-in-and-stay-for-the-learning-by-holta-trandafili/ January 21, 2021
Greetings, I am Holta Trandafili, a researcher and evaluator captivated by sustainability theories and the sustainment of results. I believe that a thoughtful, systematic inquiry of what happens after an intervention ends adds value to what we know about sustainability. Since 2015 I have co-led ten post-program evaluations (also known as ex-posts) in Uganda, Kenya, Sri Lanka, India, Myanmar, and Bolivia. Their findings point to questions and issues of theory, measurement, and sustainability expectations relevant to any program:
To what do we compare results to judge success? Is 60% of community groups or water points being operational three years after closeout a good result? Should it be 87% or 90%? Why? Should we use the end-line as the measuring yardstick, especially as contexts change? Whose view of success counts?
How long should we expect results, the community groups left behind, or activities to continue post-program? Two years, ten years, or forever? Why?
Is going back once enough to make a judgment on sustainability? What would we find if we went back in 2020 to places where we evaluated ex-post in 2015 or even 2019?
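The endline-as-yardstick question above can be made concrete with a little arithmetic. The sketch below is a hypothetical illustration (the function name and counts are my own, not from any evaluation standard): it simply expresses ex-post results as a share of endline results, which is one possible yardstick, and only one.

```python
def sustainment_ratio(operational_expost: int, operational_endline: int) -> float:
    """Share of endline results still in place at the ex-post visit,
    e.g. water points working three years after closeout versus at endline.
    A value of 1.0 means full sustainment against this yardstick. Note that
    the ratio says nothing about whether the endline itself was a fair
    baseline, or whose view of success counts.
    """
    if operational_endline <= 0:
        raise ValueError("endline count must be positive")
    return operational_expost / operational_endline

# 60 of 100 endline water points still operational three years on:
print(sustainment_ratio(60, 100))  # 0.6
```

Whether 0.6 is "good" is exactly the open question the blog raises: the arithmetic is trivial, the benchmark is not.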
Lessons Learned: Here are my reflections and resources on sustainability:
To the enthusiastic evaluators ready to start ex-posts
Lesson learned: Organizations often carry out ex-posts for accountability. However, greater wealth lies in learning. Make learning part of your evaluation objectives. It took my organization 5 years from the first ex-post to have more open conversations and share our sustainability learning on what to improve: how we design, transition, and measure programs’ impact. Now we are genuinely more accountable.
Get involved: Don’t lose heart if your first ex-posts prove difficult to conduct, have mixed results, or unearth new questions and insights on sustainability. You are not alone. Find another evaluator who has gone through an ex-post experience and ask them to write a blog, present at a conference, write guidelines, attend a course, or merely meet to vent and dream.
To those already fighting to mainstream ex-post measurement in their organizations or their clients
Mainstreaming ex-post evaluations is commendable for any institution. In this process we should start making the case to pilot longitudinal ex-post measurements (i.e., going back not once but at several points in time). Only then can we truly unpack the issues of temporality and longevity in the sustainment of results. See JICA’s example of ex-post monitoring.
Invest in theory-driven evaluations like Realist Evaluation to unpack the hidden mechanisms behind which different types of outcomes are sustained, asking: among whom, in what contexts, how, why?
Six years. That’s how long ago I began researching proof of sustained impact(s) through ex-post project evaluation. Until now Valuing Voices has focused on aid donors. We are expanding to the private sector.
In my PhD I was sure it was a lack of researched and shared proof of successful prevention of famine that led to inaction. In Valuing Voices’ research on ex-post project evaluation, I again felt “if only they knew, they would act”. I pulled together a variety of researchers and consultants who (often pro-bono, or for limited fees) researched the shockingly rare field evaluations of what was sustained after projects closed, why, and what participants and partners did themselves to sustain impacts.
Sustaining the outcomes and achieving impacts, are, after all, what global development projects promise. These ‘sustainable development’ results are at the top (or far-right, below) of our ‘logical frameworks’. We promise the country-level partners, our taxpayers and donors, that we will achieve them, yet…
We have applied for many grants for support, unsuccessfully, and have bid on a handful more ex-post sustainability evaluations which other consultants have won – while we were disappointed, in equal measure we are happy others are learning to do this, as we share our resources freely to promote exactly such practices across the hundreds of thousands of unevaluated projects! We are currently doing an ex-post project evaluation of an agriculture value chain in Tanzania, yet only a handful are done per year. At one conference, our discussant Michael Bamberger joked we were lucky not to be found dead under a bridge for taking on such a dangerous topic. We remain undeterred, and delight in colleagues who promote such work and thank us for ours.
At the same time, several things have become apparent:
Vital lessons for how aid can do better remain unexplored, and true accountability to our country-national participants and partners ends when fixed-time, fixed-deliverable project resources are spent and the proof of accountability for money and results that donors want is filed away. Sadly, while capacity building is done throughout implementation, knowledge management about results is abysmal, as ‘our projects’ data almost always dies quietly on donor and implementer computer hard drives after close-out, rather than being accessible in-country for further learning. Go partner!
We hardly ever return after our evaluations to share findings with communities, which speaks to ‘partnerships’ not being with the participants, and we often ‘exit’ without giving ample time to hand over so that things can be sustained, e.g. local partners found, local and other international funding harnessed, etc. Learn together!
There is a real need to fund systematized methods for such evaluations, mandate access to quality baseline, midterm and final evaluations, and mandate that all projects above a certain funding level (e.g. $1mil) include funding for such evaluation and learning 2-10 years later. Many so-called ex-post evaluations are in fact delayed final evaluations, desk studies without any fieldwork, methodologically flawed comparisons, or fieldwork that does not talk to the intended ‘beneficiaries’ for such pivotal ground-level feedback. Innovate by listening!
It is unclear to us how any organization that has done an ex-post sustainability evaluation has learned from it and changed its systems, although we have been told some are ‘looking for a successful project to evaluate’, and that after a failed one, such evaluations are discontinued. We know of some (I)NGOs who are putting ex-posts into their new strategies, and two INGOs who are researching exits more – good. Be brave!
Recently, we are delighted that some new NGOs are bravely dipping their toes into their first evaluations of sustainability. The tension between accountability and learning is heightened at the prospect that implementers and donors have failed to create sustained impact. But why judge them, when all the design and systems in place reward success while projects are running (and even those don’t always show much), so that they all get congratulations and more funding for very similar projects? Who knows who is focused on sustaining impacts by funding capacities, partnerships and country-led design, implementing with feedback loops and adjusting for the long term, and helping communities evaluate us rather than how well they are fulfilling our targets? Sustaining impacts will win you funding!
Logically, there are many indications among ex-post sustainability evaluations that profitable, but low-risk and diversified, agriculture and microenterprise/ business projects are better sustained (Niger, Ethiopia, Tanzania, Nepal, etc.). This does not mean that all projects need to be profitable, but cost-recovery even in the health, education and vocational training sectors is important, as many of us know. Self-funding!
So rather than giving up on sustained impacts, we are adding another branch to the Valuing Voices tree.
My partners and I have extensively researched the need for and co-founded Impact Guild. We will work alongside NGOs and impact investors to foster:
1. FUNDING: The money available from development aid donors is shrinking in volume + value, while development financing is scaling up exponentially.
The SDGs and the Paris Agreement are prompting a massive scale-up of development financing from billions to trillions of dollars into ‘sustainable development’, yet with rare Scandinavian and Foundation exceptions, donors appear to be switching from longer-term development to humanitarian aid. Further, despite decades of experience, international and national nonprofit development implementers are mostly absent from the conversation around scaling up the flow of capital to achieve and sustain development goals. Exceptions are some members of the International NGO Impact Investing Network (AMPLIFY).
2. RESULTS: Funding for projects that can show great results (e.g. Social Impact Bonds/ Development Impact Bonds, which are in fact pay-for-performance instruments), even sustained impacts from partnering with local small and medium enterprises, national-level ministries, and local NGOs. For far too long, implementers have been able to get funding for projects with mediocre results; impact investors are raising the bar, and even donors are helping hedge risk. This includes M&E ‘impact’ value that rigorously tracks results (savvy private-sector donors require counterfactual/ control group data, isolating results attributable to the intervention).
3. LEARNING: Impact Investors have a lot to learn from non-profits and aid donors as well.
They talk about impact but too often that is synonymous with generic results, while International and National nonprofits (NGOs) have detailed, grassroots systems in place;
Most seem to be content – for now – to invest in the 17 Sustainable Development Goal areas (e.g. vetting investable projects by screening criteria of not only getting a financial return, but also by broad sectoral investments, e.g. poverty, hunger, climate etc.). Many claim they have effected change, without data to prove it. The SDGs are slowly creating indicators to address this, and investors also need to be brought along to differentiate between corporate efficiency activities for their operations and those that effect change at the output, outcome and impact levels in communities;
There are still large leaps of logic in claims among investors, and some know that data is lacking to prove good grassroots targeting and actual results that change hunger, poverty and other sectors in Africa, Asia and Latin America. Good development professionals would see that the very design would make results accessible only to the elite of that country (e.g. $1 nutrition bars are inaccessible to most of a country’s population living on an income of $2.00 a day).
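The counterfactual/ control-group data mentioned under RESULTS above can be sketched in a few lines. This is a minimal, hypothetical illustration of a difference-in-differences estimate (the function name and the numbers are mine, not from any cited study): it isolates an intervention's contribution by subtracting the change observed in a comparable non-participant group.

```python
def diff_in_diff(treated_before: float, treated_after: float,
                 control_before: float, control_after: float) -> float:
    """Difference-in-differences: the change in the treated group minus the
    change in the control group, netting out background trends (weather,
    prices, other programs) that affected both groups alike.
    """
    return (treated_after - treated_before) - (control_after - control_before)

# Illustrative household incomes: participants rose 100 -> 140 while
# comparable non-participants rose 100 -> 115 over the same period.
print(diff_in_diff(100, 140, 100, 115))  # 25
```

Under the (strong) assumption that both groups would otherwise have followed the same trend, the 25-unit difference is the part of the change attributable to the intervention rather than to the context.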
We will bring with us all we know about great potential sustained-impacts programming, such as Theory of Sustainability, looking for emerging results alongside planned ones early on, learning from failure for success, partnering successfully for country-led development, etc.
So keep watching these ‘spaces’: www.ValuingVoices.com and www.ImpactGuild.org for updates on bridging these worlds, hopefully for ever-greater sustained impacts. Let us know if you would like to partner!
What happens after the project ends?
Lessons about Funding, Assumptions and Fears (Part 3)
In part 1 and part 2 of this blog, we showcased 11 of the 18 organizations that have done post-project evaluations. While this was scratching the surface of all that is to be learned, we shared a few insights on How we do it Matters, Expect Unexpected Results and Country-national Ownership. We gained some champions in this process of sharing our findings, including Professor Zenda Ofir of South Africa, who said “we cannot claim to have had success in development interventions if the outcomes and/or impacts are not durable, or at least have a chance to sustain or endure.”
In this third blog of Lessons Learned from What Happens After the Project Ends, we turn to some of the curious factors that hold us back from undertaking more post project evaluations: Funding, Assumptions, and Fears.
Why haven’t we gone back? For the last 2+ years that Valuing Voices has been researching the issue, we have heard from colleagues: ‘we would love to evaluate post-project but we don’t have any money’, ‘donors don’t fund this’, ‘it is too expensive’[*]. Funding currently from bilateral donors such as USAID is given in 1-5 year tranches with fixed terms for completion of results and learning from them, and one-year close-out processes. Much of the canon of evaluations conducted after close-out that we amassed was from international NGOs that had used their private funds to evaluate large donor-funded projects for their own learning. Many aimed also to show leadership in sustainability and, admittedly, dazzle their funders – join them!
We fund capacity building during projects, but if we do not return to evaluate how well we have supported our partners and communities to translate this into sustainability, then we fall short. Meetings convened by INTRAC on civil society sustainability are opening new doors for joint learning about factors such as “legitimacy… leadership, purpose, values, and structures” within organizations well beyond any project’s end. The OECD’s DAC criteria for evaluating development assistance define sustainability as: “concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable”. We need to extend our view beyond the typical criterion for sustainability, which focuses primarily on continued funding.
We need funding to explore whether certain sectors lend themselves to sustainability. In addition to the cases in blog 1, a study by CARE/ OXFAM/ PACT on Cambodian savings groups finds that we have some revisions to make in how we design and implement with communities to foster sustainability in this sector, which typically promises greater sustainability because capital can be recycled. Valuing Voices blogs show indications that once we amass a greater range of post-project evaluations (funders unite), the insights gleaned can illuminate cost-efficient paths to more sustained programming, possibly leading to revisions in programming or interventions which have greater likelihood of country-ownership.
Extend the program cycle to include post-project sustainability evaluation. Rare are donors such as the Australian government (forthcoming) and USAID’s Food For Peace that commission such studies. Rare is the initiative such as 3ie that has research funds allocated by major donors to explore an aspect of impact. We miss out on key opportunities to learn from the past for improved project design if we do not return to learn how sustained our outcomes and impacts have been. We miss learning how we could better implement so more national partners could take on the task of sustaining the changes we catalyzed.
We call on donors to fund a research initiative to comprehensively review sustainability evaluations.
We call on governments to ask for this in their discussion with donors.
We call on implementers to invest in such learning to improve the quality of implementation today and sustained impact in the years to come.
Development assistance makes many assumptions about what happens after projects end in terms of people’s self-sufficiency, partners’ capacity to continue to support activities, projects’ financial independence, and people’s ability to step into the shoes of donors and carry on. Unless we take a hard look at our assumptions, we will not move from proving what we expect to learning what is actually there.
Among them are these six assumptions:
All will be well once we exit; we have implemented so well that of course national participants and partners will be ready and able to carry on without us. We may assume the only important outcomes and impacts are within our Logical Frameworks and Theories of Change. Thus there is no need to return to explore unexpected negative ones, or ways in which the people we strengthened may have innovated in unexpectedly wonderful ways. Aysel Vazirova, a fellow international consultant wrote me: “Post-project evaluations provide data for a deeper analysis of sustainability and help to appreciate numerous avenues taken by the beneficiaries in incorporating development projects into their lives. The theory of change narratives presented by a majority of development programs and projects have a rather disturbing resemblance to the structure of magic tales: (from) Lack – (to) change – (to) happy ending. Post project evaluations have a power to change a rigid structure of this narrative.”
We assume evaluations are used to inform new designs, yet dozens of colleagues have lamented that too often this does not happen in the race to new project design. But there is hope. World Wildlife Fund/UK M&E expert Clare Crawford says when following its new management standards, WWF “expects to see the recommendations of an evaluation before the next phase of design can happen (hence evaluations happen a little before the end of a strategic period). WWF-UK, when reading new program plans is mandated to verify if – and how – the recommendations of the last evaluation(s) were made use of in the new design phase. Equally we track management responses to evaluations to see how learning has been applied in current or in future work.” Such a link across the program cycle is not common in our experience, and none of the post-project sustained impact evaluations we reviewed said how learning would be used.
We may assume data from the projects we have evaluated remains accessible, yet our team member Siobhan Green has found that until recently, with the move toward open data, project data often remained the province of the donors and implementers and, to the best of our knowledge, left the country when projects closed. While some sectoral data such as health and education data remains local, we are finding in fieldwork that household-level data has been rolled up or discarded once projects close, which makes interviews difficult.
We may assume that the participants and partners are not able to evaluate projects, particularly after the fact. Being vulnerable does not mean that people are unable to share insights or assess how projects helped or not. Methods such as empowerment evaluation and evaluative thinking are powerful supports.
Some may assume that because the situation has changed in the intervening years, there is no benefit in returning to see what results remain. Change is inevitable, and sometimes more rapid or dramatic than at other times. But does that mean we shouldn’t want to understand what happened? This is the greatest disservice of all, for we are selling “sustainable development” – so how well have we designed it to be so?
We assume that learning for our own benefit is enough. A potential client brought me in last year to discuss working on a rare post-project evaluation. It was to cost hundreds of thousands of dollars and would occur in several countries. While the donor really wanted to learn what results remained more than a decade on, when I asked ‘how would the countries themselves benefit from this research and its findings?’ there was a long silence. It turns out nothing from the research would benefit or even remain in-country. No one had considered the learning needs of the countries themselves. This simply cannot continue if we are to be accountable to those we serve.
This may be the greatest barrier of all to returning to assess sustainability.
We assume our projects continue. We may be afraid of what looking will tell us about the sustainability of our efforts to save lives and livelihoods, so we only choose to publicly study what is successful. Valuing Voices has found some ‘selection bias’ across most of the post-project studies, as we repeatedly learned in our research from colleagues that organizations choose to evaluate projects that are most likely to have been successfully sustained. For instance, USAID Food For Peace’s study notes, “The countries included in this study—Bolivia, Honduras, India, and Kenya—were also chosen because of their attention to sustainability and exit.” Yet as an Appreciative Inquiry practitioner, I would argue that learning what worked best, to know what to do more of, may be the best way forward.
All too often the choice of evaluation design, and sensitivity to findings, fly in the face of learning—particularly when findings are negative. This raises fears around a discontinuation of funding (an implementer’s fear and a beneficiary’s fear; it could also be a recipient government’s fear). Yet as Bill Gates says, “your most unhappy customers are your greatest source of learning.”
Participants asked during the project cycle about interventions may be fearful of truth-telling because of perceived vulnerabilities around promised future resources, local power imbalances in control over resources, or even political imperatives to adopt a particular position. Alternatively, we may not believe them, thinking they would not tell us the truth if doing so could stop resources.
Those fears are ours.
Peter Kimeu, my wise advisor and 20-year friend and colleague from Kenya, tells us some fears we need to listen to – those that haunt our national partners and participants.
They are afraid we do not see their real desires:
“It is ‘not how many have you (the NGO) fed, but how many of us have the capability to feed ourselves and our community?’
‘How can we (country national) support our fellow citizens to take our lives and livelihoods into our own hands and excel, sustainably?’
‘What is sustainability if it isn’t expanded opportunities? Isn’t it the capability of one to make a choice of a value/ quality life out of the many choices that the opportunities present?’”
Will you help us address these challenges? Will you join us in advocating for filling the gap in the program cycle, and looking beyond it to how we design and implement with country nationals? Will you, in your own work, foster their ownership throughout and beyond? We need to fund learning from sustained impact to transparently discuss assumptions and face our fears. This is a gap we need to, and can, fill.
[*] It does not have to be. We have done these evaluations for under $170,000, all-inclusive.
Jindra Cekan, Ph.D. has used participatory methods for 30 years to connect with participants, ranging from villagers in Africa, Central/ Latin America and the Balkans to policy makers and Ministers around the world for her international clients. Their voices have informed the new Sustained and Emerging Impacts Evaluation, other M&E, stakeholder analysis, strategic planning, knowledge management and organizational learning.