Reblog: ITAD/CRS “Lessons from an ex-post evaluation – and why we should do more of them”

Reposted from: https://www.itad.com/article/lessons-from-an-ex-post-evaluation-and-why-we-should-do-more-of-them/

Even as evaluation specialists, rarely do we get the chance to carry out ex-post evaluations. We recently carried out an ex-post evaluation of Catholic Relief Services’ (CRS) Expanding Financial Inclusion (EFI) programme and believe we’ve found some key lessons that make the case for more ex-post evaluations.

We’ll be sharing learning from the evaluation alongside CRS colleagues at the Savings Led Working Group session on Members Day of the SEEP Annual Conference – so pop by if you would like to learn more.

What is an ex-post evaluation?

Ex-post evaluations are (by definition) done after the project has closed. There is no hard and fast rule on exactly when an ex-post evaluation should be done, but as the aim of an ex-post evaluation is to assess the sustainability of results and impacts, usually some time will need to have passed to make this assessment.

A little bit about EFI

EFI was a Mastercard Foundation-funded programme in Burkina Faso, Senegal, Zambia and Uganda whose core goal was to ensure that vulnerable households experienced greater financial inclusion. Within EFI, Private Service Providers (PSPs) formed and facilitated savings groups using CRS’ Savings and Internal Lending Communities (SILC) methodology, with the SILC groups responsible for paying the PSP a small fee for the services that they provide.

This payment is intended to improve sustainability by incentivising the groups’ facilitators to form and train new groups, as well as providing continued support to existing groups, beyond the end of the project.

A little bit about the evaluation

So, if the aim of the PSP model is sustainability, you need an evaluation that can test this! Evaluation at the end of project implementation can assess indications of results that might be sustained into the future. However, if you wait until some time has passed after activities have ended, then there is much clearer evidence on which activities and results are ongoing – and how likely these are to continue. Uganda was also a great test case for the evaluation because CRS hadn’t provided any follow-on support.

Our evaluation set out to assess the extent to which the EFI-trained PSPs and their SILC groups were still functioning 19 months after the programme ended and the extent to which the PSP model had contributed to the sustainability of activities and results.

What the ex-post evaluation found

Several of our findings were only possible because this was an ex-post evaluation:

  • There were 56% more reported groups among the sampled PSPs at the time of data collection than there were at the end of the project.
  • Half of the PSP networks established within the sample are still functioning (to some extent).
  • PSPs continued to receive remuneration for the work that they did, 19 months after project closure. However, there were inconsistencies in frequency and scale of remuneration, as well as variation in strategies to sensitize communities on the need to pay.

This covers only a fraction of the findings, but we were able to conclude that the PSP model appeared to be highly sustainable. The evaluation also found challenges to sustainability which could be addressed in future delivery of the PSP model. Significantly, the PSP model was designed with sustainability in mind – and this evaluation provides good evidence that PSPs were still operating 19 months after the end of the project.

What made the evaluation possible

We get it. It isn’t always easy to do ex-post evaluations. Evaluations are usually included in donor–implementer contracts, which end shortly after the project ends, leaving implementers without the resources to go back and evaluate 18 months later. This often results in a lack of funding and an absence of project staff. On top of this, new projects start up in the same areas, obscuring opportunities for project-specific findings and learning because it is no longer possible to attribute results to a specific project.

In many ways, we were lucky. Itad implements the Mastercard Foundation’s Savings Learning Lab, a six-year initiative that supports learning among the Foundation’s savings sector portfolio programmes – including EFI. EFI closed in the Learning Lab’s second year and, with support from the Foundation and enthusiasm from CRS, we set aside some resource to continue this learning post-project. So, we had funding!

We also worked with incredibly motivated ex-EFI CRS staff who made time to actively engage in the evaluation process and facilitate links to the PSP network, PSPs and SILC group members. So, we had the people!

And, no-one had implemented a similar PSP model in supported districts of Uganda since the end of EFI. So, we were also able to attribute!

Why we should strive to do more ex-post evaluations

These challenges, and the fact that it isn’t always easy, don’t mean it is not possible. And for projects like EFI, where sustainability was central to the model, we would say it’s essential to assess whether the programme worked and how the model can be improved.

Unfortunately, practitioners and evaluators can shout all we like, but the onus is on funders. We need funders to carve out dedicated resource for ex-post evaluations. This is even more important for programmes that have the development of replicable and sustainable models at their core. For some projects, this can be anticipated – and planned for – at the project design stage. Other projects may show unexpected promise for learning on sustainability during implementation. Dedicated funding pots or call-down contracts are just a couple of ways donors might be able to resource ex-post evaluations when there is a clear need for additional learning on the sustainability of project results.

This learning should lead to better decision-making, more effective use of donor funds and, ultimately, more sustainable outcomes for beneficiaries.

Other Findings:

Some of the other findings of this report on Financial Inclusion are:

RESOURCES: “Finding 1.iii. PSPs continue to receive remuneration for the work that they do; however, there are inconsistencies in frequency and scale of remuneration, as well as variety in strategies to sensitize communities on the need to pay.”

CAPACITIES: “Finding 3.ii. All networks included a core function of “collaboration, information-sharing and problem-solving”; however, networks were not sufficiently supported or incentivized to fulfill complex functions, such as PSP quality assurance or consumer protection, and their coverage area and late implementation limited the continued functioning of networks.”

PARTNERSHIP: “Finding 2.i. Only four of the 24 groups are clearly linked with other stakeholders and two were supported by EFI to create these linkages.”

Consider doing one!

What happens after the project ends?  Lessons about Funding, Assumptions and Fears (Part 3)

In part 1 and part 2 of this blog, we showcased 11 of the 18 organizations that have done post-project evaluations. While this only scratched the surface of all there is to learn, we shared a few insights on How We Do It Matters, Expect Unexpected Results and Country-National Ownership. We gained some champions in this process of sharing our findings, including Professor Zenda Ofir of South Africa, who said “we cannot claim to have had success in development interventions if the outcomes and/or impacts are not durable, or at least have a chance to sustain or endure.”

In this third blog of Lessons Learned from What Happens After the Project Ends, we turn to some of the curious factors that hold us back from undertaking more post project evaluations: Funding, Assumptions, and Fears.

Funding

  • Why haven’t we gone back? In the 2+ years that Valuing Voices has been researching the issue, we have heard from colleagues: ‘we would love to evaluate post-project but we don’t have any money’, ‘donors don’t fund this’, ‘it is too expensive’ [*]. Funding from bilateral donors such as USAID is currently given in 1–5 year tranches, with fixed terms for completing results and learning from them and one-year close-out processes [1]. Much of the canon of evaluations conducted after close-out that we amassed came from international NGOs that had used their private funds to evaluate large donor-funded projects for their own learning. Many also aimed to show leadership in sustainability and, admittedly, to dazzle their funders – join them!
  • We fund capacity building during projects, but if we do not return to evaluate how well we have supported our partners and communities to translate this into sustainability, then we fall short. Meetings convened by INTRAC on civil society sustainability are opening new doors for joint learning about factors such as “legitimacy… leadership, purpose, values, and structures” within organizations well beyond any project’s end [2]. The OECD’s DAC criteria for evaluating development assistance define sustainability as “concerned with measuring whether the benefits of an activity are likely to continue after donor funding has been withdrawn. Projects need to be environmentally as well as financially sustainable” [3]. We need to extend our view beyond the typical sustainability criterion of continued funding.
  • We need funding to explore whether certain sectors lend themselves to sustainability. In addition to the cases in blog 1, a study by CARE/Oxfam/Pact on Cambodian savings groups finds that we have some revisions to make in how we design and implement with communities to foster sustainability in this sector, which typically promises greater sustainability because capital can be recycled [4]. Valuing Voices blogs show indications that once we amass a greater range of post-project evaluations (funders unite), the insights gleaned can illuminate cost-efficient paths to more sustained programming, possibly leading to revisions in programming or interventions with a greater likelihood of country ownership.
  • Extend the program cycle to include post-project sustainability evaluation. Rare are donors such as the Australian government (forthcoming) and USAID’s Food For Peace that commission such studies. Rare is the initiative such as 3ie that has research funds allocated by major donors to explore an aspect of impact. We miss out on key opportunities to learn from the past for improved project design if we do not return to learn how sustained our outcomes and impacts have been. We miss learning how we could better implement so more national partners could take on the task of sustaining the changes we catalyzed.
  • We call on donors to fund a research initiative to comprehensively review sustainability evaluations.
  • We call on governments to ask for this in their discussion with donors. 
  • We call on implementers to invest in such learning to improve the quality of implementation today and sustained impact in the years to come.

Assumptions

Development assistance makes many assumptions about what happens after projects end: people’s self-sufficiency, partners’ capacity to continue to support activities, projects’ financial independence, and people’s ability to step into the shoes of donors and carry on. Unless we take a hard look at our assumptions, we will not move from proving what we expect to learning what is actually there.

Among them are these six assumptions:

  • All will be well once we exit; we have implemented so well that of course national participants and partners will be ready and able to carry on without us. We may assume the only important outcomes and impacts are within our Logical Frameworks and Theories of Change – thus there is no need to return to explore unexpected negative ones, or ways in which the people we strengthened may have innovated in unexpectedly wonderful ways. Aysel Vazirova, a fellow international consultant, wrote me: “Post-project evaluations provide data for a deeper analysis of sustainability and help to appreciate numerous avenues taken by the beneficiaries in incorporating development projects into their lives. The theory of change narratives presented by a majority of development programs and projects have a rather disturbing resemblance to the structure of magic tales: (from) Lack – (to) change – (to) happy ending. Post-project evaluations have a power to change a rigid structure of this narrative.”
  • We assume evaluations are used to inform new designs, yet dozens of colleagues have lamented that too often this does not happen in the race to design the next project. But there is hope. World Wildlife Fund/UK M&E expert Clare Crawford says when following its new management standards, WWF “expects to see the recommendations of an evaluation before the next phase of design can happen (hence evaluations happen a little before the end of a strategic period). WWF-UK, when reading new program plans, is mandated to verify if – and how – the recommendations of the last evaluation(s) were made use of in the new design phase. Equally we track management responses to evaluations to see how learning has been applied in current or in future work.” Such a link across the program cycle is not common in our experience, and none of the post-project sustained impact evaluations we reviewed said how learning would be used.
  • We may assume data remains accessible from the projects we have evaluated, yet our team member Siobhan Green has found that, until the recent move toward open data, project data often remained the province of the donors and implementers and, to the best of our knowledge, left the country when projects closed. While some sectoral data, such as health and education data, remains local, we are finding in fieldwork that household-level data has been rolled up or discarded once projects close, which makes interviews difficult.
  • We may assume that participants and partners are not able to evaluate projects, particularly after the fact. Being vulnerable does not mean that people are unable to share insights or to assess how projects helped or not. Methods such as empowerment evaluation and evaluative thinking are powerful supports [5] [6].
  • Some may assume that the situation has changed in the intervening years and that there is no benefit in returning to see what results remain. Change is inevitable, and sometimes more rapid or dramatic than at other times. But does that mean we shouldn’t want to understand what happened? This is the greatest disservice of all, for we are selling “sustainable development” – so how well have we designed it to be so?
  • We assume that learning for our own benefit is enough. A potential client brought me in last year to discuss working on a rare post-project evaluation. It was to cost hundreds of thousands of dollars and would occur in several countries. The donor really wanted to learn what results remained more than a decade on, but when I asked ‘how would the countries themselves benefit from this research and its findings?’ there was a long silence. It turned out that nothing from the research would benefit, or even remain in, the countries. No one had considered the learning needs of the countries themselves. This simply cannot continue if we are to be accountable to those we serve.

Fears

This may be the greatest barrier of all to returning to assess sustainability.

  • We assume our projects continue. We may be afraid of what looking will tell us about the sustainability of our efforts to save lives and livelihoods, so we only choose to publicly study what is successful. Valuing Voices has found some ‘selection bias’ across most of the post-project studies, as we repeatedly learned in our research from colleagues that organizations choose to evaluate projects that are most likely to be successfully sustained. For instance, USAID Food For Peace’s study notes, “The countries included in this study—Bolivia, Honduras, India, and Kenya—were also chosen because of their attention to sustainability and exit.” Yet as an Appreciative Inquiry practitioner, I would argue that learning what worked best, to know what to do more of, may be the best way forward.
  • All too often, the choice of evaluation design and sensitivity to findings fly in the face of learning – particularly when findings are negative. This raises fears around a discontinuation of funding (an implementer’s fear, a beneficiary’s fear, and possibly a recipient government’s fear too). Yet as Bill Gates says, “your most unhappy customers are your greatest source of learning.”
  • Participants asked about interventions during the project cycle may be fearful of truth-telling because of perceived vulnerabilities around promised future resources, local power imbalances in control over resources, or even political imperatives to adopt a particular position. Alternatively, we may not believe them, suspecting they would not tell us the truth if doing so might stop resources.

Those are ours.

  • Peter Kimeu, my wise advisor and 20-year friend and colleague from Kenya, tells us of some fears that we need to listen to – those that haunt our national partners and participants.

They are afraid we do not see their real desires:

  • “It is ‘not how many have you (the NGO) fed, but how many of us have the capability to feed ourselves and our community?’
  • ‘How can we (country national) support our fellow citizens to take our lives and livelihoods into our own hands and excel, sustainably?’
  • What is sustainability if it isn’t expanded opportunities? Isn’t it the capability of one to make a choice of a value/quality life out of the many choices that the opportunities present?”

Will you help us address these challenges? Will you join us in advocating for filling the gap in the program cycle, and in looking beyond it to how we design and implement with country nationals? Will you, in your own work, foster their ownership throughout and beyond? We need to fund learning from sustained impact, to transparently discuss assumptions, and to face our fears. This is a gap we need to – and can – fill.

Sources:

[1] Capable Partners Program & FHI 360. (2010). Essential NGO Guide to Managing Your USAID Award: Chapter 6 – Close Out. Retrieved from https://www.ngoconnect.net/sites/default/files/resources/Essential%20NGO%20Guide%20-%20Chapter%206%20-%20Close%20Out.pdf

[2] Hayman, R. (2014, November 5). Civil society sustainability: Stepping up to the challenge. Retrieved from https://www.intrac.org/civil-society-sustainability-stepping-challenge/

[3] OECD. (n.d.). DAC Criteria for Evaluating Development Assistance. Retrieved 2015, from https://web.archive.org/web/20151206171605/http://www.oecd.org/dac/evaluation/daccriteriaforevaluatingdevelopmentassistance.htm

[4] Emerging Markets Consulting. (2013). Sustainability of Savings Group Programs in Cambodia for CARE, Oxfam, and Pact. Retrieved from https://mangotree.org/Resource/Sustainability-of-Savings-Group-Programs-in-Cambodia-for-CARE-Oxfam-and-Pact

[5] Better Evaluation. (n.d.). Empowerment Evaluation. Retrieved from https://www.betterevaluation.org/plan/approach/empowerment_evaluation

[6] Griñó, L., Levine, C., Porter, S., & Roberts, G. (Eds.). (2016). Embracing Evaluative Thinking for Better Outcomes: Four NGO Case Studies. Retrieved from https://www.theclearinitiative.org/resources/embracing-evaluative-thinking-for-better-outcomes-four-ngo-case-studies

[*] It does not have to be. We have done these evaluations for under $170,000, all-inclusive.