Reblog: ITAD/CRS “Lessons from an ex-post evaluation – and why we should do more of them”

Reposted from: https://www.itad.com/article/lessons-from-an-ex-post-evaluation-and-why-we-should-do-more-of-them/

Even as evaluation specialists, rarely do we get the chance to carry out ex-post evaluations. We recently carried out an ex-post evaluation of Catholic Relief Services’ (CRS) Expanding Financial Inclusion (EFI) programme and believe we’ve found some key lessons that make the case for more ex-post evaluations.

We’ll be sharing learning from the evaluation alongside CRS colleagues at the Savings Led Working Group session on Members Day of the SEEP Annual Conference – so pop by if you would like to learn more.

What is an ex-post evaluation?

Ex-post evaluations are (by definition) done after the project has closed. There is no hard and fast rule on exactly when an ex-post evaluation should be done, but as the aim of an ex-post evaluation is to assess the sustainability of results and impacts, usually some time will need to have passed to make this assessment.

A little bit about EFI

EFI was a Mastercard Foundation-funded program in Burkina Faso, Senegal, Zambia and Uganda whose core goal was to ensure that vulnerable households experienced greater financial inclusion. Within EFI, Private Service Providers (PSPs) formed and facilitated savings groups using CRS’ Savings and Internal Lending Communities (SILC) methodology, with the SILC groups responsible for paying the PSP a small fee for the services that they provide.

This payment is intended to improve sustainability by incentivising the groups’ facilitators to form and train new groups, as well as providing continued support to existing groups, beyond the end of the project.

A little bit about the evaluation

So, if the aim of the PSP model is sustainability, you need an evaluation that can test this! Evaluation at the end of project implementation can assess indications of results that might be sustained into the future. However, if you wait until some time has passed after activities have ended, then there is much clearer evidence on which activities and results are ongoing – and how likely these are to continue. Uganda was also a great test case for the evaluation because CRS hadn’t provided any follow-on support.

Our evaluation set out to assess the extent to which the EFI-trained PSPs and their SILC groups were still functioning 19 months after the programme ended and the extent to which the PSP model had contributed to the sustainability of activities and results.

What the ex-post evaluation found

Here are a handful of findings that were only possible because this was an ex-post evaluation:

  • There were 56% more reported groups among the sampled PSPs at the time of data collection than there were at the end of the project.
  • Half of the PSP networks established within the sample are still functioning (to some extent).
  • PSPs continued to receive remuneration for the work that they did, 19 months after project closure. However, there were inconsistencies in frequency and scale of remuneration, as well as variation in strategies to sensitize communities on the need to pay.

This only covers a fraction of the findings but we were able to conclude that the PSP model appeared to be highly sustainable. The evaluation also found that there were challenges to sustainability which could be addressed in future delivery of the PSP model. Significantly, the PSP model was designed with sustainability in mind – and this evaluation provides good evidence that PSPs were still operating 19 months after the end of the project.

What made the evaluation possible

We get it. It isn’t always easy to do ex-post evaluations. Evaluations are usually included in donor-implementer contracts, which end shortly after the project ends, leaving implementers without the resources to go back and evaluate 18 months later. This often results in a lack of funding and an absence of project staff. On top of that, new projects start up in the same areas, obscuring opportunities for project-specific findings and learning because results can no longer be attributed to a specific project.

In many ways, we were lucky. Itad implements the Mastercard Foundation’s Savings Learning Lab, a six-year initiative that supports learning among the Foundation’s savings sector portfolio programmes – including EFI. EFI closed in the Learning Lab’s second year and, with support from the Foundation and enthusiasm from CRS, we set aside some resource to continue this learning post-project. So, we had funding!

We also worked with incredibly motivated ex-EFI CRS staff who made time to actively engage in the evaluation process and facilitate links to the PSP network, PSPs and SILC group members. So, we had the people!

And, no-one had implemented a similar PSP model in supported districts of Uganda since the end of EFI. So, we were also able to attribute!

Why we should strive to do more ex-post evaluations

These challenges, and the recognition that it isn’t always easy, don’t mean it is not possible. And with projects like EFI, where sustainability was central to the model, we would say it’s essential to assess whether the programme worked and how the model can be improved.

Unfortunately, practitioners and evaluators can shout all we like, but the onus is on funders. We need funders to carve out dedicated resource for ex-post evaluations. This is even more important for programmes that have the development of replicable and sustainable models at their core. For some projects, this can be anticipated – and planned for – at the project design stage. Other projects may show promise for learning on sustainability, unexpectedly, during implementation. Dedicated funding pots or call-down contracts for ex-post evaluations are just a couple of ways donors might be able to resource ex-post evaluations when there is a clear need for additional learning on the sustainability of project results.

This learning should lead to better decision making, more effective use of donor funds and, ultimately, more sustainable outcomes for beneficiaries.

Other Findings:

Some of the other findings of this report on Financial Inclusion are:

RESOURCES: “Finding 1.iii. PSPs continue to receive remuneration for the work that they do; however, there are inconsistencies in frequency and scale of remuneration, as well as variety in strategies to sensitize communities on the need to pay.”

CAPACITIES: “Finding 3.ii. All networks included a core function of ‘collaboration, information-sharing and problem-solving’; however, networks were not sufficiently supported or incentivized to fulfill complex functions, such as PSP quality assurance or consumer protection, and their coverage area and late implementation limited the continued functioning of networks.”

PARTNERSHIP: “Finding 2.i. Only four of the 24 groups are clearly linked with other stakeholders and two were supported by EFI to create these linkages.”

Consider doing one!

What’s likely to ‘stand’ after we go? A new consideration in project design and evaluation

This spring I had the opportunity not only to evaluate a food security project but also to use the knowledge gleaned for the follow-on project design. This Ethiopian Red Cross (ERCS) project, “Building Resilient Community: Integrated Food Security Project to Build the Capacity of Dedba, Dergajen & Shibta Vulnerable People to Food Insecurity” (with Federation and Swedish Red Cross support), targeted 2,259 households in Dedba, Dergajen and Shibta through the provision of crossbreed cows, ox fattening, sheep/goats, beehives and poultry, which were to be repaid in cash over time, as well as water and agriculture/seedling inputs for environmental resilience. ERCS had been working with the Ethiopian government to provide credit for these food security inputs to households in Tigray. During this evaluation, we met with 168 respondents (8% of total project participants).

 

Not only were we looking for food consumption impacts (which were very good) and income impacts (good), we also probed for self-sustainability of activities. My evaluation team and I asked 52 of these participants more in-depth questions on income and self-sustainability preferences. In Tigray, Ethiopia, we used participatory methods to learn what activities they felt they could best sustain themselves after they repaid the credit and the project moved on to new participants and communities.


We also asked them to rank which input provided the greatest source of income. The largest incomes (above 30,000 birr, or about $1,500) were earned from dairy and ox fattening, while a range of dairy, oxen, shoats and beehives provided over half of our respondents (40 people) with smaller amounts, between 1,000 and 10,000 birr ($50 to $500).

And even while 87% of total loans were for ox fattening, dairy cows (and beehives), which brought in far more income, and only 11% of loans were for sheep/goats (shoats) and 2% for poultry, the self-sustainability feedback was clear: poultry and shoats (and to a lesser degree, ox fattening) were what men and women felt they could self-sustain. In descending order, the vast majority of participants prioritized these activities.

To learn more about our discussion of how Ethiopian participants can self-monitor, see this blog.

So how can such a listening and learning approach feed program success and sustainability? We need to sit with communities to discuss the project’s objectives during design plus manage our/ our donors’ impact expectations:

1) If raising income in the short term is the goal, the project could have offered only dairy and ox fattening to the communities, as these generated the largest income gains. Note that fewer participants took this risk, as the credit for these assets was costly.

2) If the project took a longer view, investing in what communities felt they could self-sustain, then poultry and sheep/goats were the activities to promote. This is because more people (especially women, who preferred poultry 15:1 and shoats 2:1 compared to men) could afford these smaller amounts of credit as well as the feed to sustain the animals.

3) In order to learn about true impacts we must return post-project close to confirm the extent to which income increases continued, as well as the degree to which communities were truly able to self-sustain the activities the project enabled them to launch. How do our goals fit with the communities’?

What is important is seeing community actors, our participants, as the experts. It is their lives and livelihoods, and none of us in international development lives there; they do…

What are your questions and thoughts? Have you seen such tradeoffs? We long to know…

[*NB: There were other inputs (water, health, natural resource conservation) which are separate from this discussion.]

Stepping up community self-sustainability, one [Ethiopian] step at a time

Having just come back from evaluation and design fieldwork for an Ethiopian Red Cross (ERCS)/ Swedish Red Cross/ Federation of the Red Cross and Red Crescent project, the power of communities is still palpable in my mind. They know what great impact looks like. They know what activities they can best sustain themselves. It’s up to us to ask, listen and learn from them and support their own monitoring/ evaluating/ reporting. It’s up to us to share such learning with others and to act on it everywhere.

There are a myriad of possible sustainability indicators, and the outcome indicators below, suggested by 116 rural participants from Tigray, Ethiopia, seem to fall into two categories of expected changes: Assets and Life Quality (Table 1). As the food security/ livelihood project extended credit for animal purchases, it is logical that indicators tracking increased income, savings, assets and home investments, plus expenditures on food and electricity, appeared.

We gleaned this from discussions with participants, asking them, “What can we track together that would show that we had impact?” Our question led to a spirited discussion not only of what was traceable, but also of what could be publicly posted and ‘ground-truthed’ by the community. Discussing indicators led to even deeper conversations about the causes of food insecurity, which were illuminating to staff. What was surprising, for instance, was the extent to which families saw changing seasonal child field-labor practices in favor of 100% child school attendance as great indicators. School attendance (or lack thereof) was dependent on families’ need for children’s seasonal labor in the fields. Community members said they knew who sent their children or not, which not only ‘cleaned’ the publicly posted data but also triangulated implementer surveys and opened room for discussions of vulnerability.

 

[Table 1: Self-sustainability outcome indicators suggested by participants, grouped into Assets and Life Quality]

 

 

Not only is this exciting for the project’s outcome tracking but, even more importantly, our team proposed to create a community self-monitoring system, as suggested by Causemann/Gohl in an IIED PLA Notes article, “Tools for measuring change: self-assessment by communities”, used in Africa and Asia. This learning, management and reporting process will fill a gaping need, as current “monitoring systems serve only for donor accountability, but neither add value for poor people nor for the implementing NGOs because they do not improve effectiveness on the ground.” The authors found not only that “participatory data collection produces higher quality data in some fields than standard extractive methodologies [as] understanding the context leads to a higher accuracy of data and learning processes [which] increase the level of accountability…” but also that such shared collaboration builds mutual learning and bridge-building.

While our community members may have offered to track this publicly to please their partner, men and women discussed the idea excitedly and happily embraced self-monitoring. ERCS will discuss with communities whether to track the data monthly in notebooks or on a large chart hung in the woreda office for transparency. The data (Chart 1) would include these asset and quality-of-life indicators as well as loan repayments (tracked vertically), while households (tracked horizontally) could see who was meeting each goal (checked boxes), not meeting it fully (dashed boxes) or not meeting it at all yet (blank boxes). Community members corrected each other as they devised the indicators during our participatory research, and this openness reassures us that the public monitoring will be quite transparent as well.

 

[Chart 1: Participatory self-monitoring tracking chart]

[1]
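To make the structure of such a chart concrete, here is a minimal sketch (in Python) of how the household-by-indicator grid described above could be represented and printed. The indicator names, household labels and statuses are illustrative assumptions for this sketch, not data from the ERCS project.

```python
# Minimal sketch of a community self-monitoring chart (cf. Chart 1).
# Indicators run vertically and households horizontally; each cell is
# "met" (checked box), "partial" (dashed box) or "not yet" (blank box).
# All names and statuses below are hypothetical examples.

SYMBOLS = {"met": "[x]", "partial": "[-]", "not yet": "[ ]"}

# Hypothetical Assets / Life Quality indicators, plus loan repayment.
indicators = ["income", "savings", "school attendance", "loan repayment"]

# Hypothetical household records: household -> status for each indicator.
chart = {
    "Household A": {"income": "met", "savings": "partial",
                    "school attendance": "met", "loan repayment": "met"},
    "Household B": {"income": "partial", "savings": "not yet",
                    "school attendance": "met", "loan repayment": "partial"},
}

def print_chart(chart, indicators):
    """Print the grid so everyone can read progress at a glance."""
    print("indicator".ljust(20) + "".join(h.ljust(14) for h in chart))
    for ind in indicators:
        row = ind.ljust(20)
        for household in chart:
            row += SYMBOLS[chart[household].get(ind, "not yet")].ljust(14)
        print(row)

print_chart(chart, indicators)
```

Whether kept in notebooks or on a wall chart in the woreda office, the same simple grid logic applies: one row per indicator, one column per household, and a symbol that the whole community can verify.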

Further, what was especially satisfying was getting feedback from across the three tabias (sub-regions) on what activities they felt they could sustain themselves, irrespective of the project’s continuation. Table 2 shows which activities communities felt were most self-sustainable by households; these could form the core of the follow-on project. Sheep/goats, poultry and oxen for fattening were highly prioritized by both women and men, in addition to a few choosing improved dairy cows. The convergence of similar responses was gratifying and somewhat unexpected, as there were several other project activities. The communities’ own priorities need to be seriously considered, as currently families get only one loan each, and thus self-sustainable activities are key.

 

[Table 2: Activities communities felt they could best self-sustain, by household]

 

There is more to incorporate in future project planning by NGOs like Ethiopia’s ERCS. The NGO-IDEAs concept mentioned above also includes involving project participants in setting goals and targets themselves, differentiating between who achieved them and why, and brainstorming who and what contributed to them and what participants should do next. Peer groups, development agencies and other actors could collect and learn from the data. Imagine the empowerment were communities to design, monitor and evaluate, and tell us, as their audience!

And they must, according to ODI UK’s Watkins, who has a clear vision of how to achieve a global equity agenda for the post-2015 MDG goals. He suggests converting the principle of ‘leave no one behind’ into measurable targets. He argues that, by introducing a series of ‘stepping stone’ benchmarks, the world can set ambitious goals on equity by 2030. He writes, wisely, that “narrowing these equity deficits is not just an ethical imperative but a condition for accelerated progress towards the ambitious 2030 targets. There are no policy blueprints. However, the toolkit for governments actively seeking to narrow disparities …has to include some key elements [such as] identifying who is being left behind and why is an obvious starting point. That’s why improvements to the quality of data available to policy-makers is an equity issue in its own right”. Valuing Voices believes who creates that data is an equally compelling equity issue.

 

So how will we reach these ambitious targets by 2030? By putting in stepping stone targets, returning project design functions to the ultimate clients, the communities themselves, and matching their wants with what we long to transfer to them. In this way we will be Valuing their Voices so much that they evaluate our projects jointly with us and we can respond. That’s how it should always have been.

What are your thoughts on this? We long to know.

 

 

Sources:

[1] Ashley, H., Kenton, N., & Milligan, A. (Eds.). (2013). Tools for supporting sustainable natural resource management and livelihoods. Participatory Learning and Action, (66). Retrieved from https://pubs.iied.org/14620IIED/

[2] Watkins, K. (2013, October 17). Leaving no-one behind: An equity agenda for the post-2015 goals. Retrieved from https://www.odi.org/blogs/7924-leaving-no-one-behind-equity-agenda-post-2015-goals