Listening better… for more sustainable impact

Are we listening better? Maybe. As Irene Guijt notes on Better Evaluation, Keystone’s work on ‘constituent voice’ can "shift power dynamics and make organizations more accountable to primary constituents”. For example, "organisations can compare with peers to trigger discussions on what matters to those in need… in (re)defining success and ‘closing the loop’ with a response to feedback [on the project], feedback mechanisms can go well beyond upward accountability."

There are impressive new toolkits available to elicit and hear participant voices about perceived outcomes and impacts, such as the People First Impact Method and NGO IDEAS' Monitoring Self-Effectiveness.  As People First states, "Across the aid sector, the voices of ordinary people are mostly not being heard. Compelling evidence shows how the aid structure unwittingly sidelines the people whom we aim to serve. Important decisions are frequently made from afar and often based on limited or inaccurate assumptions. As a result, precious funds are not always spent in line with real priorities, or in ways that should help people build their own confidence and abilities…. As a sector, we urgently need to work differently." These toolkits draw on 40 years of participatory and Rapid Rural Appraisal methods distilled and shared by IDS/UK's Robert Chambers, which I have used for 25 years, including recently for self-sustainability evaluation.

In addition to qualitative, participatory tools, quantitative evaluative tools still have a way to go before they are terrific at listening and learning.  Keystone did interesting work on impact evaluation (lately associated with randomized control trials comparing existing projects with comparable non-participating sites to prove impact). Their study found that "no one engaged through the research for this note is particularly happy with the current state of the art…. There is a strong appetite to improve the delivery of evaluative activities in general and impact evaluation in particular … Setting expectations by engaging and communicating early and often with stakeholders and audiences for the evaluation is critical, as is timing." So many of us believe that evaluation cannot be an afterthought: monitoring and evaluation need to be integrated into project design, with feedback loops informing implementation.

Yet this otherwise excellent article made one point that is common, yet like Alice looking through the looking glass backwards. They write that feedback is "to inform intended beneficiaries and communities (downward accountability) about whether or not, and in what ways, a program is benefiting the community". Yet it is the other way around! Only communities have the capacity to tell us how well they feel we are helping them!


Thankfully, we are increasingly willing to listen and learn about aid effectiveness. Some major actors shaping funding decisions have already thrown down the feedback gauntlet:

* As our 2013 blog asked for, Charity Navigator is now applying its new “Results Reporting” rating criteria, which include six data points regarding charities’ feedback practices. These new criteria will be factored into Charity Navigator’s star ratings from 2016.

* Heavyweight World Bank president Jim Kim has decreed that the Bank will require robust feedback from beneficiaries on all projects for which there is an identifiable beneficiary.

* The Hewlett, Ford, Packard, Rita Allen, Kellogg, JPB and LiquidNet for Good Foundations have recently come together to create the Fund for Shared Insight to catalyze a new feedback culture within the philanthropy sector.

* This February, a new report on the UK's international development agency, DFID, recommended a new direction for its aid. The development discourse has generally focused on convincing donors to boost their aid spending, when the conversation should instead be on “how aid works, how it can support development, how change happens in countries, and all of the different responses that need to come together to support that change…. One important change will be for professionals to deliver more adaptive programming and work in more flexible and entrepreneurial ways…” The report emphasized the need for development delivery to be led by local people. Commenting on ODI’s research, [DFID] said successful development examples showed “people solving problems for themselves rather than coming in and trying to manage that process externally through an aid program.”

Hallelujah!  What great listening for aid effectiveness are you seeing?

Learning from the Past… for Future Sustainability

Heading up Food Security for Catholic Relief Services (CRS) was my first international development job, from 1995 to 1999, and I have watched this organization grow in its commitment to program quality and learning/knowledge management ever since.  At the time I oversaw 17 of CRS' USAID/Food for Peace (FFP) programs.  So I was delighted not only that CRS has done an ex-post evaluation and used the findings for programming (e.g. the effectiveness of investing in a particular sector, such as supporting girls’ education within a food security program) and for advocacy (e.g. evaluation lessons from Rwandan peace-building projects seven years after the genocide informed CRS’ evolving approach to peace and justice strategies), but that I get to celebrate FFP learning too.  In addition to having consulted for USAID/PPL (Policy, Planning and Learning) and the FANTA project, all featured below, I went to Tufts University’s Fletcher School. Super to see great organizations learning about sustainability!

CRS’ 2007 project implementation guidance (ProPack II) described the aim of ex-post/sustainable impact evaluation as “to determine which project interventions have been continued by project participants on their own [which] may contribute to future program design…. it is fair to say that NGOs rarely evaluate what remains following the withdrawal of project funding [which] is unfortunate [as] important lessons can be generated regarding factors that help to ensure project sustainability.”

An excellent 2004 Catholic Relief Services ex-post evaluation in Ethiopia was featured: “Looking at the past for better programming: DAP I Ex-Post Assessment Report”. It assessed the sustainability of Agriculture/Natural Resource Management and Food-Assisted Child Survival/Community-Based Health Care programming, and was done as an internal evaluation by CRS staff and partners using document review and interviews with partners, government and communities. Results were mixed.

  • Some activities generated enough food and income that households could eat throughout the year and have some savings, making them more resilient against drought
  • Almost 100% of cropland bunding and irrigation practices for improved crop production were still being applied and buffered households during a subsequent drought
  • Health practices had also continued (e.g. trained traditional birth attendants had continued to provide services with high levels of enthusiasm and commitment, and increased levels of health care-seeking behaviors existed).

However, many other benefits and services had severely deteriorated:

  • Nearly all water committees had dissolved, and fee collection was irregular or had been discontinued
  • Many water schemes were not operational
  • The centrally managed [tree] nurseries had been abandoned (given the existing management capacity of communities and government).

CRS/Ethiopia and its partners came to see that:

  • “The potable water strategy had over-focused on the technical aspects (“hardware”) while not paying enough attention to the community organizing dimensions and support by existing government services (“software”).”
  • Even limited post-project follow-up by partners and government staff might have gone a long way towards mitigating the deterioration of project benefits and services.

What was terrific was that they “went on to use these findings and lessons learned from this ex-post evaluation to inform the design of similar projects in Ethiopia… while also raising awareness of these issues among partner staff”. The ex-post recommended increased planning for sustainability, and setting up village management and incentives for post-project maintenance.  Great learning, yet we have found few ex-posts at CRS or elsewhere.  Our industry needs to explore the issues the evaluators posed: Was the lack of sustainability due to technical, institutional or financial faults in the programming? In other words, was the lack of self-sustainability due to the design/aim of the activity itself, or to how it was implemented?

In 2013, USAID’s Food For Peace commissioned fascinating research on exit strategies.  Tufts University went to Bolivia, Honduras, India and Kenya, which were phasing out of Title II food aid, to look at how to “ensure that the benefits of interventions are sustained after they end, [as] there is little rigorous evidence on the effectiveness of different types of exit strategies used in USAID Office of Food for Peace Title II development food aid programs.” The research is to “assess the extent to which the programs’ impacts were maintained or improved and to help understand factors of success or failure in the specific exit strategies that were used.” They have made the important discernment that the effectiveness of Title II programs depends on both short-term impact and long-term sustainability.

[Report: FANTA Exit Strategies study, Bolivia (PDF)]

The FANTA project (contractor) made the following preliminary results available:

  1. Impact assessment at exit does not consistently predict sustained impact two years later…. It can be misleading.
  2. Many activities, practices, and impacts across sectors declined over the two years after exit. These declines are related to inadequate design and implementation of sustainability strategies and exit processes.
  3. There are specific ways to increase the likelihood of sustainability: sustaining service provision and beneficiary utilization of services and practices depends on three critical factors: resources, technical and management capacity, and motivation
  4. Withdrawal of free food rations or any other free input (used as an incentive) jeopardizes sustainability unless substitute incentives are considered. For instance:

    • Withdrawal of food was a disincentive for participation in and provision of [child] growth monitoring…. Resources and health system linkages are needed to sustain health activities
    • Motivation, capacity and resources are all needed to maintain water systems
    • Agriculture and Natural Resource Management suffered greatly when resource incentives disappeared

Their main recommendations are that sustainability should be built into the design from the beginning, that program cycles should be longer, and that exit should be gradual.

CRS found the same issue of incentives as a barrier, along with technical and institutional capacity, motivation and management issues.  We have much to learn… at least we’ve started Valuing Voices and asking… and eventually designing for sustainability!

What should projects accomplish… and for whom?

An unnamed international non-profit client contacted me to evaluate their resilience project mid-stream, to gauge prospects for a sustainable handover. EUREKA, I thought! After email discussions with them, I drafted an evaluation process that included learning from a variety of stakeholders, ranging from ministries and local government to the national university that were to take over the programming work, about what they thought would be most sustainable once the project ended and how, over the next two years, the project could best foster self-sustainability by country-nationals. I projected several weeks for in-depth participatory discussions with local youth groups and sentinel communities directly affected by the food security/climate change onslaught, and who benefited from resilience activities, to learn what had worked, what had not, and who would take what self-responsibility locally going forward.

Pleased with myself, I sent off a detailed proposal. The non-profit soon answered that I hadn’t fully understood my task.  In their view, the main task at hand was to determine what the country needed the non-profit to keep doing, so the donor could be convinced to extend their (U.S.-based) funding.  The question became: how could I change my evaluation to feed back this key information for their next proposal design?

Maybe it was me, maybe it was the autumn winds, maybe it was my inability to sufficiently subsume long-term sustainability questions under shorter-term non-profit financing interests that led me to drop this.  Maybe the often-unspoken elephant in the living room is the need for some non-profits to prioritize their own organizational sustainability to ‘do good’ via donor funding, rather than working for community self-sustainability.

Maybe donor/funders should share this blame, needing to push funding out and prove success at any cost to get more funding, and so the cycle goes on. As a Feedback Labs feature on a Center for Effective Philanthropy report recently stated: “Only rarely do funders ask, ‘What do the people you are trying to help actually think about what you are doing?’ Participants in the CEP study say that funders rarely provide the resources to find the answer. Nor do funders seem to care whether or not grantees are changing behavior and programs in response to how the ultimate beneficiaries respond” [1].

And how much responsibility do communities themselves hold for not balking?  Why are they so often ‘price-takers’ (in economic terms) rather than ‘price-makers’? As wise Judi Aubel asked in a recent evaluation list-serve discussion: “When will communities rise up to demand that the ‘development’ resources designed to support/strengthen them be spent on programs/strategies which correspond to their concerns/priorities??”

 

We can help them do just that by creating good conditions for them to be heard.  We can push advocates to work to ensure that the incoming Sustainable Development Goals (post-MDGs) listen to what recipient nations, more than funders, feel is sustainable. We can help their voices be heard via systems that enable donors/implementers to learn from citizen feedback, such as Keystone’s Constituent Voice practice (in January 2015 it is launching an online feedback data-sharing platform called the Feedback Commons) or GlobalGiving’s new Effectiveness Dashboard (see Feedback Labs).

We can do it locally in our work in the field, shifting the focus from our expertise to theirs, from our powerfulness to theirs. In field evaluations we can use Empowerment Evaluation [2]. We can fund feedback loops pre-RFP (requests for proposals), during project design, implementation and beyond, with the right incentives and tools for learning from community, local and national-level input, so that country-led development becomes actual practice rather than a nice platitude.  We can fund ValuingVoices’ self-sustainability research on what lasts after projects end. We can conserve project content and data in Open Data formats for long-term learning by country-nationals [3].
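On that last point, here is a minimal sketch of what conserving evaluation feedback in open, machine-readable formats could look like. It is illustrative Python only; the field names, file names and example records are hypothetical, not a prescribed standard.

```python
import csv
import json

# Hypothetical community feedback records from a project evaluation;
# the field names are illustrative, not a prescribed standard.
feedback = [
    {"community": "Village A", "activity": "poultry", "can_self_sustain": True,
     "comment": "Feed is affordable and locally available"},
    {"community": "Village B", "activity": "dairy", "can_self_sustain": False,
     "comment": "Veterinary services end when the project closes"},
]

# CSV: anyone with a spreadsheet can re-analyse the feedback after project close.
with open("community_feedback.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=list(feedback[0].keys()))
    writer.writeheader()
    writer.writerows(feedback)

# JSON: keep basic metadata alongside the records so future country-national
# evaluators know where the data came from.
with open("community_feedback.json", "w", encoding="utf-8") as f:
    json.dump({"project": "example-resilience-project",
               "collected": "2014-11",
               "records": feedback}, f, indent=2)
```

The design point is simply that years later, country-national evaluators can reopen the CSV in any spreadsheet or the JSON in any language, without proprietary tools or the original implementer's goodwill.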

 

[Photo: Women drawing water at a well in western Mali (Africare)]

 

Most of all, we can honour our participants as experts, which is what I strive to do in my work. I’ll leave you with a story from Mali. In 1991 I was doing famine-prevention research in Koulikoro, Mali, where average rainfall is 100mm a year (4 inches). I accompanied the women I was interviewing to a well that was 100m (roughly 300 feet) deep. They used pliable plastic buckets, and the first five women each drew up buckets that were 90% full. When I asked to try, they seriously gave me a bucket. I laughed, as did they, when we saw that only 20% of my bucket was full. I had splashed the other 80% out on the way up. Who’s the expert?

How are we helping them get more of what they need, rather than what we are willing to give? How are we prioritizing their needs over our organizational income? How are we #ValuingVoices?

 

Sources:

[1] The Center for Effective Philanthropy. (2014, October 27). Closing the Citizen Feedback Loop. Retrieved December 2014, from https://web.archive.org/web/20141031130101/https://feedbacklabs.org/closing-the-citizen-feedback-loop/

[2] Better Evaluation. (n.d.). Empowerment Evaluation. Retrieved December 2014, from https://www.betterevaluation.org/plan/approach/empowerment_evaluation

[3] Sonjara. (2016). Content and Data: Intangible Assets Part V. Retrieved from http://www.sonjara.com/blog?article_id=135

 

What’s likely to ‘stand’ after we go? A new consideration in project design and evaluation

This spring I had the opportunity not only to evaluate a food security project but also to use the knowledge gleaned for the follow-on project design.  This Ethiopian Red Cross Society (ERCS) project, “Building Resilient Community: Integrated Food Security Project to Build the Capacity of Dedba, Dergajen & Shibta Vulnerable People to Food Insecurity” (with Federation and Swedish Red Cross support), targeted 2,259 households in Dedba, Dergajen and Shibta through the provision of crossbreed cows, ox fattening, sheep/goats, beehives and poultry, as well as water and agriculture/seedlings for environmental resilience.   ERCS had been working with the Ethiopian government to provide credit for these food security inputs to households in Tigray, to be repaid in cash over time.  During this evaluation, we met with 168 respondents (8% of total project participants).

 

Not only were we looking for food consumption impacts (which were very good), and income impacts (good), we also probed for self-sustainability of activities. My evaluation team and I asked 52 of these participants more in-depth questions on income and self-sustainability preferences. We used participatory methods to learn what they felt they could most sustain themselves after they repaid the credit and the project moved on to new participants and communities. 
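For readers who like to check the coverage figures, here is a quick back-of-the-envelope sketch (illustrative Python only; it assumes the 2,259 targeted households are the denominator behind the roughly 8% figure cited above).

```python
# Rough sample-coverage arithmetic for the ERCS evaluation figures cited above.
targeted_households = 2259     # households targeted by the project
respondents_met = 168          # respondents met during the evaluation
in_depth_subsample = 52        # asked detailed income/self-sustainability questions

print(f"Coverage: {respondents_met / targeted_households:.1%}")       # ~7.4%, close to the 8% cited
print(f"In-depth share: {in_depth_subsample / respondents_met:.1%}")  # ~31% of respondents
```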

[Presentation: Valuing Voices slides, American Evaluation Association, October 2014]

We also asked them to rank which input provided the greatest source of income.  The largest incomes (above 30,000 birr, or about $1,500) were earned from dairy and ox fattening, while a mix of dairy, oxen, shoats and beehives provided over half of our respondents (40 people) with smaller amounts, between 1,000 and 10,000 birr ($50 to $500).

And even though 87% of total loans were for ox fattening, dairy cows (and beehives), which brought in far more income, and only 11% of loans were for sheep/goats (shoats) and 2% for poultry, the self-sustainability feedback was clear: poultry and shoats (and, to a lesser degree, ox fattening) were what men and women felt they could self-sustain. In descending order, the vast majority of participants prioritized these activities.
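To make the mismatch concrete, here is a small illustrative sketch in Python using only the rough loan shares cited above; the ranking order is qualitative, taken from participants' stated preferences, since exact vote counts are not quoted here.

```python
# Share of project credit by activity, using the rough percentages cited above.
loan_share = {
    "ox fattening / dairy / beehives": 0.87,
    "sheep and goats (shoats)": 0.11,
    "poultry": 0.02,
}

# Participants' self-sustainability ranking, in descending order as described
# above (placement only; exact vote counts are not quoted in the text).
self_sustain_rank = [
    "poultry",
    "sheep and goats (shoats)",
    "ox fattening / dairy / beehives",
]

for rank, activity in enumerate(self_sustain_rank, start=1):
    print(f"{rank}. {activity}: {loan_share[activity]:.0%} of loans")

# The inversion is the point: the activities participants felt they could
# sustain themselves received the smallest share of the credit.
```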

To learn more about how we discussed ways Ethiopian participants can self-monitor, see this blog.

So how can such a listening and learning approach feed program success and sustainability? We need to sit with communities to discuss the project’s objectives during design, and manage our own and our donors’ impact expectations:

1) If raising income in the short term is the goal, the project could have offered only dairy and ox fattening to the communities, as these raised incomes the most. Note that fewer participants took this risk, as the credit for these assets was costly.

2) If they took a longer view, investing in what communities felt they could self-sustain, then poultry and sheep/goats were the activities to promote. This is because more people (especially women, who preferred poultry 15:1 and shoats 2:1 compared to men) could afford these smaller amounts of credit, as well as the feed to sustain the animals.

3) In order to learn about true impacts, we must return after project close to confirm the extent to which income increases continued, as well as the degree to which communities were truly able to self-sustain the activities the project enabled them to launch. How do our goals fit with the communities’?

What is important is seeing community actors, our participants, as the experts. It is their lives and livelihoods, and none of us in international development lives there; they do…

What are your questions and thoughts? Have you seen such tradeoffs? We long to know…

[*NB: There were other inputs (water, health, natural resource conservation) which are separate from this discussion.]

Data for whose good?

Many of us work in international development because we are driven to serve, to make corners of the world better by improving the lives of those who live there. Many of us are driven by compassion to help directly by working ‘in the field’ with ‘beneficiary’/participants; some of us manifest our desire to help by staying in our home countries, advocating to the powers that be for more funding; while others create new technologies to help improve lives all over the world. Some of us want to use Western funds and report back to our taxpayers that funds were well spent; others want to create future markets by growing globally thriving economies.  We use data all the time to prove our case.

USAID has spent millions on USAID Forward and on monitoring and evaluation systems. Organizations such as 3ie rigorously document the projected impact of projects while they are being implemented. Japan’s JICA and the OECD are two of the rarest kinds of organizations, returning post-project to look at continued impact (as USAID did 30 years ago and then stopped).  Sadly, the World Bank and USAID have each done only one post-project evaluation in the last 20 years that drew on communities’ opinions. While a handful of non-profits have used private funds to do recent ex-post evaluations, the esteemed American Evaluation Association has (shockingly) not one resource.

Do we not care about sustained impact? Or are we just not looking in the right places with the right perspective? Linda Raftree has a blog on Big Data and Resilience. She says, “instead of large organizations thinking about how they can use data from afar to ‘rescue’ or ‘help’ the poor, organizations should be working together with communities in crisis (or supporting local or nationally based intermediaries to facilitate this process) so that communities can discuss and pull meaning from the data, contextualize it and use it to help themselves….” Respect for communities’ self-determination seems to be a key missing ingredient.

An article from the Center for Global Development cites the empowerment that data gives citizens, and the knowledge it gives our own international donors by which to steer: “Citizens: When statistical information is released to the public through a vigorous open government mechanism it can help citizens directly.  Citizens need data both to hold their government accountable and to improve their private decision-making.” (On the CGD website, see discussions of the value of public disclosure for climate policy and for AIDS foreign assistance.)

In my experience, most communities have information but are not perceived to have data unless they collect it using 'Western' methods. Having data to support and back up information, opinions and demands can serve communities in negotiations with entities that wield more power. (See the book “Who Counts? The Power of Participatory Statistics” on how to work with communities to create ‘data’ from participatory approaches.) Even if we codify qualitative (interview) data and quantify it via surveys, this is not enough if there is no political will to respond to the data and to the demands being made based on it. This is due in part to a lack of foundational respect that communities’ views count.

Occasionally, excellent thinkers at the World Bank 'get' this: "In 2000, a study by the World Bank, conducted in fifty developing countries, stated that 'there are 2.8 billion poverty experts: the poor themselves. Yet the development discourse about poverty has been dominated by the perspectives and expertise of those who are not poor … The bottom poor, in all their diversity, are excluded, impotent, ignored and neglected; the bottom poor are a blind spot in development.'" (This came from a session description for the 2014 World Bank Spring Meetings Civil Society Forum, where I presented for Valuing Voices this spring; see photo below.)

[Photo: World Bank panel, “Engaging NGOs in Development & Dialogue”, April 2014]

And as Anju Sharma’s great blog on community empowerment asks, “Why do we continue to talk merely of community “participation” in development? Why not community-driven development, or community-driven adaptation, where communities don’t just participate in activities meant to benefit them, but actually lead them?” Valuing Voices would like to add that we need participatory self-sustainability feedback data from communities documenting global aid effectiveness, ‘walking’ Busan’s talk.  Rather than our evaluating their effectiveness in carrying out our development objectives, goals, activities and proposed outcomes, let’s shift to manifesting theirs!

Our centuries-old love affair with data is hard to break.  Fine, data has to inform our actions, so let’s make it as grassroots and community-driven as possible, based on respect for the knowledge of those most affected by projects, where the rubber hits the road. That may make massive development projects targeted at hundreds of thousands uniformly… messy… but at least projects may be more efficacious, sustainable and theirs.   What do you think?