When Funders Move On (Originally published by Stanford Social Innovation Review, March 2015)

Donors and nonprofits need to learn more about how to help program participants keep progressing after the support ends.

Imagine standing in Detroit or South-Central Los Angeles. A team of experts has come to help you out of grinding poverty. Some of these experts specialize in credit issues, others in education or health or gardening. They have funding for three years, so they set up offices, create participant lists, and prioritize problems to tackle. They give you seeds, loans, and advice. And you—and others from your neighborhood—begin creating small businesses and home gardens. You and other adults learn about infant nutrition; children who live in your area get free school materials; teachers at your local schools receive extra training. Everyone begins to do better.

A year and a half passes. Another expert arrives to find out how things are going relative to the team’s projections. Some businesses are succeeding, others have faltered; some gardens are flourishing, others are neglected. You participate in a focus group, and you answer questions optimistically.

At the three-year mark, many of your neighbors are participating in this project, and tangible successes appear to be spreading. But suddenly, the experts are packing their boxes. The project office closes. The initiative has supposedly been “handed over” to the community. No one who worked for the project comes back.

 No one comes back. This is the state of affairs for too many so-called “sustainable international development” initiatives around the world, and it has to change.

As Gugelev and Stern brilliantly note in a recent SSIR article, we must be transparent about our “endgame”: Too many international development projects are bound to fixed endings and “fail to reckon with the gap between what the nonprofit can achieve and what the problem actually requires.” Due to fixed funding requirements, donors often leave when the calendar tells them to, whether or not a project has achieved the desired impact. And according to Valuing Voices research, they don’t even go back to assess the outcomes of their work or consider what (if anything) might help progress continue!

Since 2000, the US government has spent more than $280 billion on bilateral and multilateral assistance; the EU has spent $1.4 trillion. In 2002 alone, US foundations, businesses, and NGOs spent more than $34 billion overseas. And while most taxpayers believe that this spending supports “sustainable development,” our research shows that 99 percent of the nonprofit grant and for-profit contracted projects these funds enabled were not evaluated after the funding concluded. Unfortunately, this continues: In 2014, the US spent $20 billion and the EU spent $80 billion on program assistance without any plan for post-project evaluation. This does not mean the projects are not sustainable; we simply do not know.

In fact, the United States Agency for International Development (USAID)—once considered the leader in post-project evaluations assessing relevance, effectiveness, efficiency, and sustainability—has managed only one post-project evaluation in 30 years (due later this year). And although thousands of documents appear in multilateral donor database searches as evaluations, most are “desk studies”—conducted remotely and not based on new fieldwork. Of these, only a few include feedback from program participants (leading the way are the Japan International Cooperation Agency and the Organisation for Economic Co-operation and Development, which have systematically done post-project evaluations).

[Valuing Voices graphic: project evaluation timeline]

There is usually terrific monitoring and evaluation during project implementation (red) and evaluation at the start, midterm, and end (green), but virtually no one returns afterward (blue).

Why don’t we do a better job of following up? Are we afraid of the possibility of seeing poor results? If so, it’s time to face that fear. In some cases, we will surely see good results or even unanticipated positive impact—we’re missing that too.

Hewlett Foundation’s Fay Twersky implored nonprofits to “systematically solicit feedback from intended beneficiaries” in an SSIR podcast on Monitoring and Evaluation (M&E). We agree. We need to know more than whether participants’ situations are improving while a program is in full swing; we need to know what it will take for things to continue to improve after the funding goes away. Imagine the cost efficiencies we would gain by replicating activities that participants could sustain themselves, and the productivity we would gain by prioritizing activities with the largest sustainable return on investment (ROI). Now that is something impact investors could buy into.
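
To make that notion concrete, here is one hypothetical way a “sustainable ROI” comparison might look: the benefits participants keep generating after closeout, relative to what each activity cost. This is only an illustrative sketch; the activity names, dollar figures, and years sustained are invented, not drawn from any evaluation.

```python
# Hypothetical sketch: rank activities by the benefits participants keep
# generating after the project closes, relative to what each activity cost.
# All figures below are invented for illustration only.
activities = {
    # name:                 (cost,    annual benefit sustained post-closeout, years sustained)
    "home gardens":         (50_000,  30_000, 4),
    "small-business loans": (120_000, 45_000, 2),
    "teacher training":     (80_000,  20_000, 6),
}

for name, (cost, annual_benefit, years) in activities.items():
    sustained_benefit = annual_benefit * years          # value generated after closeout
    sustainable_roi = (sustained_benefit - cost) / cost  # net return per dollar spent
    print(f"{name}: sustainable ROI = {sustainable_roi:.0%}")
```

The only way to fill in numbers like these honestly is to go back after closeout and ask participants what actually lasted.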

There are positive signs on the horizon. According to Keystone Accountability, an increasing number of nonprofit organizations are committing to “making governments, NGOs, and donors more responsive to the needs of their constituents.” And funding that supports the idea of using participant feedback to improve programs and make them sustainable is on the rise, as the Center for Effective Philanthropy’s “Hearing from Those We Seek to Help” and the Fund for Shared Insight have noted. All of this could yield a very different array of endgames: ones sustained by communities, which might eventually even choose among development aid offers based on funders’ past effectiveness.

But there is still much to do. We offer the following recommendations to project implementers and donors, based on our own experience, and on our observations of several initiatives where we have seen project participants independently continuing and adapting work that was begun with external support:

  • Shift the development model from what donors and implementers think would be best to what the intended participants think is best. Design projects with them, and mandate that requests for proposals (RFPs) are designed with community involvement.
  • Document, share, and discuss which results were most sustained, what participating communities and local partners can do to sustain projects, and how design and implementation can support them so that projects become more effective and more durable.
  • Require a plan for transitioning to sustainability after closeout for any project costing more than $1 million. This should include handover plans to local nonprofits, with training and financial support; training for communities on how to manage the sustainable activities they prioritize; financing mechanisms for those activities; and report sharing in the IATI open-data format, with project data saved and stored in the cloud for global access (a minimal sketch of an IATI-style record appears after this list).
  • Do post-project sustainability evaluations on all these projects, and discuss the results widely with other funders, the government, and the private sector, including how to feed back lessons into future design.
  • Advocate for participatory input in all evaluations. This input should make up 30 percent of future evaluation findings (today it accounts for far less).
  • Consistently solicit feedback from local communities through national evaluators, both during and after projects, to better understand how the program you’re running or supporting from afar is working on the ground. Invest in building the national capacity and systems needed to make that feedback helpful for all stakeholders, including national governments.
  • Advocate for extensive civil society input into the United Nations’ Sustainable Development Goals so they serve our participants’ visions for the world they want.
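
On the IATI open-data recommendation above: as a rough illustration of what publishing a project record in that format entails, here is a minimal Python sketch that writes a bare-bones IATI-style activity file. The identifier, title, and dates are hypothetical, and a real publication would include many more elements required by the IATI activity standard (see iatistandard.org).

```python
# Minimal, illustrative sketch of an IATI-style activity record.
# Field values are hypothetical; the full IATI standard requires many more elements.
import xml.etree.ElementTree as ET

def build_iati_activity(identifier, title, start_date, end_date):
    """Build a bare-bones IATI-style <iati-activity> wrapped in <iati-activities>."""
    activities = ET.Element("iati-activities", {"version": "2.03"})
    activity = ET.SubElement(activities, "iati-activity")

    ET.SubElement(activity, "iati-identifier").text = identifier

    title_el = ET.SubElement(activity, "title")
    ET.SubElement(title_el, "narrative").text = title

    # Activity dates (IATI date type codes: 1 = planned start, 3 = planned end)
    ET.SubElement(activity, "activity-date", {"type": "1", "iso-date": start_date})
    ET.SubElement(activity, "activity-date", {"type": "3", "iso-date": end_date})

    return activities

# Hypothetical example project
tree = ET.ElementTree(build_iati_activity(
    identifier="XX-EXAMPLE-resilience-2015",
    title="Community resilience and food security project",
    start_date="2015-01-01",
    end_date="2018-12-31"))
tree.write("activity.xml", encoding="utf-8", xml_declaration=True)
```

The point is simply that once project records sit in an open, machine-readable standard like this, they can be stored publicly and revisited years after closeout, which is exactly when post-project learning needs them.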

 

As Peter Kimeu of Catholic Relief Services said to me, “It will be sustainable development if the people at community level are involved in designing and delivering their own dreams of development.”

 **************************************

Jindra Cekan (@WhatWeValue) is founder of Valuing Voices, with 28 years in international development design, monitoring, and evaluation. She has a doctorate from the Fletcher School of Law and Diplomacy; was a University of Cambridge Fellow; and works with foundation, nonprofit, and for-profit clients.

Are We Done Yet?


When are we off the hook, so to speak, for the well-being of the participants whom we said we'd make healthier, better fed, more educated, safer, and so on?

 

The United States Agency for International Development (USAID) is the main channel for American international development aid. It is also an organization interested in learning from its programming, and numerous contracts support such work. Under one such contract, USAID's Office of Food for Peace tasked FHI 360/FANTA with reviewing the agency's Title II development food aid from 2003 to 2009, covering 28 countries. The resulting Second Food Aid and Food Security Assessment (FAFSA-2) Summary found that such programs can “reduce undernutrition in young children, improve health and nutrition outcomes, and increase access to income and food,” and it also identified practices that did not work well.

 

While USAID has made enormous strides on monitoring and evaluation in the intervening six years (I was a consultant to USAID/PPL/LER in 2013-14), excellent recommendations that would support great, sustainable programs remain unfulfilled:

Recommendations 1 and 4: “USAID/FFP should develop an applied research agenda and sponsor studies that focus on the implementation of Title II programs in the field to better define what works and what does not…. [and] should select the review panel for new Title II applications… and give reviewers a ‘cheat sheet’ on interventions and approaches that USAID/FFP is and is not interested in funding because they work better or do not work as well, [and] provide this same information in the Request for Assistance” [i.e., the request for proposals].

 

Yes, all across our industry there is little learning from past evaluations for future design. Valuing Voices believes local participants and stakeholders need to be consulted to tell us what (still) works and what they want more of, not only during implementation but long after. Their voices must inform good design, since it is their lives we go there to improve, and they must be involved in shaping the original requests that non-profits then design against and fulfill. Further, the study found that only one-third of all evaluations were included in USAID's database [1], and as Valuing Voices' partner Sonjara has written on our blog, aid transparency requires data retention and access for learning to happen.

 

Recommendation #3 “USAID/FFP should include options for extensions of awards or separate follow-on awards to enable USAID/FFP to continue to support high-performing programs beyond five years and up to ten years… [as] longer implementation periods are associated with greater impact.”

 

This would address the “how much impact can we accomplish in one, three, or five years?” question that many of us at international non-profits ask ourselves. Finally, the graphic below is self-explanatory: USAID sees its role ending at close-out.

[Graphic from the FAFSA-2 Summary (fsnnetwork.org): USAID's program cycle, ending at close-out]

The crux lies in their honest statement: "It was beyond the scope and resources of the FAFSA-2 to explore in any depth the sustainability of Title II development programs after they ended." While they note the merit of having impact during the intervention itself, such as "having a positive impact on the nutritional status of the first cohort of children is of immense benefit in its own right," they go on to say that "ideally, one would like to see mothers continuing positive child feeding practices and workers continuing to deliver services long after programs end [yet] whether the [maternal child health and nutrition] interventions are sustainable beyond one generation is unknown and would require research." This is because funding is pre-programmed, fixed to end within set one-, three-, or five-year increments, and no one goes back to learn how it all turned out. This is what most needs to change: the illusion that what happens after closeout is no longer our issue, that the 'positive impact' we had while there is enough.

They are not alone. I think of NORAD, the Norwegian government's development agency, as very progressive, so I went to NORAD's website and searched for 'ex-post' evaluations (we do a lot of that at Valuing Voices). As with our World Bank blog on finding real ex-post evaluations, many different things were labeled 'ex-post': one actual evaluation in Palestine based on fieldwork that asked participants, a few that looked at institutional sustainability, and many of the 100-plus 'finds' that were simply documents recommending ex-post evaluation. This is typical of our searches of other donors. When I emailed NORAD to ask whether there were more that included participant voices, they assured me that they do conduct them. Maybe our problem is again one of definitions and taxonomy. Maybe we should call them post-project participant feedback?

Most of my colleagues would agree that the long-term food security of communities, independent of aid, is a shared goal, and that expecting short-term assistance to deliver huge impacts such as "making communities food secure" and "sustainably decreasing malnutrition" (common proposal goals) is unrealistic. We need participant voices to teach us how well we served them. We need to return, learn "what works and what does not," and Value Voices in true, sustained partnership. We all look forward to being done.

 


[1] “Another major obstacle to transparency and learning from the Title II program experience was the fact that only one-third of the final evaluations were publicly available on the Development Experience Clearinghouse (DEC), despite the requirement that Awardees post them to the DEC…. [There was a lack of] cross-cutting studies or in-depth analyses of Title II evaluation results to advance organizational learning  [and] much greater use could be made of the evaluation data for systematic reviews, meta-analyses, secondary analyses, and learning.”


What should projects accomplish… and for whom?

 


An unnamed international non-profit client contacted me to evaluate their resilience project mid-stream and gauge prospects for a sustainable handover. EUREKA, I thought! After email discussions with them, I drafted an evaluation process built around learning from a variety of stakeholders: the ministries, local government, and national university that were to take over the programming, asking what they thought would be most sustainable once the project ended and how, over the next two years, the project could best foster self-sustainability led by country nationals. I projected several weeks of in-depth participatory discussions with local youth groups and sentinel communities, those directly affected by the onslaught of food insecurity and climate change and who had benefited from the resilience activities, to learn what had worked, what had not, and who would take on which responsibilities locally going forward.

Pleased with myself, I sent off a detailed proposal. The non-profit soon answered that I hadn't fully understood my task. In their view, the main task at hand was to determine what the country needed the non-profit to keep doing, so that the donor could be convinced to extend its (U.S.-based) funding. The question became: how could I change my evaluation to feed back this key information for the next proposal design?

Maybe it was me, maybe it was the autumn winds, maybe it was my inability to subsume long-term sustainability questions under shorter-term non-profit financing interests, but I dropped the assignment. Maybe the often-unspoken elephant in the room is that some non-profits need to prioritize their own organizational sustainability, 'doing good' via donor funding, over working for community self-sustainability.

Maybe donors and funders share the blame, needing to push funding out the door and prove success at any cost in order to win more funding, and so the cycle goes on. As a Feedback Labs feature on a Center for Effective Philanthropy report recently stated: “Only rarely do funders ask, ‘What do the people you are trying to help actually think about what you are doing?’ Participants in the CEP study say that funders rarely provide the resources to find the answer. Nor do funders seem to care whether or not grantees are changing behavior and programs in response to how the ultimate beneficiaries respond” [1].

And how much responsibility do communities themselves hold for not balking? Why are they so often 'price-takers' (in economic terms) rather than 'price-makers'? As the wise Judi Aubel asked in a recent evaluation listserv discussion: “When will communities rise up to demand that the ‘development’ resources designed to support/strengthen them be spent on programs/strategies which correspond to their concerns/priorities?”

 

We can help them do just that by creating good conditions for them to be heard. We can push advocates to work to ensure that the incoming Sustainable Development Goals (the post-MDG agenda) reflect what recipient nations, more than funders, consider sustainable. We can help their voices be heard via systems that enable donors and implementers to learn from citizen feedback, such as Keystone's Constituent Voice practice (in January 2015 Keystone is launching the Feedback Commons, an online feedback data-sharing platform) or GlobalGiving's new Effectiveness Dashboard (see Feedback Labs).

We can do it locally in our work in the field, shifting the focus from our expertise to theirs, from our power to theirs. In field evaluations, we can use Empowerment Evaluation [2]. We can fund feedback loops before the RFP (request for proposals), during project design and implementation, and beyond, with the right incentives and tools for learning from community, local, and national-level input, so that country-led development becomes a reality rather than a nice platitude. We can fund Valuing Voices' self-sustainability research on what lasts after projects end. We can conserve project content and data in open-data formats for long-term learning by country nationals.

 

[Photo: women drawing water at a well, Mali (Africare)]

 

Most of all, we can honor our participants as experts, which is what I strive to do in my work. I'll leave you with a story from Mali. In 1991 I was doing famine-prevention research in Koulikoro, Mali, where average rainfall is 100 mm (4 inches) a year. I accompanied the women I was interviewing to a well that was 100 m (about 300 feet) deep. They used pliable plastic buckets, and the first five women each drew up a bucket that was 90 percent full. When I asked to try, they earnestly handed me a bucket. I laughed, as did they, when we saw that my bucket came up only 20 percent full; I had splashed the other 80 percent out on the way up. Who's the expert?

How are we helping them get more of what they need, rather than what we are willing to give? How are we prioritizing their needs over our organizational income? How are we #ValuingVoices?

 

Sources:

[1] The Center for Effective Philanthropy. (2014, October 27). Closing the Citizen Feedback Loop. Retrieved December 2014, from https://web.archive.org/web/20141031130101/https://feedbacklabs.org/closing-the-citizen-feedback-loop/

[2] Better Evaluation. (n.d.). Empowerment Evaluation. Retrieved December 2014, from https://www.betterevaluation.org/plan/approach/empowerment_evaluation

[3] Sonjara. (2016). Content and Data: Intangible Assets Part V. Retrieved from http://www.sonjara.com/blog?article_id=135