Embedding Sustainability Everywhere – All Five Slices Now
It has been a tumultuous year, and next year does not look like it will offer much stability as a respite. As domestic concerns loom larger in two huge economies, the US and the UK, questions about the role foreign aid will play abound in conversations around the world.
However 2017’s transitions may transform our work, for now I do have some good news:
1) Sustainability can be cheap. It is far cheaper, in fact, to design for sustainability, to create feedback loops checking on sustainability through the eyes of our partners and participants, and to monitor for sustainability than to assume it will happen, and far cheaper than finding out that funds could have had far greater impact had we valued their voices in the first place.
In our work to help our clients and partners fund, design, implement and monitor/evaluate with sustainability in mind, we created what we hope is a helpful tool (guidance forthcoming).
a) By Designing for Sustainability with those who will sustain the projects, their financial buy-in and commitment are far higher (see CRS/Niger), as are advocacy and community buy-in (see the new post-project Oxfam/DRC study), and there are indications that later start-ups are more cost-effective.
b) By clarifying Sustainability Indicators, we check assumptions about who will sustain our activities, and how much of a priority they are, before we scale them up (Federation/Ethiopian Red Cross). Retrospective post-project sustainability evaluations also enable us to learn from past successes and do better.
c) Sustainability Monitoring and Adaptation involve those pesky but pivotal feedback loops, which are vital to understanding whether we have gone off the rails, especially when unexpected shocks derail the logical frameworks of designed projects. USAID’s recent CLA (Collaborating, Learning, Adapting) process includes donor funding for mid-stream adaptation, which fosters effectiveness and sustainability. Even lovelier are the Doing Development Differently examples, which are often very low-cost and high-impact.
d) Informed Exit and Stakeholder Sustainability Consultation should happen throughout the project cycle, beginning at least a year earlier than most projects do (note: not in the last few months, please). Transitioning for success leverages sunk costs for ongoing results. It includes heavy knowledge management on how the implementers managed information and resources, tracked data, and sustained outputs and outcomes, tasks that local partners will now need to take on (the FFP Exit Strategies study, a Czech study, and a new UK Results study show the range of items to consider during and post-transition).
e) Post-Project Sustainability: Our joint Valuing Voices/Better Evaluation/Tufts presentation at AEA on SEIE has more on how learning lessons from post-project evaluations can change the sustainability of future projects for the better! Further, what capacities and systems have been built in-country to sustain the results after our funding and expertise leave? What can we do differently?
Can it be done? Demand is rising. I recently presented at a conference about embedding sustainability in programming now, and an enterprising NGO took this idea to heart. They proposed joint design with communities, which is the very bedrock of designing for sustained impact. Kudos!
2) Donors are willing to pay for sustainability. Sustainability is ensconced in USAID/Food For Peace’s (FFP) 2016 Strategy and CLA (above). In the section called New Learning and Implications for FFP Programming, they present findings such as “actions that drive big results during the life of the project may actually undermine sustainability in the long run. It raises the question as to whether FFP is willing to accept more modest results in the near term if they can be delivered in a way that will yield more sustainable gains over time….”
They also point us in the direction of country-led development: “Sustained capacity, resources, motivation, and linkages all require a focus on catalysts for change beyond FFP. Facilitative approaches that rely on and strengthen local actors help ensure that resource and knowledge transfers, and the incentives and linkages that support them, will be self-perpetuating beyond project end”. Notably, while the UK’s DFID focuses on maximizing impact through Value-for-Money, that lens measures shorter-term economy, efficiency and effectiveness rather than sustained impact for end-users, and DFID’s exit strategies have recently been critiqued.
There is an issue of disincentives for the new administration to heed (if the agricultural lobby for US food exports does not prevail). In USAID’s exit study evaluating how sustainable the results of Title II development programs are 2–3 years after project closure, FFP found that “providing free resources can threaten sustainability, unless replacement of those resources both as project inputs and as incentives has been addressed”. As the Natural Resources section notes, “Whether entirely in the hands of the community or linked to a formal institution, the incentives and resources necessary to maintain a community asset are part of the system that will sustain it. The lack of such systems is visible in rusted irrigation pumps, failed mangrove plantations, abandoned bore wells, eroded dikes, and silted-in fish ponds around the world”. Yet the private and public sectors are also important: “Sustainable, broad-based change is more likely to be achieved by supporting and strengthening existing community, private sector, and public sector mechanisms for product and service delivery, and by supporting the capacity, quality, and accountability of government institutions”.
FFP’s new strategy calls for taking a systems approach to change that emphasizes sustainable long-term gains over unsustainable short-term wins. Even more delightfully, in a small meeting at MFAN in November, Dina Esposito, Director of Food For Peace, announced that FFP was looking to pilot funding an additional three years on top of typical five-year DFAP development projects: one year of collaborative participatory design between partners, communities and FFP, with the remaining years devoted to post-project evaluation of sustained and emerging impact!
This is a sea change that can hopefully withstand political winds. After all, US foreign aid accounts for less than 1% of our federal budget, even though many Americans believe it is over 15% (hence seemingly easy to cut). Fingers crossed the aid effectiveness value of our work is… Valued.
 Cekan, J., PhD, Kagendo, R., & Towns, A. (2016). Participation by All: The Keys to Sustainability of a CRS Food Security Project in Niger. Retrieved from https://www.crs.org/our-work-overseas/research-publications/participation-all
 Lindley-Jones, H. (2016, November 16). ‘If we don’t do it, who will?’ A study into the sustainability of Community Protection Structures supported by Oxfam in the Democratic Republic of Congo (DRC). Retrieved from https://policy-practice.oxfam.org.uk/publications/if-we-dont-do-it-who-will-a-study-into-the-sustainability-of-community-protecti-620149
 USAID Learning Lab. (n.d.). Collaborating, Learning, and Adapting (CLA)? Retrieved from https://usaidlearninglab.org/faq/collaborating%2C-learning%2C-and-adapting-cla
 Food and Nutrition Technical Assistance (FANTA). (n.d.). Effective Sustainability and Exit Strategies for USAID FFP Development Food Assistance Projects. Retrieved from https://www.fantaproject.org/research/exit-strategies-ffp
 Del Mese, F. (2016, November 16). When aid relationships change: DFID’s approach to managing exit and transition in its development partnerships. Retrieved from https://icai.independent.gov.uk/report/transition/
 Cekan, J., Rogers, B. L., Rogers, P., & Zivetz, L. (2016, October 26). Barking Up a Better Tree: Lessons about SEIE (Sustained and Emerging Impact Evaluation). Retrieved from https://valuingvoices.com/wp-content/uploads/2016/11/Barking-up-a-Better-Tree-AEA-Oct-26-FINAL.pdf
 USAID. (2016, October 6). 2016–2025 Food Assistance and Food Security Strategy. Retrieved from https://www.usaid.gov/ffpstrategy#:~:text=FFP’s%20new%20strategy%2C%20the%202016,USG)%20food%20assistance%20as%20a
 Rogers, B. L., & Coates, J. (2015, December). Sustaining Development: A Synthesis of Results from a Four-Country Study of Sustainability and Exit Strategies among Development Food Assistance Projects. Retrieved from https://www.fantaproject.org/research/exit-strategies-ffp
The Disruptive Potential of Feedback
by MARC GUNTHER, OCTOBER 18, 2015 (reblogged from http://nonprofitchronicles.com/2015/10/18/the-disruptive-potential-of-feedback/)
Few institutions in the US are as undemocratic as endowed foundations. The executives in charge of foundations answer to, er, no one. They give money away, so people tend to laugh at their jokes, tell them they look well, nod in agreement at their banal remarks. What’s not to like?
As for nonprofits, they pay heed to foundations and donors, but they need not listen to their “beneficiaries,” unless they feel a moral obligation to do so. What if, goodness knows, the people they are trying to serve turn out to be unhappy with the service? Talk about inconvenient truths.
Last week in Washington, a group of about 70 people — the generals and foot soldiers of a growing movement to devolve power to mostly poor recipients of aid in the US and abroad — came together to talk about how to turn that power dynamic of philanthropy upside down. They believe that feedback from constituents “has the potential to unleash massive, timely and necessary changes in the way social change and development are pursued,” in the words of Feedback Labs, a DC-based NGO that convened the first Feedback Summit.
As Dennis Whittle, the executive director of Feedback Labs, has written:
Will aid and philanthropy democratize themselves? Will aid agencies and foundations cede power and sovereignty to the people they are trying to serve?
It’s too soon to say, but there were signs during the two-day confab that a half-dozen or so forward-thinking foundations, along with a growing number of nonprofits, are starting to figure out how to create tight feedback loops that will enable them to solicit feedback from citizens, listen, analyze and, most important, change their practices as a result of what they learn.
“It’s the right thing to do, morally and ethically, philosophically. It’s the smart thing to do,” Whittle said. Now the goal is to make it “the feasible thing to do, financially and operationally.”
How do feedback loops differ from conventional monitoring and evaluation (M&E)? One attendee told me that feedback loops are the equivalent of diagnosing and treating a disease; a conventional evaluation is more like an autopsy, and thus of limited value to the patient.
Here are three signs that the feedback movement is gathering momentum:
A group called the Fund for Shared Insight, a collaboration of foundations that makes grants to improve philanthropy, has launched an initiative called Listen for Good that intends to fund 50 nonprofits to seek feedback from the people they are trying to help. They will use the now-famous Net Promoter System methodology developed for business by Bain & Co., which is working with the fund to make sure that the simple, elegant and yet rigorous system is deployed effectively. “It needs to be a high velocity loop of feedback, learning and action,” said Vikki Tam, a Bain partner. To the extent possible, the feedback results will be made public, enabling nonprofits to compare their net promoter scores with peers. “It’s so important for foundations to be open about what they do, and what they’re learning,” said Lindsay Austin Louie, a program officer at William and Flora Hewlett Foundation who works closely with the fund. Supporters of the fund include the David and Lucile Packard Foundation, the Ford Foundation, the Gordon and Betty Moore Foundation, the JPB Foundation, Liquidnet (which I wrote about here), the Rita Allen Foundation and the W.K. Kellogg Foundation.
Efforts to build “good enough” feedback loops are underway, aimed at helping small or midsize NGOs measure their impact without having to undertake expensive, long-term randomized control trials. Thoai Ngo, a senior director of research at Innovations for Poverty Action, talked about an effort called The Goldilocks Project that aims to build “right-fit” evaluation systems, focusing on collecting credible, actionable data in a timely way–that is, feedback to help an NGO change course if needed. Ken Berger, the former chief executive at Charity Navigator, described his work at a firm called Algorhythm which offers impact measurement to small NGOs for as little as $750. “These are organizations that never before had an opportunity to measure what matters most,” Berger said.
Technology is making it much easier to gather feedback, and make sense of it. Louis Dorval is the co-founder of VOTO Mobile, a Ghana-based tech startup that aims to “amplify the voice of the under-heard” by using voice and text messages on mobile devices to survey citizens, as well as send one-way messages. Less than three years old, VOTO Mobile has already worked with about 250 organizations, including Unicef and Innovations for Poverty Action. David Bonbright of Keystone Accountability, who is a pioneer in the feedback arena, talked about Feedback Commons, an online platform designed to allow organizations to “share and compare” the feedback they collect from their constituents.
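The Net Promoter System mentioned above reduces constituent feedback to a single 0-10 question and one score. As a minimal sketch of that arithmetic (the 9-10 promoter and 0-6 detractor thresholds are the standard Bain convention; the survey responses below are invented):

```python
def nps(scores):
    """Net Promoter Score from 0-10 answers to 'How likely are you to
    recommend this service?': 9-10 = promoter, 7-8 = passive,
    0-6 = detractor. Score = % promoters minus % detractors."""
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

# Hypothetical responses from one nonprofit's constituent survey:
print(nps([10, 9, 9, 8, 7, 6, 10, 5, 9, 8]))  # 5 promoters, 2 detractors -> 30
```

A score above zero means promoters outnumber detractors; making such scores public is what would let nonprofits compare themselves with peers, as the Fund for Shared Insight intends.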
SUPPLY AND DEMAND
All these initiatives are designed to improve the development of feedback loops. Before long, NGOs should be able to show how collecting feedback generates better outcomes for their clients. That’s all to the good. Think of that as the supply side of the feedback “business.”
But what about the demand side? Who’s going to fund feedback loops? I’m still a newcomer to the development world but my impression (from reporting on water and sanitation projects, and on cookstoves) is that most foundations and NGOs are not as rigorous as they could and should be about measuring their impact.
This brings us back to the fundamental power dynamic of philanthropy, as Caroline Fiennes of Giving Evidence explained during the Feedback Summit.
“Why don’t foundations ask for feedback?” Fiennes asked. “Because they don’t have to.”
“It’s difficult and it’s a bit painful,” she went on. “If you have made a chunk of money and you want to give it away, in general you will feel good about that, and everybody will love you. Once you start asking questions” — questions designed to find out if the work is making as much of a difference as it can–“you might not like the answers.”
In an essay called What do they want? at Aeon, Claire Melamed elaborates:
While most individual aid workers do care, very much, about the people they work with and for, the actual structure of the aid business offers few reasons for anyone to worry about what aid recipients think or want. Staff in aid agencies need to think about what their funders want to pay for. For their own performance reviews, they need to think about how to demonstrate that what they are doing is achieving the best possible results with the smallest amount of money. So the incentives for spending money on expensive surveys to find out what representative samples of poor people think of their operations are just not there.
And besides, the information might be a threat. What if it turned out that people feel patronised by aid workers? Or that they would rather their food didn’t arrive with logos announcing their indebtedness to foreign governments? Or that they resent being given a T-shirt when really they would sooner just have the money? What if people don’t really want another agricultural programme, and they’d rather have a bus ticket to the nearest town and somewhere to stay when they get there? These kinds of discoveries could be quite discomfiting for the agencies themselves – though in the long run, they would presumably do a better job.
This is why those funders (like the foundations behind the Fund for Shared Insight) who push for feedback loops and rigorous evaluation deserve a lot of credit. Let’s hope their numbers grow.
Longing to do an Ex-post Sustainability Evaluation? How to support this work…
Just back from Niger where Catholic Relief Services, Rutere Kagendo and I are doing a fascinating post-project evaluation. Fresh on my mind is the commitment we all had to make to get this quite ground-breaking research going. Here is the full report, but there are three kinds of conditions we found were integral for success: client-ValuingVoices match; project and site selection; and resources.
Client – Valuing Voices match:
The study needs to be appreciated as innovative, adding to the program quality and learning of the organization so funding is provided and there is in-house interest in the findings;
The local office needs to allocate staff and technical time to support the study technically and logistically (see below);
Shared clarity is needed among all involved that such a study looks for self-sustained activities and outcomes. While there are lessons that emerge about the quality of implementation, its focus is what participants and their country-partners could continue themselves after project close-out and withdrawal of resources. It also can include lessons about what the local non-profit and the national stakeholders are doing to support community success (or not) and unexpected outcomes. Our clients need an openness to honestly seeing what was not sustained and exploring why;
While Valuing Voices provides expertise based on our review of the handful of post-project and exit evaluations that exist, the client is interested in sharing findings and advocating to donors to fund more of these studies;
Disseminating findings internally wherever there is a possibility of learning from this evaluation to support similar current implementation; the research can also offer lessons for similar projects and for country nationals such as Ministries;
Prioritizing local capacity – Valuing Voices believes in using regional M&E capacity to do the work; where possible we partner with regional evaluators while also building capacity within our client's staff to carry out such work;
Sharing and discussing findings locally: Valuing Voices believes knowledge learned needs to be shared in immediate feedback loops. We present: a) to each village after each site’s research; b) to local key partners and representatives from each village at the end of the qualitative Rapid Rural Appraisal findings; c) to the non-profit in-country at the end of qualitative research and d) internationally to headquarters at the end of the combined analysis of the qualitative and quantitative research with findings in the final report.
Project and Site Selection:
The non-profit’s project has been closed out for at least two years and no more than seven years (for recall);
No other NGO has done very similar work in the region in the intervening years;
The region selected is representative of the project as a whole (e.g. agro-ecological zones, economic/ livelihood/ health, educational or other sectoral criteria);
Research areas are secure and safe (e.g. from civil unrest, severe drought/ floods, epidemics, to the degree possible);
Timing does not interfere with urgent priorities of those involved with the study (e.g. livelihoods are not jeopardized in communities, holidays are kept, other technical work is not disrupted).
Resources (Time, Material and Project Expertise):
Time: the research is qualitative followed by quantitative, coming to 80-90 days of research overall (roughly 5 weeks of fieldwork in teams of 4-10, plus analysis, report-writing and presentation);
Project and evaluation documents are available to inform and contextualize approach including activities, outcomes and projected impacts;
Data: key to the fieldwork are village and participant lists from pre-closeout days so participants can be interviewed both during a Rapid Rural Appraisal and a follow-on household survey;
Internal/external sectoral staff and at least one past project staff are part of the team to inform and ‘ground truth’ research;
Logistics support is provided by the client, from vehicle/driver and lodging support in the field to materials such as mobile phones, flipcharts, photocopying and advances;
A consultant or staff prepares the sites before the research teams come, e.g. to confirm communities are willing to be visited (each visit will be 2-5 days) and to identify participants and partners still there;
Partners familiar with the closed project can be identified so they can be interviewed by the research team;
Local language expertise is needed, e.g. translator to local language, as well as data entry personnel afterwards.
While Valuing Voices provides the technical lead experts and statistical back office analysis, including sampling and rigorous analysis, senior non-profit staff are needed in-country for contextualization and input on preliminary findings, as well as senior technical staff to review the final product;
A home for findings and on the road for dissemination: good knowledge management is needed for data retention, for the findings to have a sustainable ‘home’– be that info-graphics and print copies distributed to villages and partners or online repositories created that are language-accessible both nationally and by foreign donors; webinars, conference presentations etc are needed to optimize the learning via sustainability dissemination campaigns.
Much of this is needed for the research to be of the best quality and yield the strongest results. Exciting learning is to be had not only from what communities and supporters could sustain, but from what they exceeded or dropped! Consider doing one to see how post-project sustainability research can improve your current implementation, future design, and long-term self-sustainability!
Pick a term, any term…but stick to it!
Valuing Voices is interested in identifying learning leaders in international development that are using participatory post-project evaluation methods to learn about the sustainability of their development projects. These organizations not only believe they need to see the sustained impact of their projects by learning from what has worked and what hasn’t in the past, but also that participants are the most knowledgeable about such impacts. So how do they define sustainability? This is determined by asking questions such as the following: were project goals self-sustained by the ‘beneficiary’ communities that implemented these projects? By our VV definition, self-sustainability can only be determined by going back to the project site, 2-5 years after project closeout, to speak directly with the community about the long-term intended/unintended impacts.
Naturally, we turned to the World Bank (WB) – the world’s prominent development institution – to see if this powerhouse of development, both in terms of annual monetary investment and global breadth of influence, has effectively involved local communities in the evaluation of sustainable (or unsustainable) outcomes. Specifically, my research was focused on identifying the degree to which participatory post-project evaluation was happening at the WB.
A fantastic blog* regarding participatory evaluation methods at the WB emphasizes the WB’s stated desire to improve development effectiveness by “ensuring all views are considered in participatory evaluation,” particularly through its community driven development projects. As Heider points out,
“The World Bank Group wants to improve its development effectiveness by, among others things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.”
Wow! Though these methods are clearly well intentioned, there seems to be a flaw in the terminology. The IEG says, “[Community driven development projects] are based on beneficiary participation from design through implementation, which make them a good example of citizen-centered assessment techniques in evaluation.” However, this fails to recognize the importance of planning for community-driven post-project sustainability evaluations, to be conducted by the organization in order to collect valuable data concerning the long-term intended/unintended impacts of development work.
With the intention of identifying evidence of the above-mentioned mode of evaluation at the WB, my research process involved analyzing the resources provided by the WB’s Independent Evaluation Group (IEG) database of evaluations. As the accountability branch of the World Bank Group, the IEG works to gather institution-wide knowledge about the outcomes of the WB’s finished projects. Its mission statement is as follows:
“The goals of evaluation are to learn from experience, to provide an objective basis for assessing the results of the Bank Group’s work, and to provide accountability in the achievement of its objectives. It also improves Bank Group work by identifying and disseminating the lessons learned from experience and by framing recommendations drawn from evaluation findings.”
Another important function of the IEG database is to provide information for the public and external development organizations to access and learn from; this wealth of data and information about the World Bank’s findings is freely accessible online.
When searching for evidence of post-project learning, I was surprised to find that the taxonomy varied greatly; e.g. the projects I was looking for could be found under ‘post-project’, ‘post project’, ‘ex-post’ or ‘ex post’. Also unclear was any specific category under which these could be found, including a definition of what exactly is required in an IEG ex post impact evaluation. According to the IEG, there are 13 major evaluation categories, which are described in more detail here. I was expecting to find an explicit category dedicated to post-project sustainability, but instead this type of evaluation was included under Project Level Evaluations (which include PPARs and ICRs [Implementation Completion Reports]) and Impact Evaluations.
This made it difficult to determine a clear procedural standard for documents reporting sustainability outcomes and other important data for the entire WB.
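For illustration only, here is how a simple normalization pass could collapse the variant spellings (‘post-project’, ‘post project’, ‘ex-post’, ‘ex post’) into canonical tags so that a single query counts them all; the IEG database does not actually offer this, and the function below is a hypothetical sketch:

```python
import re

# Illustrative only: map the variant spellings found in the IEG database
# onto two canonical tags so documents can be counted consistently.
CANONICAL = [
    (re.compile(r"\bpost[\s-]?project\b"), "post-project"),
    (re.compile(r"\bex[\s-]?post\b"), "ex-post"),
]

def canonical_terms(text):
    """Return the sorted canonical ex-post terms appearing in a document."""
    lowered = text.lower()
    return sorted(tag for pattern, tag in CANONICAL if pattern.search(lowered))

print(canonical_terms("An ex post review of post project outcomes"))
# ['ex-post', 'post-project']
```

With a pass like this, the 73-versus-953 discrepancy between hyphenated and unhyphenated searches would disappear, since all four spellings resolve to the same tags.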
I began my research process by querying a few key terms in the database. In the first step, which will be elaborated upon in Part I of this blog series, I attempted to identify evidence of ex post sustainability evaluation at the IEG by searching for the term “post-project”, which yielded 73 results with a hyphen and 953 results without one. The inconsistency in the number of results depending on the hyphen was interesting in itself, but in order to narrow the search parameters to a manageable content analysis, I chose to break down the 73 results by document type to determine whether there were any examples of primary fieldwork research. In these documents, the term “post-project” was not used in the titles or referenced in the executive summaries as the specific aim of the evaluation, but rather used to loosely define the ex post time frame. Figure 1 illustrates the breakdown of document types found in the sample of 73 documents that came up when I searched for the key term “post-project”:
Figure 1: Breakdown by Document Type out of Total 73 Results when searching post-project
As the chart suggests, many of the documents (56% – which accounts for all of the pie chart slices except Project Level Evaluations) were purely desk studies – evaluating WB programs and the overall effectiveness of organization policies. These desk studies draw data from existing reports, such as those published at project closeout, without supplementing past data with new fieldwork research.
Out of the 9 categories, the only document type that showed evidence of any follow-up evaluations was the Project Performance Assessment Report (PPAR); the IEG defines these as documents that are…
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department]. To prepare PPARs, OED staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries. The PPAR thereby seeks to validate and augment the information provided in the ICR, as well as examine issues of special interest to broader OED studies.”
Bingo. This is what we’re looking for. The PPARs accounted for 32 out of the 73 results, or a total of 44%. As I examined the methodology used to conduct PPARs, I found that in the 32 cases that came up when I searched for “post-project”, after Bank funds were “fully dispersed to a project” and resources were withdrawn, the IEG sent a post-project mission back into the field to collaborate on new M&E with local stakeholders and beneficiaries. The IEG gathered new data through the use of field surveys or interviews to determine project effectiveness.
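The proportions are worth making explicit; a minimal check of the arithmetic (only the 32 PPAR count and the 73-document total come from the search above):

```python
# Of the 73 documents returned for "post-project", 32 were PPARs --
# the only document type based on post-closure fieldwork with
# stakeholders; the rest were desk studies and other reviews.
total_docs = 73
ppars = 32
other_docs = total_docs - ppars

print(f"PPARs (fieldwork-based): {ppars / total_docs:.0%}")    # 44%
print(f"Other document types: {other_docs / total_docs:.0%}")  # 56%
```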
Based on these findings, I conducted a supplementary search of the term “ex post”, which yielded 672 results. From this search, 11 documents were categorized by the IEG as “Impact Evaluations”, of which 3 showed evidence of talking with participants to evaluate for sustainability outcomes. In follow-up blogs in this series I will elaborate upon the significance of these additional findings and go into greater detail regarding the quality of the data in these 32 PPARs, but here are a few key takeaways from this preliminary research:
Taxonomy and definition of ex-post are missing. After committing approximately 15-20 hours of research time to this content analysis, it is clear that navigating the IEG database to search for methodology standards for evaluating sustainability is a more complicated process than it should be for such a prominent learning institution. The vague taxonomy used to categorize post-project/ex-post evaluation by the WB limits the functionality of this resource as a public archive dedicated to informing the sustainability of development projects the World Bank has funded.
Despite affirmative evidence of participatory community involvement in the post-project evaluation of WB projects, not all PPARs in the IEG database demonstrated a uniform level of ‘beneficiary’ participation. In most cases, it was unclear how many community members impacted by the project were really involved in the ex-post process, which made it difficult to determine even a general range of the number of participants involved in post-project activity at the WB.
Although PPARs report findings based, in part, on post-project missions (as indicated in the preface of the reports), the specific methods/structure of the processes were not described, and oftentimes the participants were not explicitly referenced in the reports. (More detailed analysis on this topic to come in Blog Series Part 2!)
These surprisingly inconsistent approaches make it difficult to compare results across this evaluation type, as there is no precise status quo.
Finally, the World Bank, which has funded 12,000 projects since its inception, should have far more than 73 post-project/ ex-post evaluations…but maybe I’m just quibbling with terms.
Stay tuned for PART II of this series, coming soon!
Listening better… for more sustainable impact
Are we listening better? Maybe. As Irene Guijt states on Better Evaluation, Keystone’s work on ‘constituent voice’ enables a “shift [in] power dynamics and make[s] organizations more accountable to primary constituents”. For example, “organisations can compare with peers to trigger discussions on what matters to those in need… in (re)defining success and ‘closing the loop’ with a response to feedback [on the project], feedback mechanisms can go well beyond upward accountability.”
There are impressive new toolkits available to elicit and hear participant voice about perceived outcomes and impacts, such as People First Impact Method and NGO IDEAS' Monitoring Self-Effectiveness. As People First states, "Across the aid sector, the voices of ordinary people are mostly not being heard. Compelling evidence shows how the aid structure unwittingly sidelines the people whom we aim to serve. Important decisions are frequently made from afar and often based on limited or inaccurate assumptions. As a result, precious funds are not always spent in line with real priorities, or in ways that should help people build their own confidence and abilities…. As a sector, we urgently need to work differently." These tools distill 40 years of participatory/Rapid Rural Appraisal methods developed and shared by IDS/UK's Robert Chambers, which I have used for 25 years, most recently for self-sustainability evaluation.
In addition to qualitative, participatory tools, the application of quantitative evaluative tools still has a way to go before it excels at listening and learning. Keystone did interesting work on impact evaluation (lately associated with Randomized Control Trials, which compare existing projects with comparable non-participating sites to prove impact). Their study found that "no one engaged through the research for this note is particularly happy with the current state of the art…. There is a strong appetite to improve the delivery of evaluative activities in general and impact evaluation in particular… Setting expectations by engaging and communicating early and often with stakeholders and audiences for the evaluation is critical, as is timing." So many of us believe that evaluation cannot be an afterthought; monitoring and evaluation need to be integrated into project design, with feedback loops informing implementation.
Yet this otherwise excellent article made one point that is common, yet like Alice looking through the looking glass backwards. The authors write that feedback is "to inform intended beneficiaries and communities (downward accountability) about whether or not, and in what ways, a program is benefiting the community". Yet it is the other way around! Only communities have the capacity to tell us how well they feel we are helping them!
Thankfully, we are increasingly willing to listen and learn about aid effectiveness. Some major actors shaping funding decisions have already thrown down the feedback gauntlet:
* As our 2013 blog advocated, Charity Navigator is now applying its new “Results Reporting” rating criteria, which include six data points regarding charities’ feedback practices. The new ratings will be factored into Charity Navigator star ratings from 2016.
* Heavyweight World Bank president Jim Kim has decreed that the Bank will require robust feedback from beneficiaries on all projects for which there is an identifiable beneficiary.
* The Hewlett, Ford, Packard, Rita Allen, Kellogg, JPB and LiquidNet for Good Foundations have recently come together to create the Fund for Shared Insight to catalyze a new feedback culture within the philanthropy sector.
* This February, a new report on the UK's international development agency, DFID, recommended a new direction for its aid: the development discourse has generally focused on convincing donors to boost their aid spending, when the conversation should instead be about “how aid works, how it can support development, how change happens in countries, and all of the different responses that need to come together to support that change…. One important change will be for professionals to deliver more adaptive programming and work in more flexible and entrepreneurial ways.” The report emphasized the need for development delivery to be led by local people. Commenting on ODI’s research, [DFID] said successful development examples showed “people solving problems for themselves rather than coming in and trying to manage that process externally through an aid program.”
Hallelujah! What great examples of listening for aid effectiveness are you seeing?
What should projects accomplish… and for whom?
An unnamed international non-profit client contacted me to evaluate their resilience project mid-stream, to gauge prospects for sustainable handover. EUREKA, I thought! After email discussions with them, I drafted an evaluation process built on learning from a variety of stakeholders — ranging from Ministries and local government to the national University who were to take over the programming work — about what they thought would be most sustainable once the project ended, and how, over the next two years, the project could best foster self-sustainability by country-nationals. I projected several weeks of in-depth participatory discussions with the local youth groups and sentinel communities directly affected by the food security/climate change onslaught, and who had benefited from resilience activities, to learn what had worked, what had not, and who would take what self-responsibility locally going forward.
Pleased with myself, I sent off a detailed proposal. The non-profit soon answered that I hadn’t fully understood my task. In their view, the main task at hand was to determine what the country needed the non-profit to keep doing, so the donor could be convinced to extend their (U.S.-based) funding. The question became: how could I change my evaluation to feed back this key information for the next proposal design?
Maybe it was me, maybe it was the autumn winds, maybe it was my inability to sufficiently subsume long-term sustainability questions under shorter-term non-profit financing interests that led me to drop this. Maybe the often-unspoken elephant in the living room is the need for some non-profits to prioritize their own organizational sustainability to ‘do good’ via donor funding rather than working for community self-sustainability.
Maybe donors and funders should share this blame, needing to push funding out and prove success at any cost to get more funding, and so the cycle goes on. As a Feedback Labs feature on a Center for Effective Philanthropy report recently stated: “Only rarely do funders ask, ‘What do the people you are trying to help actually think about what you are doing?’ Participants in the CEP study say that funders rarely provide the resources to find the answer. Nor do funders seem to care whether or not grantees are changing behavior and programs in response to how the ultimate beneficiaries respond”.
And how much responsibility do communities themselves hold for not balking? Why are they so often ‘price-takers’ (in economic terms) rather than ‘price-makers’? As wise Judi Aubel asked in a recent evaluation list-serve discussion “When will communities rise up to demand that the “development” resources designed to support/strengthen them be spent on programs/strategies which correspond to their concerns/priorities??”
We can help them do just that by creating good conditions for them to be heard. We can push advocates to work to ensure the incoming Sustainable Development Goals (post-MDGs) reflect what recipient nations, more than funders, feel is sustainable. We can help their voices be heard via systems that enable donors and implementers to learn from citizen feedback, such as Keystone has via their Constituent Voice practice (in January 2015 it is launching an online feedback data-sharing platform called the Feedback Commons) or GlobalGiving’s new Effectiveness Dashboard (see Feedback Labs).
We can do it locally in our work in the field, shifting the focus from our expertise to theirs, from our power to theirs. In field evaluations we can use Empowerment Evaluation. We can fund feedback loops pre-RFP (requests for proposals), during project design, implementation and beyond, with the right incentives and tools for learning from community, local and national-level input, so that country-led development becomes actual practice, not just a nice platitude. We can fund ValuingVoices’ self-sustainability research on what lasts after projects end. We can conserve project content and data in Open Data formats for long-term learning by country-nationals.
Most of all, we can honour our participants as experts, which is what I strive to do in my work. I’ll leave you with a story from Mali. In 1991 I was doing famine-prevention research in Koulikoro, Mali, where average rainfall is 100mm (4 inches) a year. I accompanied the women I was interviewing to a deep well, 100m (300 feet) down. They used pliable plastic buckets, and the first five women each drew up a bucket that was 90% full. When I asked to try, they seriously handed me a bucket. I laughed, as did they, when we saw that only 20% of my bucket was full: I had splashed the other 80% out on the way up. Who’s the expert?
How are we helping them get more of what they need, rather than what we are willing to give? How are we prioritizing their needs over our organizational income? How are we #ValuingVoices?
The Center for Effective Philanthropy. (2014, October 27). Closing the Citizen Feedback Loop. Retrieved December 2014, from https://web.archive.org/web/20141031130101/https://feedbacklabs.org/closing-the-citizen-feedback-loop/
Better Evaluation. (n.d.). Empowerment Evaluation. Retrieved December 2014, from https://www.betterevaluation.org/plan/approach/empowerment_evaluation
Sonjara. (2016). Content and Data: Intangible Assets Part V. Retrieved from http://www.sonjara.com/blog?article_id=135