It's not just Me, it's We
Many of us want to be of service. That's why we go into international development, government, and many other fields. We hope our words and deeds help make others' lives better.
For 25 years I've written proposals and designed and evaluated projects, knowing that while I could not live in-country due to family constraints, I could get resources there and help us learn how well they are used. I became a consultant so I could raise my kids without being on the road 60% of the time, and one who promotes national consultants, so that African, Asian, Latin American and European experts evaluate their own projects. I put myself into the shoes of our participants and realized that any local person my age wants to leave behind a better, more sustainably viable livelihood for her family, so I looked to see what was most sustained and how we knew it. I took my love of participatory approaches, of listening to and learning from end-users, and founded Valuing Voices to promote learning from projects whose activities were most self-sustained.
Yet this is not enough. I am one person with only my own views (however great I think they are :), and many of us have great views and knowledge about how best to promote sustainable development. The state of things today seems to me that too often our donors have limited funds for limited time, with goals they limit because they can only assure success by holding the outcome and funding reins so tightly that none of us are fostering self-sustainable development, which takes time and faith in one's participants. I have found that the lack of post-project evaluation (see ValuingVoices.com/blogs, such as this one on causes and conditions being ripe for sustainability) is a symptom, but doing such evaluations also provides a huge opportunity to design projects well by learning from what communities were able to sustain themselves, why and how it worked, and how we can do this well again. For instance, from my fieldwork I have realized that questions such as 'sustainable by whom, for how long?' are ones I never asked, and I don't think others have good ways to go about them (yet)… unless you have ideas!
How can we foster aid effectiveness, effective philanthropy, community-driven development, community-driven and NGO-led impact, and effective policy? It takes many of us – giraffes, ostriches, wildebeest, gazelles – each with our own expertise.
This takes Time to Listen, respect for local capacities (Doing Development Differently) and an openness to step out of the limelight of 'we saved you' to ask "how can we best work together for a sustainable world?" This takes you, me, WE. One way is to join together in a LinkedIn Group, Sustainable Solutions for Excellent Impact, where we can discuss how we can best design, implement, evaluate, fund, promote (etc.!) projects that are programmatically, financially, institutionally and environmentally sustainable. Please join us!
Pick a term, any term…but stick to it!
Valuing Voices is interested in identifying learning leaders in international development who are using participatory post-project evaluation methods to learn about the sustainability of their development projects. These organizations believe not only that they need to see the sustained impact of their projects by learning from what has worked and what hasn't in the past, but also that participants are the most knowledgeable about such impacts. So how do they define sustainability? By asking questions such as: were project goals self-sustained by the 'beneficiary' communities that implemented these projects? By our VV definition, self-sustainability can only be determined by going back to the project site, 2-5 years after project closeout, to speak directly with the community about the long-term intended and unintended impacts.
Naturally, we turned to the World Bank (WB) – the world's most prominent development institution – to see if this powerhouse of development, both in terms of annual monetary investment and global breadth of influence, has effectively involved local communities in the evaluation of sustainable (or unsustainable) outcomes. Specifically, my research was focused on identifying the degree to which participatory post-project evaluation was happening at the WB.
A fantastic blog* regarding participatory evaluation methods at the WB emphasizes the WB’s stated desire to improve development effectiveness by “ensuring all views are considered in participatory evaluation,” particularly through its community driven development projects. As Heider points out,
“The World Bank Group wants to improve its development effectiveness by, among other things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.”
Wow! Though these methods are clearly well intentioned, there seems to be a flaw in the terminology. The IEG says, “[Community driven development projects] are based on beneficiary participation from design through implementation, which make them a good example of citizen-centered assessment techniques in evaluation,” …however, this fails to recognize the importance of planning for community-driven post-project sustainability evaluations, to be conducted by the organization in order to collect valuable data concerning the long-term intended/unintended impacts of development work.
With the intention of identifying evidence of the above-mentioned mode of evaluation at the WB, my research process involved analyzing the resources provided by the WB's Independent Evaluation Group (IEG) database of evaluations. As the accountability branch of the World Bank Group, the IEG works to gather institution-wide knowledge about the outcomes of the WB's finished projects. Its mission statement is as follows:
“The goals of evaluation are to learn from experience, to provide an objective basis for assessing the results of the Bank Group’s work, and to provide accountability in the achievement of its objectives. It also improves Bank Group work by identifying and disseminating the lessons learned from experience and by framing recommendations drawn from evaluation findings.”
Another important function of the IEG database is to provide information for the public and external development organizations to access and learn from; this wealth of data and information about the World Bank’s findings is freely accessible online.
When searching for evidence of post-project learning, I was surprised to find that the taxonomy varied greatly; e.g. the projects I was looking for could be found under 'post-project', 'post project', 'ex-post' or 'ex post'. Also unclear was whether there is any specific category under which these can be found, or a definition of what exactly is required in an IEG ex post impact evaluation. According to the IEG, there are 13 major evaluation categories, which are described in more detail here. I was expecting to find an explicit category dedicated to post-project sustainability, but instead this type of evaluation was included under Project Level Evaluations (which include PPARs and ICRs [Implementation Completion Reports]) and Impact Evaluations.
This made it difficult to determine a clear procedural standard for documents reporting sustainability outcomes and other important data for the entire WB.
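Given this taxonomy drift, one practical workaround is to fan a search out across all the spelling variants at once. The sketch below is illustrative only: the variant list comes from the search terms described above, while the document snippets and matching logic are hypothetical (the IEG database offers no such API).

```python
# Hypothetical sketch: normalize a search across the taxonomy variants
# found in the IEG database ('post-project', 'post project', 'ex-post',
# 'ex post'). Document snippets below are invented for illustration.
VARIANTS = ["post-project", "post project", "ex-post", "ex post"]

def matches_ex_post(text: str) -> bool:
    """True if a document's text mentions any ex-post taxonomy variant."""
    lowered = text.lower()
    return any(variant in lowered for variant in VARIANTS)

docs = [
    "PPAR: post-project mission with field interviews",
    "Desk review: ex post analysis of the portfolio",
    "ICR prepared at project closeout",
]
ex_post_docs = [d for d in docs if matches_ex_post(d)]  # first two match
```

A single canonical category in the database would of course make this normalization step unnecessary.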
I began my research process by simply querying a few key terms in the database. In the first step, which will be elaborated upon in Part I of this blog series, I attempted to identify evidence of ex post sustainability evaluation at the IEG by searching for the term "post-project", which yielded 73 results with a hyphen and 953 results without. The inconsistency in the number of results depending on the hyphen was interesting in itself, but in order to narrow the search parameters to a manageable content analysis, I chose to break down the 73 results by document type to determine whether there were any examples of primary fieldwork research. In these documents, the term "post-project" was not used in the title or referenced in the executive summary as the specific aim of the evaluation, but rather used loosely to define the ex post time frame. Figure 1 illustrates the breakdown of document types found in the sample of 73 documents returned by the search for "post-project":
Figure 1: Breakdown by Document Type out of Total 73 Results when searching post-project
As the chart suggests, many of the documents (56% – which accounts for all of the pie chart slices except Project Level Evaluations) were purely desk studies – evaluating WB programs and the overall effectiveness of organization policies. These desk studies draw data from existing reports, such as those published at project closeout, without supplementing past data with new fieldwork research.
Out of the 9 categories, the only document type that showed evidence of any follow-up evaluations was the Project Performance Assessment Report (PPAR), defined by the IEG as a document that is…
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department]. To prepare PPARs, OED staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries. The PPAR thereby seeks to validate and augment the information provided in the ICR, as well as examine issues of special interest to broader OED studies.”
Bingo. This is what we’re looking for. The PPARs accounted for 32 out of the 73 results, or a total of 44%. As I examined the methodology used to conduct PPARs, I found that in the 32 cases that came up when I searched for “post-project”, after Bank funds were “fully dispersed to a project” and resources were withdrawn, the IEG sent a post-project mission back into the field to collaborate on new M&E with local stakeholders and beneficiaries. The IEG gathered new data through the use of field surveys or interviews to determine project effectiveness.
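The shares quoted here are simple arithmetic over the search counts; a quick check, with the figures taken from the text above and the code purely illustrative:

```python
# Share of the 73 "post-project" search results that were PPARs (the
# reports backed by actual fieldwork) versus the pure desk studies.
total_results = 73   # hits for "post-project" in the IEG database
ppar_count = 32      # Project Performance Assessment Reports among them

ppar_share = round(100 * ppar_count / total_results)  # fieldwork-backed
desk_share = 100 - ppar_share                         # desk-study remainder
```

This reproduces the 44% PPAR share and the 56% desk-study share cited above.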
Based on these findings, I conducted a supplementary search of the term “ex post”, which yielded 672 results. From this search, 11 documents were categorized by the IEG as “Impact Evaluations”, of which 3 showed evidence of talking with participants to evaluate for sustainability outcomes. In follow-up blogs in this series I will elaborate upon the significance of these additional findings and go into greater detail regarding the quality of the data in these 32 PPARs, but here are a few key takeaways from this preliminary research:
Taxonomy and definition of ex-post are missing. After committing approximately 15-20 hours of research time to this content analysis, it is clear that navigating the IEG database in search of methodology standards for evaluating sustainability is a more complicated process than it should be for such a prominent learning institution. The vague taxonomy the WB uses to categorize post-project/ex-post evaluation limits the functionality of this resource as a public archive dedicated to documenting the sustainability of the development projects the World Bank has funded.
Despite affirmative evidence of participatory community involvement in the post-project evaluation of WB projects, not all PPARs in the IEG database demonstrated a uniform level of ‘beneficiary’ participation. In most cases, it was unclear how many community members impacted by the project were really involved in the ex-post process, which made it difficult to determine even a general range of the number of participants involved in post-project activity at the WB.
Although PPARs report findings based, in part, on post-project missions (as indicated in the preface of the reports), the specific methods/structure of the processes were not described, and oftentimes the participants were not explicitly referenced in the reports. (More detailed analysis on this topic to come in Blog Series Part 2!)
These surprisingly inconsistent approaches make it difficult to compare results across this evaluation type, as there is no precise status quo.
Finally, the World Bank, which has funded 12,000 projects since its inception, should have far more than 73 post-project/ ex-post evaluations…but maybe I’m just quibbling with terms.
Stay tuned for PART II of this series, coming soon!
Listening better… for more sustainable impact
Are we listening better? Maybe. As Irene Guijt states on Better Evaluation, Keystone's work on 'constituent voice' enables a "shift [in] power dynamics and make[s] organizations more accountable to primary constituents". For example, "organisations can compare with peers to trigger discussions on what matters to those in need… in (re)defining success and 'closing the loop' with a response to feedback [on the project], feedback mechanisms can go well beyond upward accountability."
There are impressive new toolkits available to elicit and hear participant voices about perceived outcomes and impacts, such as People First Impact Method and NGO IDEAS' Monitoring Self-Effectiveness. As People First states, "Across the aid sector, the voices of ordinary people are mostly not being heard. Compelling evidence shows how the aid structure unwittingly sidelines the people whom we aim to serve. Important decisions are frequently made from afar and often based on limited or inaccurate assumptions. As a result, precious funds are not always spent in line with real priorities, or in ways that should help people build their own confidence and abilities…. As a sector, we urgently need to work differently." These are the fruits of 40-year-old participatory/Rapid Rural Appraisal methods, distilled and shared by IDS/UK's Robert Chambers, which I've used for 25 years, including lately for self-sustainability evaluation.
In addition to qualitative, participatory tools, quantitative evaluative tools still have a way to go before being terrific at listening and learning. Keystone did interesting work on impact evaluation (lately associated with Randomized Control Trials, which compare existing projects against comparable non-participating sites to prove impact). Their study found that "no one engaged through the research for this note is particularly happy with the current state of the art…. There is a strong appetite to improve the delivery of evaluative activities in general and impact evaluation in particular… Setting expectations by engaging and communicating early and often with stakeholders and audiences for the evaluation is critical, as is timing." So many of us believe that evaluation cannot be an afterthought; monitoring and evaluation needs to be integrated into project design, with feedback loops informing implementation.
Yet this otherwise excellent article makes one point that is common, yet like Alice looking through the looking glass backwards. For they write that feedback is "to inform intended beneficiaries and communities (downward accountability) about whether or not, and in what ways, a program is benefiting the community". Yet it is the other way around! Only communities have the capacity to tell us how well they feel we are helping them!
Thankfully, we are increasingly willing to listen and learn about aid effectiveness. Some major actors shaping funding decisions have already thrown down the feedback gauntlet:
* As our 2013 blog asked for, Charity Navigator is now applying its new "Results Reporting" rating criteria, which include six data points regarding charities' feedback practices. The new ratings will be factored into Charity Navigator star ratings from 2016.
* Heavyweight World Bank president Jim Kim has decreed that the Bank will require robust feedback from beneficiaries on all projects for which there is an identifiable beneficiary.
* The Hewlett, Ford, Packard, Rita Allen, Kellogg, JPB and LiquidNet for Good Foundations have recently come together to create the Fund for Shared Insight to catalyze a new feedback culture within the philanthropy sector.
* This February, a new report on the UK's international development agency, DFID, recommended a new direction for its aid: the development discourse has generally focused on convincing donors to boost their aid spending, when the conversation should instead be on "how aid works, how it can support development, how change happens in countries, and all of the different responses that need to come together to support that change…. One important change will be for professionals to deliver more adaptive programming and work in more flexible and entrepreneurial ways," and it emphasized the need for development delivery to be led by local people. Commenting on ODI's research, [DFID] said successful development examples showed "people solving problems for themselves rather than coming in and trying to manage that process externally through an aid program."
Hallelujah! What great aid-effectiveness listening are you seeing?
What’s likely to ‘stand’ after we go? A new consideration in project design and evaluation
This spring I had the opportunity not only to evaluate a food security project but also to use the knowledge gleaned for the follow-on project design. This Ethiopian Red Cross Society (ERCS) project, "Building Resilient Community: Integrated Food Security Project to Build the Capacity of Dedba, Dergajen & Shibta Vulnerable People to Food Insecurity" (with Federation and Swedish Red Cross support), targeted 2,259 households in Dedba, Dergajen and Shibta through the provision of crossbreed cows, ox fattening, sheep/goats, beehives and poultry, as well as water and agriculture/seedlings for environmental resilience. ERCS had been working with the Ethiopian government to provide credit for these food security inputs to households in Tigray, to be repaid in cash over time. During this evaluation, we met with 168 respondents (8% of total project participants).
Not only were we looking for food consumption impacts (which were very good), and income impacts (good), we also probed for self-sustainability of activities. My evaluation team and I asked 52 of these participants more in-depth questions on income and self-sustainability preferences. In Tigray, Ethiopia, we used participatory methods to learn what they felt they could most sustain themselves after they repaid the credit and the project moved on to new participants and communities.
We also asked them to rank which input provided the greatest source of income. The largest incomes (above 30,000 birr, or $1,500) were earned from dairy and ox fattening, while a mix of dairy, oxen, shoats and beehives brought over half of our respondents (40 people) smaller amounts, between 1,000 and 10,000 birr ($50 to $500).
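The income figures here imply an exchange rate of roughly 20 birr per US dollar (30,000 birr ≈ $1,500). A tiny conversion helper makes the tiers comparable; the rate is inferred from the figures in the text and the code is purely illustrative:

```python
# Convert the evaluation's reported income tiers from Ethiopian birr to
# US dollars, at the ~20 birr/USD rate implied by the figures above.
BIRR_PER_USD = 20

def birr_to_usd(birr: float) -> float:
    """Convert a birr amount to US dollars at the implied rate."""
    return birr / BIRR_PER_USD

income_tiers_birr = {
    "dairy & ox fattening (top earners)": 30_000,
    "smaller-amount tier, floor": 1_000,
    "smaller-amount tier, ceiling": 10_000,
}
income_tiers_usd = {k: birr_to_usd(v) for k, v in income_tiers_birr.items()}
```

At that rate the tiers come out to $1,500, $50 and $500, matching the dollar figures quoted in the text.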
And even while 87% of total loans were for ox fattening, dairy cows (and beehives), which brought in far more income, and only 11% of loans were for sheep/goats (shoats) and 2% for poultry, the self-sustainability feedback was clear. In the chart below, poultry and shoats (and to a lesser degree, ox fattening) were what men and women felt they could self-sustain. In descending order, the vast majority of participants prioritized these activities:
To learn more about how we discussed how these Ethiopian participants can self-monitor, see this blog.
So how can such a listening and learning approach feed program success and sustainability? We need to sit with communities to discuss the project’s objectives during design plus manage our/ our donors’ impact expectations:
1) If raising income in the short term is the goal, the project could have offered only dairy and ox fattening to the communities, as those incomes gained the most. Note that fewer took this risk, as the credit for these assets was costly.
2) If they took a longer view, investing in what communities felt they could self-sustain, then poultry and sheep/goats were the activities to promote. This is because more people (especially women, who preferred poultry 15:1 and shoats 2:1 compared to men) could afford these smaller amounts of credit, as well as the feed to sustain the animals.
3) In order to learn about true impacts, we must return after project close to confirm the extent to which income increases continued, as well as the degree to which communities were truly able to self-sustain the activities the project enabled them to launch. How do our goals fit with the communities'?
What is important is seeing community actors, our participants, as the experts. These are their lives and livelihoods, and not one of us in international development lives there; they do…
What are your questions and thoughts? Have you seen such tradeoffs? We long to know…
[*NB: There were other inputs (water, health, natural resource conservation) which are separate from this discussion.]
Data for whose good?
Many of us work in international development because we are driven to serve, to make corners of the world better by improving the lives of those who live there. Many of us are driven by compassion to help directly through working 'in the field' with 'beneficiary'/participants; some of us manifest our desire to help by staying in our home countries, advocating to the powers that be for more funding, while others create new technologies to help improve lives all over the world. Some of us want to use Western funds and report back to our taxpayers that funds were well spent; others want to create future markets via increasingly globally-thriving economies. We use data all the time to prove our case.
USAID has spent millions on USAID Forward and on monitoring and evaluation systems. Organizations such as 3ie rigorously document the impact of projects while they are being implemented. Japan's JICA and the OECD are two of the rarest kinds of organizations, returning post-project to look at continued impact (as USAID did 30 years ago and then stopped). Sadly, the World Bank and USAID have each done only one post-project evaluation in the last 20 years that drew on communities' opinions. While a handful of non-profits have used private funds to do recent ex-post evaluations, the esteemed American Evaluation Association has (shockingly) not one resource.
Do we not care about sustained impact? Or are we just not looking in the right places with the right perspective? Linda Raftree has a blog on Big Data and Resilience. She says, “instead of large organizations thinking about how they can use data from afar to ‘rescue’ or ‘help’ the poor, organizations should be working together with communities in crisis (or supporting local or nationally based intermediaries to facilitate this process) so that communities can discuss and pull meaning from the data, contextualize it and use it to help themselves….” Respect for communities’ self-determination seems to be a key missing ingredient.
An article from the Center for Global Development cites the empowerment that data gives citizens, and the knowledge it gives our own international donors by which to steer: "Citizens. When statistical information is released to the public through a vigorous open government mechanism it can help citizens directly. Citizens need data both to hold their government accountable and to improve their private decision-making." (On the CGD website, see discussions of the value of public disclosure for climate policy here and for AIDS foreign assistance here.)
In my experience, most communities have information but are not perceived to have data unless they collect it using 'Western' methods. Having data to support and back information, opinions and demands can serve communities in negotiations with entities that wield more power. (See the book “Who Counts, the power of participatory statistics” on how to work with communities to create ‘data’ from participatory approaches). Even if we codify qualitative (interview) data and quantify it via surveys, this is not enough if there is no political will to make change to respond to the data and to demands being made based on the data. This is due in some part to a lack of foundational respect that communities’ views count.
Occasionally, excellent thinkers at the World Bank 'get' this: "In 2000, a study by the World Bank, conducted in fifty developing countries, stated that 'there are 2.8 billion poverty experts: the poor themselves. Yet the development discourse about poverty has been dominated by the perspectives and expertise of those who are not poor… The bottom poor, in all their diversity, are excluded, impotent, ignored and neglected; the bottom poor are a blind spot in development.'" (This came from a session description for the 2014 World Bank Spring Meetings Civil Society Forum, where I presented for Valuing Voices this spring; see photo below.)
And as Anju Sharma’s great blog on community empowerment says, “Why do we continue to talk merely of community “participation” in development? Why not community-driven development, or community-driven adaptation, where communities don’t just participate in activities meant to benefit them, but actually lead them?” Valuing Voices would like to add that we need participatory self-sustainability feedback data from communities documenting Global Aid Effectiveness, ‘walking’ Busan’s talk. Rather than our evaluating their effectiveness in carrying out our development objectives, goals, activities and proposed outcomes, let’s shift to manifest theirs!
Our centuries-old love affair with data is hard to break. Fine: data has to inform our actions, so let's make it as grassroots and community-driven as possible, based on respect for the knowledge of those most affected by projects, where the rubber hits the road. That may make massive development projects targeted at hundreds of thousands uniformly… messy… but at least projects may be more efficacious, sustainable and theirs. What do you think?
Sustainability SPRINGing out all over the place… and Disrupting
So what is sustainability? You may think it's the climate's long-term wellbeing and how to gauge changes to it. You may think it's linked to sustainable development regarding consumption, trade, education and environment, and how to assess it. You may think it's data-driven organizational success as Chelsea Clinton describes it, or Michael Porter's business view of Creating Shared Value on social and environmental concerns, or is it about people, as the hallowed University of Cambridge trains experts in its Institute for Sustainability Leadership (I'm delighted to have been a Fellow there in the '90s)? Finally, is it WCED's lovely definition, "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs"? Yes, when applied to communities' abilities to self-sustainably and resiliently chart their own development!
So how are we to get there? A Sustainable Brands Conference this year gets there through being clear about participants' own consumption, and USAID is no different. USAID Forward is putting its money where its keyboards are (so to speak), toward more sustainable local delivery, by directing a huge 30 percent of its funding to "local solutions" through procurement in coming years. This framework is to "support the 'new model of development' that USAID Administrator Rajiv Shah has touted, which entails a shift away from hiring U.S.-based development contractors and NGOs to implement projects, and toward channeling money through host-country governments and local organizations to build their capacity to do the work themselves and sustain programs after funding dries up." I, and others, celebrate the investments this will enable local firms to make in their own capacity, in leading development!
Of course all sorts of safeguards are needed, and ideally US firms would be providing capacity development, but shouldn’t we have been doing this all along, to move toward transferring ‘development’ to the countries themselves?
Source: GAO report
Also vital to sustainable development is learning from what works and doing more of it. USAID is finally planning to incorporate more ex-post evaluations into its toolkit for evaluating sustainability! Two weeks ago, PPL/LER shared their great new policy document, "Local systems: A framework for supporting sustained development," on how they can better incorporate local systems thinking into policy as well as DIME (Design, Implementation, Monitoring and Evaluation). Industry insider DevEx tells us that "even though the agency plans to use ex-post evaluations to measure whether development projects are successful or not, these evaluations will not focus on 'specific contractor performance' but instead consider the 'types of approaches that contribute to more sustainable outcomes…to inform USAID's country strategies and project design.'" While PVO implementing partners will not [yet?] be required to do ex-post evaluations as part of their projects, having this door cracked open is exciting. Notably, it is a 'back to the future' moment: 30 years ago USAID led the development world in post-project evaluations, yet in the last 24 years it has done none (or at least published none) except for the Food for Peace retrospective below, as I found in our Valuing Voices research of USAID's Development Experience Clearinghouse.
There is far more to watch. In our view, the whole development industry needs to grapple with the perceived barrier that funding ends with projects (note: a trust could be set up to document post-project impact 1, 3 and 5 years later and to retain results, much as 3ie does now for impact evaluations), and with the view that one cannot discern attributable project impact after a time-lag of several years. Yet even the Government Accountability Office is asking for longitudinal data; it reviewed USAID's document and wants to see clear measures of success at Mission and HQ levels, via different indicators of local institutional sustainability and impact four years on.
Why should we care? As Chelsea Clinton of the Clinton Global Initiative puts it, "you can't measure everything, but you can measure almost everything through quantitative or qualitative means, so that we know what we're disproportionately good at. And, candidly, what we're not so good at, so we can stop doing that."
Yes! Development should be about doing more of what works, sustainably, and less of what doesn't. USAID's Local Systems Framework found the best could also be free, as this one Food For Peace evaluation shows:
Returning to Chelsea Clinton, I’ll conclude by stating something obvious. She "wants to see some evidence of why we're making decisions, as opposed to the anecdotes” which is what getting post-project evaluation data from our true clients, our participants, is all about. Clinton says this will transform CGI into a smart, accountable, and sustainable support system for philanthropic disrupters around the world. USAID is radical for me, today, with their Local Systems investments… my neighborhood disrupter.
Are you such a disrupter too? Who else is one whom we can celebrate together?