Pick a term, any term…but stick to it!
Valuing Voices is interested in identifying learning leaders in international development that are using participatory post-project evaluation methods to learn about the sustainability of their development projects. These organizations not only believe they need to see the sustained impact of their projects by learning from what has worked and what hasn’t in the past, but also that participants are the most knowledgeable about such impacts. So how do they define sustainability? This is determined by asking questions such as the following: were project goals self-sustained by the ‘beneficiary’ communities that implemented these projects? By our VV definition, self-sustainability can only be determined by going back to the project site, 2-5 years after project closeout, to speak directly with the community about the long-term intended/unintended impacts.
Naturally, we turned to the World Bank (WB) – the world’s most prominent development institution – to see if this powerhouse of development, both in terms of annual monetary investment and global breadth of influence, has effectively involved local communities in the evaluation of sustainable (or unsustainable) outcomes. Specifically, my research focused on identifying the degree to which participatory post-project evaluation was happening at the WB.
A fantastic blog* regarding participatory evaluation methods at the WB emphasizes the WB’s stated desire to improve development effectiveness by “ensuring all views are considered in participatory evaluation,” particularly through its community driven development projects. As Heider points out,
“The World Bank Group wants to improve its development effectiveness by, among others things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.”
Wow! Though these methods are clearly well intentioned, there seems to be a flaw in the terminology. The IEG says, “[Community driven development projects] are based on beneficiary participation from design through implementation, which make them a good example of citizen-centered assessment techniques in evaluation.” However, this fails to recognize the importance of planning for community-driven post-project sustainability evaluations, conducted by the organization in order to collect valuable data on the long-term intended and unintended impacts of development work.
With the intention of identifying evidence of the above-mentioned mode of evaluation at the WB, my research process involved analyzing the resources provided by the WB’s Independent Evaluation Group (IEG) database of evaluations. As the accountability branch of the World Bank Group, the IEG works to gather institution-wide knowledge about the outcomes of the WB’s finished projects. Its mission statement is as follows:
“The goals of evaluation are to learn from experience, to provide an objective basis for assessing the results of the Bank Group’s work, and to provide accountability in the achievement of its objectives. It also improves Bank Group work by identifying and disseminating the lessons learned from experience and by framing recommendations drawn from evaluation findings.”
Another important function of the IEG database is to provide information for the public and external development organizations to access and learn from; this wealth of data and information about the World Bank’s findings is freely accessible online.
When searching for evidence of post-project learning, I was surprised to find that the taxonomy varied greatly; e.g. the projects I was looking for could be found under ‘post-project’, ‘post project’, ‘ex-post’ or ‘ex post’. It was also unclear which specific category these fell under, or what exactly is required in an IEG ex post impact evaluation. According to the IEG, there are 13 major evaluation categories, which are described in more detail here. I was expecting to find an explicit category dedicated to post-project sustainability, but instead this type of evaluation was included under Project Level Evaluations (which include PPARs and ICRs [Implementation Completion Reports]) and Impact Evaluations.
This made it difficult to determine a clear procedural standard for documents reporting sustainability outcomes and other important data for the entire WB.
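To illustrate the taxonomy problem, here is a minimal sketch (hypothetical; the IEG database has its own search engine and indexing) of how normalizing the variant spellings would let a single query cover all four terms instead of fragmenting results by hyphenation:

```python
import re

def normalize_terms(text):
    """Collapse the variant spellings found in the IEG database
    ('post-project'/'post project', 'ex-post'/'ex post') into one
    canonical form each, so one query matches all of them."""
    text = re.sub(r"\bpost[- ]project\b", "post-project", text, flags=re.IGNORECASE)
    text = re.sub(r"\bex[- ]post\b", "ex-post", text, flags=re.IGNORECASE)
    return text

# Four differently tagged documents all become findable under two canonical terms:
titles = ["post project review", "post-project mission", "ex post impact", "ex-post PPAR"]
print([normalize_terms(t) for t in titles])
```

With indexing like this, the 73-result and 953-result searches described below would collapse into one consolidated result set.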
I began my research process by simply entering a few key terms into the database. In the first step of my research, which will be elaborated upon in Part I of this blog series, I attempted to identify evidence of ex post sustainability evaluation at the IEG by searching for the term “post-project”, which yielded 73 results with a hyphen and 953 results without one. The inconsistency in the number of results depending on the hyphen was interesting, but in order to narrow the search parameters to a manageable content analysis, I chose to break down the 73 results by document type to determine whether there were any examples of primary fieldwork research. In these documents, the term “post-project” was not used in the title or referenced in the executive summary as the specific aim of the evaluation, but rather used loosely to define the ex post time frame. Figure 1 illustrates the breakdown of document types in the sample of 73 documents returned for the key term “post-project”:
Figure 1: Breakdown by Document Type out of Total 73 Results when searching post-project
As the chart suggests, many of the documents (56% – which accounts for all of the pie chart slices except Project Level Evaluations) were purely desk studies – evaluating WB programs and the overall effectiveness of organization policies. These desk studies draw data from existing reports, such as those published at project closeout, without supplementing past data with new fieldwork research.
Out of the 9 categories, the only document type that showed evidence of any follow-up evaluations was the Project Performance Assessment Report (PPAR), defined by the IEG as a document that is…
“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department]. To prepare PPARs, OED staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries. The PPAR thereby seeks to validate and augment the information provided in the ICR, as well as examine issues of special interest to broader OED studies.”
Bingo. This is what we’re looking for. The PPARs accounted for 32 out of the 73 results, or a total of 44%. As I examined the methodology used to conduct PPARs, I found that in the 32 cases that came up when I searched for “post-project”, after Bank funds were “fully dispersed to a project” and resources were withdrawn, the IEG sent a post-project mission back into the field to collaborate on new M&E with local stakeholders and beneficiaries. The IEG gathered new data through the use of field surveys or interviews to determine project effectiveness.
Based on these findings, I conducted a supplementary search of the term “ex post”, which yielded 672 results. From this search, 11 documents were categorized by the IEG as “Impact Evaluations”, of which 3 showed evidence of talking with participants to evaluate for sustainability outcomes. In follow-up blogs in this series I will elaborate upon the significance of these additional findings and go into greater detail regarding the quality of the data in these 32 PPARs, but here are a few key takeaways from this preliminary research:
Taxonomy and definition of ex-post are missing. After committing approximately 15-20 hours of research time to this content analysis, it is clear that navigating the IEG database for methodology standards to evaluate sustainability is a more complicated process than it should be for such a prominent learning institution. The vague taxonomy used to categorize post-project/ex-post evaluation by the WB limits the functionality of this resource as a public archive dedicated to informing the sustainability of development projects the World Bank has funded.
Despite affirmative evidence of participatory community involvement in the post-project evaluation of WB projects, not all PPARs in the IEG database demonstrated a uniform level of ‘beneficiary’ participation. In most cases, it was unclear how many community members impacted by the project were really involved in the ex-post process, which made it difficult to determine even a general range of the number of participants involved in post-project activity at the WB.
Although PPARs report findings based, in part, on post-project missions (as indicated in the preface of the reports), the specific methods/structure of the processes were not described, and oftentimes the participants were not explicitly referenced in the reports. (More detailed analysis on this topic to come in Blog Series Part 2!)
These surprisingly inconsistent approaches make it difficult to compare results across this evaluation type, as there is no precise standard to compare against.
Finally, the World Bank, which has funded 12,000 projects since its inception, should have far more than 73 post-project/ex-post evaluations…but maybe I’m just quibbling with terms.
Stay tuned for PART II of this series, coming soon!
Making money through microenterprise: is this a way to sustainable livelihoods? PACT’s Nepalese Lessons
Many Americans are steeped in the belief that we must ‘pull ourselves up by our bootstraps’, that hard work and especially faith in small businesses is the way to success. This is one of the many reasons why microfinance so appeals to donors as an investment. Does it work?
The US NGO umbrella, Interaction, posted some “Aid Works” global results, including “the percentage of USAID-funded microfinance institutions that achieved financial sustainability jumped from 38% in 2000 to 76% in 2012.” Yet there have been numerous detractors of the model and of the unsustainability of control over resources and empowerment.
What does one ex-post evaluation that we have on hand tell us? PACT’s USAID-funded WORTH program in Nepal focused on women ending poverty through business, banking and literacy/bookkeeping. The project, implemented between 1999 and 2001, worked with 240 local NGOs to reach 125,000 women in 6,000 economic groups across Nepal’s southern Terai (in 2001 a Maoist insurgency left the groups on their own). By then, 1,500 of these groups, led by the women themselves (35,000-strong), had received training to become informal-sector Village Banks. Working with local NGOs enabled PACT to reach 100,000 women in a few months thanks to the NGOs’ presence and connections in the communities. The collaboration worked well due to a shared belief by PACT and the NGOs that dependency is not empowering. As the report says, “WORTH groups and banks were explicitly envisaged as more than just microfinance providers; they were seen as organizations that would build up women as agents of change and development in their communities”.
In 2006, PACT and the Nepalese Valley Research Group went back to assess the sustainability of the banks, the extent of income retained by the women, and any effects on community development and broader issues such as domestic abuse. They visited 272 Banks from a random sample of 450 across seven of the 21 WORTH districts. Remarkably, they found even more groups functioning: 288 (16 more) were thriving, and, wow, WORTH women had spawned another 400 groups on their own. Participant interviews were conducted with members and management, with women who had left their Banks, and with members of groups that had dissolved; they also interviewed a ‘control group’ of poor, non-WORTH women in Village Bank communities.
Was it a universal success? Almost. See the bar chart below showing what impacts the management committees felt the Village Banks had had on members: most reported members were better off, some the same, and some far better off. This held true for both the original Village Bank members and the new bank members.
The SEEP Network reviewed WORTH’s ex-post evaluation and highlighted five key findings:
- Wealth creation: A Village Bank today holds average total assets of over Rs. 211,000, or $3,100, more than three times its holdings in 2001. Each woman member of WORTH now has an average equity stake of $116 in her Village Bank.
- Sustainability: Approximately two-thirds (64 percent) of the original 1,536 Village Banks are still active eight and a half years after the program began and five to six years after all WORTH-related support ended. That means there are nearly 1,000 surviving groups with approximately 25,000 members.
- Replication: A quarter of the existing WORTH groups have helped start an estimated 425 new groups involving another 11,000 women, with neither external assistance nor prompting from WORTH itself. If all these groups are currently operating, then more Village Bankers are conducting business today in Nepal than when formal WORTH programming ended in 2001. The report also said 63% of Village Bank members derived their income from agriculture and the sale of food, versus 17% in commerce/retail trade and the rest in miscellaneous trades. Over 40% of participants said they borrowed to pay for education and health costs, and another 20% to pay off other loans and for festivals (e.g. births, deaths).
- Literacy: 97 percent of respondents reported that literacy is “very important” to their lives; 83 percent reported that because of WORTH they are able to send more of their children to school.
- Domestic disputes and violence: Two-thirds of groups reported that members bring their personal or family problems to the group for advice or help. Of these, three-quarters reported helping members deal with issues of domestic disputes and related problems. Forty-three percent of women said that their degree of freedom from domestic violence has changed because of their membership in a WORTH group. One in 10 reported that WORTH has actually helped “change her life” because of its impact on domestic violence.
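The survival and replication figures above hang together arithmetically. Here is a quick sketch using only numbers quoted in this post (the 2001 membership figure comes from the program description earlier; treat it as approximate):

```python
original_banks = 1536       # Village Banks when WORTH support ended
survival_rate = 0.64        # share still active per the 2006 ex-post study
surviving = round(original_banks * survival_rate)   # "nearly 1,000 surviving groups"

surviving_members = 25000   # approximate members in the surviving groups
new_members = 11000         # women in the ~425 self-replicated groups
members_2001 = 35000        # approximate Village Bank members in 2001

# If all the self-started groups are operating, more women are banking
# through WORTH-style groups now than when formal programming ended:
print(surviving, surviving_members + new_members > members_2001)
```

The check confirms both claims: 64% of 1,536 is indeed just under 1,000 groups, and the combined membership exceeds the 2001 total.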
The report outlines other impacts, including self-help actions: two-thirds of groups were engaged in community action, and three-quarters said that the group had done something to help others in the community. Speaking of community, it is notable that the self-selected women were primarily from wealthier groups (60%), with 15% from the middle class and only 20% from the most disadvantaged castes. Frankly, this is not so surprising, as those most willing to take on risk are rarely the poorest until later; 67% of the very poor later wanted to join such a bank (once the risk was shown not to be too high relative to income).
The study’s author asks: “Yet for all this documented success, WORTH and other savings-led microfinance programs remain among the best kept secret in the world of international development and poverty alleviation. Although together such programs reach some two million poor people, they go almost unnoticed by the $20 billion credit-led microfinance industry… The empowered women in this study—like WORTH women elsewhere in Asia and Africa— have proved themselves equipped to lead a new generation of entrepreneurs who can take WORTH [onward] through a model of social franchising now being pilot-tested [which is] as creative and potentially groundbreaking as is WORTH…WORTH has the potential to become an “international movement that supports women’s efforts to lift themselves, their families, and their communities out of poverty””.
So why aren’t we learning from such projects and scaling them up everywhere? PACT is. They have reached 365,000 women in 14 countries, including Myanmar, Cambodia, Colombia, Swaziland, DRC and Ethiopia, with Nigeria and Malawi starting this year. Coca-Cola awarded $400,000 to PACT in 2013 to replicate WORTH in Vietnam with 2,400 women. Who else is replicating this model? It is not clear from the many excellent microenterprise sites I visited, though one tells me that the Mastercard Foundation and Aga Khan are looking into wider replication as well. Let’s track their results and ask participants!
Bateman, M. (2011, September 20). Microcredit doesn’t work – it’s now official. Retrieved from https://opinion.bdnews24.com/2011/09/20/microcredit-doesn%E2%80%99t-work-%E2%80%93-it%E2%80%99s-now-official/
Vaessan, J., Rivas, A., & Duvendack, M. (2014, November). The Effects of Microcredit on Women’s Control Over Household Spending in Developing Countries: A Systematic Review and Meta-analysis. Retrieved from https://www.findevgateway.org/paper/2014/11/effects-microcredit-womens-control-over-household-spending-developing-countries
Mayoux, L. (2008, June). Women Ending Poverty: The WORTH Program in Nepal – Empowerment through Literacy, Banking and Business 1999-2007. Retrieved from https://www.findevgateway.org/case-study/2008/06/women-ending-poverty-worth-program-nepal-empowerment-through-literacy-banking
PACT. (n.d.). WORTH. Retrieved 2015, from https://web.archive.org/web/20141106013639/http://www.pactworld.org/worth
PACT. (2013, August 13). The Coca-Cola Foundation awards $400,000 grant to Pact. Retrieved from https://www.pactworld.org/article/coca-cola-foundation-awards-400000-grant-pact
Times are a Changin' in those who Fund Listening… then Doing
So you've been helped by an organization. You think it has a good mission and have actively participated in its activities yet one day (somewhat arbitrarily in your view), it takes you off its list, shuts its doors and moves to another state. What would you feel? Angry? Perplexed? Disappointed?
So a year or two goes by and you get a knock on your door from a similar organization, wanting you to participate with them, saying that their mission is great, that you will benefit a lot. While you may really want their help, you are understandably wary and wonder if the same will happen. Heck, the last ones didn't tell you why they left, even with unfinished work, nor did they come back to see how you were faring…
Maybe that won’t happen anymore. Until recently, many of our international development participants (some call them beneficiaries) could feel the same way. Our projects came (and went) with set goals, on fixed funding cycles, with little ongoing input from participants to influence how projects accomplish good things, much less to learn what happened after projects ended. Rarely have we put in place participant monitoring systems with feedback loops, much less listened to participants on how to design for self-sustainability…
But times are a changin'; there is much to celebrate among funders and implementers, programming and policy makers.
1) There is a happy blizzard of interest in listening to our participants: from Feedback Labs, "committed to making governments, NGOs and donors more responsive to the needs of their constituents", and the Rita Allen Foundation's funding of the Center for Effective Philanthropy's "Hearing from those we seek to Help", to the Rockefeller and Hewlett Foundations now beginning a joint Fund for Shared Insight, which "provides grants to nonprofit organizations to encourage and incorporate feedback from the people we seek to help; understand the connection between feedback and better results…".
Independent voices abound, advocating for participants' voices to be heard in design, implementation, monitoring and evaluation: "While we may have a glut of information and even the best of intentions, our initiatives will continue to fall short until we recognize that our ‘beneficiaries’ are really the people who have the solutions that both they and we need." Others call for even more than recognition – participation of the funders in discussions with participants: a recent study by the Center for Effective Philanthropy heard from recipient NGOs that the "funders who best understand our beneficiaries’ needs are the ones who visit us during our programs, meet [those] served by our organization, spend time talking to them and being with them.”
2) Information and Communication Technologies for Development (ICT4D) have created options for listening to our project participants, learning from and with them through mobiles, tablets and other mechanisms (e.g. Catholic Relief Services' 6th annual ICT4D conference, with presentations from donors and government as well, and Ushahidi, which we've celebrated before). IATI, the International Aid Transparency Initiative, has spent 6 years fostering sustainable development and foreign aid transparency, now reaching 24 signatory countries and 290 organizations. A data revolution is taking shape to join donor data, national government statistical data and civil society socio-economic data. There is a brand new initiative at IDS named Doing Development Differently. Listening and learning indeed!
3) And even more importantly, an understanding is arising that development is not a one-size-fits-all endeavour. I blogged about Rwanda's success in nutritional impact from allowing communities to address their specific needs, and this week the New Republic published an excellent article by Michael Hobbes which says "The repeated “success, scale, fail” experience of the last 20 years of development practice suggests something super boring: Development projects thrive or tank according to the specific dynamics of the place in which they’re applied. It’s not that you test something in one place, then scale it up to 50. It’s that you test it in one place, then test it in another, then another." Hobbes goes on to add that what we need is a revision in our expectations of international aid. "The rise of formerly destitute countries into the sweaters-and-smartphones bracket is less a refutation of the impact of development aid than a reality-check of its scale. In 2013, development aid from all the rich countries combined was $134.8 billion, or about $112 per year for each of the world’s 1.2 billion people living on less than $1.25 per day. Did we really expect an extra hundred bucks a year to pull anyone, much less a billion of them, out of poverty?… Even the most wildly successful projects decrease maternal mortality by a few percent here, add an extra year or two of life expectancy there. This isn’t a criticism of the projects themselves. This is how social policy works, in baby steps and trial-and-error and tweaks, not in game changers."
4) What does change the game, in the view of Valuing Voices, is who we listen to and what we do, for how long. Often, project participants have been the implementers of our solutions rather than the drivers of their own ideas of development; much is lost in translation. As Linda Raftree reports from one Finnish Slush attendee: “When you think ‘since people are poor, they have nothing, they will really want this thing I’m going to give them,’ you will fail… People everywhere already have values, knowledge, relationships, things that they themselves value. This all impacts on what they want and what they are willing to receive. The biggest mistake is assuming that you know what is best, and thinking ‘these people would think like me if I were them.’ That is never the case.” Hallelujah.
Let's listen before we implement our best answers, adapt to specific communities, think of how to foster self-sustainability rather than just successful impact, ask what that is in their terms. Let’s return to listen to participants’ views on sustained impact, on unexpected results… let’s fund this and do development differently!
So how are you listening to participants today?
Data for whose good?
Many of us work in international development because we are driven to serve, to make corners of the world better by improving the lives of those who live there. Many of us are driven by compassion to help directly by working ‘in the field’ with ‘beneficiary’/participants; some of us manifest our desire to help by staying in our home countries, advocating to the powers that be for more funding, while others create new technologies to help improve lives all over the world. Some of us want to use Western funds and report back to our taxpayers that funds were well spent; others want to create future markets by growing globally thriving economies. We use data all the time to prove our case.
USAID has spent millions on USAID Forward and on monitoring and evaluation systems. Organizations such as 3ie rigorously document the projected impact of projects while they are being implemented. Japan’s JICA and the OECD are two of the rarest kinds of organizations – returning post-project to look at continued impact (as USAID did 30 years ago and then stopped). Sadly, the World Bank and USAID have each done only one post-project evaluation in the last 20 years that drew on communities’ opinions. While a handful of non-profits have used private funds to do recent ex-post evaluations, the esteemed American Evaluation Association has (shockingly) not one resource.
Do we not care about sustained impact? Or are we just not looking in the right places with the right perspective? Linda Raftree has a blog on Big Data and Resilience. She says, “instead of large organizations thinking about how they can use data from afar to ‘rescue’ or ‘help’ the poor, organizations should be working together with communities in crisis (or supporting local or nationally based intermediaries to facilitate this process) so that communities can discuss and pull meaning from the data, contextualize it and use it to help themselves….” Respect for communities’ self-determination seems to be a key missing ingredient.
An article from the Center for Global Development cites the empowerment that data gives citizens, and the knowledge by which our own international donors can steer: “When statistical information is released to the public through a vigorous open government mechanism it can help citizens directly. Citizens need data both to hold their government accountable and to improve their private decision-making.” (On the CGD website, see discussions of the value of public disclosure for climate policy here and for AIDS foreign assistance here.)
In my experience, most communities have information but are not perceived to have data unless they collect it using 'Western' methods. Having data to support and back information, opinions and demands can serve communities in negotiations with entities that wield more power. (See the book “Who Counts? The Power of Participatory Statistics” on how to work with communities to create ‘data’ from participatory approaches.) Even if we codify qualitative (interview) data and quantify it via surveys, this is not enough if there is no political will to respond to the data and to the demands being made based on it. This is due in some part to a lack of foundational respect that communities’ views count.
Occasionally, excellent thinkers at the World Bank 'get' this: "In 2000, a study by the World Bank, conducted in fifty developing countries, stated that “there are 2.8 billion poverty experts: the poor themselves. Yet the development discourse about poverty has been dominated by the perspectives and expertise of those who are not poor … The bottom poor, in all their diversity, are excluded, impotent, ignored and neglected; the bottom poor are a blind spot in development." (This came from a session description for the 2014 World Bank Spring Meetings Civil Society Forum, where I presented for Valuing Voices this spring; see photo below.)
And as Anju Sharma’s great blog on community empowerment says, “Why do we continue to talk merely of community “participation” in development? Why not community-driven development, or community-driven adaptation, where communities don’t just participate in activities meant to benefit them, but actually lead them?” Valuing Voices would like to add that we need participatory self-sustainability feedback data from communities documenting Global Aid Effectiveness, ‘walking’ Busan’s talk. Rather than our evaluating their effectiveness in carrying out our development objectives, goals, activities and proposed outcomes, let’s shift to manifest theirs!
Our centuries-old love affair with data is hard to break. Fine, data has to inform our actions, so let’s make it as grassroots and community-driven as possible, based on respect for the knowledge of those most affected by projects, where the rubber hits the road. That may make massive development projects targeted at hundreds of thousands uniformly… messy… but at least projects may be more efficacious, sustainable and theirs. What do you think?
Sustainability SPRINGing out all over the place… and Disrupting
So what is sustainability? You may think it's the climate's long-term wellbeing and how to gauge changes to it. You may think it's linked to sustainable development regarding consumption, trade, education and environment and how to assess it. You may think it's data-driven organizational success as Chelsea Clinton describes it, or Michael Porter's business view of Creating Shared Value on social and environmental concerns, or that it's about people, as the hallowed University of Cambridge trains experts in its Institute for Sustainability Leadership (I revel that I was a Fellow there in the '90s). Finally, is it WCED’s lovely definition, "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs"? Yes, when applied to communities' abilities to self-sustainably and resiliently chart their own development!
So how are we to get there? This year's Sustainable Brands Conference gets there by having brands be clear about their own consumption, and USAID is no different. USAID Forward is putting its money where its keyboards are (so to speak), toward more sustainable local delivery, by directing a huge 30 percent of its funding to “local solutions” through procurement in coming years. This framework is to “support the ‘new model of development’ that USAID Administrator Rajiv Shah has touted, which entails a shift away from hiring U.S.-based development contractors and NGOs to implement projects, and toward channeling money through host-country governments and local organizations to build their capacity to do the work themselves and sustain programs after funding dries up.” I, and others, celebrate the investments this will enable local firms to make in their own capacity, in leading development!
Of course all sorts of safeguards are needed, and ideally US firms would be providing capacity development, but shouldn’t we have been doing this all along, to move toward transferring ‘development’ to the countries themselves?
Source: GAO report
Also vital to sustainable development is learning from what works and doing more of it. USAID is finally planning to incorporate more ex-post evaluations into its toolkit for evaluating sustainability! Two weeks ago, PPL/LER shared their great new policy document, “Local systems: A framework for supporting sustained development,” on how they can better incorporate local-systems thinking into policy as well as DIME (Design, Implementation, Monitoring and Evaluation). Industry insider Devex tells us "even though the agency plans to use ex-post evaluations to measure whether development projects are successful or not, these evaluations will not focus on “specific contractor performance” but instead consider the “types of approaches that contribute to more sustainable outcomes…to inform USAID’s country strategies and project design." While PVO implementing partners will not [yet?] be required to do ex-post evaluations as part of their projects, having this door cracked open is exciting. Notably, it is a ‘back to the future’ moment: 30 years ago USAID led the development world in post-project evaluations, yet in the last 24 years it has done none (or at least not published any), except for the Food for Peace retrospective below, as I found in our Valuing Voices research of USAID's Development Experience Clearinghouse.
There is far more to watch. In our view, the whole development industry needs to grapple with the perceived barrier that funding ends with projects (note: a trust could be set up to document post-project impact 1, 3, and 5 years later, and the results retained, much as 3ie does now for impact evaluations), and with the view that one cannot discern attributable project impact after a time-lag of several years. Yet even the Government Accountability Office is asking for longitudinal data; it reviewed USAID's document and wants to see clear measures of success at Mission and HQ level, tracked by different indicators of local institutional sustainability and impact four years on.
Why should we care? As Chelsea Clinton of the Clinton Global Initiative puts it, "you can't measure everything, but you can measure almost everything through quantitative or qualitative means, so that we know what we're disproportionately good at. And, candidly, what we're not so good at, so we can stop doing that."
Yes! Development should be about doing more of what works, sustainably, and less of what doesn't. USAID's Local Systems Framework found that the best could also be free, as this Food For Peace evaluation shows:
Returning to Chelsea Clinton, I'll conclude by stating something obvious. She "wants to see some evidence of why we're making decisions, as opposed to the anecdotes," which is what getting post-project evaluation data from our true clients, our participants, is all about. Clinton says this will transform CGI into a smart, accountable, and sustainable support system for philanthropic disrupters around the world. USAID is radical for me today, with its Local Systems investments… my neighborhood disrupter.
Are you such a disrupter too? Who else is one whom we can celebrate together?
Pineapple, Apple: what differentiates Impact from Sustainability Evaluation?
There is great news. Impact evaluation is getting attention and being funded to do excellent research, led by organizations such as the International Initiative for Impact Evaluation (3ie) and backed by donors such as the World Bank, USAID, UKAid, and the Bill and Melinda Gates Foundation, in countries around the world. Better Evaluation tells us that "USAID, for example, uses the following definition: 'Impact evaluations measure the change in a development outcome that is attributable to a defined intervention; impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change.'"
William Savedoff of CGD reports in his Evaluation Gap newsletter that whole countries are setting up such evaluation institutes: "Germany's new independent evaluation institute for the country's development policies, based in Bonn, is a year old. DEval has a mandate that looks similar to Britain's Independent Commission for Aid Impact (discussed in a previous newsletter) because it will not only conduct its own evaluations but also help the Federal Parliament monitor the effectiveness of international assistance programs and policies. DEval's 2013-2015 work program is ambitious and wide, ranging from specific studies of health programs in Rwanda to overviews of microfinance and studies regarding mitigation of climate change and aid for trade." There is even a huge compendium of impact evaluation databases.
There is definitely a key place for impact evaluations in analyzing which activities are likely to have the most statistically significant impact (that is, change unlikely to be due to chance). One such study in Papua New Guinea found that including SMS (mobile text) in teaching made a significant difference in student test scores compared with the non-participating 'control group' that did not get the SMS texts. Another study, the Tuungane I evaluation by a group of Columbia University scholars, showed clearly that an International Rescue Committee program on community-level reconstruction did not change participant behaviors. The study was as well designed as an RCT can be, and its conclusions are very convincing. But as the authors note, we don't actually know why the intervention failed. To find that out, we need the kind of thick, descriptive qualitative data that only a mixed-methods study can provide.
Harvard economist Kremer says, "The vast majority of development projects are not subject to any evaluation of this type, but I'd argue the number should at least be greater than it is now." Impact evaluations use 'randomized control trials', comparing the group that got project assistance to a similar group that didn't, to gauge the change. A recent article on treating poverty as a science experiment says "nongovernmental organizations and governments have been slow to adopt the idea of testing programs to help the poor in this way. But proponents of randomization—“randomistas,” as they're sometimes called—argue that many programs meant to help the poor are being implemented without sufficient evidence that they're helping, or even not hurting." However we get there, we want to know the real (or at least likely) impact of our programming, helping us focus funds wisely.
Data gleaned from impact evaluations are excellent information to have before design and during implementation. While impact evaluations are a rigorous addition to the evaluation field, experts recommend they be planned from the beginning of implementation. And while they ask "Are impacts likely to be sustainable?", "To what extent did the impacts match the needs of the intended beneficiaries?" and, importantly, "Did participants/key informants believe the intervention had made a difference?", they focus only on possible sustainability, using indicators we expect to see at project end, rather than tangible proof of the sustainability of the activities and impacts that communities define themselves and that we actually return to measure 2-10 years later.
That is the role for something that has rarely been used in 30 years: post-project (ex-post) evaluations, looking at:
- The resilience of expected impacts of the project 2, 5, or 10 years after close-out
- Which activities the communities and NGOs were able to sustain themselves
- Positive and negative unintended impacts of the project, especially 2 years after close-out, while still in clear living memory
- Kinds of activities the community and NGOs felt were successes but which could not be maintained without further funding
- Lessons across projects on what was most resilient: what communities valued enough to continue themselves, or NGOs valued enough to find other funding for, as well as what was not resilient
Where is this systematically happening already? There are our Catalysts, ex-post evaluation organizations drawing on communities' wisdom. Here and there, there are other glimpses of Valuing Voices, mainly used to inform current programming, such as these two interesting approaches:
- Vijayendra Rao describes how a social observatory approach to monitoring and evaluation in India's self-help groups leads to "Learning by Doing," drawing on material from the book Localizing Development: Does Participation Work? The examples show how groups are creating faster feedback loops with more useful information by incorporating approaches commonly used in impact evaluations. Rao writes: "The aim is to balance long-term learning with quick turnaround studies that can inform everyday decision-making."
- Ned Breslin, CEO of Water For People, talks about "Rethinking Social Entrepreneurism: Moving from Bland Rhetoric to Impact (Assessment)". His new water and sanitation program, Everyone Forever, does not focus on inputs and outputs, such as water provided or girls returning to school. Instead it centers on attaining the ideal vision of what a community would look like with improved water and sanitation, and working to achieve that goal. Rather than working on fundraising only, Breslin wants to redefine the meaning of success as a world in which everyone has access to clean water.
We need a combination. We need to know how good our programming is now, through rigorous randomized control trials, and we need to ask communities and NGOs how sustainable the impacts are. Remember: 99% of all development projects, worth hundreds of millions of dollars a year, are not currently evaluated for long-term sustainability by their ultimate consumers, the communities they were designed to help.
We need an Institute of Sustainable Evaluation and a Ministry of Sustainable Development in every emerging nation, funded by donors who support national learning to shape international assistance. We need a global sustainability database that all future project planning must consult. We need to care enough about the well-being of our true clients to listen, learn, and act.