Who is responsible for sustaining development?

Whose responsibility is it to sustain project activities?

Billions of dollars are pumped into development activities in developing countries all over the world. Communities that get involved in these projects have a clear objective: to have their lives improved in the sectors the projects target. Whether this is also the main objective of the development partners is less clear. What is clear is that development partners focus more on numbers than on getting people to participate.

 

We note that the majority of these projects are designed to last 2-5 years. Delays occasioned by poor planning or other unforeseen factors eat into implementation time, to the extent that in some projects it takes 1-2 years just to get a program running, reducing the planned implementation time. Baseline, midline and endline studies are conducted to inform changes that may have occurred within the program's life, and in most cases they happen shortly after the program has started or just before it ends. In fact, some baselines are conducted after programs have started.

Considering the reduced implementation time, and the fact that concrete behavior-change results take much longer to achieve, questions emerge as to whether the changes reported during implementation are solid enough to be sustained. There is also a difference between measuring what can be referred to as artificial changes (activities that community members adopt as a short-term trial in their excitement, but don't find useful afterward) and long-lasting changes that community members adopt because they are a useful part and parcel of their lives.

 

Almost all projects have logical frameworks (logframes) that show how project activities will be implemented, and to some extent there are also exit strategies for closing out the project. Over the long term, this can be an illusion. In most cases donors and implementers assume that communities will adopt the activities being implemented within a specified period of time, and so projects close down at the end of that period assuming things will continue, but with no proof. Valuing Voices has done projected sustainability work in Ethiopia which points to possible differences between what donors expect to be sustained and what communities are able to sustain.

 


 

The big questions remain: whose responsibility is it to ensure that whatever has been adopted is continued? Whose responsibility is it to sustain project activities after project implementation? It is silently assumed that communities can take up this responsibility, and a key question is what guarantees there are that this is possible and is happening. Project sustainability should not be seen as the community's responsibility alone, but rather a responsibility shared by all those involved in program activities. Sustainability studies should be planned for and executed in the same breath as baselines, midlines, endlines and, in rare cases, real-time impact assessments. We must do sustainability studies, as they provide an additional, realistic opportunity to learn about actual community development after project implementation. Communities should not be left alone with it.

 

Altruistic Accountability… for Sustainability


Many of us in international development feel a sense of responsibility for others to be well, for our work to improve their lives, and for that work to be done in good stewardship of aid resources, optimizing their impact. As Matthieu Ricard writes, "Altruism is a benevolent state of mind. To be altruistic is to be concerned about the fate of all those around us and to wish them well. This should be done together with the determination to act for their benefit. Valuing others is the main state of mind that leads to altruism." We also feel a responsibility to our international aid donors and taxpayers. We who implement, monitor and evaluate projects work to ensure that the altruism of aid is responsible to both donors and recipients.

Altruism appears most vividly when implementers issue appeals after disasters, with millions donated as a result, but development workers are also unsung heroes. Organizations such as Charity Navigator, ONE and the Center for Global Development report on how well US organizations spend funds and track donor-country policy accountability. Thoughtful donor studies, such as the French Development Agency's OECD study, report on the power of the AidWatch and Reality of Aid initiatives in Europe for their taxpayers.

But who is pushing for our donors' accountability to the recipient countries' participants themselves? While USAID funds many program evaluations, some of which "identify promising, effective…strategies and to conduct needs assessments and other research to guide program decisions", they come at project end at the latest, rather than looking at the sustainability of outcomes and impacts, and they focus on Congressional and domestic listeners. This is no small funding and no small audience. The US Department of State/USAID's FY13 Summary report states that in fiscal year 2013, USAID had $23.8 billion to disburse, over $12 billion of it for programming. While total beneficiary (participant) numbers were not provided, emergency food assistance alone used $981 million for nearly 21.6 million people in 25 countries.

So who is a watchdog for what results? OXFAM may excellently highlight opportunities for better programming. 3ie does many studies looking at projected impact and does systematic reviews (but only three were post-project). Challenges such as Making All Voices Count may fund channels for country-nationals to hold their own governments responsible, but can in-country project participants ever demand sustainable results from anyone but their own governments? Herein lies the crux of the issue. Unless governments demand it (unlikely in 'free' aid), only pressure from donor-country nationals (you? us?) can push for changes.


At the core of the Valuing Voices mission is advocacy for altruistic accountability for the sustainability of projects, with country ownership at all levels. For us, this involves valuing, and also giving voice to, those supported by, and tasked with doing, 'sustainable projects'. Unless we know how sustainable our development projects have been, we have only temporarily helped those in greatest need. This means looking beyond whether funding continued, to whether the benefits of an activity, or even the existence of entire local NGOs tasked with sustaining it, actually continued after funding was withdrawn. Unless we strive to learn, in the views of the participants themselves, what has continued to work best or what failed to be continued after projects left, we can let down the very people who have entrusted us with hopes of a self-sustainable future of well-being. Unless we listen to project staff and local partners to hear what program staff felt they did right or wrong, and what national partners felt they were supported to keep doing right, we minimize the success of future projects. While a growing number of organizations such as the Hewlett Foundation fund work to "increase the responsiveness of governments to their citizens' needs. We do this by working to make governments more transparent and accountable," the long-term effectiveness of our donor development assistance is not yet visible.

The OECD guidelines on corporate accountability and transparency are illuminating. Adapting them from State-Corporate to Non-profit-State is interesting. For how well have we considered who 'owns' these development projects in practical terms, from inception onward? Our donors? Implementing agencies? Local partners and communities?

OECD Guidelines on Corporate Governance of State-Owned Enterprises

1: The State Acting as an Owner

2: Equitable Treatment and Relations with Shareholders

3: Ensuring an Effective Legal and Regulatory Framework for State-Owned Enterprises

4: Transparency and Disclosure

How well do we design projects along these lines to do this successfully? Not terrifically:

  • Too often ‘stakeholders’ are not consulted at the very inception of the proposal, only later at design or implementation.
  • Too often our work is aimed at making only our ‘client’ – our donors – happy with our results, rather than the country nationals who are tasked with self-sustaining them.
  • Too often handover is done at the 11th hour, rather than transferring ownership throughout implementation and building local capacity for those taking over to be the projects’ true owners.

But change is coming, through shifting societal trends. On the data-access front, USAID (and, differently, other European donors) has promised to modernize diplomacy and development by 2017 by “increas[ing] the number and effectiveness of communication and collaboration tools that leverage interactive digital platforms to improve direct engagement with both domestic and foreign publics. This will include increasing the number of publicly available data sets and ensuring that USAID-funded evaluations are published online, expanding publicly available foreign assistance data, increasing the number of repeat users of International Information.” Now to generate and add self-sustainability data to inform future projects!

Second, on the human-nature front, our basic disposition, according to Ricard, lends itself to altruism. “Let's assume that the majority of us are basically good people who are willing to build a better world. In that case, we can do so together thanks to altruism. If we have more consideration for others, we will promote a more caring economy, and we will promote harmony in society and remedy inequalities.” Let’s get going…

IEG Blog Series Part II: Theory vs. Practice at the World Bank

 


 

IEG logo

 

In Part I of this blog series, I described my research process for identifying the level to which the World Bank (WB) is conducting participatory post-project sustainability evaluations for its many international development projects. Through extensive research and analysis of the WB’s IEG database, Valuing Voices concluded that there is a very loosely defined taxonomy for ex-post project evaluation at the WB, making it difficult to identify a consistent standard of evaluation methodology for sustainability impact assessments.

Particularly, we were concerned with identifying examples of direct beneficiary involvement in evaluating long-term sustainability outcomes, for instance by surveying/interviewing participants to determine which project objectives were self-sustained…and which were not. Unfortunately, it is quite rare for development organizations to conduct ex-post evaluations that involve all levels of project participants in long-term information feedback loops. However, there was one document type in the IEG database that gave us at Valuing Voices some room for optimism: Project Performance Assessment Reports (PPARs). PPARs are defined by the IEG as documents that are,

“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department, synonymous with IEG]. To prepare PPARs, staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries” [1].

The key takeaway from this definition is that these reports supplement desk studies (ICRs) with new fieldwork data provided, in part, by the participants themselves. The IEG database lists hundreds of PPAR documents, but I focused on only the 33 documents that came up when I queried “post-project”.

Here are a few commonalities to note about the 33 PPARs I studied:

  • They are all recent documents – the oldest was published in 2004, and the most recent in 2014.
  • The original projects assessed in the PPARs were finalized anywhere from 2 to 10+ years before the PPAR was written, making them true ex-posts.
  • They all claimed to involve mission site visits and communication with key project stakeholders, but not all claimed to involve beneficiaries explicitly.

 

Although the WB/IEG mentions that beneficiary participation takes place in “most” of the ex-post missions back to the project site in its definition of a PPAR, Valuing Voices was curious to know if there is a standard protocol for the level of participant involvement, the methods of data collection, and ultimately, the overall quality of the new fieldwork data collected to inform PPARs. For this data quality analysis, Valuing Voices identified these key criteria:

  • Overall summary of evaluation methods
  • Who was involved, specifically? Was there direct beneficiary participation? What were the research methods/procedures used?
  • What was the level of sustainability (termed Risk to Development Outcome* after 2006) established by the PPAR?
  • Was this different from the level of sustainability as projected by the preceding ICR report?
  • Were participants involved via interviews? (Yes/No)
  • If yes, were they semi-structured (open-ended questions allowing for greater variety/detail of qualitative data) or quantitative surveys
  • How many beneficiaries were interviewed/surveyed?
  • What % of total impacted beneficiary population was this number?
  • Was there a control group used? (Yes/No)

Despite our initial optimism, we determined that the quality of the data provided in these PPARs was highly variable, and overall quite low. A summary of the findings is as follows:

 

1. Rarely were ‘beneficiaries’ interviewed

  • Only 15% of the PPARs (5) gave details about the interview methodologies, and of these only 3% (1) described in detail how many participants were consulted, what they said and how they were interviewed (Nigeria 2014 [2]).
  • 54% of the reports (18) mentioned beneficiary input in data collected in the post-project mission, but gave no specific information on the number of participants involved; nor were their voices cited, nor was any information included on the methodologies used. The vast majority only vaguely referenced the findings of the post-project mission, rather than data-collection specifics. A typical example of this type of report is Estonia 2004 [1].
  • 30% of the PPARs (10) involved no direct participant/beneficiary participation in the evaluation process at all, with these missions only including stakeholders such as project staff, local government, NGOs, donors, consultants, etc. A typical example of this type of report is Niger 2005 [3].

These percentages are illustrated in Figure 1, below, which gives a visual breakdown of the number of reports that involved direct participant consultation with detailed methodologies provided (5), the number of reports where stakeholders were broadly consulted but no specific methodologies were provided (18), and the number of reports where no participants were directly involved in the evaluation process (10).
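As a quick arithmetic check, the category counts above can be tallied in a few lines of Python. This is an illustrative sketch only: the counts come from our reading of the 33 reports, and the category labels are our own shorthand, not IEG terminology.

```python
# Tally of the 33 PPARs by level of participant consultation,
# using the counts from the bullets above (labels are our shorthand).
ppar_categories = {
    "detailed participant interviews described": 5,
    "beneficiary input mentioned, no methodology given": 18,
    "no direct participant involvement": 10,
}

total = sum(ppar_categories.values())
assert total == 33  # every report falls into exactly one category

for label, count in ppar_categories.items():
    # Truncating division matches the rounded-down percentages in the text.
    print(f"{label}: {count} of {total} ({count * 100 // total}%)")
```

Note that the three rounded-down percentages (15%, 54%, 30%) sum to 99%, not 100% — the counts themselves (5 + 18 + 10 = 33) are the exhaustive breakdown.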

 

Figure 1

 

2. Sustainability of project outcomes was unclear

  • In 54% of cases, there was some change in the level of sustainability from the original level predicted in the ICR (which precedes and informs the PPAR) to the level established in the PPAR. Ironically, 22 of the 33 cases were classified as Likely, Highly Likely or Significantly Likely to be sustainable, yet participants were not asked for their input.
  • So on what basis was sustainability judged? Of the three cases with high participant consultation, the Nigerian project's sustainability prospects (where 10% of participants were asked for feedback) were only moderate, while India (also 10% feedback) and Kenya (14-20%) were both classified as likely to be sustainable.

Along the Y axis of Figure 2, below, is the spectrum of sustainability rankings observed in the PPARs, ranging from "Negligible to Low" up to "High". (Note: the "Sustainability" ranking was retitled "Risk to Development Outcomes" in 2006.) For each of the projects analyzed (there are 60 total projects accounted for in this graph, as some PPARs covered up to 4 individual projects in one report), the graph illustrates how many projects consulted participants, and how many failed to do so, for each evaluation outcome. As we can see, the majority of cases determined to be highly or significantly sustainable either did not consult participants directly or only consulted stakeholders broadly, with limited community input represented in the evaluation. These are interesting findings: although a lot of supposed sustainability is being reported, very few cases actually involved the community participants in a meaningful way (to our knowledge, based on the lack of community consultation discussed in the reports). Yet unless these evaluations take place at the grassroots level, engaging participants in a conversation about the true self-sustainability outcomes of projects, you can't really know how sustainable a project is by talking only with donors, consultants, governments, etc. Are the right voices really being represented in this evaluation process?

 

Figure 2

 

While projects were deemed sustainable, this was based on very little ‘beneficiary’ input. The significance of this information is simple: not enough is being done to ensure beneficiary participation in ALL STAGES of the development process, especially in the post-project time frame, even by prominent development institutions like the WB/IEG. While we commend the Bank for currently emphasizing citizen engagement via beneficiary feedback, this still seems to be more of a guiding theory than a habitual practice [4]. Although all 33 documents I analyzed claimed there was “key stakeholder” or beneficiary participation, the reality is that no consistent procedural standard for eliciting such engagement could be identified.

Furthermore, the lack of specific details elaborating upon interview/survey methods, the number of participants involved, the discovery of any unintended outcomes, etc. creates a critical information void. As a free and public resource, the IEG database should not only be considered an important internal tool for the WB to catalog its numerous projects over time, but also an essential external tool for members of greater civil society who wish to benefit from the Bank’s extensive collection of resources – to learn from WB experiences and inform industry-wide best practices.

For this reason, Valuing Voices implores the World Bank to step up its game and establish itself as a leader in post-project evaluation learning, not just in theory but also in practice. While these 33 PPARs represent just a small sample of the over 12,000 projects the WB has implemented since its inception, Valuing Voices hopes to see much more ex-post project evaluation happening in the future through IEG. Today we are seeing a decisive shift in the development world towards valuing sustainable outcomes over short-term fixes, towards informing future projects based on long-term data collection and learning, and towards community participation in all stages of the development process…

 

If one thing is certain, it is that global emphasis on sustainable development will not be going away anytime soon…but are we doing enough to ensure it?

 

Sources:

[1] World Bank OED. (2004, June 28). Project Performance Assessment Report: Republic of Estonia, Agriculture Project. Retrieved from http://documents.worldbank.org/curated/en/173891468752061273/pdf/295610EE.pdf

[2] World Bank OED. (2014, June 26). Project Performance Assessment Report: Nigeria, Second National Fadama Development Project. Retrieved from https://ieg.worldbankgroup.org/sites/default/files/Data/reports/Nigeria_Fadama2_PPAR_889580PPAR0P060IC0disclosed07070140_0.pdf

[3] World Bank OED. (2005, April 15). Project Performance Assessment Report: Niger, Energy Project. Retrieved from http://documents.worldbank.org/curated/en/899681468291380590/pdf/32149.pdf

[4] World Bank. (n.d.). Citizen Engagement: Incorporating Beneficiary Feedback in all projects by FY 18. Retrieved 2015, from https://web.archive.org/web/20150102233948/http://pdu.worldbank.org/sites/pdu2/en/about/PDU/EngageCitizens

 

IEG Blog Series Part I: Pick a term, any term…but stick to it!



IEG logo


            Valuing Voices is interested in identifying learning leaders in international development that are using participatory post-project evaluation methods to learn about the sustainability of their development projects. These organizations not only believe they need to see the sustained impact of their projects by learning from what has worked and what hasn’t in the past, but also that participants are the most knowledgeable about such impacts. So how do they define sustainability? This is determined by asking questions such as the following: were project goals self-sustained by the ‘beneficiary’ communities that implemented these projects? By our VV definition, self-sustainability can only be determined by going back to the project site, 2-5 years after project closeout, to speak directly with the community about the long-term intended/unintended impacts. 

            Naturally, we turned to the World Bank (WB) – the world’s most prominent development institution – to see if this powerhouse of development, both in terms of annual monetary investment and global breadth of influence, has effectively involved local communities in the evaluation of sustainable (or unsustainable) outcomes. Specifically, my research was focused on identifying the degree to which participatory post-project evaluation was happening at the WB.

A fantastic blog* regarding participatory evaluation methods at the WB emphasizes the WB’s stated desire to improve development effectiveness by “ensuring all views are considered in participatory evaluation,” particularly through its community driven development projects. As Heider points out,

“The World Bank Group wants to improve its development effectiveness by, among other things, engaging citizens throughout the operational project cycle. It has set itself an ambitious target: 100% citizen engagement in projects that have clearly identifiable beneficiaries.”

Wow! Though these methods are clearly well intentioned, there seems to be a flaw in the terminology. The IEG says, “[Community driven development projects] are based on beneficiary participation from design through implementation, which make them a good example of citizen-centered assessment techniques in evaluation,” …however, this fails to recognize the importance of planning for community-driven post-project sustainability evaluations, to be conducted by the organization in order to collect valuable data concerning the long-term intended/unintended impacts of development work.

With the intention of identifying evidence of the above-mentioned mode of evaluation at the WB, my research process involved analyzing the resources provided by the WB’s Independent Evaluation Group (IEG) database of evaluations. As the accountability branch of the World Bank Group, the IEG works to gather institution-wide knowledge about the outcomes of the WB’s finished projects. Its mission statement is as follows:

“The goals of evaluation are to learn from experience, to provide an objective basis for assessing the results of the Bank Group’s work, and to provide accountability in the achievement of its objectives. It also improves Bank Group work by identifying and disseminating the lessons learned from experience and by framing recommendations drawn from evaluation findings.”

Another important function of the IEG database is to provide information for the public and external development organizations to access and learn from; this wealth of data and information about the World Bank’s findings is freely accessible online.

            When searching for evidence of post-project learning, I was surprised to find that the taxonomy varied greatly; e.g. projects I was looking for could be found under ‘post-project’, ‘post project’, ‘ex-post’ or ‘ex post’. Also unclear was any specific category under which these could be found, including a definition of what exactly is required in an IEG ex-post impact evaluation. According to the IEG, there are 13 major evaluation categories, which are described in more detail here. I was expecting to find an explicit category dedicated to post-project sustainability, but instead this type of evaluation was included under Project Level Evaluations (which include PPARs and ICRs [Implementation Completion Reports]) and Impact Evaluations.

This made it difficult to determine a clear procedural standard for documents reporting sustainability outcomes and other important data for the entire WB.

            I began my research process by simply querying a few key terms in the database. In the first step, elaborated upon below, I attempted to identify evidence of ex-post sustainability evaluation at the IEG by searching for the term “post-project”, which yielded 73 results with a hyphen and 953 results without. I found this inconsistency in the number of results interesting, but in order to narrow the search parameters to a manageable content analysis, I chose to break down the 73 hyphenated results by document type to determine whether there were any examples of primary fieldwork research. In these documents, the term “post-project” was not used in the title or referenced in the executive summary as the specific aim of the evaluation, but rather used to loosely define the ex-post time frame. Figure 1 illustrates the breakdown of document types found in the sample of 73 documents that came up when I searched for the key term “post-project”:

            As the chart suggests, many of the documents (56% – which accounts for all of the pie chart slices except Project Level Evaluations) were purely desk studies – evaluating WB programs and the overall effectiveness of organization policies. These desk studies draw data from existing reports, such as those published at project closeout, without supplementing past data with new fieldwork research.

            Out of the 9 categories, the only document type that showed evidence of any follow-up evaluations was the Project Performance Assessment Report (PPAR), defined by the IEG as documents that are…

“…based on a review of the Implementation Completion Report (a self-evaluation by the responsible Bank department) and fieldwork conducted by OED [Operations Evaluation Department]. To prepare PPARs, OED staff examines project files and other documents, interview operational staff, and in most cases visit the borrowing country for onsite discussions with project staff and beneficiaries. The PPAR thereby seeks to validate and augment the information provided in the ICR, as well as examine issues of special interest to broader OED studies.”

            Bingo. This is what we’re looking for. The PPARs accounted for 32 out of the 73 results, or a total of 44%. As I examined the methodology used to conduct PPARs, I found that in the 32 cases that came up when I searched for “post-project”, after Bank funds were “fully dispersed to a project” and resources were withdrawn, the IEG sent a post-project mission back into the field to collaborate on new M&E with local stakeholders and beneficiaries. The IEG gathered new data through the use of field surveys or interviews to determine project effectiveness.

            Based on these findings, I conducted a supplementary search of the term “ex post”, which yielded 672 results. From this search, 11 documents were categorized by the IEG as “Impact Evaluations”, of which 3 showed evidence of talking with participants to evaluate for sustainability outcomes. In follow-up blogs in this series I will elaborate upon the significance of these additional findings and go into greater detail regarding the quality of the data in these 32 PPARs, but here are a few key takeaways from this preliminary research:

  • Taxonomy and definition of ex-post is missing. After committing approximately 15-20 hours of research time to this content analysis, it is clear that navigating the IEG database to search for methodology standards to evaluate for sustainability is a more complicated process than it should be for such a prominent learning institution. The vague taxonomy used to categorize post-project/ex-post evaluation by the WB limits the functionality of this resource as a public archive dedicated to informing the sustainability of development projects the World Bank has funded.
     
  • Despite affirmative evidence of participatory community involvement in the post-project evaluation of WB projects, not all PPARs in the IEG database demonstrated a uniform level of ‘beneficiary’ participation. In most cases, it was unclear how many community members impacted by the project were really involved in the ex-post process, which made it difficult to determine even a general range of the number of participants involved in post-project activity at the WB.
     
  • Although PPARs report findings based, in part, on post-project missions (as indicated in the preface of the reports), the specific methods/structure of the processes were not described, and oftentimes the participants were not explicitly referenced in the reports. (More detailed analysis on this topic to come in Blog Series Part 2!)
     
  • These surprisingly inconsistent approaches make it difficult to compare results across this evaluation type, as there is no precise status quo.

Finally, the World Bank, which has funded 12,000 projects since its inception, should have far more than 73 post-project/ex-post evaluations…but maybe I’m just quibbling with terms.


Stay tuned for PART II of this series, coming soon! 

Listening better… for more sustainable impact


Are we listening better? Maybe. As Irene Guijt states on Better Evaluation, Keystone’s work on ‘constituent voice’ enables a "shift [in] power dynamics and make[s] organizations more accountable to primary constituents”. For example, "organisations can compare with peers to trigger discussions on what matters to those in need… in (re)defining success and ‘closing the loop’ with a response to feedback [on the project], feedback mechanisms can go well beyond upward accountability."

There are impressive new toolkits available to elicit and hear participant voices about perceived outcomes and impacts, such as the People First Impact Method and NGO IDEAS' Monitoring Self-Effectiveness. As People First states, "Across the aid sector, the voices of ordinary people are mostly not being heard. Compelling evidence shows how the aid structure unwittingly sidelines the people whom we aim to serve. Important decisions are frequently made from afar and often based on limited or inaccurate assumptions. As a result, precious funds are not always spent in line with real priorities, or in ways that should help people build their own confidence and abilities…. As a sector, we urgently need to work differently." These build on 40-year-old participatory/Rapid Rural Appraisal methods distilled and shared by IDS/UK's Robert Chambers, which I've used for 25 years, including lately for self-sustainability evaluation.

In addition to qualitative, participatory tools, the application of quantitative evaluative tools has a ways to go before it is terrific at listening and learning. Keystone did interesting work on impact evaluation (lately associated with randomized control trials comparing existing projects with comparable non-participating sites to prove impact). Their study found that "no one engaged through the research for this note is particularly happy with the current state of the art…. There is a strong appetite to improve the delivery of evaluative activities in general and impact evaluation in particular … Setting expectations by engaging and communicating early and often with stakeholders and audiences for the evaluation is critical, as is timing." So many of us believe that evaluation cannot be an afterthought; monitoring and evaluation needs to be integrated into project design, with feedback loops informing implementation.

Yet this otherwise excellent article made one point that is common, yet like Alice looking through the looking glass backwards. They write that feedback is "to inform intended beneficiaries and communities (downward accountability) about whether or not, and in what ways, a program is benefiting the community". Yet it is the other way around! Only communities have the capacity to tell us how well they feel we are helping them!


Thankfully, we are increasingly willing to listen and learn about aid effectiveness. Some major actors shaping funding decisions have already thrown down the feedback gauntlet:

* As our 2013 blog asked for, Charity Navigator is now applying its new “Results Reporting” rating criteria, which include six data points regarding charities' feedback practices. The new ratings will be factored into Charity Navigator star ratings from 2016.

* Heavyweight World Bank president Jim Kim has decreed that the Bank will require robust feedback from beneficiaries on all projects for which there is an identifiable beneficiary.

* The Hewlett, Ford, Packard, Rita Allen, Kellogg, JPB and LiquidNet for Good Foundations have recently come together to create the Fund for Shared Insight to catalyze a new feedback culture within the philanthropy sector.

* This February, a new report on the UK's international development agency, DFID, recommended a new direction for its aid: "The development discourse has generally focused on convincing donors to boost their aid spending, when the conversation should instead be on “how aid works, how it can support development, how change happens in countries, and all of the different responses that need to come together to support that change…. One important change will be for professionals to deliver more adaptive programming and work in more flexible and entrepreneurial ways." The report emphasized the need for development delivery to be led by local people. Commenting on ODI’s research, [DFID] said successful development examples showed “people solving problems for themselves rather than coming in and trying to manage that process externally through an aid program.”

Hallelujah!  What great listening on aid effectiveness are you seeing?

Times are a Changin’ in those who Fund Listening, then Doing


So you've been helped by an organization. You think it has a good mission and have actively participated in its activities, yet one day (somewhat arbitrarily, in your view), it takes you off its list, shuts its doors and moves to another state. How would you feel? Angry? Perplexed? Disappointed?


So a year or two goes by and you get a knock on your door from a similar organization, wanting you to participate with them, saying that their mission is great and that you will benefit a lot. While you may really want their help, you are understandably wary and wonder if the same will happen again. Heck, the last ones didn't tell you why they left, even with unfinished work, nor did they come back to see how you were faring…


Maybe that won’t happen anymore. Until recently, many of our international development participants (some call them beneficiaries) could feel the same way. Our projects came (and went) with set goals, on fixed funding cycles, with little ongoing input from participants to influence how projects accomplish good things, much less to learn what happened after projects ended. Rarely have we put in place participant monitoring systems with feedback loops, much less listened to participants on how to design for self-sustainability.


But times are a changin'; there is much to celebrate among funders and implementers, programming and policy makers.


1) There is a happy blizzard of interest in listening to our participants. From Feedback Labs, "committed to making governments, NGOs and donors more responsive to the needs of their constituents," and Rita Allen Foundation funding for the Center for Effective Philanthropy's "Hearing from those we seek to Help," to the Rockefeller and Hewlett Foundations now beginning a joint Fund for Shared Insight, which "provides grants to nonprofit organizations to encourage and incorporate feedback from the people we seek to help; understand the connection between feedback and better results…".

Independent voices abound advocating for participants' voices to be heard in design, implementation, monitoring and evaluation: "While we may have a glut of information and even the best of intentions, our initiatives will continue to fall short until we recognize that our ‘beneficiaries’ are really the people who have the solutions that both they and we need." And others call for even more than recognition – participation of the funders in discussions with participants: a recent study by the Center for Effective Philanthropy heard from recipient NGOs that the "funders who best understand our beneficiaries’ needs are the ones who visit us during our programs, meet [those] served by our organization, spend time talking to them and being with them.”


2) Information and Communication Technologies for Development (ICT4D) have created options for listening to our project participants and learning from and with them through mobiles, tablets and other mechanisms (e.g. Catholic Relief Services' 6th annual ICT4D conference, with presentations from donors and governments as well, and Ushahidi, which we've celebrated before). IATI, the International Aid Transparency Initiative, has spent 6 years fostering transparency in foreign aid and sustainable development, now reaching 24 signatory countries and 290 organizations. A data revolution is taking shape to join donor data, national government statistical data and civil society socio-economic data. There is a brand new initiative at IDS named Doing Development Differently. Listening and learning indeed!


3) And even more importantly, an understanding is arising that development is not a one-size-fits-all endeavour. I blogged about Rwanda's success in nutritional impact from allowing communities to address their specific needs, and this week the New Republic published an excellent article by Michael Hobbes which says "The repeated “success, scale, fail” experience of the last 20 years of development practice suggests something super boring: Development projects thrive or tank according to the specific dynamics of the place in which they’re applied. It’s not that you test something in one place, then scale it up to 50. It’s that you test it in one place, then test it in another, then another." Hobbes goes on to add that what we need is a revision in our expectations of international aid. "The rise of formerly destitute countries into the sweaters-and-smartphones bracket is less a refutation of the impact of development aid than a reality-check of its scale. In 2013, development aid from all the rich countries combined was $134.8 billion, or about $112 per year for each of the world’s 1.2 billion people living on less than $1.25 per day. Did we really expect an extra hundred bucks a year to pull anyone, much less a billion of them, out of poverty?… Even the most wildly successful projects decrease maternal mortality by a few percent here, add an extra year or two of life expectancy there. This isn’t a criticism of the projects themselves. This is how social policy works, in baby steps and trial-and-error and tweaks, not in game changers."


4) What does change the game, in the view of Valuing Voices, is who we listen to and what we do, for how long. Often, project participants have been the implementers of our solutions rather than the drivers of their own ideas of development; much is lost in translation. As Linda Raftree reports from one Finnish Slush attendee: "When you think ‘since people are poor, they have nothing, they will really want this thing I’m going to give them,’ you will fail…. People everywhere already have values, knowledge, relationships, things that they themselves value. This all impacts on what they want and what they are willing to receive. The biggest mistake is assuming that you know what is best, and thinking ‘these people would think like me if I were them.’ That is never the case." Hallelujah.


Let's listen before we implement our best answers, adapt to specific communities, and think of how to foster self-sustainability rather than just successful impact, asking what that means in their terms. Let’s return to listen to participants’ views on sustained impact and on unexpected results… let’s fund this and do development differently!


So how are you listening to participants today?