Czech it out! Great evaluation happening in the Czech Republic

One of the delights of living in another country is the surprises one encounters. For me, returning to our second 'home', the surprise was evaluative. By connecting with the Czech Foreign Ministry's evaluation team, I found evidence of learning from meta-evaluation, ex-post evaluation being done, conscientious tracking of project cost-effectiveness, openness to funding self-sustainability research, and national evaluators being used to lead it.

Czech foreign aid is wide-ranging: "Through development cooperation, the Czech Republic helps to eradicate poverty in less developed parts of the world by means of sustainable socio-economic development. It also contributes to global security and stability, conflict prevention, the promotion of democracy, human rights and fundamental freedoms, and the rule of law". Development assistance is carried out by several entities, the main two under the Ministry of Foreign Affairs: ORS (Development Cooperation and Humanitarian Aid) and its subsidiary, the Czech Development Agency (CRA).

The Ministry of Foreign Affairs oversees some fascinating evaluation work. After attending several partner-donor meetings, presentations of a meta-evaluation and of an ex-post evaluation of an array of projects in Georgia, and a discussion of findings across all evaluations in 2014, I am impressed. Why? Not only are they willing to learn from both successes and failures, openly discussing the challenges of learning between grants and contracts, but they are also tackling programming in 10 countries (e.g. Afghanistan, Bosnia and Herzegovina, Ethiopia, Georgia) with a mere 17 people and a budget of $35 million for 2015.


What are some of the things they are learning?

1. They commissioned a meta-evaluation of 20 projects from 2012-2013. What worked well: the evaluations were well described and documented, cost-effective (evaluations were 4% of total costs), and tried to offer constructive solutions to things that did not work well in projects. While some methodologies needed to be better and reports were hard to access, a major finding was that the inclusion of local recipients in stakeholder analyses needs to improve: soliciting their views on what the evaluations should focus on and on how the projects affected them. Further, during discussions we highlighted the need for an evaluation of outcomes and impacts, looking not just at how good evaluation quality was but also at which organizations had the best results and why.


2. They commissioned an ex-post evaluation of one-year projects implemented by eight organizations (5 Czech, 3 Georgian) in the Republic of Georgia, spanning 131 separate activities in civic engagement, media and youth between 2008 and 2012. The evaluation looked at the short-term effectiveness and longer-term sustainability of these activities. Key findings included good relevance of the aid offered, high cost-efficiency, low effectiveness in influencing Georgian decision-making, and primarily individual (rather than systemic) sustainability, though with some good impact.

Key recommendations from this evaluation, which Valuing Voices thinks are universal, included:

a. Implement projects of at least three years, focused in a selected region (or a few regions) on a selected local priority topic; ensure in-depth needs analysis, multi-stakeholder cooperation (including participants), sustainable mechanisms, ongoing local support, and enough flexibility to respond to external factors.

b. Allocate budget for burning human rights issues and for enhancing planning, monitoring, evaluation and learning capacities of Civil Society Organizations.



c. Coordinate activities with other implementers and donors in the target area and, if possible (taking the political situation into account), also with local state institutions and potential implementers.

d. Implement multi-stakeholder initiatives in a specific area (health, environment, social inclusion, minorities) with an advocacy component, sharing of results/lessons learnt, and a media component.

3. Among the recommendations that emerged from evaluation discussions throughout the year was this surprising one on cost-effectiveness. A detailed financial budget is now standard, and itemized expenses for project activities are now required of the majority of (grant-funded) projects and of the Czech Development Agency. This enables cost-effectiveness comparisons at least across grants (albeit not across for-profit contracts). In my experience this is unparalleled! (Let me know if other countries do this, please!)


Overall, the fact that the Czech Foreign Ministry and implementing partners are willing to look at themselves critically, and transparently improve accountability to their ultimate recipients and taxpayers, makes me shout Hurrah from all of Prague’s 100 spires! Here is one of them, photographed from a Ministry window.



Walking in our participants’ shoes, Doing Development Differently

You and I like to make informed decisions.  We go to restaurants recommended by Yelp or Facebook friends. We refer to Consumer Reports' rankings before we buy appliances and read Amazon reviews before we purchase items, or at least ask random friends what works for them.  I just bought a fridge and looked at energy star ratings to buy one that was efficient and climate-friendly. Would you buy an expensive appliance without market research reassuring you of the likely success?


Yet that is what we ask millions of development participants to do every day. In international development, community participants don't have such luxuries as knowing which projects worked well before, or whether success in terms of health, income, or education is replicable using a given model. They invest their time and resources blindly, hoping for a good return.


Doing Development Differently is a very encouraging initiative by the UK's Overseas Development Institute (ODI) which focuses on learning from bottom-up, country-led development. Leni Wild tells us, via a Malawi Country Scorecard example, that NGO-led innovation during implementation is key, as is creating coalitions of shared responsibility to meet participant needs. Natalia Adler shares a Nicaraguan Human-Centered Design project in which policymakers literally walked in the shoes of participants, learning as participant-observers and supporters, not omniscient experts.


Our projects and policies are but one piece of participants' lives, though an important input if designed for appropriate impact and sustainability. But do we know that we all mean the same thing? Years ago I asked what 'food security' meant in several Malian communities. In addition to expected answers ('being able to feed myself and my family'), I got answers such as 'many children' (both a consequence of having food and a source of sustained food security once those children are adults and employed) and 'children's schooling' (having enough surplus to send children to school year-round).


Not only do we need to understand what impact looks like to our participants, and start putting in systems for them to track it and report back to us, but also what sustainability means to them. What can they self-sustain? What can we do more skillfully in future project design, learning from what worked best?


* Imagine what project impacts we could learn about if we returned 2-5 years after projects closed out to see what remarkable unintended impacts were created, as in Niger.

* Imagine how self-sustainable project activities could be if we designed future projects based on past project successes (tracking which activities communities around the world were most able to replicate after projects left).

* Imagine being the non-profit able to claim that the majority of its activities were self-sustained by communities 10 years later, and receiving the best Star ratings from them?!

* Imagine being a Minister of Development in Africa, Asia or Latin America, able to vet incoming projects based on their likelihood of achieving Development Star results?


Imagine using national evaluators and building national systems of online, IATI-compliant documentation of what their own people consider to be the most sustainable impacts of projects, as well as the missing pieces of the 'development' puzzle. Listening could teach us far more than our logical framework (logframe) expectations of impact from our (donors') view, especially what communities imagine projects will lead to. Even farther out, imagine if together we calculated projects' economic benefits, the return on investment to participants (crudely put, the 'bang for the buck') of our projects in their view, as well as our efficacy in terms of resource use. A scary thought…


Doing Development Differently is as exciting as initiatives like USAID Forward, which supports country-led development, e.g. by starting to channel funds directly to local implementers, and under which falls USAID's CLA (Collaborating, Learning and Adapting). Another great example is UK DFID's BRIDGE, which incorporates strategic learning and adaptation into projects (adapting them during implementation, rather than the quite fixed straitjacket most projects are tied into on signing agreements or contracts).


We need to put ourselves out of a job: by training local NGOs to supplant us, by supporting capacity and creating systems in Ministries to take over all but our funding, and especially by listening to those who know best, the communities whose lives we aim to improve. Where are you seeing success brewing?


Sustainability SPRINGing out all over the place… and Disrupting!


So what is sustainability? You may think it's the climate's long-term wellbeing and how to gauge changes to that. You may think it's linked to sustainable development regarding consumption, trade, education and environment, and how to assess it. You may think it's data-driven organizational success as Chelsea Clinton describes it, or Michael Porter's business view of Creating Shared Value on social and environmental concerns, or that it's about people, as the hallowed University of Cambridge trains experts in its Institute for Sustainability Leadership (I revel in having been a Fellow there in the '90s). Finally, is it the WCED's lovely definition, "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs"? Yes, when applied to communities' abilities to self-sustainably and resiliently chart their own development!

So how are we to get there? A Sustainable Brands Conference this year gets there through companies being clear about their own consumption, and USAID is no different. USAID Forward is putting its money where its keyboards are (so to speak), toward more sustainable local delivery, by directing a huge 30 percent of its funding to “local solutions” through procurement in coming years. This framework is to “support the ‘new model of development’ that USAID Administrator Rajiv Shah has touted, which entails a shift away from hiring U.S.-based development contractors and NGOs to implement projects, and toward channeling money through host-country governments and local organizations to build their capacity to do the work themselves and sustain programs after funding dries up.” I, and others, celebrate the investments this will enable local firms to make in their own capacity and in leading development!

Of course all sorts of safeguards are needed, and ideally US firms would be providing capacity development, but shouldn’t we have been doing this all along, to move toward transferring ‘development’ to the countries themselves?


Source: GAO report

Also vital to sustainable development is learning from what works and doing more of it. USAID is finally planning to incorporate more ex-post evaluations into its toolkit for evaluating sustainability! Two weeks ago, PPL/LER shared their great new policy document, “Local systems: A framework for supporting sustained development”, on how they can better incorporate local systems thinking into policy as well as into DIME (Design, Implementation, Monitoring and Evaluation). Industry insider DevEx tells us that "even though the agency plans to use ex-post evaluations to measure whether development projects are successful or not, these evaluations will not focus on “specific contractor performance” but instead consider the “types of approaches that contribute to more sustainable outcomes…to inform USAID’s country strategies and project design." While PVO implementing partners will not [yet?] be required to do ex-post evaluations as part of their projects, having this door cracked open is exciting. Notably, it is a ‘back to the future’ moment: 30 years ago USAID led the development world in post-project evaluations, yet in the last 24 years it has done none (or at least not published any), except for the Food for Peace retrospective below, as I found in our Valuing Voices research of USAID's Development Experience Clearinghouse.

There is far more to watch. In our view, the whole development industry needs to grapple with the perceived barrier that funding ends with projects (note: a trust could be set up to document post-project impact 1, 3, 5 years later and retain the results, much as 3ie does now for impact evaluations) and with the view that one cannot discern attributable project impact after a time-lag of several years. Yet even the Government Accountability Office is asking for longitudinal data; it reviewed USAID’s documents and wants to see clear measures of success at Mission and HQ level, by different indicators of local institutional sustainability and impact four years on.

Why should we care? As Chelsea Clinton of the Clinton Global Initiative puts it, "you can't measure everything, but you can measure almost everything through quantitative or qualitative means, so that we know what we're disproportionately good at. And, candidly, what we're not so good at, so we can stop doing that."

Yes! Development should be about doing more of what works, sustainably, and less of what doesn’t. USAID’s Local Systems Framework found the best could also be free, as this one Food For Peace evaluation shows:


Returning to Chelsea Clinton, I’ll conclude by stating something obvious. She "wants to see some evidence of why we're making decisions, as opposed to the anecdotes", which is what getting post-project evaluation data from our true clients, our participants, is all about. Clinton says this will transform CGI into a smart, accountable, and sustainable support system for philanthropic disrupters around the world. USAID is radical for me today, with its Local Systems investments… my neighborhood disrupter.


Are you such a disrupter too? Who else is one whom we can celebrate together? 

Pineapple, Apple- what differentiates Impact from self-Sustainability Evaluation?

There is great news. Impact evaluation is getting attention and being funded to do excellent research, such as by the International Initiative for Impact Evaluation (3ie), and by donors such as the World Bank, USAID, UKAid and the Bill and Melinda Gates Foundation, in countries around the world. Better Evaluation tells us that "USAID, for example, uses the following definition: “Impact evaluations measure the change in a development outcome that is attributable to a defined intervention; impact evaluations are based on models of cause and effect and require a credible and rigorously defined counterfactual to control for factors other than the intervention that might account for the observed change.”"

William Savedoff of the Center for Global Development (CGD) reports in his Evaluation Gap newsletter that whole countries are setting up such evaluation institutes: "Germany's new independent evaluation institute for the country's development policies, based in Bonn, is a year old. DEval has a mandate that looks similar to Britain's Independent Commission for Aid Impact (discussed in a previous newsletter) because it will not only conduct its own evaluations but also help the Federal Parliament monitor the effectiveness of international assistance programs and policies. DEval's 2013-2015 work program is ambitious and wide-ranging, from specific studies of health programs in Rwanda to overviews of microfinance and studies regarding mitigation of climate change and aid for trade." There is even a huge compendium of impact evaluation databases.

There is definitely a key place for impact evaluations in analyzing which activities are likely to have the most statistically significant (i.e., unlikely to be due to chance) impact. One such study in Papua New Guinea found that including SMS (mobile text) in teaching made a significant difference in student test scores compared to the non-participating 'control group' who did not get the SMS (texts). Another study, the Tuungane I evaluation by a group of Columbia University scholars, showed clearly that an International Rescue Committee program on community-level reconstruction did not change participant behaviors. The study was as well designed as an RCT can be, and its conclusions are very convincing. But as the authors note, we don't actually know why the intervention failed. To find that out, we need the kind of thick descriptive qualitative data that only a mixed-methods study can provide.

Economist Michael Kremer of Harvard says, “The vast majority of development projects are not subject to any evaluation of this type, but I’d argue the number should at least be greater than it is now.” Impact evaluations use 'randomized control trials', comparing the group that got project assistance to a similar group that didn't, to gauge the change. A recent article about treating poverty as a science experiment says "nongovernmental organizations and governments have been slow to adopt the idea of testing programs to help the poor in this way. But proponents of randomization—“randomistas,” as they’re sometimes called—argue that many programs meant to help the poor are being implemented without sufficient evidence that they’re helping, or even not hurting." However we get there, we want to know the real (or at least likely) impact of our programming, helping us focus funds wisely.

Data gleaned from impact evaluations is excellent information to have before design and during implementation. While impact evaluations are a rigorous addition to the evaluation field, experts recommend they be designed in from the beginning of implementation. And while they ask “Are impacts likely to be sustainable?”, “To what extent did the impacts match the needs of the intended beneficiaries?” and, importantly, “Did participants/key informants believe the intervention had made a difference?”, they focus only on possible sustainability, using indicators we expect to see at project end rather than tangible proof of the sustainability of the activities and impacts that communities define themselves, which we actually return to measure 2-10 years later.


That is the role for something that has rarely been used in 30 years: post-project (ex-post) evaluations, looking at:

  1. The resilience of the project's expected impacts 2, 5, 10 years after close-out
  2. The communities’ and NGOs’ ability to self-sustain activities themselves
  3. Positive and negative unintended impacts of the project, especially 2 years after, while still in clear living memory
  4. Kinds of activities the community and NGOs felt were successes but which could not be maintained without further funding
  5. Lessons across projects on what was resilient enough that communities valued it enough to continue themselves, or NGOs valued enough to seek other funding for, as well as what was not resilient.


Where is this systematically happening already? There are a few catalyst ex-post evaluation organizations drawing on communities' wisdom. Here and there are other glimpses of Valuing Voices, mainly used to inform current programming, such as these two interesting approaches:

  • Vijayendra Rao describes how a social observatory approach to monitoring and evaluation in India’s self-help groups leads to “Learning by Doing”– drawing on material from the book Localizing Development: Does Participation Work? The examples show how groups are creating faster feedback loops with more useful information by incorporating approaches commonly used in impact evaluations. Rao writes: “The aim is to balance long-term learning with quick turnaround studies that can inform everyday decision-making.”
  • Ned Breslin, CEO of Water For People, talks about “Rethinking Social Entrepreneurism: Moving from Bland Rhetoric to Impact (Assessment)”. His new water and sanitation program, Everyone Forever, does not focus on inputs and outputs such as water provided or girls returning to school. Instead it centers on attaining the ideal vision of what a community would look like with improved water and sanitation, and working to achieve that goal. Rather than working on fundraising only, Breslin wants to redefine the meaning of success as a world in which everyone has access to clean water.

We need a combination. We need to know how good our programming is now, through rigorous randomized control trials, and we need to ask communities and NGOs how sustainable the impacts are. Remember, 99% of all development projects, worth hundreds of millions of dollars a year, are not currently evaluated for long-term self-sustainability by their ultimate consumers, the communities they were designed to help.

We need an Institute of Self-Sustainable Evaluation and a Ministry of Sustainable Development in every emerging nation, funded by donors who support national learning to shape international assistance. We need a self-sustainability global database, mandatory to be referred to in all future project planning. We need to care enough about the well-being of our true client to listen, learn and act.

The lack of ex-post project evaluation at the World Bank: One has no power

The World Bank has a huge repository of 8,483 evaluation resources in its e-library database, so naturally Valuing Voices was very interested in investigating how many of those resources were ex-post evaluations of past World Bank projects. After searching for the term “ex post evaluation” in the e-library, I ended up with 260 hits from those initial 8,483 resources. This looked like great news: so many potential ex-post evaluations to analyze from such a powerhouse in international development as the World Bank. Of the initial 260 hits, I expected about 50 to be what we consider ‘true’ ex-posts: evaluations conducted a few years after a project has been completed, assessing factors such as sustainability and the long-term effectiveness of the program after donor resources have been withdrawn.

However, when I began to sift through the resources in more detail, the results were not exactly as we had anticipated. Determining how many of the 260 “ex-post evaluation” hits were true ex-posts was simple, albeit time-consuming. I looked through every hit, reading the abstracts provided by the World Bank and investigating individual resources in more detail if they seemed promising. While doing this, I categorized each hit by document type, keeping a tally of the totals. The results were as follows:

Document Type | Number Encountered
Impact Evaluations |
Retrospective Evaluations |
Non-Evaluations (literature review, recommendations, guidelines, etc. related to evaluations) |
Other (policy reports, annual reports, sourcebooks, etc. not related to evaluations) |
Ex-Post Evaluations | 1

Total: 260
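The categorization amounts to a simple tally over the 260 hits. As a sketch only: the per-category figures below are illustrative placeholders, since only the 260 total and the single true ex-post evaluation are reported in this post.

```python
from collections import Counter

# Hypothetical reconstruction of the manual tally described above: each of
# the 260 search hits was read and assigned exactly one document type.
# Only the total (260) and the single true ex-post are from the post;
# all other per-category counts here are illustrative placeholders.
hits = (
    ["Impact Evaluation"] * 120
    + ["Retrospective Evaluation"] * 70
    + ["Non-Evaluation"] * 40
    + ["Other"] * 29
    + ["Ex-Post Evaluation"] * 1
)

tally = Counter(hits)

assert sum(tally.values()) == 260        # matches the reported total of hits
assert tally["Ex-Post Evaluation"] == 1  # the one true ex-post found
print(tally.most_common())
```

Whatever the real per-category split was, the striking result is the final row: one true ex-post out of 260 hits.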


Did anything about these results surprise you? Yes, you read that right. Out of all 260 hits that came up from the search “ex-post evaluation” in the World Bank online database, an astounding grand total of one was a true ex-post evaluation of a past project. By contrast, a bilateral counterpart, Japan’s JICA, conducted 236 ex-post evaluations of past ODA projects between 2009 and 2011, making it one of our rare stars in ex-post learning.

Suffice it to say, Valuing Voices was shocked by these results. There exists a clear need, based on this research, for the World Bank to contribute to the process of informing future projects by learning from past experiences, successes, mistakes, and community feedback through the valuable ex-post evaluation method. While impact and retrospective evaluations are indeed important, the nature of compiling many evaluations into one broad analysis doesn’t allow for a detailed assessment of how individual projects performed, especially when the respondents in many other multilateral ex-posts tend to remain government counterparts rather than local respondents. This type of comprehensive analysis of the long-term sustainability of completed projects can only be done by conducting ex-post evaluations for projects on a case-by-case basis.

Jindra Cekan (head of Valuing Voices) was invited to attend and speak at the World Bank’s Civil Society Organization spring meetings last week and, armed with these findings, asked at two sessions why we don’t see ex-posts. Astrid Marroh, a senior staffer tasked with setting new strategy for the Bank, answered that longitudinal learning is "a nut we have not yet cracked". Varun Gauri, writing the major World Development Report 2015 on Mind and Culture at the Bank, said that not only do Bank staff need to "change the incentives from managing projects as managers to focus on the project’s ultimate aim," but also that the Bank "needs to follow the private sector’s approach by ‘dog-fooding’ our projects – living our own projects" (as when private-sector producers try eating the dog food themselves).

So what is the takeaway lesson from all this? Organizations like the World Bank set the precedent in international development, yet even this influential institution fails to conduct regular ex-post evaluations. Despite plentiful literature and recommendations on how to conduct ex-post evaluations and why they are important to the development process, it is clear that ex-post evaluation is not happening at the World Bank. Now is the time for the organization to change the status quo and start valuing the voices of its project participants by conducting rigorous ex-post evaluations of its projects, including feedback from the community level, in order to finally address this deficiency and establish a cycle of feedback loops and informed decision-making that will benefit all involved, and make ‘development aid’ obsolete.

Stepping up community self-sustainability, one [Ethiopian] step at a time



Having just come back from evaluation and design fieldwork for an Ethiopian Red Cross (ERCS)/ Swedish Red Cross/ Federation of the Red Cross and Red Crescent project, the power of communities is still palpable in my mind. They know what great impact looks like. They know what activities they can best sustain themselves. It’s up to us to ask, listen and learn from them and support their own monitoring/ evaluating/ reporting. It’s up to us to share such learning with others and to act on it everywhere.

There are a myriad of possible sustainability indicators, and the outcome indicators below, suggested by 116 rural participants from Tigray, Ethiopia, seem to fall into two categories of expected changes: Assets and Life Quality (Table 1). As the food security/livelihood project extended credit for animal purchases, it is logical that increased income, savings, assets and home investments, plus expenditures on food and electricity, appeared among them.

We gleaned this from discussions with participants, asking them “what can we track together that would show that we had impact?” Our question led to a spirited discussion of not only what was trackable, but also what could be publicly posted and ‘ground-truthed’ by the community. Discussing indicators led to even deeper conversations about the causes of food insecurity, which were illuminating to staff. What was surprising, for instance, was the extent to which families saw changing seasonal child field-labor practices in favor of 100% school attendance as great indicators. School attendance (or lack thereof) was dependent on families’ need for children’s seasonal labor in the fields. Community members said they knew who sent their children or not, which not only ‘cleaned’ the publicly posted data but triangulated implementer surveys and opened room for discussions of vulnerability.





Not only is this exciting for the project’s outcome tracking but, even more importantly, our team proposed to create a community self-monitoring system, as suggested by Causemann/Gohl in an IIED PLA Notes article, “Tools for measuring change: self-assessment by communities”, used in Africa and Asia. This learning, management and reporting process will fill a gaping need, as current “monitoring systems serve only for donor accountability, but neither add value for poor people nor for the implementing NGOs because they do not improve effectiveness on the ground.” The authors found not only that “participatory data collection produces higher quality data in some fields than standard extractive methodologies [as] understanding the context leads to a higher accuracy of data and learning processes [which] increase the level of accountability…” but also that such shared collaboration builds mutual learning and bridge-building.

While our community members may have offered to track this publicly to make this partner happy, men and women discussed it excitedly and happily embraced the idea of self-monitoring. ERCS will discuss with communities whether to track data monthly in notebooks or on a large chart hung in the woreda office for transparency. The data (Chart 1) would include these asset and quality-of-life indicators as well as loan repayments (tracked vertically), while households (tracked horizontally) could see who was meeting a goal (checked boxes), not meeting it fully (dashed boxes) or not yet meeting it at all (blank boxes). Community members corrected each other as they devised the indicators during our participatory research, and this openness reassures us that the public monitoring will be quite transparent as well.
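To make the chart's structure concrete, here is a rough illustration of the grid described above: indicators down the rows, households across the columns, and each cell checked, dashed or blank. The household names, indicator labels and statuses below are invented for illustration, not the project's actual data.

```python
# Symbols for the three cell states on the public monitoring chart:
# goal met (checked box), partly met (dashed box), not yet met (blank box).
SYMBOLS = {"met": "[x]", "partial": "[-]", "not_yet": "[ ]"}

# Illustrative indicators and households (hypothetical, not project data).
indicators = ["Increased savings", "Loan repayment", "School attendance"]
households = ["HH-1", "HH-2", "HH-3"]

# Illustrative statuses keyed by (indicator, household).
status = {
    ("Increased savings", "HH-1"): "met",
    ("Increased savings", "HH-2"): "partial",
    ("Increased savings", "HH-3"): "not_yet",
    ("Loan repayment", "HH-1"): "met",
    ("Loan repayment", "HH-2"): "met",
    ("Loan repayment", "HH-3"): "partial",
    ("School attendance", "HH-1"): "partial",
    ("School attendance", "HH-2"): "not_yet",
    ("School attendance", "HH-3"): "met",
}

def render_chart():
    """Render the grid: one header row of households, one row per indicator."""
    width = max(len(i) for i in indicators)
    lines = [" " * width + "  " + "  ".join(f"{h:>5}" for h in households)]
    for ind in indicators:
        cells = "  ".join(f"{SYMBOLS[status[(ind, h)]]:>5}" for h in households)
        lines.append(f"{ind:<{width}}  {cells}")
    return "\n".join(lines)

print(render_chart())
```

The point of the layout is exactly what the community described: anyone scanning a row sees how an indicator is faring village-wide, and anyone scanning a column sees one household's progress, making the posted data easy to 'ground-truth'.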





Further, it was especially satisfying to get feedback from across the three tabias (sub-districts) on which activities they felt they could sustain themselves, irrespective of the project’s continuation. Table 2 shows which activities communities felt were most self-sustainable by households; these could form the core of a follow-on project. Sheep/goats, poultry and oxen for fattening were highly prioritized by both women and men, in addition to a few choosing improved dairy cows. The convergence of similar responses was gratifying and somewhat unexpected, as there were several other project activities. The communities’ own priorities need to be taken seriously, as currently families get only one loan each, so self-sustainable activities are key.




There is more to incorporate into future project planning by NGOs like ERCS. The NGO-IDEAs concept mentioned above also involves project participants in setting goals and targets themselves, differentiating between who achieved them and why, and brainstorming who/what contributes to them and what they should do next. Peer groups, development agencies and other actors could collect and learn from the data. Imagine the empowerment if communities were to design, monitor and evaluate, and tell us as their audience!

And they must, according to ODI UK’s Kevin Watkins, who has a clear vision of how to achieve a global equity agenda for the post-2015 MDG goals. He suggests converting the principle of ‘leave no one behind’ into measurable targets. He argues that, by introducing a series of ‘stepping stone’ benchmarks, the world can set ambitious goals on equity by 2030. He writes, wisely, that “narrowing these equity deficits is not just an ethical imperative but a condition for accelerated progress towards the ambitious 2030 targets. There are no policy blueprints. However, the toolkit for governments actively seeking to narrow disparities…has to include some key elements [such as] identifying who is being left behind and why is an obvious starting point. That’s why improvements to the quality of data available to policy-makers is an equity issue in its own right”. Valuing Voices believes that who creates that data is an equally compelling equity issue.


So how will we reach these ambitious targets by 2030? By putting in stepping stone targets, returning project design functions to the ultimate clients – the communities themselves- and matching their wants with what we long to transfer to them. In this way we will be Valuing their Voices so much that they evaluate our projects jointly and we can respond. That’s how it should always have been.

What are your thoughts on this? We long to know.




[1] Ashley, H., Kenton, N., & Milligan, A. (Eds.). (2013). Tools for supporting sustainable natural resource management and livelihoods. Participatory Learning and Action, (66).

[2] Watkins, K. (2013, October 17). Leaving no-one behind: An equity agenda for the post-2015 goals.