International NGOs do their best to demonstrate the impact of their work, to be accountable, and to learn and improve. But measuring change in social-justice work is challenging and complicated, and proving attribution is even harder. At least, if you want to do these things in affordable and participatory ways…
Twice at Plan International, earlier in my career, I had worked to develop and implement systems that would demonstrate impact, and both times we had failed.
In this article I want to describe how, in ChildFund Australia, we succeeded, and were able to build and implement a robust and participatory system for measuring and attributing impact in our work.
Call it the Holy Grail!
I’ve been writing a series of blog posts about climbing each of the 48 mountains in New Hampshire that are at least 4000 feet tall. And, each time, I’ve also been reflecting a bit on the journey since I joined Peace Corps, 33 years ago: on development, social justice, conflict, experiences along the way, etc.
So far, I’ve described climbing 36 of the 48 peaks, and covered my journey from Peace Corps in Ecuador (1984-86) through to my arrival in Sydney in 2009, where I joined ChildFund Australia as the first “International Program Director.” This is my 37th post in the series.
In recent posts in this series I’ve been describing aspects of the ChildFund Australia “Development Effectiveness Framework” (“DEF”): the system that would help us make sure we were doing what we said we were going to do and, crucially, verify that we were making a difference in the lives of children and young people living in poverty. So we could learn and improve our work…
There are three particular components of the overall DEF that I am detailing in more depth, because I think they were especially interesting and innovative. In my previous blog I described how we used Case Studies to complement the more quantitative aspects of the system. These Case Studies were qualitative narratives of the lived experience of people experiencing change related to ChildFund’s work, which we used to gain human insights, and to reconnect ourselves to the passions that brought us to the social-justice sector in the first place.
This time, I want to go into more depth on two final, interrelated components of the ChildFund Australia DEF: Outcome Indicator Surveys and Statements of Impact. Together, these two components of the DEF enabled us to understand the impact that ChildFund Australia was making, consistent with our Theory of Change and organizational vision and mission. Important stuff!
Last time I described climbing to the top of Mt Bond on 10 August 2017, after having gotten to the top of Bondcliff. After Mt Bond, I continued on to West Bond (4540ft, 1384m), the last of three 4000-footers I would scale that day. (But, since this was an up-and-back trip, I would climb Mt Bond and Bondcliff twice! It would be a very long day.)
As I described last time, I had arrived at the top of Bondcliff at about 10:30am, having left the trail-head at Lincoln Woods Visitor Center just after 6:30am. This early start was enabled by staying the night before at Hancock Campground on the Kancamagus Highway, just outside of Lincoln, New Hampshire. I then reached the summit of Mt Bond at about 11:30am.
Now I would continue to the top of West Bond, and then retrace my steps to Lincoln Woods:
So, picking up the story from the top of Mt Bond, the Bondcliff Trail drops down fairly quickly, entering high-altitude forest, mostly pine and ferns.
After 20 minutes I reached the junction with the spur trail that would take me to the top of West Bond. I took a left turn here. The spur trail continues through forest for some distance:
I reached the top of West Bond at 12:30pm, and had lunch there. The views here were remarkable; it was time for lunch, and I was fortunate to be by myself, so I took my time at the summit.
Here are two spectacular videos from the top of West Bond. The first simply shows Bondcliff, with the southern White Mountains in the background:
And this second video is more of a full panorama, looking across to Owl’s Head, Franconia Ridge, Garfield, the Twins, Zealand, and back:
Isn’t that spectacular?!
After eating lunch at the top of West Bond, I left at a bit before 1pm, and began to retrace my steps towards Lincoln Woods. To get there, I had to re-climb Mt Bond and Bondcliff.
I reached the top of Mt Bond, for the second time, at 1:20pm. The view down towards Bondcliff was great!:
Here is a view from near the saddle between Mt Bond and Bondcliff, looking up at the latter:
As I passed over Bondcliff, at 2:15pm, I was slowing down, and my feet were starting to be quite sore. I was beginning to dread the descent down Bondcliff, Wilderness, and Lincoln Woods Trails… it would be a long slog.
Here’s a view from there back up towards Mt Bond:
But there were still 8 or 9 miles to go! And since I had declined the kind offer I had received to ferry my car up to Zealand trail-head, which would have saved me 3 miles, I had no other option but to walk back to Lincoln Woods.
It was nearly 5pm by the time I reached the junction with Twinway and the Lincoln Woods Trail. By that time, I was truly exhausted, and my feet were in great pain, but (as I said) I had no option but to continue to the car: no tent or sleeping bag, no phone service here.
The Lincoln Woods Trail, as I’ve described in more detail elsewhere, is long and flat and wide, following the remnants of an old forest railway:
Scratches from walking poles?
It was around 5:30pm when I got to the intersection with the Franconia Brook Trail, which is the path up toward Owl’s Head.
It was a very long slog down Lincoln Woods Trail – put one foot in front of the other, and repeat! And repeat and repeat and repeat and repeat …
Finally, at 6:40pm, I reached the Lincoln Woods Visitor Center, where I had parked my car at 6:30am that morning, having climbed three 4000-footers and walked 22 miles in just over 12 hours, injuring my feet in the process.
Looking back, I had accomplished a great deal, and the views from the top of three of New Hampshire’s highest and most beautiful peaks were amazing. But, at the time, I felt little sense of accomplishment!
Here is the diagram I’ve been using to describe the ChildFund Australia DEF:
In this article I want to describe two components of the DEF: #2, the Outcome Indicator Surveys; and #12, how we produced “Statements of Impact.” Together, these two components enabled us to measure the impact of our work.
First, some terminology: as presented in an earlier blog article in this series, we had adopted fairly standard definitions of some related terms, consistent with the logical framework approach used in most mature INGOs:
According to this way of defining things:
- A Project is a set of Inputs (time, money, technology) producing a consistent set of Outputs (countable things delivered in a community);
- A Program is a set of Projects producing a consistent set of Outcomes (measurable changes in human conditions related to the organization’s Theory of Change);
- Impact is a set of Programs producing a consistent set of changes to Outcome Indicators as set forth in the organization’s Strategic Plan.
But that definition of “Impact,” though clear and correct, wasn’t nuanced enough for us to design a system to measure it. More specifically, before figuring out how to measure “Impact,” we needed to grapple with two fundamental questions:
- How “scientific” did we want to be in measuring impact? In other words, were we going to build the infrastructure needed to run randomized control group trials, or would we simply measure change in our Outcome Indicators? Or somewhere in between?;
- How would we gather data about change in the communities where we worked? A census, surveying everybody in a community, which would be relatively costly? If not, what method for sampling would we use that would enable us to claim that our results were accurate (enough)?
The question “how ‘scientific’ did we want to be” when we assessed our impact was a fascinating one, getting right to the heart of the purpose of the DEF. The “gold standard” at that time, in technical INGOs and academic institutions, was to devise “randomized control group” trials, in which you would: implement your intervention in some places, with some populations; identify ahead of time a comparable population that would serve as a “control group” where you would not implement that intervention; and then compare the two groups after the intervention had concluded.
For ChildFund Australia, we needed to decide if we would invest in the capability to run randomized control group trials. It seemed complex and expensive but, on the other hand, it would have the virtue of being at the forefront of the sector and, therefore, appealing to technical donors.
When we looked at other comparable INGOs, in Australia and beyond, there were a couple that had gone that direction. When I spoke with my peers in some of those organizations, they were generally quite cautious about the randomized control trial (“RCT”) approach: though appealing in principle, in practice it was complex, requiring sophisticated technical staff to design and oversee the measurements, and to interpret results. So RCTs were very expensive. Because of the cost, people with practical experience in the matter recommended using RCTs, if at all, only for particular interventions that were either expensive or were of special interest for other reasons.
For ChildFund Australia, this didn’t seem suitable, mainly because we were designing a comprehensive system that we hoped would allow us to improve the effectiveness of our development practice, while also involving our local partners, authorities, and people in the communities where we worked. Incorporating RCTs into such a comprehensive system would have been very expensive, and would not have involved local people in any meaningful way.
The other option we considered, and ultimately adopted, hinged upon an operational definition of “Impact.” Building on the general definition shown above (“Impact is a set of Programs producing a consistent set of changes to Outcome Indicators as set forth in the organization’s Strategic Plan”), operationally we decided that:
In other words, we felt that ChildFund could claim to have made a significant impact in the lives of children in a particular area if, and only if:
- There had been a significant, measured, positive change in a ChildFund Australia Outcome Indicator; and
- Local people (community members, organizations, and government staff) determined in a rigorous manner that ChildFund had contributed to a significant degree to that positive change.
In other words:
- If there was no positive change in a ChildFund Australia Outcome Indicator over three years (see below for a discussion of why we chose three years), we would not be able to claim impact;
- If there was a positive change in a ChildFund Australia Outcome Indicator over three years, and local people determined that we had contributed to that positive change, we would be able to claim impact.
(Of course, sometimes there might be a negative change in a ChildFund Australia Outcome Indicator, which would have been worse if we hadn’t been working in the community. We were able to handle that situation in practice, in community workshops.)
I felt that, if we approached measuring impact in this way it would be “good enough” for us – perhaps not as academically robust as using RCT methods, but (if we did it right) certainly good enough for us to work with local people to make informed decisions, together, about improving the effectiveness of our work, and to make public claims of the impact of our work.
So that’s what we did!
As a reminder, soon after I had arrived in Sydney we had agreed a “Theory of Change” which enabled us to design a set of organization-wide Outcome Indicators. These indicators, designed to measure the status of children related to our Theory of Change, were described in a previous article, and are listed here:
These Outcome Indicators had been designed with technical rigor, and were therefore robust. And because they had been derived from the ChildFund Australia Vision, Mission, and Program Approach, they measured changes that were organically related to the claims we were making in the world.
So we needed to set up a system to measure these Outcome Indicators; this would become component #2 in the DEF (see Figure 1, above). And we had to design a way for local partners, authorities, and (most importantly) people from the communities where we worked to assess changes to these Outcome Indicators and reach informed conclusions about who was responsible for causing the changes.
First, let me outline how we measured the ChildFund Australia Outcome Indicators.
Outcome Indicator Surveys (Component #2 in Figure 1, Above)
Because impact comes rather slowly, an initial, baseline survey was carried out in each location and then, three years later, another measurement was carried out. A three-year gap was somewhat arbitrary: one year was too short, but five years seemed a bit long. So we settled on three years!
Even though we had decided not to attempt to measure impact using complex randomized control trials, these survey exercises were still quite complicated, and we wanted the measurements to be reliable. This was why we ended up hiring a “Development Effectiveness and Learning Manager” in each Country Office – to support the overall implementation of the DEF and, in particular, to manage the Outcome Indicator Surveys. And these surveys were expensive and tricky to carry out, so we usually hired students from local universities to do the actual surveying.
Then we needed to decide what kind of survey to carry out. Given the number of people in the communities where we worked, we quickly determined that a “census,” that is, interviewing everybody, was not feasible.
So I contacted a colleague at the US Member of the ChildFund Alliance, who was an expert in this kind of statistical methodology. She strongly advised me to use the survey method that they (the US ChildFund) were using, called “Lot Quality Assurance Sampling.” LQAS seemed to be less expensive than other survey methodologies, and it was highly recommended by our expert colleague.
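For readers unfamiliar with LQAS: the method classifies each survey area as having reached, or not reached, a coverage target by interviewing a small fixed sample (classically 19 respondents per area) and comparing the number of positive responses against a pre-computed “decision rule.” As a rough sketch of how such a rule is derived from the binomial distribution (the thresholds and risk levels below are illustrative assumptions, not ChildFund’s actual parameters):

```python
from math import comb

def binom_tail(n, d, p):
    """P(X >= d) for X ~ Binomial(n, p)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(d, n + 1))

def lqas_decision_rule(n=19, upper=0.80, lower=0.50, max_risk=0.10):
    """Smallest decision rule d keeping both misclassification risks under max_risk.

    alpha: an area truly at `upper` coverage is wrongly classified as failing.
    beta:  an area truly at `lower` coverage is wrongly classified as passing.
    (Illustrative sketch only, not ChildFund's actual implementation.)
    """
    for d in range(n + 1):
        alpha = 1 - binom_tail(n, d, upper)   # P(X < d | p = upper)
        beta = binom_tail(n, d, lower)        # P(X >= d | p = lower)
        if alpha <= max_risk and beta <= max_risk:
            return d, alpha, beta
    return None
```

With a 19-respondent sample, an 80% coverage target, and a 50% lower threshold, this yields the classic decision rule of 13: an area “passes” on an indicator if at least 13 of the 19 respondents answer positively, with both misclassification risks held under 10%.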
(In many cases during this period, we relied on technical recommendations from ChildFund US. They were much bigger than the Australian Member, with excellent technical staff, so this seemed logical and smart. But, as with Plan International during my time there, the US Member had very high staff turnover, which led to many changes in approach. In practice, this meant that although ChildFund Australia had adopted several of ChildFund US’s Outcome Indicators in the interests of commonality, and had begun to use LQAS for the same reason, the US Member was soon changing its Indicators and abandoning LQAS because new staff felt it wasn’t the right approach. This led the US Member to express some disagreement with how we in Australia were measuring Impact, even though we were following their previous recommendations! Sigh.)
Our next step was to carry out baseline LQAS surveys in each field location. This took time to accomplish, as even the relatively simple LQAS was a more complex exercise than we were used to. The surveys were supervised by the DEL Managers and usually carried out by students from local universities. Finally, the DEL Managers prepared baseline reports summarizing the status of each of the ChildFund Australia Outcome Indicators.
Then we waited three years and repeated the same survey in each location.
(In an earlier article I described how Plan International, where I had worked for 15 years, had failed twice to implement a DEF-like system, at great expense. One of the several mistakes that Plan had made was that they never held their system constant enough to be comparable over time. In other words, in the intervening years after measuring a baseline, they tinkered with [“improved”] the system so much that the second measurement couldn’t be compared to the first one! So it was all for naught, useless. I was determined to avoid this mistake, so I was very reluctant to change our Outcome Indicators after they were set, in 2010; we did add a few Indicators as we deepened our understanding of our Theory of Change, but that didn’t get in the way of re-surveying the Indicators that we had started with, which didn’t change.)
Once the second LQAS survey was done, three years after the baseline, the DEL Manager would analyze differences and prepare a report, along with a translation of the report that could be shared with local communities, partners, and government staff. The DEL Manager, at this point, did not attempt to attribute changes to any particular development actor (local government, other NGOs, the community themselves, ChildFund, etc.), but did share the results with the communities for validation.
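The blog doesn’t spell out how the DEL Managers judged whether a measured difference was meaningful rather than survey noise, but for the kind of proportion-based indicators we used, a standard approach is a two-proportion z-test. The sketch below is illustrative only, assuming simple proportions out of a sample, and is not the DEF’s actual analysis:

```python
from math import erf, sqrt

def prop_change_significant(x1, n1, x2, n2, alpha=0.05):
    """Two-sided two-proportion z-test comparing a baseline measurement
    (x1 positives out of n1) against a re-survey (x2 out of n2)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p2 - p1) / se
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))  # normal tail
    return p2 - p1, p_value, p_value < alpha
```

For example, a hypothetical indicator rising from 40/100 at baseline to 60/100 three years later would register as a significant change, while 50/100 to 52/100 would not.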
Instead, determining attribution, and with it impact, was the job of the final DEF component I want to describe.
Statements of Impact (Component #12 in Figure 1, Above)
The most exciting part of this process was how we used the changes measured over three years in the Outcome Indicators to assess Impact (defined, as described above, as change plus attribution).
The heart of this process was a several-day-long workshop at which local people would review and discuss changes in the Outcome Indicators, and attribute the changes to different actors in the area. In other words, if a particular indicator (say, the percentage of boys and girls between 12 and 16 years of age who had completed primary school) had changed significantly, people at the workshop would discuss why the change had occurred – had the local education department done something to cause the change? Had ChildFund had an impact? Other NGOs? The local community members themselves?
Finally, people in the workshop would decide the level of ChildFund’s contribution to the change (“attribution”) on a five-point scale: none, little, some, a lot, completely. This assessment, made by local people in an informed and considered way, would then serve as the basic content for a “Statement of Impact” that would be finalized by the DEL Manager together with his or her senior colleagues in-country, Sydney-based IPT staff and, finally, myself.
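The tabulation itself is simple. As a sketch (the function and scale labels here are my own, hypothetical, not the workshop’s actual tooling), taking the median of the ordinal ballots gives a robust consensus rating:

```python
from collections import Counter

# Hypothetical labels for the five-point attribution scale described above
SCALE = ["none", "a little", "some", "a lot", "completely"]

def tally_attribution(ballots):
    """Summarize workshop ballots on ChildFund's contribution to a change.

    Returns the (upper) median rating, a robust consensus measure on an
    ordinal scale, along with the raw vote counts.
    """
    counts = Counter(ballots)
    ranks = sorted(SCALE.index(b) for b in ballots)
    consensus = SCALE[ranks[len(ranks) // 2]]
    return consensus, counts
```

A median is preferable to averaging here because the scale is ordinal: the “distance” between “none” and “a little” isn’t necessarily the same as between “a lot” and “completely.”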
We carried out the very first of these “Impact” workshops in Svay Rieng, Cambodia, in February 2014. Because this was the first of these important workshops, DEL Managers from Laos and Viet Nam attended, to learn, along with three of us from Sydney.
Here are some images of the ChildFund team as we gathered and prepared for the workshop in Svay Rieng:
Here are images of the workshop. First, I’m opening the session:
Lots of group discussion:
The DEL Manager in Cambodia, Chan Solin, prepared a summary booklet for each participant in the workshop. These booklets were a challenge to prepare, because they would be used by local government, partners, and community members; but Solin did an outstanding job. (He also prepared the overall workshop, with Richard Geeves, and managed proceedings very capably.) The booklet presented the results of the re-survey of the Outcome Indicators as compared with the baseline:
Here participants are discussing results, and attribution to different organizations that had worked in Svay Rieng District over the three years:
Subgroups would then present their discussions and recommendations for attribution. Note the headphones – since this was our first Impact Workshop, and ChildFund staff were attending from Laos, Viet Nam, and Australia in addition to Cambodia, we provided simultaneous translation:
Here changes in several Outcome Indicators over the three years (in blue and red) can be seen. The speaker is describing subgroup deliberations on attribution of impact to the plenary group:
Finally, a vote was taken to agree the attribution of positive changes to Outcome Indicators. Participants voted according to their sense of ChildFund’s contribution to the change: none, a little, some, a lot, or completely. Here is a ballot and a tabulation sheet:
Finally, here is an image of the participants in that first Statement of Impact workshop: local community members, government staff, and ChildFund staff from the local area, the Country Office, Sydney, and neighboring Viet Nam:
Once the community workshops were finished, our local Senior Management would review the findings and propose adjustments to our work. Then the DEL Managers would prepare a final report, which we described as “Statements of Impact.”
Generally speaking, these reports would include:
- An introduction from the Country Director;
- A description of the location where the Statement of Impact was produced, and a summary of work that ChildFund had done there;
- An outline of how the report was produced, noting the three-year gap between baseline and repeat survey;
- Findings agreed by the community regarding changes to each Outcome Indicator along with any attribution of positive change to ChildFund Australia;
- Concluding comments and a plan of action for improvement, agreed by the local Country Office team and myself.
Examples of these reports are shared below.
This process took some time to get going, because of the three-year delay to allow for re-surveying, but once it commenced it was very exciting. Seeing the “Statement of Impact” reports come through to Sydney, in draft, from different program countries, was incredible. They showed, conclusively, that ChildFund was really making a difference in the lives of children, in ways that were consistent with our Theory of Change.
Importantly, they were credible, at least to me, because they showed some areas where we were not making a difference, either because we had chosen not to work in a particular domain (to focus on higher priorities) or because we needed to improve our work.
I’m able to share four ChildFund Australia Statements of Impact, downloaded recently from the organization’s website. These were produced as described in this blog article:
- Cambodia: Svay Chrum, 2014 – ChildFund Cambodia Statement of Impact – Svay Chrum – 2014. Many images shown above in this article are from the workshop that led to this Statement of Impact;
- Cambodia: Romeas Haek, 2014 – ChildFund Cambodia Statement of Impact – Romeas Haek – 2014;
- Cambodia: Chhloung, 2015 – ChildFund Cambodia Statement of Impact – Chhloung – 2015;
- Viet Nam: Quang Uyen, 2014 – ChildFund Vietnam Statement of Impact – Quang Uyen – 2014.
Here are a few of the findings from that first “Statement of Impact” in Svay Chrum:
- ChildFund made a major contribution to the increase in primary-school completion in the district:
- Although the understanding of diarrhea management had improved dramatically, it was concluded that ChildFund had not contributed to this, because we hadn’t implemented any related projects. “Many development actors contributed to the change”:
- ChildFund had a major responsibility for the improvement in access to hygienic toilets in the district:
- ChildFund made a significant contribution to the increase in access to improved, affordable water in the district:
- ChildFund had made a major contribution to large increases in the percentage of children and youth who reported having opportunities to voice their opinions:
- Although the percentage of women of child-bearing age in the district who were knowledgeable about how to prevent HIV infection had increased, it was determined that ChildFund had made only a minor contribution to this improvement. And the group made recommendations regarding youth knowledge, which had actually declined:
To me, this is fantastic stuff, especially given that the results emerged from deep and informed consultations with the community, local partners, and local authorities. Really, this was the Holy Grail – accountability, and lots of opportunity for learning. The results were credible to me, because they seemed to reflect the reality of what ChildFund had worked on, and pointed out areas where we needed to improve; the report wasn’t all positive!
For me, the way that the Outcome Indicator Surveys and Statements of Impact worked was a big step forward, and a major accomplishment. ChildFund Australia now had a robust and participatory way of assessing impact so that we could take steps to confidently improve our work. With these last two components of the DEF coming online, we had managed to put in place a comprehensive development-effectiveness system, the kind of system that we had not been able to implement in Plan.
As I shared the DEF – its design, the documents and reports it produced – with our teams, partners, the Australian government, and donors, I began to get lots of positive feedback. At least for its time, in Australia, the ChildFund Australia DEF was the most comprehensive, robust, participatory, and useful system of its kind that anybody had seen put into place. Not the most scientific, perhaps, but something much better: usable, useful, and empowering.
My congratulations and thanks to the people who played central roles in creating, implementing, and supporting the DEF:
- In Sydney: Richard Geeves and Rouena Getigan;
- And the DEL Managers in our Country Offices: Chan Solin (Cambodia), Joe Pasen (PNG), Marieke Charlet (Laos), and Luu Ngoc Thuy and Bui Van Dung (Viet Nam).
Here are links to earlier blogs in this series. Eventually there will be 48 articles, each one about climbing one of New Hampshire’s 4000-footers, and also reflecting on a career in international development:
- Mt Tom (1) – A New Journey;
- Mt Field (2) – Potable Water in Ecuador;
- Mt Moosilauke (3) – A Water System for San Rafael (part 1);
- Mt Flume (4) – A Windmill for San Rafael (part 2);
- Mt Liberty (5) – Onward to Colombia, Plan International in Tuluá;
- Mt Osceola (6) – Three Years in Tuluá;
- East Osceola (7) – Potable Water for Cienegueta;
- Mt Passaconaway (8) – The South America Regional Office;
- Mt Whiteface (9) – Empowerment!;
- North Tripyramid (10) – Total Quality Management for Plan International;
- Middle Tripyramid (11) – To International Headquarters!;
- North Kinsman (12) – Fighting Fragmentation and Building Unity: New Program Goals and Principles for Plan International;
- South Kinsman (13) – A Growth Plan for Plan International;
- Mt Carrigain (14) – Restructuring Plan International;
- Mt Eisenhower (15) – A Guest Blog: Max van der Schalk Reflects on 5 Years at Plan’s International Headquarters;
- Mt Pierce (16) – Four Years At Plan’s International Headquarters;
- Mt Hancock (17) – Hanoi, 1998;
- South Hancock (18) – Plan’s Team in Viet Nam (1998-2002);
- Wildcat “D” Peak (19) – Plan’s Work in Viet Nam;
- Wildcat Mountain (20) – The Large Grants Implementation Unit in Viet Nam;
- Middle Carter (21) – Things Had Changed;
- South Carter (22) – CCF’s Organizational Capacity Assessment and Child Poverty Study;
- Mt Tecumseh (23) – Researching CCF’s New Program Approach;
- Mt Jackson (24) – The Bright Futures Program Approach;
- Mt Isolation (25) – Pilot Testing Bright Futures;
- Mt Lincoln (26) – Change, Strategy and Culture: Bright Futures 101;
- Mt Lafayette (27) – Collective Action for Human Rights;
- Mt Willey (28) – Navigating Principle and Pragmatism, Working With UUSC’s Bargaining Unit;
- Cannon Mountain (29) – UUSC Just Democracy;
- Carter Dome (30) – A (Failed) Merger In the INGO Sector (1997);
- Galehead Mountain (31) – What We Think About When We Think About A Great INGO Program;
- Mt Garfield (32) – Building Strong INGO Teams: Clarity, Trust, Inspiration;
- Mt Moriah (33) – Putting It All Together (Part 1): the ChildFund Australia International Program Team;
- Owls’ Head (34) – Putting It All Together (Part 2): ChildFund Australia’s Theory of Change;
- Bondcliff (35) – ChildFund Australia’s Development Effectiveness System;
- Mt Bond (36) – “Case Studies” in ChildFund Australia’s Development Effectiveness System.