In my last blog in this series, I wrote about the first of three major projects carried out when I served as Program Director at Plan International’s International Headquarters (“IH”). When I moved from my previous post as Regional Director for South America, Plan’s then-new International Executive Director, Max van der Schalk, and I had agreed that I would stay in the Program Director role for three years, accomplish some specific goals, and then I would return to the field.
Those three carefully chosen major projects would be:
- We would articulate a set of program goals for the organization, high-level enough to be suitable across our six Regions, yet specific enough to build unity, align our work with best practices, and enable accountability. I wrote about this last time;
- We would create a growth plan for the organization, so that resource allocations would be more rational, less political, less dependent on the force of character of a particular management presentation. That’s the subject this time;
- We would finish the restructuring of the agency. Now that regionalization was complete, and IH had been right-sized, we needed to finish the job and review how Plan was structured in the field, at country level. That’s for next time.
With clear goals, an objective way of allocating resources across countries, and the completion of our restructuring, I felt that Plan would be well-positioned to focus clearly on program effectiveness, and be less internally distracted. More united. And I was determined to take a systems approach – fix the problems Plan faced by changing the system using those three key levers – goals, structure and resource allocation. I sought to change the system in part by creating a new and shared language with which Plan staff would describe and understand our work in common ways, a new lexicon.
In this post I want to describe the second of those three projects – the preparation of an objective, data-driven, rigorous growth plan for Plan International.
(Portions of the content below have been adapted from two journal articles I wrote and published in “Nonprofit Management and Leadership,” after I left IH. Copies of those original articles can be found here: NML – Fragmentation Article and here: how-should-an-international-ngo-allocate-growth.)
I’ve been writing over the last few months about climbing each of the 48 mountains in New Hampshire that are over 4000 feet tall. Each time I’ve also been reflecting a bit on the journey since I joined Peace Corps, 30 years ago: on development, social justice, conflict, experiences along the way, etc.
On July 3, 2016, Eric and I climbed North and South Kinsman, two of the three 4000-footers in the Cannon-Kinsman range, just west of Franconia Notch. Last time, I wrote about getting to the top of North Kinsman, which was really just the first 25% of the day! Here I’ll describe the second part of that long, long day – the ascent of South Kinsman (4358ft, 1328m), and our return to the beginning of the hike.
We had arrived at the top of North Kinsman at around 2pm, after leaving the parking area on NH 116 at 11am. The short, 0.9m hike over from there to the summit of South Kinsman didn’t take too long – we arrived there at around 3pm. It was a beautiful day, but you can see how I had perspired through both shirts on the way up!:
The walk down off of South Kinsman was “steep and rough,” but otherwise a beautiful, typical White Mountains forest walk, with a nice rock sculpture along the way.
About 20 minutes after leaving the top of South Kinsman, we passed just to the east of Harrington Pond, with a beautiful view of the sky towards the south-west:
It was a steep drop off of the top of South Kinsman, with several small waterfalls along Eliza Brook:
This section of Kinsman Ridge Trail forms a small part of the famous Appalachian Trail, which runs from Springer Mountain in Georgia to Mt Katahdin in Maine, some 2190 miles end-to-end. Along the Appalachian Trail there are lean-tos and huts used by thru-hikers for overnights, as well as by day-hikers like Eric and me for quick rests. One of those huts, Eliza Brook Shelter, is found along Kinsman Ridge Trail:
We arrived at the Shelter at 4:45pm and, about a half-hour later, we arrived at the junction of Reel Brook Trail, which we took, heading west, downhill.
After descending Reel Brook Trail to NH Rt 116, through about 3.5 miles of pleasant White Mountain forest, we arrived back where we started – it was nearly 8pm!
The loop over North and South Kinsman had taken us 9 hours, 13 hours if you include the drive up from Durham and back home. But it was a fantastic day.
My second major priority at IH was finding a better way for Plan to allocate resources, which meant deciding where the agency would grow. This felt like a very strategic question: Plan was growing quickly in those days, and deciding where to invest those new resources was important. It would be a tangible manifestation of our strategy.
My own experience with this topic was, in some ways, an example of how not to approach these decisions. As Regional Director for South America, before going to IH, I had obtained authorisation to negotiate with the government of Paraguay with the aim of reaching an agreement for Plan to work there. From my perspective as Regional Director, this made sense, and with my old friend Andy Rubi acting as International Executive Director at the time, before Max’s arrival, I was easily able to get approval, and so we began to work in Paraguay. My well-known ability to dazzle senior-management meetings with slick presentations didn’t hurt, either!
In retrospect, even by the time I arrived at IH soon after we opened in Paraguay, that decision seemed questionable: there were many places in the world with more need than Paraguay. I had been very parochial in my approach, battling to expand as much as possible in South America, my “patch,” not really considering what was best, overall. But there had been no overall strategy for allocating resources across countries in Plan at that point, no analytical approach to balance the normal political advocacy and rhetorical skill that was all we had. So I was approaching things in the “normal” way.
Helping the organization make these sensitive decisions in a strategic manner would be valuable, a key lever of change that would help us “think globally and act locally.” Once at IH, I thought that if I could find a way to approach resource allocation in a skilful way, it might help us pull together and operate as a united organisation despite the centrifugal forces created by regionalisation.
But, could I find a way for Plan to allocate resources in an objective way?
International nongovernmental organizations (INGOs) can scale up their work and impact in several ways, but they often find expansion to be difficult to manage. Of course, there are well-known strategic and managerial challenges facing growing organizations in all sectors of the economy, and INGOs in particular face tough choices when seeking to scale up their impact.1 In addition, unlike private and public sector organizations, INGOs lack simple and commonly accepted analytical tools for targeting additional resources consistent with their organizational aims. A slow but steady blurring of institutional focus can result.
As I have described earlier, by the time I arrived at IH, Plan was quite decentralized, with a structure divided into six regions spanning the globe; within these regions were 42 program country offices. Day-to-day management was undertaken by the International Executive Director (“IED”) and six Regional Directors; International Headquarters staff, based in Woking, England, provided services to program and donor country operations. Members of the International Board of Directors, who were all voluntary, were nominated by the national boards of the donor country offices, in numbers based on the number of children supported by each donor country. Staff in Plan’s fourteen national donor country offices were responsible for recruiting and serving individual sponsors and other donors.
Plan’s income grew strongly through the late 1980s and 1990s: annual field expenditures increased from around $50 million in 1987 to over $219 million in 1997, an impressive increase of more than 220% in real terms.
Before 1995, when we created a new approach, Plan’s geographical expansion was guided pragmatically and opportunistically. The result was that incremental resources were directed toward countries where the organizational capacity to grow already existed. Although there is nothing inherently wrong with opportunistic growth, or pragmatism for that matter, this approach allowed the organization to drift.
For example, as can be seen in the Figure, the world average under-five mortality rate (U5MR), weighted for population, dropped continuously from 1975 to 1993. The world was making good progress! The weighted-average U5MR corresponding to Plan’s caseload distribution rose from 1975 to 1980, indicating that Plan was gradually moving toward needier countries. But after 1981 this trend reversed, and the organization gradually began to work in relatively less needy countries. In fact, Plan was gradually and unintentionally evolving toward working in countries in which under-five mortality rates were decreasing more quickly than the global average.
Two examples illustrate the trend. First, from 1977 to 1978, Plan’s weighted-average U5MR increased from 126 to 132. This increase took place because of strong expansion in Burkina Faso, Bolivia, Haiti, Mali, and Sierra Leone, countries with U5MRs above the Plan average, and a reduction of caseload in Korea, with a relatively low U5MR. So although Plan was reducing its caseload in Ethiopia, a high-U5MR country, and increasing it somewhat in Colombia and the Philippines, which had U5MRs lower than Plan’s average, the net effect was to increase Plan’s weighted-average U5MR.
From 1981 to 1982, Plan’s weighted-average U5MR dropped from 137 to 132. Here an increase in caseload in countries with U5MRs above the Plan-wide average, such as Burkina Faso, Mali, and the Sudan, was more than offset by strong growth in Colombia, Ecuador, and the Philippines, which were relatively low-U5MR countries. Caseloads were increased in Colombia, Ecuador, and the Philippines at least in part because it was easier for staff to manage growth in these countries, a trend that continued through the 1980s.
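The caseload-weighted averages behind these figures are simple to compute. Here is a minimal Python sketch of the calculation; the country names echo those above, but every caseload and U5MR figure is invented for illustration, not taken from Plan’s actual data:

```python
# Caseload-weighted average U5MR, the measure used in the trend analysis above.
# All caseload and U5MR numbers below are purely illustrative.

def weighted_u5mr(caseloads, u5mrs):
    """Average U5MR across countries, weighted by Plan's caseload in each."""
    total = sum(caseloads.values())
    return sum(caseloads[c] * u5mrs[c] for c in caseloads) / total

# Hypothetical portfolio: two high-U5MR countries and one low-U5MR country.
caseloads = {"Burkina Faso": 30_000, "Haiti": 25_000, "Korea": 10_000}
u5mrs = {"Burkina Faso": 170, "Haiti": 200, "Korea": 40}

before = weighted_u5mr(caseloads, u5mrs)

# Reducing caseload in the low-U5MR country raises the weighted average,
# i.e. shifts the portfolio toward need.
caseloads["Korea"] = 2_000
after = weighted_u5mr(caseloads, u5mrs)

print(round(before, 1), round(after, 1))  # 161.5 178.6
```

Shrinking the caseload in the low-U5MR country raises the weighted average – the same mechanism by which the Korea reduction pushed Plan’s average up in 1977–78, and by which later growth in easier-to-manage, lower-U5MR countries pulled it back down.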
For an organization seeking to build better futures for deprived children, families, and communities, this drift toward relatively less needy environments was unsettling and inappropriate. Especially during a decade of exceptional growth, a mechanism to enable Plan managers to target organizational expansion was needed.
Plan’s situation was not unique. Geographic expansion experienced by INGOs is often strongly influenced by where growth can be managed. Internal politics, pressure from governmental development agencies and other external funders, attention from the mass media, theories currently in vogue among development professionals, the ability of an individual manager to speak persuasively in public, or simply the dynamics of a particular meeting often drive these decisions. As a consequence, organizational strategy – particularly concerning target populations – can become less of a focus. Day-to-day pressures dominate the attention of managers.
That sounds a lot like what drove my (in retrospect, wrong) decision to open in Paraguay!
Such pressures are not necessarily harmful. But without objective analytical tools that can demonstrate that resource allocation decisions are consistent (or inconsistent) with institutional strategy, organizational drift of the sort that Plan was experiencing can result.
To help correct this evolution toward less-needy populations, I proposed that a methodology be developed to direct Plan’s geographical expansion, and Senior Management approval was obtained.
A wide-ranging in-house analysis of global poverty trends, funding prospects, and organizational capacities was then carried out in 1994. The culmination of this strategic review was the November 1994 approval by Plan’s International Board of nine “Strategic Directions for Growth,” covering a range of issues such as program effectiveness, priorities for institutional strengthening, the fundraising approach, and a policy for human resource development.
One of these Strategic Directions was particularly relevant in developing a methodology to guide resource allocation: in the section entitled “Where to Work,” it was stated that “Plan should gradually evolve towards needier countries, and towards poorer regions within new and existing program countries. The essence of Plan’s intervention is that useful and sustainable development is achieved, so that the quality of life of deprived children in developing countries is improved. The potential for this impact should be verified before entry into new program countries” (emphases added).
Therefore, the first step for the growth plan was to develop indicators to gauge the two central points of the policy statement: the need of a country and the potential for impact of Plan’s program there. Such indicators would have to be intuitive and useful for managers rather than suitable only for experts, employ data that were widely available in a regularly updated form and generally accepted, and be amenable to quantitative techniques so that results could be as objective as possible.
Of course, a data-driven approach would only take us so far; but I thought it was the right place to start.
Because of the focus of Plan’s work on children, any management indicator of need had to be related to child welfare. The Under 5 Mortality Rate (“U5MR”) can be viewed as the “single most important indicator of the state of a nation’s children” for a variety of compelling reasons:2
- “It measures an end result of the development process, rather than an ‘input’”;
- It is “known to be the result of a wide variety of inputs”;
- It is less susceptible to the fallacy of the average because an advantaged child cannot be a thousand times more likely to survive than a deprived child.
At the same time, the U5MR is intuitive and useful to managers, and data are updated regularly by many agencies. Finally, the U5MR is amenable to quantitative manipulation because it is an absolute, not a relative, measure.
On this basis, I selected U5MR as the parameter by which Plan would assess need for its growth plan.
Measuring Potential for Impact
The creation of a simple indicator for potential for impact was more challenging, but the concept of a national performance gap, pioneered by UNICEF, turned out to be helpful.
The idea starts with the fact that a strong correlation exists between national wealth, as measured by gross national product (GNP) per capita, and various measures of social welfare. In general, the richer a country is, the better off its citizens are: average U5MRs are lower, educational levels are higher, and maternal mortality rates are lower, for example. Because of this strong correlation, given a nation’s wealth, various indicators of social welfare can be predicted with a fair degree of certainty.
However, some countries achieve more than can be expected given their levels of national income, and others achieve less. These countries perform better than others. War, corruption, the political system of the country, budgetary priorities, and many other factors can affect this performance. In short, the performance of a country in deploying its national wealth, no matter how meagre, to achieve expected levels of social welfare must depend on a wide variety of factors – I felt that these were just the sorts of factors that could determine the potential for impact of Plan’s programs.
Just to go a bit deeper, consider two hypothetical countries with similar national wealth, as measured by their respective GNP per capita. The solid line in the Figure depicts the global correlation between income and some hypothetical measure of child welfare, constructed by carrying out a log regression analysis on the performance of all countries. As can be seen, country A has a (say, marginally) higher level of child welfare than does country B and is in fact doing better than the correlation analysis would have predicted. With the same economic resources, country A must somehow be creating a socioeconomic environment that is more amenable to child development than is country B. It is important to note that the absolute level of child poverty in both country A and country B can be quite severe, with many needy children in each country, but the relative performance of the two countries varies.
But we can see that something is going right in country A, relative to country B.
Bearing in mind that Plan sought to focus its work in areas where conditions are not hostile to sustainable development (it was not a humanitarian organization, at least in the mid-1990’s), the organisation might anticipate having more impact in the country that is achieving all that can be expected (no matter how little) with the resources (no matter how meagre) it has. In other words, Plan should target its marginal resources on country A instead of country B.
Thus, instead of somehow directly measuring the likely impact of Plan’s program in a given country, a task that is conceptually complex, I decided to use an indirect measure: the performance of that nation in achieving child development, no matter its national wealth.
To assess this performance concretely, a compound index of the status of children was created. The index was formed by combining the U5MR, the percentage of primary school children reaching grade 5, and the enrollment ratio of females as a percentage of males in primary school. These data are all readily available, intuitively simple to use, and absolute rather than relative measures. (The U5MR is therefore used twice in this analysis: once directly, to measure need, and again indirectly, as one of three components combined and analyzed to measure government performance. The U5MR was chosen again because it is an effective measure of need and at the same time well represents the impact of efforts of a government in the health and education areas.)
This index, which I referred to as the “Plan Index”, was then analyzed to determine whether a given country, while qualifying as a Plan program country, was achieving more or less than could be expected given its national income. The difference between actual and expected performance was denoted as the “Plan Gap”.
I calculated the Plan gap by performing a standard log regression on the Plan Index against per capita income at purchasing power parity. A graphical portrayal of the result is given in the Figure; the gap between the smooth series of diamond-shaped points, which represents expected levels of the Plan Index for all countries qualifying as program countries, and real levels, shown as round points, represents the Plan Gap. A positive Plan Gap (actual points above predicted levels) indicates that a country is performing better than would be expected given its national wealth; a negative gap suggests that performance is lagging.
The analysis described was carried out on the eighty-one countries that Plan considered for program operations. Then these countries were prioritized by combining the U5MR (measuring need) with the Plan Gap (measuring potential for impact); the U5MR was added to 2.5 times the Plan Gap to produce a compound index that was used for sorting.
The results are shown next: the table orders countries by this compound index; current program countries are shown in italic type, and countries selected for active consideration as new program countries are shown in boldface type. Thus Niger would appear to have the highest priority and the Dominican Republic the lowest. Four countries in which Plan had program operations in 1995 – Colombia, Paraguay, Sri Lanka, and Thailand – no longer qualified and therefore we decided to discuss their phase-out.
All that data analysis was great, but it took us only so far. We thought that a methodology based exclusively on data would still miss much that is of value: informed judgment, experience, and intuition are also valuable tools when considering resource allocation. And responsiveness and flexibility are two of the virtues of NGOs. These attributes can be especially useful when employed in the light of the rigorous, data-driven analysis that was carried out.
Therefore, we arranged for the quantitative analysis outlined above to be reviewed by a panel of Plan staff, a member of Plan’s International Board of Directors, and an invited guest from another large INGO. A few of the qualitative factors examined in this review included:
- Projected U5MR. What is the trend for need in the country? Is the effect of HIV/AIDS likely to increase U5MRs beyond current trends?
- Development climate. Is the environment in the country conducive to development? Is the government in favor of NGOs working there? Has the government signed the Convention on the Rights of the Child, and produced a plan of action to implement the convention?
- Risk. How risky is the environment in the country? Is it stable? Are international investors working there? How likely is conflict, war, or some other similar problem?
- Market potential. Is there likely to be interest from sponsors and other donors? Are there ties between the country and any of Plan’s donor countries?
- Saturation. How many INGOs, bilateral agencies, and multilaterals operate in the country? What are their budget and geographical coverage? Is there room for Plan?
- Caseload potential. Is the population of needy children large enough to enable sufficient economies of scale for Plan?
Starting with the quantitative analysis outlined above, this discussion produced a proposal for resource allocation (a growth plan), which was reviewed by Plan’s senior management team of field and headquarters-based staff. Thus the objective analysis was complemented by extensive discussion based on real, informed experience.
For example, although analytical work highlighted Niger as the highest priority in 1995, political instability there (not completely captured in the quantitative analysis outlined above) meant that Plan did not consider working in that nation until later. And though some Plan Regional Directors felt strongly that Plan should continue to direct resources to countries such as Colombia and Sri Lanka, analytical results were helpful in convincing managers that these countries, though undeniably poor, had less child-related need than others and should thus be lower priorities for the organization.
The final growth plan was therefore created by combining the priorities and recommendations emerging from rigorous analysis with the informed experience of field-based staff. Decisions were influenced, still, by political influence within the organisation and by rhetorical flourish, but these factors were now balanced by data.
I attach here a version of the growth plan prepared for consideration by Plan’s International Board of Directors in June, 1995 – plan-international-growth-plan. Note, on page 7, a recommendation that Plan phase out operations in Paraguay!
During the rest of my time at IH, Plan’s senior management team frequently reviewed resource allocation requests, both when annual budgets were formally approved and when adjustments were made during the year. Since discussions began with a review of the analytical results from the growth plan, the entire process became less confrontational, more objective, less emotional, and more productive. The competing views of field managers were tempered with objective and rigorous analysis. On the rare occasions when consensus on a particular resource-allocation decision could not be reached, Max made the final decision. In most, but not all, cases, he endorsed the course of action recommended by the growth plan. Where his decision varied from the plan, it was often to strike a geographical balance across Plan’s regions. These more-objective discussions had a significant effect on resource allocation decisions.
However, the process used to develop the growth plan was far from perfect. I managed the project myself, partly because, with my background and training in engineering, I was comfortable with the mathematics underlying the growth plan. In particular, explaining the “Plan Gap” to those in senior management with different backgrounds was challenging.
Feedback was sought and endorsement gained at several points along the way as we developed the methodology. But, unlike the development of Plan’s organizational goals (described last time), real involvement from the field was minimal, limited to giving feedback rather than, as in the earlier project, managing parts of the effort. The emotional commitment of members of my department to the redirection of Plan’s growth toward particular areas (Africa) or issues (HIV and AIDS) was strong; a vocal “Africa lobby” took vigorous part in the discussions as well as behind the scenes. And, in contrast to our work on Plan’s goals, the process did not begin with an organization-wide workshop, and communication of results to the wider organization was sporadic.
Personally, I was quite enamored of the elegant methodology that emerged, taken by its rigour and the insights embedded in the Plan Gap and Plan Index. As a result, even though Max was just as pleased with the end result as I was, and greatly appreciated its rigour (he was also an engineer by training), ownership of the growth plan was less evident outside headquarters, and resistance to the results that came from its application was pretty strong.
Why did development of the growth plan stray from the lessons learned in successfully developing the Program Directions (and, as will be described, the final of the three projects, the restructuring of Plan’s country operations)?
I think that, in part, it was because, unlike the other two projects, the growth plan was by nature a win-lose proposition. The growth plan redirected the organization’s quantitative growth from one area to another, with some regions gaining resources and others losing them. This led to a high level of anxiety on the part of field staff. Together with the emotional attachment of staff in my department and myself to the growth plan model, the trap was set and we fell into the old top-down behaviors that had been common in earlier incarnations of Plan’s headquarters.
Still, I think that the growth plan served a useful purpose. By the end of 1999, another review of Plan’s growth strategy concluded with recommendations forwarded to senior management. This review was based on the approach outlined here, further refining the model built in 1995. Although reaching similar conclusions, the study focused on internal systems needed to ensure effective short-term management of growth supply and demand, while updating the long-term, strategic aspect of the original plan with identical methods and similar results.
So, while not entirely successful, the Growth Plan helped us to allocate resources more strategically, and I certainly learned some lessons on how NOT to manage sensitive projects like this one!
My next blog in this series will describe how we finished the restructuring of Plan’s field operations, which led to the creation of Country Offices. It was a big effort, with huge implications for many people… and it went much better.
Stay tuned for more!
Here are links to earlier blogs – climbing 48 New Hampshire peaks and reflecting on a career in international development:
- Mt Tom (1) – A New Journey;
- Mt Field (2) – Potable Water in Ecuador;
- Mt Moosilauke (3) – A Water System for San Rafael (part 1);
- Mt Flume (4) – A Windmill for San Rafael (part 2);
- Mt Liberty (5) – Onward to Colombia, Plan International in Tuluá;
- Mt Osceola (6) – Three Years in Tuluá;
- East Osceola (7) – Potable Water for Cienegueta;
- Mt Passaconaway (8) – The South America Regional Office;
- Mt Whiteface (9) – Empowerment!;
- North Tripyramid (10) – Total Quality Management for Plan International;
- Middle Tripyramid (11) – To International Headquarters!;
- North Kinsman (12) – Fighting Fragmentation and Building Unity: New Program Goals and Principles for Plan International.
1. See Edwards and Hulme, 1992; Billis and MacKeith, 1992; Hodson, 1992.
2. UNICEF, The State of the World’s Children.