CASE STUDY
What Great Impact Looks Like
One truly impressive example of nonprofit impact is that of the National Foundation for Infantile Paralysis (NFIP), later rebranded as the March of Dimes, which played a central role in largely eliminating polio in the US.
In the 1930s, polio paralyzed hundreds of thousands of victims each year, most of them children, including thousands in the US alone. The NFIP was founded in 1938 by President Roosevelt, himself an adult victim of polio. Its mission was “to lead, direct, and unify the fight” against polio. Central to that mission was funding research into a polio vaccine. Among many other initiatives, NFIP funded the efforts of a young Dr. Jonas Salk to produce a vaccine and then to conduct a massive controlled trial involving two million children in 1954. Those efforts proved successful, and in 1955 the Salk vaccine was approved for mass immunization throughout the US and rolled out through a campaign heavily promoted by NFIP. Within two years, polio cases had been cut by 90 percent. The disease was largely eliminated in the US by 1961 and fully eliminated there by 1979.
Note that NFIP’s mission wasn’t to eliminate polio but rather to play a leadership role in the fight against it. NFIP went above and beyond what was already an ambitious mission. This is one of those happy situations where the impact of a nonprofit (working with many other partners, including the US government) well and truly exceeded the expectations set by its big mission.*
THE BENEFITS OF MEASURING IMPACT
Measuring impact effectively can be hard, so you may well ask yourself: Why should we bother when we can get by on measuring activities, such as reports published or meals delivered? Or by preparing selective, feel-good case studies as evidence of impact? Why go the extra mile, especially if it will consume precious funding and staff time? These are fair questions.
Measuring activities is useful, and often a precondition to measuring outcomes, but the investment in measuring your actual impact (i.e., outcomes) can be a force multiplier for your work in a number of ways.
1. Understanding What Works and What Doesn’t
When you know what works most effectively, you can allocate your resources accordingly, generating more impact for the funding available. You may decide to reallocate funding to more successful initiatives, or to drop or retool initiatives that are not proving as effective. Good evidence gives you the tools to invest your resources more efficiently. This is the most important reason to invest in rigorous impact measurement. Successful businesses have the discipline of the financial bottom line, so they are constantly assessing which of their initiatives are profitable and which are not, and adjusting behavior accordingly. Nonprofits should bring a similar discipline to their performance, but based on impact, not profit.
The most effective way to do this, though one that is not always feasible (due to cost or lack of available data), is to get empirical evidence of your impact. With this, you can assess whether your organization’s efforts are delivering the change you want to see. Of course, you can rely on anecdotal evidence or your own judgment, but these are not substitutes for rigorous evaluation.
Efforts to eliminate malaria are instructive. Insecticide-treated bednets are known to be effective tools in preventing infection. But experts disagreed over whether nonprofits could better promote the use of bednets among vulnerable populations in Kenya by giving them away or by selling them cheaply. Those advocating for selling the nets (for a nominal fee) believed this would screen out people who would not use the nets and increase the likelihood that those who bought them would use them. Yet a randomized controlled trial* provided clear evidence that the better strategy was to give nets away.6 Now, based on this research, a number of highly cost-effective nonprofits like the Against Malaria Foundation have built their missions around giving away free nets.
2. A Powerful Fundraising Tool
Clear evidence of impact provides validation to your existing donors, and allows you to make a strong case to potential donors. Donors—from members of the public to the largest foundations—want to know that their support will make a difference. In the absence of evidence, we tell them stories and share details of activities. These can be compelling in building engagement, but they are no substitute for evidence of genuine impact.
One of the most important recent trends in philanthropy demonstrates this. Donors have long been resistant to the idea of giving cash to poor individuals in developing countries, instead of, say, providing food or job training or shelter. They worried that support in the form of direct cash payments would cause local prices to rise and stoke resentment among community members who did not receive cash. Underlying these concerns were deeply held and often discriminatory beliefs that poor people don’t know how to spend money responsibly and shouldn’t be trusted to make their own financial decisions. Over time, evidence built up by early pioneers of cash payments has led to widespread donor support and significantly increased funding for cash transfer programs7 and the growth of effective organizations like GiveDirectly. Studies in rural Kenya have shown that basic income not only positively impacts individual households (which invest in things like livestock and better housing) but also benefits those in nearby villages and provides a stimulus for local economies. This evidence has disproven negative donor assumptions and prejudices and built support for not just GiveDirectly but the whole model of direct cash payments to reduce poverty.8
3. Mobilizing Others
In addition to helping you raise resources, robust evidence of impact allows you to amplify your influence, accelerating progress in achieving your mission. It helps you mobilize others in support of your cause. Evidence of impact can generate media interest. For example, stories about GiveDirectly’s impact have appeared in The Economist and other major publications and generated additional support for its mission and work.
Evidence of impact carries more weight with policymakers because they are often inured to inflated and unverified claims of nonprofit impact. And it can aid the work of peer organizations—particularly those who cannot afford to invest in robust impact measurement—as they can utilize your findings in their own programs and communications. Perhaps most critically, evidence can unlock more sustainable sources of funding and uptake by governments. In the case of cash transfers, long-term investment in evidence eventually convinced governments to begin using cash transfers as a form of poverty alleviation, with the governments of Pakistan9 and India10 now distributing hundreds of millions in cash transfers to their populations.11
Another example comes from the organization More in Common, mentioned earlier. It works on a wide range of programs to reduce community polarization and the threat of “us-versus-them” divisions. For several years, it has advocated for Canada’s model of community-based refugee sponsorship as a policy that both inspires greater public confidence and results in better outcomes than traditional top-down refugee resettlement. More in Common conducts extensive polling of public perceptions and has consistently found that significantly higher numbers of Americans and Europeans will support the intake of refugees when they are directly sponsored by local communities. This evidence has been used widely by networks of refugee advocates, policymakers, and elected officials and has been highly influential with the US and UK governments in particular.*
4. Accountability to Those You Serve and Your Staff
Properly measuring and communicating impact also allows your organization to build accountability and trust with the individuals and communities your organization serves, and provides them with evidence that you’re upholding your commitments. It’s not just donors who care about how charitable funds are spent. Those you serve also care deeply. If you want community members to actively participate in your work and give feedback, you need to show them you’re holding up your end of the agreement. Providing concrete evidence of impact is also helpful if you work in an environment where, for whatever reason, people tend to be skeptical of civil society groups. The same goes for staff. Effective measurement of impact gives your staff a clear understanding of the change that their work is contributing to and can be a powerful motivating force.
THE CHALLENGES OF MEASURING IMPACT
When it comes to measuring impact, nonprofits commonly make four errors.
They measure overhead, not outcomes;
They measure what’s easy, not what counts;
They decide that it is “impossible” to measure impact and so give up, rather than looking for suitable proxies; and
They fail to distinguish between when they are having direct impact and when their impact is indirect (for example, by influencing systems).
We’ll explore each in turn.
1. Overhead Is Not Impact
Given the challenges of assessing nonprofit performance, those seeking to do so (most often, donors) often turn to one thing they can measure easily—overhead. But overhead is an input. It is an organization’s administrative and fundraising costs as a percentage of total expenditure. Some arbitrary overhead percentage (10 or 15 percent) is often set as a target. These costs are easy to measure, as they will be broken out in the organization’s budget, but they tell you nothing about the organization’s impact.
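For illustration only, here is a minimal sketch of how that ratio is calculated, using invented budget figures (they are not drawn from any real organization). The point is simply that the arithmetic is trivial; the resulting percentage still tells you nothing about outcomes.

```python
# Invented budget figures for illustration only; not from any real organization.
program_costs = 850_000      # direct program spending
admin_costs = 90_000         # administration
fundraising_costs = 60_000   # fundraising

total_expenditure = program_costs + admin_costs + fundraising_costs
overhead_ratio = (admin_costs + fundraising_costs) / total_expenditure

print(f"Overhead ratio: {overhead_ratio:.0%}")  # prints "Overhead ratio: 15%"
```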
Overhead is a necessary cost of running a nonprofit. If you want to raise funds, you need to spend money on fundraising. If you want high-quality staff to carry out your ambitious mission, you need to invest in recruitment, training, and benefits. If you want to run a tight financial ship, you need a strong finance team and good financial systems. All of these costs are overhead. Of course, it’s always appropriate to ask whether you are spending the right amount on administration and fundraising, but that is a question of good management, not impact.
The superficial attraction of using overhead as a proxy for impact is the assumption that an organization with lower overhead is more efficient than a peer organization with higher costs. But this is not necessarily the case, as overhead is only relevant to the extent it drives impact or diverts resources from it. Nonprofits that invest more in their staff and internal systems will have higher overhead, but that investment may translate into greater impact than a peer that spends less can achieve. It may also reduce the risk of something going wrong, such as financial malfeasance.* Those who judge organizations primarily by their overhead lack a fundamental understanding of how nonprofits work, and they overlook many high-performing organizations in the process.†
Business practice provides a useful counterpoint. Because businesses have a clear metric to measure performance—namely, financial returns—no one seeks to compare them primarily on their overhead. Investors in Amazon or Tesla do not spend much time comparing those companies on the basis of what they spend on their sales, finance, or human resources departments. Rather, they focus on financial returns (the business equivalent of impact) and on the factors that most directly affect those returns over time. Nonprofits should be judged similarly.
2. Measure What Matters
Inputs and activities are much easier to quantify than the results of those activities. But as has been wisely observed elsewhere: “Not everything that counts can be counted, and not everything that can be counted counts.”12
Let’s focus on the second half of that statement. As we saw in the previous section, overhead is often used as a proxy for impact because it is easy to measure, even though it tells you little about impact. The same often goes for activities. In this section we’ll look at how the Freedom Fund moved from measuring what was easiest to count to measuring what really mattered. Then we’ll look at the different situations that think tanks and advocacy organizations find themselves in as they try to measure their impact.
As I set out in the introduction to this chapter, at the Freedom Fund we started measuring direct impact in the form of “lives liberated” (as this was easier to count) and only later added other, more indirect, measures of impact, such as measurable reductions in slavery at the community level and at a broader systems level. This was our effort to measure what counts (i.e., overall reductions in the level of slavery).
The sustainability of your impact also counts. While the number of people in situations of trafficking (or poverty, hunger, etc.) in a community may decrease over the life span of a two-year program, will the rates go back up after the program is over, or after a few years? Are the interventions aimed at alleviating the symptoms of a problem, or actually getting at root causes? If long-term sustainability is part of your impact targets—that is, you aim to make a permanent or at least long-term impact on a person, community, or issue—then simply measuring impact at the end of a short-term program won’t give you a good sense of whether those changes will stick.
These examples show us that the most obvious, direct output figures often don’t tell the whole story, or even the right one. One way to improve impact measurement when it comes to modern slavery (or poverty, or hunger, or other societal change) is to focus on prevalence, i.e., the percentage of the population affected by the issue you are seeking to address. If we establish a baseline prevalence rate in a particular area before a program begins, then (ideally) we should be able to compare that to prevalence three, five, or ten years later to understand change over time. Using this approach, the Freedom Fund has been able to build a robust body of evidence, most of it generated through evaluations by leading research institutions, showing that our approach has resulted in dramatic reductions in the prevalence of modern slavery in targeted communities over four-year periods.13
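To make the prevalence arithmetic concrete, here is a minimal sketch in Python. All survey figures are invented purely for illustration (they are not Freedom Fund data), and the calculation is simply the baseline-versus-endline comparison described above.

```python
# Baseline-versus-endline prevalence comparison, with invented survey numbers.

def prevalence(affected: int, surveyed: int) -> float:
    """Share of the surveyed population affected by the issue."""
    return affected / surveyed

baseline = prevalence(affected=180, surveyed=3_000)  # before the program starts
endline = prevalence(affected=95, surveyed=3_200)    # e.g., four years later

relative_reduction = (baseline - endline) / baseline

print(f"Baseline prevalence: {baseline:.1%}")             # 6.0%
print(f"Endline prevalence:  {endline:.1%}")              # 3.0%
print(f"Relative reduction:  {relative_reduction:.0%}")   # 51%
```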
The second step in measuring prevalence reduction is assessing attribution. Modern slavery, for example, is tied to complex economic, social, and political systems and trends, so we can’t automatically assume that our programs have caused a documented fall in rates of modern slavery; there could be other causes. For instance, if at the same time we were working with a community, the police (independently of our collective efforts) launched a sustained crackdown on traffickers, that would likely cause slavery numbers to fall significantly, and the change couldn’t be attributed to our programs.
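One simple way to illustrate the attribution problem, though not necessarily how any given evaluation handles it, is to run the same prevalence comparison in a similar community where no program operated. The figures below are invented; the point is that if prevalence fell nearly as much in the comparison community, the decline in the program community cannot be credited to the program alone.

```python
# Invented figures: prevalence fell in the program community, but it also
# fell nearly as much in a comparison community with no program, so the
# program cannot claim credit for the full decline.

communities = {
    "program community":    {"baseline": 0.060, "endline": 0.030},
    "comparison community": {"baseline": 0.058, "endline": 0.033},
}

for name, rates in communities.items():
    drop = (rates["baseline"] - rates["endline"]) / rates["baseline"]
    print(f"{name}: {rates['baseline']:.1%} -> {rates['endline']:.1%} "
          f"({drop:.0%} reduction)")
```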