
Tips for nonprofits
Fixed vs. marginal costs

Fixed costs don't increase when you expand your program. For example, you won't hire a second executive director if you win a big grant. Marginal costs expand with each new participant, site, and so on.

When analyzing your costs, both are good to keep in mind. However, marginal costs are usually more important: you have to pay your fixed costs regardless, but how you choose to implement your program affects marginal costs. Is your counseling going to be more or less extensive? Is the travel voucher for $50 or $100?

Comparing marginal costs of a program to the outcomes that program generates tells you whether it is a good use of your precious resources.
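The fixed-versus-marginal distinction can be sketched with a simple cost formula. All figures here are hypothetical, chosen to echo the voucher example above:

```python
def total_program_cost(fixed, marginal_per_participant, participants):
    """Fixed costs are paid regardless of scale; marginal costs grow with each participant."""
    return fixed + marginal_per_participant * participants

# Hypothetical: $50,000 in fixed costs plus a $100 travel voucher per participant.
print(total_program_cost(50_000, 100, 200))  # → 70000
# Doubling participants adds only the marginal portion, not the fixed portion.
print(total_program_cost(50_000, 100, 400))  # → 90000
```

Note how the average cost per participant falls as the program grows, because the fixed costs are spread over more people.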

Cost-effectiveness

A program is cost-effective if it is achieving high impact relative to alternative uses of its resources.

Imagine you are designing training. One option is to have a weeklong, onsite meeting. A second is to hold virtual training sessions. The onsite meeting may be more effective - but it will also be substantially more expensive. Analyzing cost-effectiveness allows you to determine if it is "worth it".

How do you analyze cost-effectiveness? Simple: divide cost by impact. For example: A program graduates an additional 10 students (net of counterfactual) at a cost of $10,000. The cost-effectiveness of the program is ($10,000 / 10 graduations) = $1,000 per additional graduation.
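The division above is all there is to the calculation. A minimal sketch, using the same hypothetical figures:

```python
def cost_per_outcome(cost, additional_outcomes):
    """Cost-effectiveness: dollars spent per additional outcome, net of counterfactual."""
    return cost / additional_outcomes

# $10,000 spent, 10 additional graduations (net of counterfactual):
print(cost_per_outcome(10_000, 10))  # → 1000.0 dollars per additional graduation
```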

Survey attrition

"Attrition" is when some people stop responding to a survey. People who stop responding might have different outcomes from those who continue to respond, leading you to overestimate impact (or, less commonly, underestimate it).

For example, a permanent housing program may survey participants one year after the program ended. The people who lost housing don't have an address, so you might not be able to collect data on them.
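A tiny numerical illustration of how this biases the result upward, with a hypothetical cohort of 100 participants:

```python
# Hypothetical cohort one year after a permanent housing program:
respondents = 80      # still housed, reachable by mail
nonrespondents = 20   # lost housing, unreachable

# Every respondent is still housed, so the survey shows 100% success...
observed_rate = respondents / respondents                  # 1.0
# ...but the true rate across the whole cohort is only 80%.
true_rate = respondents / (respondents + nonrespondents)   # 0.8
print(observed_rate, true_rate)
```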

The counterfactual

The counterfactual is a hypothetical: what would have happened if we had made a different choice? The answer is: lots of things. By comparing what would have happened if we didn't make the choice (counterfactual) with what actually happened we can determine what our choice (for instance, to run a program) accomplished.

This is a fancy way of asking whether we caused a change, that is, whether the improvement is attributable to our choice or would have happened anyway. This is crucial to know because it tells us whether we should make the same choice in the future.

Funding is scarce: should we expand our program or find a different model that causes the lives of the people we serve to improve?
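The comparison the counterfactual enables can be written as a single subtraction. The figures below are hypothetical:

```python
def program_impact(actual, counterfactual):
    """What the program accomplished: the actual outcome minus what would have happened anyway."""
    return actual - counterfactual

# Hypothetical: 50 students graduated; we estimate 40 would have graduated without the program.
print(program_impact(50, 40))  # → 10 additional graduations attributable to the program
```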

Outcome metrics

Outcome metrics are the direct measures of the success of a program in achieving its mission. Consider a program to reduce childhood mortality that provides free vaccinations to children in a rural area with poor medical infrastructure. The program might track the number of children treated, and the incidence of preventable childhood illnesses in the area it serves.

The first is an intermediate outcome: it captures progress toward achieving a primary outcome. The second is a primary outcome: it directly indicates the program’s success.

Outputs vs. Outcomes

Outputs track delivery of a treatment (for example, 10 trainees participated in job training). In contrast, outcomes reflect success toward a philanthropic mission.

One way to distinguish outputs from outcomes is to assess whether the variable is valuable in its own right or, instead, a means to an end. Some outputs – like meals served – are valuable in and of themselves (as well as a means to ends like budget relief and improved health status).

Other outputs are solely a means to an end; for example, 700 children taught with a new curriculum. Most observers, including the nonprofits themselves, would not consider switching to a new curriculum "good" unless doing so engendered some desired change. In this case, implementing the new curriculum might be a step toward achieving an outcome but is not valuable in and of itself.


Here are five tips for defining good outcome metrics:

  1. Ensure the quantity is clear and explicit. “A case of malaria treated” is better than “reduced malaria”.
  2. Avoid statistical terms or jargon.
  3. Present outcomes on a per-person or per-animal basis where possible. “One life saved” is easier to interpret than “a five percentage point reduction in mortality”.
  4. Limit each metric to one change. Avoid composite metrics such as “psychosocial wellbeing”.
  5. Make sure the metric means the same thing each time it is collected. Use standardized scales and checklists.

Discounting of future benefits

Some benefits happen immediately and some happen in the future. In general, social scientists assume that a $1 benefit received in a year is worth less than a $1 benefit received today. And a cost of $1 incurred in a year is also worth less than a cost of $1 incurred today.

Here’s the intuition. Suppose you can choose between receiving a check for $1,000 today or in a year. All else equal, you would choose the check today - because you can spend it on something pressing, or invest it and generate a return. In other words, although the benefits are the same on paper, you discount the benefit you can only use in the future. Cost-effectiveness analysis captures this in the form of a discount rate, which is typically set at 5%. At that rate, a $1,000 payment received in a year is worth about $952 today ($1,000 / 1.05).
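A minimal sketch of the standard compound-discount formula, which divides a future amount by (1 + rate) for each year of delay:

```python
def present_value(amount, rate=0.05, years=1):
    """Discount a future amount back to today's value at a constant annual rate."""
    return amount / (1 + rate) ** years

# A $1,000 payment received one year from now, at a 5% discount rate:
print(round(present_value(1_000), 2))  # → 952.38
```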

Third-Party and Other Effects

Third-party effects, also known as externalities, are costs or benefits borne by non-participating individuals. For example, a job training program might benefit participants but harm non-participants by creating short-run competition for available positions and by reducing wage growth in the long run.

Externalities can be difficult to measure. However, it’s important to keep them in mind when designing and evaluating programs, as sometimes they can be substantial.