The Lack of Controversy over Well-Targeted Aid

There are a number of high-profile public debates about the value of overseas aid (example). These debates generally have intelligent people and arguments on both sides, and they rightly give many people the sense that “Does aid work?” is a complex question with no simple answer. However, we believe these debates are sometimes misinterpreted, causing unnecessary confusion and concern.

Giving More Globally, and Less Locally

There was a lot of good news this week in the annual report on American philanthropy published by the Giving USA Foundation.
Overall giving passed its prerecession peak, according to the organization’s calculations. Generosity in nearly every category was up, in some cases strongly. But there was one bit of bad news for the people in the world who have the least: giving to international affairs – the catchall category that includes aid, development, relief and human rights organizations – was down 3.6 percent over the last year.

Giving to support the relief effort for the Nepal earthquake

GiveWell aims to find giving opportunities that allow donors to do as much good as possible with their donations, and our research efforts focus on that goal. We have not researched giving opportunities related to the relief effort for the Nepal earthquake, specifically.

Passive vs. rational vs. quantified

We’re excited about the project of making giving more analytical, more intellectual, and overall more rational.
At the same time, we have mixed feelings about the project of quantifying good accomplished: of converting the impacts of all gifts into “cost per life saved” or “cost per DALY” type figures that can then be directly compared to each other. We believe that these two projects are too often confused. Many people expect (or assume) that GiveWell makes all its recommendations on the basis of “cost per unit of good” formulas, and it often seems that the rare people who want to be intellectual and rational about their giving are overly interested in explicit quantification.

Donor coordination and the “giver’s dilemma”

This year we’ve dealt with some particularly intense manifestations of what one might call the “giver’s dilemma.”
Imagine that two donors, Alice and Bob, are both considering supporting a charity whose room for more funding is $X, and each is willing to give the full $X to close that gap. If Alice finds out about Bob’s plans, her incentive is to give nothing to the charity, since she knows Bob will fill its funding gap. Conversely, if Bob finds out about Alice’s funding plans, his incentive is to give nothing to the charity and perhaps support another instead.

Putting the problem of bed nets used for fishing in perspective

A recent article in the New York Times describes people using insecticide-treated bed nets for fishing instead of sleeping under the nets to protect themselves from malaria-carrying mosquitoes. The article warns that fishing with insecticide-treated nets may deplete fish stocks, because the mosquito nets trap more fish than traditional fishing nets and because the insecticide contaminates the water and kills fish (the article notes that “the risks to people are minimal, because the dosages are relatively low and humans metabolize permethrin [the insecticide] quickly”). We recommend donating to the Against Malaria Foundation (AMF), an organization that funds distributions of long-lasting insecticide-treated bed nets, so we’d like to address the concerns raised in the article.
Net distributions funded by the Against Malaria Foundation

Our take on “earning to give”

GiveWell exists to help people do as much good as possible with their financial giving. We’re interested in the related question of how to do as much good as possible with one’s talents and career choice, and so we’ve been interested in the debate that has sprung up around last month’s article by Dylan Matthews on “earning to give.” One of the reasons that we have chosen to focus our analysis on how to give well – rather than on how to choose a career well – is that we feel the latter is much harder to provide general insight about.
Everyone’s dollars are the same, but everyone’s talents are different – so even if two people have identical views about the most important causes, the most promising solutions and the best organizations, they may rightly end up doing two very different jobs if they have different abilities.

Small, unproven charities

Imagine that someone came to you with an idea for a startup business and offered you a chance to invest in it. Which of the following would you require before taking the plunge?

- Familiarity with (or at least a lot of information about) the people behind the project
- Very strong knowledge of the project’s “space” (understanding of any relevant technologies, who the potential customers might be, etc.)
- As much information as possible about similar projects, both past and present

Unless you’re an unusually adventurous investor, you probably answered with “All of the above.” After all, there’s a risk of losing your investment – and unlike with established businesses (which have demonstrated at least some track record of outdoing the competition), here your default assumption should be that that’s exactly what will happen.
Now what is the difference between this situation and giving to a startup charity?

When is a charity’s logo a donor illusion?

Qualitative evidence vs. stories

Our reviews have a tendency to discount stories of individuals, in favor of quantitative evidence about measurable outcomes.
There is a reason for this, and it’s not that we only value quantitative evidence – it’s that (in our experience) qualitative evidence is almost never provided in a systematic and transparent way. If a charity selected 100 of its clients in a reasonable and transparent way, asked them all the same set of open-ended questions, and published their unedited answers in a single booklet, I would find this booklet to be extremely valuable information about their impact. The problem is that from what we’ve seen, what charities call “qualitative evidence” almost never takes this form – instead, charities share a small number of stories without being clear about how these stories were selected, which implies to me that charities select the best and most favorable stories from among the many stories they could be telling.
Why We Can’t Take Expected Value Estimates Literally (Even When They’re Unbiased)

While some people feel that GiveWell puts too much emphasis on the measurable and quantifiable, there are others who go further than we do in quantification, and justify their giving (or other) decisions based on fully explicit expected-value formulas.
The latter group tends to critique us – or at least disagree with us – based on our preference for strong evidence over high apparent “expected value,” and based on the heavy role of non-formalized intuition in our decision-making. This post is directed at the latter group.

A conflict of Bayesian priors?

This question might be at the core of our disagreements with many: when you have no information one way or the other about a charity’s effectiveness, what should you assume by default?
Our default assumption, or prior, is that a charity – at least in one of the areas we’ve studied most, U.S. equality of opportunity or international aid – is falling far short of what it promises donors, and very likely failing to accomplish much of anything (or even doing harm). This doesn’t mean we think all charities are failing – just that, in the absence of strong evidence of impact, this is the appropriate starting-point assumption. Many others seem to have the opposite prior: they assume that a charity is doing great things unless it is proven not to be.
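To make the disagreement concrete, here is a toy Bayesian update – our own illustration, not a model GiveWell actually uses, and the numbers are purely hypothetical. Two donors hold opposite Beta priors on the chance that a typical charity accomplishes what it promises; both observe the same evidence and update by Bayes’ rule, yet reach very different conclusions:

```python
# Toy sketch (hypothetical numbers, not GiveWell's model): two donors
# hold opposite Beta priors on the chance a typical charity "works",
# then both observe the same evidence -- say, 2 successes in 10
# independent program evaluations -- and update by Bayes' rule.

def posterior_mean(prior_a, prior_b, successes, failures):
    """Posterior mean of a Beta(prior_a, prior_b) prior after observing
    the given successes and failures (standard beta-binomial update)."""
    return (prior_a + successes) / (prior_a + prior_b + successes + failures)

# Skeptical prior: most charities fall short absent strong evidence.
skeptic = posterior_mean(1, 4, successes=2, failures=8)
# Charitable prior: charities do great things unless proven otherwise.
optimist = posterior_mean(4, 1, successes=2, failures=8)

print(f"skeptic's posterior:  {skeptic:.2f}")   # 3/15 = 0.20
print(f"optimist's posterior: {optimist:.2f}")  # 6/15 = 0.40
```

Same evidence, different priors, different posteriors: the skeptic still expects failure while the optimist sees a decent chance of success – which is why the choice of prior, and not just the evidence, drives the disagreement.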
Updated Thoughts on Our Key Criteria

For years, the three key things we’ve looked for in a charity have been (a) evidence of effectiveness; (b) cost-effectiveness; (c) room for more funding. Over time, however, our attitude toward all three of these things – and the weight that we should put on our analysis of each – has changed. This post discusses why. On the evidence-of-effectiveness front, we used to look for charities that collected their own data that could make a compelling case for impact. We no longer expect to see this in the near future.

Poor in the U.S. = rich

Surveying the Research on a Topic

We’ve previously discussed how we evaluate a single study. For the questions we try to answer, though, it’s rarely sufficient to consult a single study; studies are specific to a particular time, place, and context. To get a robust answer to a question like “Do insecticide-treated nets reduce child mortality?” one should conduct – or ideally, find – a thorough and unbiased survey of the available research.

Guest Post: Proven Programs are the Exception, not the Rule

This is a guest post from David Anderson, Assistant Director at the Coalition for Evidence-Based Policy, the group responsible for the Evidence-Based Programs website. Mr. Anderson’s responsibilities include reviewing studies of effectiveness and looking for proven programs. He’s worked at the Coalition since 2004.

Your dollar goes further when you fund the right program

Diseases

A note on this page’s publication date: The content we created in 2009 appears below. This content is likely to be no longer fully accurate, both with respect to the research it presents and with respect to what it implies about our views and positions.

Social Programs that Just Don’t Work

Most charities’ evidence

Your dollar goes further overseas

We understand the sentiment that “charity starts at home,” and we used to agree with it, until we learned just how different U.S. charity is from charity aimed at the poorest people in the world.
Helping people in the U.S. usually involves tackling extremely complex, poorly understood problems. Many popular approaches simply don’t work.

Evidence of Impact for Long-Term Benefits

Thoughts on the End of Hewlett’s Nonprofit Marketplace Initiative

Celebrated charities that we don’t recommend

Normally, we focus on identifying outstanding charities, and minimize the time spent on opaque or otherwise lackluster ones.

Your donation can change someone’s life

Giving 101: the basics

The worst way to pick a charity

Guest posts from donors

The following is a guest post from one of our Board members, Tom Rutledge, that he wrote to reflect on his personal experiences as a GiveWell supporter.

I was a Jerk for GiveWell

When I first learned about GiveWell, I …