Extreme poverty: the surprising power laws of effective giving

Inspired by a post by Jeff Kaufman

I first discovered effective altruism through Peter Singer's advocacy for a strategic approach to fighting poverty.

If I had to sum up the force of his arguments in a single image, it would be the juxtaposition of these two charts.

1

[Figure: worldincome_nolabel]
Source: Doing Good Better1

The first chart shows the world income distribution. On the horizontal axis, the poorest are on the left and the richest on the right. The distribution is highly unequal: the richest are extremely rich, while the majority of the world's population is, in relative terms, very poor. Such heavily skewed distributions are called power laws2. They contrast with less extreme distributions, such as the normal distribution. Human height follows a normal distribution: the tallest humans are at most about 60% taller than the average. But the richest humans are hundreds of times richer than the average. The richest 10% of the planet therefore have an enormous capacity to help the poorest.

But who is this global elite of oligarchs? You are probably one of them. I have deliberately removed the scale from the vertical axis. Try to guess which percentile you are in. Once you have written down your answer, look at the full chart. Without cheating, would you have guessed which part of the curve you belong to?3

This result is counter-intuitive: you do not feel wealthy, yet you are among the richest people in the world, which gives you an opportunity to do enormous good for the poorest. This is the first of the surprising power laws of effective giving.

2

[Figure: distribution_dcp2]
Source: DCP2.

The second chart shows the cost-effectiveness of 108 health interventions in developing countries. The data come from the DCP2 database, which records the cost of each intervention and its benefit in terms of quality-adjusted life years (QALYs). The QALY is a tool for comparing different health interventions. One year in full health is worth 1 QALY. A year of life for a person infected with HIV is worth 0.5 QALYs, and a year for a person with deafness 0.78 QALYs4. An intervention that cures the deafness of an otherwise healthy person for 10 years is therefore worth (1-0.78)*10 = 2.2 QALYs5.
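
As a minimal sketch of this arithmetic in R (the cost figure is invented purely to illustrate a cost-per-QALY ratio; it is not a DCP2 number):

# QALY weights: 1 = a year in full health, 0.78 = a year lived with deafness (WHO weights cited above)
weight_full_health <- 1
weight_deafness <- 0.78
years <- 10

qalys_gained <- (weight_full_health - weight_deafness) * years   # 2.2 QALYs

cost <- 1000                  # hypothetical cost of the intervention, in dollars
cost / qalys_gained           # hypothetical cost per QALY gained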

These QALY calculations serve above all as an illustrative example. They highlight the importance of quantification and the usefulness of a standardised measure of impact. Effective altruism is far from limited to QALY calculations, however. As soon as we want to compare interventions outside medicine or public health, other methods are needed, and they are frequently used.6

As the chart shows, the most effective health interventions are not 30% more effective than the average, nor even 3 times more effective, but dozens of times more effective. The most effective intervention in the DCP2 database produces 15,000 times more benefit than the least effective, and 60 times more than the median intervention. Moreover, you should picture the chart as if it were cut off on the right, with much taller bars that are not shown here. Beyond the DCP2 database, the very best health interventions are truly exceptional: the eradication of smallpox in 1979 has prevented more than 100 million deaths, at a cost of 400 million dollars7.

This, too, is counter-intuitive. The various NGOs working in health all look alike, and can seem interchangeable. In reality, it is crucial to choose the most effective one. If you choose an NGO that competently implements a good but unexceptional intervention, you risk losing more than 90% of the potential value of your donation.

Together, these two charts8 capture the importance of effective altruism. The movement is founded on the idea that the numbers are not decorative: when we observe ratios as extreme as these, it is a call to action.

  1. Doing Good Better, William MacAskill. The data the author used to produce this chart come from several sources. Between the 1st and 21st richest percentiles, the data come from household surveys provided by Branko Milanovic (see for example Milanovic 2012). For the poorest 73%, the data come from the World Bank's PovcalNet initiative. For the richest 0.1%, the figure comes from The Haves and the Have-Nots: A Brief and Idiosyncratic History of Global Inequality, Branko Milanovic. 

  2. See Wikipedia, Power law

  3. The Giving What We Can calculator can give you your exact percentile. 

  4. World Health Organization 

  5. The weights express the average of the preferences stated by patients. There are several methods for measuring them, but the most common is to ask patients whether they would prefer to stay alive with a given condition for a set period, or to live for a shorter time but in perfect health (Torrance, George E. (1986). “Measurement of health state utilities for economic appraisal: A review”. Journal of Health Economics. 5: 1–30). This method has its drawbacks, but health systems have to prioritise diseases anyway in order to make the best use of their limited budgets, and the QALY is currently the most widely used tool for doing so.

    Among the drawbacks, the stated preferences may be biased. If we survey people who do not have the condition, they may overestimate its impact, because the very act of asking the question gives the condition psychological salience. Thinking about it while answering gives the impression that the condition will determine one's quality of life, whereas in reality quality of life is determined by many components. But the opposite could also happen, if participants do not realise how painful a condition is before having suffered from it. Asking patients who have the condition could also lead to biases in either direction. Asking the question reminds patients that they live with the condition and asks them to imagine a healthy life, which could lead them to overestimate its influence on their quality of life. Conversely, having an incurable disease could push a patient to put a positive spin on their situation in order not to lose hope, whereas a healthy person might be more lucid. Beyond these questions of cognitive bias, some philosophers consider that it is hedonic experience, and not preferences (even perfectly de-biased ones), that matters morally. Finally, QALYs generally do not allow one to say that it is better to end a life, even when suffering is extreme, because negative QALYs are rarely used. For a critical discussion see:

    Prieto, Luis; Sacristán, José A (2003). “Problems and solutions in calculating quality-adjusted life years (QALYs)”. Health and Quality of Life Outcomes. 1: 80. (archive)

    Broome, John (1993). “QALYs”. Journal of Public Economics. 50 (2): 149–167.

    Mortimer, D.; Segal, L. (2007). “Comparing the Incomparable? A Systematic Review of Competing Techniques for Converting Descriptive Measures of Health Status into QALY-Weights”. Medical Decision Making. 28 (1): 66–89. 

  6. See for example the Oxford Prioritisation Project, 80,000 Hours' comparison of causes, or this post by Michael Dickens. 

  7. Toby Ord, The moral imperative towards cost-effectiveness (archive)

  8. One might wonder why we encounter such distributions. Why aren't health interventions normally distributed? Probably because the effectiveness of an intervention is the product (rather than the sum) of a large number of small independent factors. See Wikipedia, Log-normal distribution.
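
    A minimal R sketch of this point (the numbers are invented; this is not DCP2 data): multiplying many small independent factors produces a heavily right-skewed, approximately log-normal distribution.

    # Each hypothetical intervention's effectiveness is the product of 30 small independent factors
    set.seed(1)
    effectiveness <- replicate(10000, prod(runif(30, min = 0.5, max = 1.5)))
    median(effectiveness)
    max(effectiveness) / median(effectiveness)   # a heavy right tail
    hist(log(effectiveness))                     # roughly normal on a log scale, i.e. log-normal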

August 27, 2017

On the experience of confusion

I recently discovered something about myself: I have a particularly strong aversion to the experience of confusion. For example, yesterday I was looking into the relationship between common knowledge of rationality and Nash equilibrium in game theory. I had planned to spend just an hour on this, leisurely dipping into the material and perhaps coming out with a clarified understanding. Instead, something else happened. I became monomaniacally focused on this task. I found some theorems, but there was still this feeling that things were just slightly off, that my understanding was not quite right. I intensely desired to track down the origin of the feeling. And to destroy the feeling. I grew restless, especially because I was making some progress: I wasn't completely stuck, so it felt like I must be on the cusp of clarity. The first symptom of this restlessness was skipping my pomodoro breaks, usually a sure sign that I am losing self-control and will soon collapse into an afternoon nap. The second symptom was to develop an unhelpful impatience, opening ever more new tabs to search for the answer elsewhere, abandoning trains of thought earlier and earlier. In the end I didn't have time to do any of the other work I had planned that day!

This happens to me about once a week.

I don’t know if this description was at all effective at communicating my experience. It’s something far more specific than simple curiosity. I’m fine with not knowing things. I’m even happy to have big gaping holes in my knowledge, like a black rectangle on an otherwise-full map of the city. Provided the rectangle has clear boundaries and I know that, as a matter of principle, I could go explore that part of the city, and if I made no mistakes, I could draw the correct map.

Here’s another way of putting this. I’m not at all bothered if a tutor tells me: “The proof of this theorem, in appendix 12.B., relies on complicated maths. You may never understand it. But you have a good grasp of what the theorem states.” I have a picture in my head like:

I am infuriated if a tutor tells me: “When there are sticky prices, equation A looks like this.” What do we mean by sticky prices? And how does the equation follow? Tutor: “Here’s the mathematical statement of sticky prices. It involves completely different objects than equation A. Also, here’s a vague, hand-wavey intuition why the two are related.”

The problem here is not that there's an empirical fact that I don't know, or a proof step I don't understand. I don't even have a label to put on my confusion. It's not that I don't see how the conclusion follows, it's that I don't see how it could follow. It's not that the map has dark patches. I don't even know if I'm holding the map right side up or upside down, and the map is written in Cyrillic.

In school, I used to make myself unpopular by pursuing these lines of inquiry as far as they would let me, leading to long back-and-forths with my teachers. These conversations were often unproductive. Sometimes the implication was that I should just learn the words of the vague hand-wavey intuition as a kind of password. Naturally, I resented this. Both possibilities were enraging: either the educators themselves believed that the words could pass for real understanding, or they just expected me to shut up and learn the password. Sometimes I was gently chided (or complimented?) for my curiosity, my apparent desire to know EVERYTHING, not to rest until the whole map was filled in. This too felt wrong: I'm not complaining about a small corner of the map left unfilled. The entire eastern part of the map is in Cyrillic!

Although I hope that some people reading this might relate to my experiences, I suspect that I am out of the ordinary in the strength of my aversion to confusion. I have long thought that any of the success I've had in my academic pursuits was not due to intelligence, but to my refusal of explanations that felt unsatisfying in some subtle way. I say this not to humble-brag: I have good evidence that I am less intelligent than many of my peers. In school everyone used to participate in this maths competition every year. The questions required clever problem-solving; I consider them pretty close to an IQ test. They were completely different from our maths exams, which prized definitional clarity and rewarded practice. I was around the class median at the competition, but among the best at the exams. As another piece of evidence, I am seriously terrible at mental arithmetic: I routinely get simple sums wrong at the bakery, and not for lack of trying!

So I had long been aware that there was something different about how I asked questions, but only recently did I acquire the language to describe it accurately. I used to think it was “intellectual curiosity”, but as we have seen, “visceral aversion to even slight confusion” would be a more accurate label. I loathe contradiction and dissonance, not ignorance or uncertainty.

I have already talked a bit about how I think I've benefitted from this habit of thought. I think it may be one thing that people who get really into analytic philosophy have in common. It also comes with costs, mostly in the form of getting sucked into a productivity-wrecking hole of confusion, like with the game theory example. It would be much more rational to remain calm and composed, let the confusion go for a day or two, and then decide whether it makes sense to allocate more time to it. Part of why I get sucked in so much, I suspect, is that I fear that if I stop, I will let the confusion slip by. I find that thought distressing. Perhaps it's because I don't want to forget I was confused, later remember the password, and adopt the confused knowledge that comes with it.

One way to help solve this is to keep a list of everything I am confused about. Then I can set a time limit on my intellectual escapades, and if I’m still confused by the end, I can write it down. Even if I never return to it, it feels much more satisfying to have a degree of meta-clarity (clarity about what I’m confused about) than to let confusion slip into a dark corner of my mind.

August 6, 2017

How much donation splitting is there, and should you split?

Cross-posted to the effective altruism forum.

Table of contents

  1. Table of contents
  2. Summary
  3. Many of us split
    1. Individual examples
    2. EA funds data
      1. Naive approach: distribution of allocation percentages
      2. Less naive approach: weighted distribution of allocation percentages
      3. Best approach: user totals
    3. Other data
  4. Arguments for splitting
    1. Empirical uncertainty combined with risk aversion
    2. Moral uncertainty
    3. Diminishing returns
    4. Achieving a community-wide split
      1. Cooperation with other donors
      2. Lack of information
    5. Remaining open-minded or avoiding confirmation bias
    6. Memetic effects
  5. Recommendation
  6. Appendix: R code

Summary

Many aspiring effective altruists report splitting their donations between two or more charities. I analyse EA funds data to estimate the extent of splitting. Expected utility reasoning suggests that for small donations, one should never split, and always donate all the money to the organisation with the highest expected cost-effectiveness. So prima facie we should not split. Are there any convincing reasons to split? I review 6 arguments in favour of splitting. I end with my recommendation.

Many of us split

Individual examples

For example, in CEA Staff’s Donation Decisions for 2016, out of 14 staff members who disclosed significant donations, I count 10 who report splitting. (When only small amounts are donated to secondary charities, it is sometimes ambiguous what counts as splitting.) In 2016, Peter Hurford gave about 2/3 to Rethink Charity, and 1/3 to other recipients. Jeff Kaufman and Julia Wise gave about equal amounts to AMF and the EA Giving Group Fund.

EA funds data

I wanted to study EAs’ splitting behaviour more systematically, so I looked at anonymised data from the EA funds, with permission from CEA.

In the following sections, I describe various possible analyses of the data. You can skip to “best approach: user totals” if you just want the bottom line. The R code I used is in the appendix.

I was given access to a list of every EA funds donation between 2017-03-23 and 2017-06-19. Data on allocation percentages was included. For example, if a donor went to the EA funds website and gave $1000, setting the split to 50% “global health and development” and 50% “long-term future”, there would be two entries, each for $500 and with an allocation percentage of 50%. In the following, I call these two entries EA funds donations, and the $1000 an EA funds allocation.

Naive approach: distribution of allocation percentages

The simplest analysis is to look at a histogram of the “allocation percentage” variable. The result looks like this1:

[Figure: naive]

Here, most of the probability mass is on the left, because most donations are strongly split. But what we really care about is how much of the money being donated is split. For that we need to weight by donation size.

Less naive approach: weighted distribution of allocation percentages

I compute a histogram of allocation percentages weighted by donation size. In other words, I ask: “if I pick a random dollar flowing through EA funds, what is its probability of being part of an EA funds donation which itself represents X% of an EA funds allocation?”, and then plot this for 20 buckets of Xs2.

[Figure: lessnaive]

Here, much more of the probability mass is on the right hand side. This means larger donors split less, and are much more likely to set the allocation percentage to 100%.

But this approach might still be problematic, because it is not invariant to how donors decide to spread their donations across allocations. For instance, suppose we have the following:

Allocation ID | Name  | Fund   | Allocation % | Donation amount
2             | Alice | Future | 100%         | $1000
1             | Alice | Health | 100%         | $1000
3             | Bob   | Health | 50%          | $1000
3             | Bob   | Future | 50%          | $1000

Here, Alice and Bob both split their $2000 donations equally between two funds. They merely used the website interface differently: Alice by creating two separate 100% allocations (perhaps the second one the next month), and Bob by creating just one allocation but setting the sliders for each of the funds to 50%.

However, if we used this approach, we would count Alice as not splitting at all.

It’s an open question how much time should elapse between two donations to different charities until it is no longer considered splitting, but rather changing one’s mind. In the individual examples I gave above, I took one month, which seems like a clear case of splitting. Up to a year seems reasonable to me. Since we have less than a year of EA funds data, it’s plausible to consider any donations made to more than one fund as splitting. This is the approach I take in the next section.

Best approach: user totals

For each user, I compute:

  • Their fund sum, i.e. for each fund they donated to, the sum of their donations to that fund
  • Their user totals, i.e. the sum of all their donations to EA Funds

This allows me to create a histogram of the fraction of a user total represented by each fund sum, weighted by the fund sum3.

[Figure: best]

This is reasonably similar to the weighted distribution of allocation percentages, but with a bit more splitting.
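
To make the procedure concrete, here is a minimal sketch of the same computation applied to the toy Alice-and-Bob table from the previous section (the column names follow the appendix code; the numbers are the illustrative ones from the table above, not real EA Funds records):

library(plyr)

# Toy version of the EA Funds data, matching the Alice/Bob example
toy <- data.frame(
  UserID    = c("Alice", "Alice", "Bob", "Bob"),
  Fund.Name = c("Future", "Health", "Health", "Future"),
  dusd      = c(1000, 1000, 1000, 1000)
)

toy <- ddply(toy, .(UserID), transform, usersum = sum(dusd))                   # user totals
toy <- ddply(toy, .(UserID, Fund.Name), transform, usersum_fund = sum(dusd))   # fund sums
toy$fundfrac <- toy$usersum_fund / toy$usersum

toy   # fundfrac is 0.5 in every row: Alice and Bob are both counted as splitting 50/50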

Other data

One could also look at the Donations recorded for Vipul Naik database, or Giving What We Can’s data, and conduct similar analyses. The additional value of this over the EA funds analysis seemed limited, so I didn’t do it.

Arguments for splitting

Empirical uncertainty combined with risk aversion

Sometimes being (very) uncertain about which donation opportunity is best is presented as an argument for splitting. For example, the EA funds FAQ says that “there are a number of circumstances where choosing to allocate your donation to multiple Funds might make more sense” such as “if you are more uncertain about which ways of doing good will actually be most effective (you think that the Long-Term Future is most important, but you think that it’s going to be really difficult to make progress in that area)”.

High uncertainty is only a reason to split or diversify if one is risk averse. Is it sensible to be risk averse about one's altruistic decisions? No. As Carl Shulman writes:

What am I going to do with my tenth vaccine? Vaccinate another kid!

While Sam’s 10th pair of shoes does him little additional good, a tenth donation can vaccinate a tenth child, or pay for the work of a tenth scientist doing high impact research such as vaccine development. So long as Sam’s donations don’t become huge relative to the cause he is working on (using up the most efficient donation opportunities) he can often treat a charitable donation of $1,000 as just as worthwhile as a 1 in 10 chance of a $10,000 donation.

Moral uncertainty

The EA funds FAQ says that another reason for splitting could be “If you are more uncertain about your values (for example, you think that Animal Welfare and the Long-Term Future are equally important causes)”.

Does it make any difference if the uncertainty posited is about morality or our values rather than the facts? In other words, is it reasonable for a risk-neutral donor facing moral uncertainty to split?

This depends on our general theory for dealing with cases of moral uncertainty. (Will MacAskill has written his thesis on this.) We can start by distinguishing moral theories which value acts cardinally (like utilitarianism) from moral theories which only value acts ordinally. The latter category would include theories which only admit of two possible ranks, permissible and impermissible (like some deontological theories), as well as theories with finer-grained ranking.

If the only theories in which you have non-zero credence are cardinal theories, we can simply treat our normative uncertainty like empirical uncertainty, by computing the expected value. (MacAskill argues persuasively against competing proposals like ‘my favourite theory’, see Chapter 1).

What if you also hold some credence in merely ordinal theories? In that case, according to MacAskill, you should treat the situation as a voting problem. Each theory can “vote” by ranking your possible actions, and gets a number of votes that is proportional to your credence in that theory.

The question is which voting rule to use. Different voting rules have different properties. A simple property might be:

Unanimity: if all voters agree that X>Y, then the output of the voting rule must have X>Y.

Let’s say we are comparing the following acts:

  1. Donate $1000 to charity A
  2. Donate $500 to charity A and $500 to charity B.

Unanimity implies that if all the first-order theories in which you have credence favour (1), then your decision after accounting for moral uncertainty will also favour (1). So provided our voting rule satisfies unanimity, moral uncertainty provides no additional reason to split. (In fact, a much weaker version of unanimity will usually do, if you have sufficiently low credence in pro-splitting moral theories.)
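
To illustrate (a toy sketch with invented theories and credences, using a simple credence-weighted Borda rule rather than any particular rule MacAskill endorses): whenever every theory ranks act 1 at least as high as act 2, so does the aggregate.

# Each row is a theory's ranking of the two acts above (higher = better); credences weight the votes
rankings <- rbind(
  theory_1 = c(donate_all = 2, split = 1),
  theory_2 = c(donate_all = 2, split = 1),
  theory_3 = c(donate_all = 2, split = 1)
)
credences <- c(0.6, 0.3, 0.1)   # invented credences in the three theories

colSums(rankings * credences)   # donate_all wins: unanimity is respected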

Diminishing returns

A good reason to split would be if you face diminishing returns. At what margins do we begin to see returns diminish sufficiently to justify splitting? This depends on how much donation opportunities differ in cost-effectiveness.

Suppose there are two charities, whose impacts f(x) and g(x) (as functions of the amount x donated) are monotone increasing with monotone decreasing first derivatives, and suppose charity f is better at the current margin, i.e. f'(0) > g'(0). Then you should start splitting at the donation size x* such that f'(x*) = g'(0).

If you have y > x*, you should donate a to charity f and y − a to charity g such that f'(a) = g'(y − a).

It's generally thought that for small (sub-six-figure) donors, y < x*, that is, returns aren't diminishing noticeably compared to the difference in cost-effectiveness between charities.
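
As a concrete sketch (with made-up impact functions; the notation follows the paragraph above), one can find the optimal split numerically:

# Two charities with diminishing returns; f is more cost-effective at the margin for small amounts
f <- function(x) 10 * log(1 + x / 1e6)   # impact of donating x dollars to charity f
g <- function(x)  3 * log(1 + x / 1e6)   # impact of donating x dollars to charity g

best_split <- function(y) {
  # choose a in [0, y] to maximise total impact f(a) + g(y - a)
  opt <- optimize(function(a) f(a) + g(y - a), interval = c(0, y), maximum = TRUE)
  c(to_f = opt$maximum, to_g = y - opt$maximum)
}

best_split(1e4)   # a small donor: essentially everything goes to f
best_split(1e8)   # at community scale, the optimum is a genuine split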

However, many people believe that at the level of the EA community, there should be splitting. What does this imply in the above model?

Let’s assume that the EA community moves about $100 million per year (including Good Ventures). Some people take the view that the best focus area is more than an order of magnitude more cost-effective than others (although it’s not always clear which margin this claim applies to). Under some such view, marginal returns would need to diminish by more than 10 times over the $0–100M range in order to get a significant amount of splitting. To me, this seems intuitively unlikely. (Of course, some areas may have much faster diminishing returns than others4.) Michael Dickens writes:

The US government spends about $6 billion annually on biosecurity5. According to a Future of Humanity Institute survey, the median respondent believed that superintelligent AI was more than twice as likely to cause complete extinction as pandemics, which suggests that, assuming AI safety isn’t a much simpler problem than biosecurity, it would be appropriate for both fields to receive a similar amount of funding. (Sam Altman, head of Y Combinator, said in a Business Insider interview, “If I were Barack Obama, I would commit maybe $100 billion to R&D of AI safety initiatives.”) Currently, less than $10 million a year goes into AI safety research.

Open Phil can afford to spend something like $200 million/year. Biosecurity and AI safety, Open Phil’s top two cause areas within global catastrophic risk, could likely absorb this much funding without experiencing much diminishing marginal utility of money. (AI safety might see diminishing marginal utility since it’s such a small field right now, but if it were receiving something like $1 billion/year, that would presumably make marginal dollars in AI safety “only” as useful as marginal dollars in biosecurity.)

To take another approach, let’s look at animal advocacy. Extrapolating from Open Phil’s estimates, its grants on cage-free campaigns are probably about ten thousand times more cost-effective than GiveDirectly (if you don’t heavily discount non-human animals, which you shouldn’t) (more on this later), and perhaps a hundred times better after adjusting for robustness. Since grants on criminal justice reform are not significantly more robust than grants on cage-free campaigns, the robustness adjustments look similar for each, so it’s fair to compare their cost-effectiveness estimates rather than their posteriors.

Open Phil’s estimate for PSPP suggests that cage-free campaigns are a thousand times more effective. If we poured way more money into animal advocacy, we’d see diminishing returns as the top interventions became more crowded, and then less strong interventions became more crowded. But for animal advocacy grants to look worse than grants in criminal justice, marginal utility would have to diminish by a factor of 1000. I don’t know what the marginal utility curve looks like, but it’s implausible that we would hit that level of diminished returns before increasing funding in the entire field of farm animal advocacy by a factor of 10 at least. If I’m right about that, that means we should be putting $100 million a year into animal advocacy before we start making grants on criminal justice reform.

I find this line of argument moderately convincing. Therefore, my guess is that people who believe that their preferred focus area is orders of magnitude better than others, should generally also believe that the whole EA community should donate only to that focus area.

Achieving a community-wide split

Suppose you do think, for reasons like those described in the previous section, that because of diminishing returns, the community’s split between two causes A and B should be 70%/30%. (There may be other reasons to believe this, for instance if the impact of different causes is multiplicative rather than additive.)

There are two ways that this could lead to you prefer splitting your individual donation: cooperation with other donors, and lack of information.

Cooperation with other donors

Suppose that at time t, before you donate, the community splits 80%/20% between A and B. You are trying to move the final allocation to 70%/30%, so you should donate everything to B (assuming your donation is small relative to the community). If the community’s allocation was instead 60%/40%, you should donate everything to A. We can call this view the single-player perspective.

From this perspective, it’s very important to find out what the community’s current allocation is, since this completely changes how you should act.

But now suppose that there are other donors, who also use the single-player perspective. For the sake of simplicity we can assume they also believe the correct community-wide split is 70%/30%5. The following problem occurs:

Everyone is encouraged to spend a lot of time looking into current margins, to work out what the best cause is. Worse, if the community as a whole is being close to efficient in allocation, in fact what is best at the margin changes a whole lot as things scale up, and probably isn’t much better than second- or third-best thing. This means that it’s potentially lots of work to form a stable view on where to give, and it doesn’t even matter that much.6

Imagine the donors could all agree to donate in the same proportional split as the optimal community allocation (call this the cooperative perspective). They would obtain the same end result of a 70%/30% split, while saving a lot of effort. When everyone uses the single-player perspective, the group is burning a lot of resources on a zero-sum game.

From a rule-consequentialist perspective, you should cooperate in prisoner’s dilemmas, that is, you should use the cooperative perspective, even if, to the best of your knowledge, this will lead to less impact.

Even if we find rule-consequentialism unconvincing, act-consequentialism would still recommend investing resources to make it more likely that the community as a whole cooperates. This could include publicly advocating for the cooperative perspective, or getting a group of high-profile EA donors to promise to cooperate amongst themselves.

Lack of information

Suppose information about the community’s split was impossible or prohibitively expensive to come by. Then someone using the single-player perspective would have to rely on their priors. One reasonable-sounding prior would be symmetric around the 70%/30% target, or would otherwise have it as its expected value. This prior assumes that given no information about where others are donating, they are equally likely to collectively undershoot as to overshoot their preferred community-wide split.

On this prior, the best thing you can do is to donate 70% to A and 30% to B. So given some priors, and when there is no information about others’ donations, the single-player perspective converges with the cooperative perspective.

Remaining open-minded or avoiding confirmation bias

Because of confirmation bias and consistency effects, donating 100% to one charity may bias us in the direction of believing that this charity is more cost-effective. For example, one GiveWell staff member writes7:

I believe that it is important to keep an open mind about how to give to help others as much as possible. Since I spend a huge portion of my time thinking within the GiveWell framework, I want to set aside some of my time and money for exploring opportunities that I might be missing. I am not yet sure where I’ll give these funds, but I’m currently leaning toward giving to a charity focused on improving farm animal welfare.

I tend to find this type of argument from bias less convincing than other members of the EA community do8. I suspect that the biases involved are insensitive to the scope of the donations, that is, it’s sufficient to donate a nominal amount to other causes in order to reduce or eliminate the bias. Then such considerations would offer no reason for significant splitting. It’s also questionable whether such self-deception is even likely to work. Claire Zabel’s post “How we can make it easier to change your mind about cause areas” also offers five techniques for reducing this bias. Applying these techniques seems like a less costly approach than sacrificing significant expected impact by splitting.

Memetic effects

Sometimes people justify splitting like so: “splitting will reduce my direct impact, but it will allow me to have more indirect impact by affecting how others view me”.

For example:

In the past we’ve split our donations approximately 50% to GiveWell-recommended charities and 50% to other promising opportunities, mostly in EA movement building. […] GiveWell charities are easier to talk about and arguably allow us to send a less ambiguous signal to outsiders.

And also:

I’ll probably also give a nominal amount to a range of different causes within EA (likely AMF, GFI and MIRI), in order to keep up to date with the research across the established cause areas, and signal that I think that other cause areas are worthwhile.

The soundness of these reasons depends very much on each donor’s personal social circumstances, so it’s hard to evaluate any specific instance. A few general points to keep in mind are:

  • There may be memetic costs as well as benefits to splitting. For example, donating to only one charity reinforces the important message that EAs try to maximise expected value.
  • From a rule-consequentialist perspective, it may be better to always be fully transparent, and not to make donation decisions based on how they will affect what others think of us.
  • There could be cheaper ways of achieving the same benefits. For example, saying “This year I donated 100% to X, but in the past I’ve donated to Z and Y” or “Many of my friends in the community donate to Z and Y” could send some of the intended signals without requiring any actual splitting.

Recommendation

I’m not convinced by most of the reasons people give for splitting. Cooperation with other donors appears to me to be the best proposed reason for splitting.

To some degree, we may be using splitting to satisfy our urge to purchase “fuzzies”. I say this without negative judgement; I agree with Claire Zabel that we should “de-stigmatize talking about emotional attachment to causes”. I think we should satisfy our various desires, like emotional satisfaction or positive impact, in the most efficient way possible. It may not be psychologically realistic to plan to stop splitting altogether. Instead, one could give as much as possible to the recipient with the highest expected value, while satisfying the desire to split with the small remaining part. Personally, I donate 90% to the Far Future EA fund and 10% to the Animal Welfare fund for this reason.

Appendix: R code

library(readr)
library(plotrix)
library(plyr)

f <- data.frame(read_csv("~/split/Anonomized EA Funds donation spreadsheet - Amount given by fund.csv"))

exchr <- 1.27

# convert everything to usd
f$dusd <- ifelse(f$Currency=="GBP",exchr*f$Donation.amount,f$Donation.amount)

#naive histogram of allocation percentages
bseq=seq(0,100,5)
n <- hist(f$Allocation.percentage, breaks=bseq, freq=FALSE, xlab="Allocation Percentage", main="")
n_n <- data.frame(bucket=n$breaks[2:length(n$breaks)],prob=(n$counts/sum(n$counts)))


# weighted histogram
bseq=seq(0,100,5)
w <- weighted.hist(f$Allocation.percentage,f$dusd,breaks=bseq,freq = FALSE, xlab="Allocation Percentage", ylab = "Density, weighted by donation size")
w_n <- data.frame(bucket=w$breaks[2:length(w$breaks)],prob=(w$counts/sum(w$counts)))

# user totals (sum of all of each user's donations)
f <- ddply(f,.(UserID),transform,usersum=sum(dusd))

#user totals by fund
f <- ddply(f,.(UserID, Fund.Name),transform,usersum_fund=sum(dusd))

# fundfrac
f$fundfrac <- f$usersum_fund/f$usersum

# remove duplicates so there is one row per user and fund
f$isdupl <- duplicated(f[,c("UserID","Fund.Name")])
f2 <- subset(f, isdupl == FALSE)

# weighted histogram of fundfrac
z <- weighted.hist(f2$fundfrac,f2$usersum_fund,breaks=bseq/100,freq = FALSE, xlab="Fund fraction (per user)", ylab = "Density, weighted by fund sum (per user)")
z_n <- data.frame(bucket=z$breaks[2:length(z$breaks)],prob=(z$counts/sum(z$counts)))
  1. The underlying data are:

    allocation percentage bucket probability
    5% 0.088
    10% 0.137
    15% 0.105
    20% 0.166
    25% 0.098
    30% 0.050
    35% 0.037
    40% 0.039
    45% 0.067
    50% 0.061
    55% 0.011
    60% 0.016
    65% 0.005
    70% 0.011
    75% 0.006
    80% 0.019
    85% 0.004
    90% 0.007
    95% 0.002
    100% 0.069

  2. The data:

    allocation percentage bucket probability
    5% 0.009
    10% 0.011
    15% 0.025
    20% 0.020
    25% 0.043
    30% 0.060
    35% 0.032
    40% 0.066
    45% 0.015
    50% 0.037
    55% 0.083
    60% 0.004
    65% 0.015
    70% 0.002
    75% 0.004
    80% 0.007
    85% 0.022
    90% 0.003
    95% 0.075
    100% 0.467

  3. The data are:

    fraction of user total bucket probability
    5% 0.013
    10% 0.019
    15% 0.021
    20% 0.024
    25% 0.044
    30% 0.052
    35% 0.074
    40% 0.065
    45% 0.025
    50% 0.033
    55% 0.097
    60% 0.074
    65% 0.017
    70% 0.003
    75% 0.004
    80% 0.008
    85% 0.025
    90% 0.003
    95% 0.002
    100% 0.395

  4. One extreme example would be a disease eradication programme, where returns stay high until they drop to zero once eradication has been achieved, vs. cash transfers, where returns diminish very slowly. 

  5. The extension to the general case would go like this: everyone truthfully states their preferred split and donation amount, and a weighted average is used to compute the resulting community-preferred split. See also “Donor coordination under simplifying assumptions”

  6. Adapted from Owen Cotton-Barratt, personal communication. 

  7. In addition, this OpenPhil post on worldview diversification, and this comment give reasons a large funder may want to make diversified donations in order to retain the ability to pivot to a better area. Some of them may transfer to the individual donor case. 

  8. The points in this paragraph apply similarly to other “arguments from bias”, such as donating for learning value or to motivate oneself to do research in the future (both of which I have seen made). 

August 3, 2017

The logical empiricist picture of the world

If we take in our hand any volume; of divinity or school metaphysics, for instance; let us ask, Does it contain any abstract reasoning concerning quantity or number? No. Does it contain any experimental reasoning concerning matter of fact and existence? No. Commit it then to the flames: for it can contain nothing but sophistry and illusion.

— David Hume, An Enquiry Concerning Human Understanding, 1777, Section XII, Part 3

I made this diagram as an attempted summary of logical empiricism.1

We can then think of various counters to logical empiricism within this framework. For example, Kripke denies the Kantian thesis. In “Two dogmas of empiricism”, Quine attacks all three vertices of the golden triangle. I may write a post about these in the future.

  1. The above construction, in javascript, might break one day. Backups are in .png and .xml

August 1, 2017

Why ain't causal decision theorists rich? Some speculations

Note added on 24 June 2018: This is an old post which no longer reflects my views. It likely contains mistakes.

I.

CDT is underspecified

Standard causal decision theory is underspecified. It needs a theory of counterfactual reasoning. Usually we don’t realise that there is more than one possible way to reason counterfactually in a situation. I illustrate this fact using the simple case Game, described below, where a CDT agent using bad counterfactuals loses money.

But before that I need to set up some formalisms. Suppose we are given the set of possible actions an agent could take in a situation. The agent will in fact choose only one of those actions. What would have happened under each of the other possible actions? We can think of the answer to this question as a list of counterfactuals.

Let’s call such a list K of counterfactuals a “causal situation” (Arntzenius 2008). The list will have n elements when there are n possible actions. Start by figuring out what all the possible lists of counterfactuals K are. They form a set P which we can call the “causal situation partition”. Once you have determined P, then for each possible K, figure out what the expected utility

EU_K(A_i) = Σ_j Cr(O_j | K & A_i) · U(O_j)

of each of your n acts A_1, …, A_n is. (Where the O_j are the outcomes, U is the utility function and Cr is the credence function.) Then, take the average of these expected utilities, weighted by your credence in each causal situation:

EU(A_i) = Σ_K Cr(K) · EU_K(A_i)

Perform the act with the highest EU(A_i).

What will turn out to be crucial for our purposes is that there is more than one causal situation partition P one can consistently use. So it’s not just a matter of figuring out “the” possible Ks that form “the” P. We also need to choose a P among a variety of possible Ps.

In other words, there is the following hierarchy:

  • Choose a causal situation partition P out of the set of possible partitions (the causal situation superpartition?).
  • This partition defines a list of possible causal situations: P = {K_1, …, K_m}.
  • Each causal situation K defines a list of counterfactuals of length n: K = (C_1, …, C_n), where each counterfactual C_i is of the form “if I were to perform act A_i, then outcome O_i would occur”. You have a credence distribution over the Ks.

Game

Now let’s consider the following case, Game, also from Arntzenius (2008):

Harry is going to bet on the outcome of a Yankees versus Red Sox game. Harry’s credence that the Yankees will win is 0.9. He is offered the following two bets, of which he must pick one:

(1) A bet on the Yankees: Harry wins $1 if the Yankees win, loses $2 if the Red Sox win

(2) A bet on the Red Sox: Harry wins $2 if the Red Sox win, loses $1 if the Yankees win.

What are the possible Ps? According to Arntzenius, they are:

P1: Yankees Win, Red Sox Win

P2: I win my bet, I lose my bet

To make this very explicit using the language I describe above, we can write that the set of causal situation partitions (the “superpartition”) is {P_b, P_w}. (I use b for “baseball” and w for “win/lose”.)

Let’s first deal with the baseball partition: P_b = {K_y, K_s}. (I use y for “Yankees win” and s for “Sox win”.)

K_y: if Harry bets on the Yankees, he wins $1; if he bets on the Sox, he loses $1.

K_s: if Harry bets on the Yankees, he loses $2; if he bets on the Sox, he wins $2.

And Harry has the following credences1:

Cr(K_y) = 0.9, Cr(K_s) = 0.1

When using this partition and the procedure described above, Harry finds that the expected value of betting on the Yankees is 70c, whereas the expected value of betting on the Sox is -70c, so he bets on the Yankees. This is the desired result.

And now for the win/lose partition: P_w = {K_w, K_l}. (I use w for “Harry wins his bet” and l for “Harry loses his bet”.)

K_w: if Harry bets on the Yankees, he wins $1; if he bets on the Sox, he wins $2.

K_l: if Harry bets on the Yankees, he loses $2; if he bets on the Sox, he loses $1.

What are Harry’s credences in K_w and K_l? It turns out that it doesn’t matter. Arntzenius writes: “no matter what Cr(K_w) and Cr(K_l) are, the expected utility of betting on the Sox is always higher”.

So Harry should bet on the Sox regardless of his credences. But the Yankees win 90% of the time, so once Harry has placed his bet, he will correctly infer that Cr(K_l) = 0.9. Harry will lose 70c in expectation, and he can foresee that this will be so! It’s because he is using a bad partition.
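
A minimal R sketch of the two calculations above (just re-doing the arithmetic, with the payoffs from Game):

# Payoffs: rows are Harry's bets, columns are the states in each partition
payoff_baseball <- rbind(bet_yankees = c(yankees_win =  1, sox_win = -2),
                         bet_sox     = c(yankees_win = -1, sox_win =  2))
payoff_baseball %*% c(0.9, 0.1)   # baseball partition: +0.70 for the Yankees bet, -0.70 for the Sox bet

# Win/lose partition: under K_w Harry wins whichever bet he places, under K_l he loses it
payoff_winlose <- rbind(bet_yankees = c(K_w = 1, K_l = -2),
                        bet_sox     = c(K_w = 2, K_l = -1))
p <- 0.3                          # any credence in K_w gives the same ranking
payoff_winlose %*% c(p, 1 - p)    # betting on the Sox always comes out $1 ahead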

Predictor

Now consider the case Predictor, which is identical to Game except for the fact that:

[…] on each occasion before Harry chooses which bet to place, a perfect predictor of his choices and of the outcomes of the game announces to him whether he will win his bet or lose it.

Arntzenius crafts this thought experiment as a case where, purportedly:

  • An evidential decision theorist predictably loses money.2
  • A causal decision theorist using the baseball partition predictably wins money.

I’ll leave both of these claims undefended for now, taking them for granted.

I’ll also skip over the crucial question of how one is supposed to systematically determine which partition is the “correct” one, since Arntzenius provides an answer3 that is long and technical, and I believe correct.

What is the point of proposing Predictor? We know that EDT does predictably better than CDT in Newcomb. Predictor is a case where CDT does predictably better than EDT, provided that it uses the appropriate partition. But we already knew this from more mundane cases like Smoking lesion (Egan 2007).

II.

The value of WAYR arguments

Arntzenius’ view appears to be that “Why ain’cha rich?”-style arguments (henceforth WAYRs) give us no reason to choose any decision theory over another. There is one sense in which I agree, but I think it has nothing to do with Predictor, and, more importantly, that this is not an argument for being poor, but instead a problem for decision theory as currently conducted.

One way to think of decision theory is as a conceptual analysis of the word “rational”, i.e. a theory of rationality. Some causal decision theorists say that in Newcomb, rational people predictably lose money. But this, they say, is not an argument against CDT, for in Newcomb, the riches were reserved for the irrational: “if someone is very good at predicting behavior and rewards predicted irrationality richly, then irrationality will be richly rewarded” (Gibbard and Harper 1978).

This line of reasoning appears particularly compelling in Arntzenius’ Insane Newcomb:

Consider again a Newcomb situation. Now suppose that the situation is that one makes a choice after one has seen the contents of the boxes, but that the predictor still rewards people who, insanely, choose only box A even after they have seen the contents of the boxes. What will happen? Evidential decision theorists and causal decision theorists will always see nothing in box A and will always pick both boxes. Insane people will see $10 in box A and $1 in box B and pick box A only. So insane people will end up richer than causal decision theorists and evidential decision theorists, and all hands can foresee that insanity will be rewarded. This hardly seems an argument that insane people are more rational than either of them are.

But, others will reply: “The reason I am interested in decision theory is so that I can get rich, not so that I can satisfy some platonic notion of rationality. If I were actually facing that case, I’d rather be insane than rational.”

What is happening? The disputants are using the word “rational” in different ways. When language goes on holiday to the strange land of Newcomb, the word “rational” loses its everyday usefulness. This shows the limits of conceptual analysis.

Instead, we should use different words depending on what we are interested in. For instance, I am interested in getting rich, so I could say that act-decision theory is the theory that tells me how to get rich if I find myself in a particular situation and am not bound by any decision rule. Rule-decision theory would be the theory that tells you which rules are best for getting rich. Inspired by Ord (2009), we could even define global decision theory as the theory which, for any X, tells you which X will make you the most money.

Which X to use will depend on the context. Specifically, you should use the X which you can choose, or causally intervene on. If you are choosing a decision rule, for example by programming an AI, you should use rule-decision theory. (If you want to think of “choosing a rule for the AI” as an act, act-decision theory will tell you to choose the rule that rule-decision theory identifies. That’s a mere verbal issue.) If you are choosing an act, such as deciding whether to smoke, you should use act-decision theory.

Kenny Easwaran has similar thoughts:

Perhaps there just is a notion of rational action, and a notion of rational character, and they disagree with each other. That the rational character is being the sort of person that would one-box, but the rational action is two-boxing, and it’s just a shame that the rational, virtuous character doesn’t give rise to the rational action. I think that this is a thought that we might be led to by thinking about rationality in terms of what are the effects of these various types of intervention that we can have. […]

I think one way to think about this is […] trying to understand causation through what they call these causal graphs. They say if you consider all the possible things that might have effects on each other, then we can draw an arrow from anything to the things that it directly affects. Then they say, well, we can fill in these arrows by doing enough controlled experiments on the world, we can fill in the probabilities behind all these arrows. And we can understand how one of these variables, as we might call it, contributes causally to another, by changing the probabilities of these outcomes.

The only way, they say, that we can understand these probabilities, is when we can do controlled experiments. When we can sort of break the causal structure and intervene on some things. This is what scientists are trying to do when they do controlled experiments. They say, “If you want to know if smoking causes cancer, well, the first thing you can do is look at smokers and look at whether they have cancer and look at non-smokers and look at whether they have cancer.” But then you’re still susceptible to the issues that Fisher was worrying about. What you should actually do if you wanted to figure out whether smoking causes cancer, is not observe smokers and observe non-smokers, but take a bunch of people, break whatever causes would have made them smoke or made them not smoke, and you either force some people to smoke or force some people not to smoke.

Obviously this experiment would never get ethical approval, but if you can do that – if you can break the causal arrows coming in, and just intervene on this variable and force some people to be smokers and force others to not be smokers, and then look at the probabilities – then we can understand what are the downstream effects of smoking.

In some sense, these causal graphs only make sense to the extent that we can break certain arrows, intervene on certain variables and observe downstream effects. Then, I think, in all these Newcomb type problems, it looks like there’s several different levels at which one might imagine intervening. You can intervene on your act. You can say, imagine a person who’s just like you, who had the same character as you, going into the Newcomb puzzle. Now imagine that we’re able to, from the outside, break the effect of that psychology and just force this person to take the one box or take the two boxes. In this case, forcing them to take the two boxes, regardless of what sort of person they were like, will make them better off. So that’s a sense in which two-boxing is the rational action.

Whereas if we’re intervening at the level of choosing what the character of this person is before they even go into the tent, then at that level the thing that leaves them better off is breaking any effects of their history, and making them the sort of person who’s a one-boxer at this point. If we can imagine having this sort of radical intervention, then we can see, at different levels, different things are rational.

To what extent we human beings can intervene at the level of our acts, or at the level of our rules, is, I suspect, an empirically and philosophically deep issue. But I would be delighted to be proven wrong about that.

A problem for any decision theory?

I think using these distinctions can solve much of the confusion about WAYRs in Newcomb and analogous cases. But Insane Newcomb hints at a more fundamental problem. Both EDT and CDT can be made vulnerable to a WAYR, for example in Insane Newcomb.

Moreover, any decision theory can be made vulnerable to WAYRs. Imagine the following generalised Newcomb problem.

The predictor has a thousand boxes, some transparent and some opaque, and the opaque boxes have arbitrary amounts of money in them. Suppose you use decision theory X, which, conditional on your credences, determines a certain pattern of box-taking (e.g. take box 1, leave boxes 2 and 4, take boxes 3 and 5, etc). The predictor announces that if he has predicted that you will take boxes in this pattern, he has put $0 in all opaque boxes, while otherwise he has put $1000 in each opaque box.

This case has the consequence that X-decision theorists will end up poor. Since X can be anything, a sufficiently powerful predictor can punish the user of any decision theory. Newcomb is a special case where the predictor punishes causal decision theorists.

So I’m inclined to say that there exists no decision theory which will make you rich in all cases. So we need to be pragmatic and choose the decision theory that works best given the cases we expect to face. But this just means using the meta-decision theory that tells you to do that.

  1. This isn’t fully rigorous, since Ks are lists of (counterfactual) propositions, so you can’t have a credence in a K. What I mean by Cr(K_y) = 0.9 is that Harry has credence 0.9 in every counterfactual C in K_y, and (importantly) he also has credence 0.9 in their conjunction. But I drop this formalism in the body of the post, which I feel already suffers from an excess of pedantry as it stands! 

  2. This is denied by Ahmed and Price (2012), but I ultimately don’t find their objection convincing. 

  3. See section 6, “Good and Bad Partitions”. Importantly, this account fails to identify any adequate partition in Newcomb, so the established conclusion that causal decision theorists tend to lose money in Newcomb still holds. 

June 28, 2017