# Philosophy success story V: Bayesianism

This is part of my series on success stories in philosophy. See this page for an explanation of the project and links to other items in the series.

# Contents

1. Bayesianism: the correct theory of rational inference
    1. Probabilism
    2. Conditionalisation
    3. Justifications for probabilism and conditionalisation
2. Science as a special case of rational inference
3. Previous theories of science
4. The Quine-Duhem problem
5. Uncertain judgements and value of information (resilience)
6. Issues around Occam’s razor

# Bayesianism: the correct theory of rational inference

Unless specified otherwise, by “Bayesianism” I mean normative claims constraining rational credences (degrees of belief), not any descriptive claim. Bayesianism so understood has, I claim, consensus support among philosophers. It has two core claims: probabilism and conditionalisation.

## Probabilism

What is probabilism? (Teruji Thomas, Degrees of Belief, Part I: degrees of belief and their structure.)

Suppose that Clara has some confidence that $$P$$ is true. Then, in so far as Clara is rational:

1. We can quantify credences: we can represent Clara’s credence in $$P$$ by a number, $$Cr(P)$$. The higher the number, the more confident Clara is that $$P$$ is true.
2. More precisely, we can choose these numbers to fit together in a certain way: they satisfy the probability axioms, that is, they behave like probabilities do: (a) $$Cr(P)$$ is always between 0 and 1. (b) $$Cr(\neg P) = 1−Cr(P)$$ (c) $$Cr(P \lor Q) = Cr(P)+Cr(Q)−Cr(P \land Q)$$.
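
To make the axioms concrete, here is a minimal sketch (with invented numbers): credences derived from a non-negative weighting over states that sums to 1 automatically satisfy (a)–(c).

```python
# A minimal sketch (invented numbers): credences given by a weighting over
# states automatically satisfy the probability axioms (a)-(c).
from itertools import product

states = list(product([True, False], repeat=2))      # truth values of (P, Q)
weights = dict(zip(states, [0.4, 0.2, 0.3, 0.1]))    # non-negative, sums to 1

def cr(event):
    """Credence in an event = total weight of the states where it holds."""
    return sum(w for s, w in weights.items() if event(s))

P = lambda s: s[0]
Q = lambda s: s[1]

assert 0 <= cr(P) <= 1                                           # axiom (a)
assert abs(cr(lambda s: not P(s)) - (1 - cr(P))) < 1e-9          # axiom (b)
assert abs(cr(lambda s: P(s) or Q(s))
           - (cr(P) + cr(Q) - cr(lambda s: P(s) and Q(s)))) < 1e-9   # axiom (c)
print(cr(P), cr(Q))
```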

## Conditionalisation

Suppose you gain evidence $$E$$. Let $$Cr$$ be your credences just before and $$Cr_{\text{new}}$$ your credences just afterwards. Then, insofar as you are rational, for any proposition $$P$$: $$Cr_{\text{new}}(P) = \frac{Cr(P \land E)}{Cr(E)} \stackrel{\text{def}}{=} Cr(P|E)$$.
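
This update rule can be sketched numerically. The weather states and weights below are invented for illustration:

```python
# A numeric sketch of conditionalisation (invented weather states and numbers).
weights = {("rain", "wet"): 0.30, ("rain", "dry"): 0.10,
           ("clear", "wet"): 0.05, ("clear", "dry"): 0.55}

def cr(event):
    """Credence in an event = total weight of the states where it holds."""
    return sum(w for s, w in weights.items() if event(s))

E = lambda s: s[1] == "wet"            # evidence: the grass is wet
P = lambda s: s[0] == "rain"           # proposition: it rained

cr_new_P = cr(lambda s: P(s) and E(s)) / cr(E)   # Cr_new(P) = Cr(P and E) / Cr(E)
print(round(cr_new_P, 4))   # wet grass raises credence in rain from 0.4
```

The prior $$Cr(P) = 0.4$$ rises to $$Cr(P|E) = 6/7 \approx 0.857$$ upon learning $$E$$.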

## Justifications for probabilism and conditionalisation

### Dutch book arguments

The basic idea: an agent whose credences violate probabilism, or who updates by a rule other than conditionalisation, can be made to accept a series of bets that leads to a sure loss (such a series of bets is called a Dutch book).
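
Here is a toy Dutch book, with invented numbers, against an agent whose credences violate axiom (b):

```python
# A toy Dutch book against an agent whose credences violate Cr(not-P) = 1 - Cr(P).
# Illustrative numbers: the agent has Cr(P) = 0.6 and Cr(not-P) = 0.6 (sum 1.2).
stake = 1.0
cr_P, cr_not_P = 0.6, 0.6

# By her own lights, a bet paying `stake` if X is true is worth cr(X) * stake,
# so she willingly pays that price for each of the two bets.
price_paid = cr_P * stake + cr_not_P * stake

nets = []
for P_is_true in (True, False):
    payout = stake if P_is_true else 0.0       # bet on P
    payout += stake if not P_is_true else 0.0  # bet on not-P; exactly one pays
    nets.append(payout - price_paid)

print(nets)  # a loss of 0.2 whichever way the world turns out
```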

I won’t go into detail here, as this has been explained very well in many places. See for instance, Teruji Thomas, Degrees of Belief II or Earman, Bayes or Bust Chapter 2.

### Cox’s theorem

Bayes or Bust, Chapter 2, p. 45: Jaynes (2011, 1.7, p. 17) thinks the axioms formalise “qualitative correspondence with common sense”, but his argument is sketchy, and I rather agree with Earman that the assumptions of Cox’s theorem do not recommend themselves with overwhelming force.

### Obviousness argument

Dutch books and Cox’s theorem aside, there’s something to be said for the sheer intuitive plausibility of probabilism and conditionalisation. If you want to express your beliefs as a number between 0 and 1, it just seems obvious that they should behave like probabilities. To me, accepting probabilism and conditionalisation outright feels more compelling than the premises of Cox’s theorem do. “Degrees of belief should behave like probabilities” seems near-tautological.

# Science as a special case of rational inference

Philosophers have long realised that science was extremely successful: predicting the motions of the heavenly bodies, building aeroplanes, producing vaccines, and so on. There must be a core principle underlying the disparate activities of scientists — measuring, experimenting, writing equations, going to conferences, etc. So they set about trying to find this core principle, in order to explain the success of science (the descriptive project) and to apply the core principle more accurately and more generally (normative project). This was philosophy of science.

Scientists are prestigious people in universities. Science, lab coats and all, seems like a specific activity separate from normal life. So it seemed natural that there should be a philosophy of science. This turned out to be a blind alley. The solution to philosophy of science was to come from a far more general theory — the theory of rational inference. This would reveal science as merely a watered-down special case of rational inference.

We will now see how Bayesianism solves most of the problems philosophers of science were preoccupied with. As far as I can tell, this view has wide acceptance among philosophers.

Let’s review how people were confused and how Bayesianism dissolved the confusion.

# Previous theories of science

## Hypothetico-deductivism

SEP:

In a seminal essay on induction, Jean Nicod (1924) offered the following important remark:

Consider the formula or the law: F entails G. How can a particular proposition, or more briefly, a fact affect its probability? If this fact consists of the presence of G in a case of F, it is favourable to the law […]; on the contrary, if it consists of the absence of G in a case of F, it is unfavourable to this law. (219, notation slightly adapted)

SEP:

The central idea of hypothetico-deductive (HD) confirmation can be roughly described as “deduction-in-reverse”: evidence is said to confirm a hypothesis in case the latter, while not entailed by the former, is able to entail it, with the help of suitable auxiliary hypotheses and assumptions. The basic version (sometimes labelled “naïve”) of the HD notion of confirmation can be spelled out thus:

For any $$h, e, k$$ such that $$h\wedge k$$ is consistent:

• $$e$$ HD-confirms $$h$$ relative to $$k$$ if and only if $$h\wedge k \vDash e$$ and $$k \not\vDash e$$;

• $$e$$ HD-disconfirms $$h$$ relative to $$k$$ if and only if $$h\wedge k \vDash \neg e$$, and $$k \not\vDash \neg e$$;

• $$e$$ is HD-neutral for hypothesis $$h$$ relative to $$k$$ otherwise.

### Hypothetico-deductivism and the problem of irrelevant conjunction

SEP:

The irrelevant conjunction paradox. Suppose that $$e$$ confirms $$h$$ relative to (possibly empty) $$k$$. Let statement $$q$$ be logically consistent with $$e\wedge h\wedge k$$, but otherwise entirely irrelevant for all of those conjuncts. Does $$e$$ confirm $$h\wedge q$$ (relative to $$k$$) as it does with $$h$$? One would want to say no, and this implication can be suitably reconstructed in Hempel’s theory. HD-confirmation, on the contrary, cannot draw this distinction: it is easy to show that, on the conditions specified, if the HD clause for confirmation is satisfied for $$e$$ and $$h$$ (given $$k$$), so it is for $$e$$ and $$h\wedge q$$ (given $$k$$). (This is simply because, if $$h\wedge k \vDash e$$, then $$h\wedge q\wedge k \vDash e$$, too, by the monotonicity of classical logical entailment.)

The Bayesian solution:

In the statement below, indicating this result, the irrelevance of $$q$$ for hypothesis $$h$$ and evidence $$e$$ (relative to $$k$$) is meant to amount to the probabilistic independence of $$q$$ from $$h, e$$ and their conjunction (given $$k$$), that is, to $$P(h \wedge q\mid k) = P(h\mid k)P(q\mid k),$$ $$P(e \wedge q\mid k) = P(e\mid k)P(q\mid k)$$, and $$P(h \wedge e \wedge q\mid k) = P(h \wedge e\mid k)P(q\mid k)$$, respectively.

Confirmation upon irrelevant conjunction (ordinal solution) (CIC)
For any $$h, e, q, k$$ and any $$P$$: if $$e$$ confirms $$h$$ relative to $$k$$ and $$q$$ is irrelevant for $$h$$ and $$e$$ relative to $$k$$, then

$C_{P}(h, e\mid k) \gt C_{P}(h \wedge q, e\mid k).$

So, even in case it is qualitatively preserved across the tacking of $$q$$ onto $$h$$, the positive confirmation afforded by $$e$$ is at least bound to quantitatively decrease thereby.
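
The ordinal solution (CIC) can be checked with concrete numbers. The probabilities below are invented for illustration, and I use the simple difference measure $$P(h\mid e) - P(h)$$ for $$C_P$$; the SEP result covers a family of such measures.

```python
# A numeric check of (CIC), with invented probabilities, using the
# difference measure P(h|e) - P(h) for C_P.
p_h, p_e, p_e_given_h = 0.3, 0.5, 0.9   # e confirms h
p_q = 0.4                               # q independent of h, e, and (h and e)

p_h_given_e = p_h * p_e_given_h / p_e   # Bayes: 0.54 > 0.3, so e confirms h

# Independence gives P(h and q) = P(h)P(q) and P(h and q | e) = P(h|e)P(q).
conf_h = p_h_given_e - p_h
conf_h_and_q = p_h_given_e * p_q - p_h * p_q

print(conf_h, conf_h_and_q)
assert conf_h > conf_h_and_q > 0   # confirmation survives tacking, but shrinks
```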

## Instance confirmation

Bayes or Bust (p. 63):

When Carl Hempel published his seminal “Studies in the Logic of Confirmation” (1945), he saw his essay as a contribution to the logical empiricists’ program of creating an inductive logic that would parallel and complement deductive logic. The program, he thought, was best carried out in three stages: the first stage would provide an explication of the qualitative concept of confirmation (as in ‘E confirms H’); the second stage would tackle the comparative concept (as in ‘E confirms H more than E′ confirms H′’); and the final stage would concern the quantitative concept (as in ‘E confirms H to degree r’). In hindsight it seems clear (at least to Bayesians) that it is best to proceed the other way around: start with the quantitative concept and use it to analyze the comparative and qualitative notions. […]

Hempel’s basic idea for finding a definition of qualitative confirmation satisfying his adequacy conditions was that a hypothesis is confirmed by its positive instances. This seemingly simple and straightforward notion turns out to be notoriously difficult to pin down. Hempel’s own explication utilized the notion of the development of a hypothesis for a finite set I of individuals. Intuitively, $$dev_I (H)$$ is what $$H$$ asserts about a domain consisting of just the individuals in $$I$$. Formally, $$dev_I (H)$$ for a quantified $$H$$ is arrived at by peeling off universal quantifiers in favor of conjunctions over I and existential quantifiers in favor of disjunctions over I. Thus, for example, if $$I = \{a,b\}$$ and H is $$\forall x \exists y Lxy$$ (e.g., “Everybody loves somebody”), $$dev_I (H)$$ is $$(Laa \lor Lab) \land (Lbb \lor Lba)$$. We are now in a position to state the main definition[] that constitute[s] Hempel’s account:

• E directly Hempel-confirms H iff $$E \vDash dev_I(H)$$, where $$I$$ is the class of individuals mentioned in $$E$$.

It’s easy to check that Hempel’s instance confirmation, like Bayesianism, successfully avoids the paradox of irrelevant conjunction. But it’s famously vulnerable to the following problem case.

### Instance confirmation and the paradox of the ravens

The ravens paradox (Hempel 1937, 1945). Consider the following statements:

• $$h = \forall x(raven(x) \rightarrow black(x))$$, i.e., all ravens are black;

• $$e = raven(a) \wedge black(a)$$, i.e., $$a$$ is a black raven;

• $$e^* = \neg black(a^*) \wedge \neg raven(a^*)$$, i.e., $$a^*$$ is a non-black non-raven (say, a green apple).

Is hypothesis $$h$$ confirmed by $$e$$ and $$e^*$$ alike? One would want to say no, but Hempel’s theory is unable to draw this distinction. Let’s see why.

As we know, $$e$$ (directly) Hempel-confirms $$h$$, according to Hempel’s reconstruction of Nicod. By the same token, $$e^*$$ (directly) Hempel-confirms the hypothesis that all non-black objects are non-ravens, i.e., $$h^* = \forall x(\neg black(x) \rightarrow \neg raven(x))$$. But $$h^* \vDash h$$ ($$h$$ and $$h^*$$ are just logically equivalent). So, $$e^*$$ (the observation report of a non-black non-raven), like $$e$$ (black raven), does (indirectly) Hempel-confirm $$h$$ (all ravens are black). Indeed, as $$\neg raven(a)$$ entails $$raven(a) \rightarrow black(a)$$, it can be shown that $$h$$ is (directly) Hempel-confirmed by the observation of any object that is not a raven (an apple, a cat, a shoe, or whatever), apparently disclosing puzzling “prospects for indoor ornithology” (Goodman 1955, 71).

Just as HD, Bayesian relevance confirmation directly implies that $$e = black(a)$$ confirms $$h$$ given $$k = raven(a)$$ and $$e^* =\neg raven(a)$$ confirms $$h$$ given $$k^* =\neg black(a)$$ (provided, as we know, that $$P(e\mid k)\lt 1$$ and $$P(e^*\mid k^*)\lt 1).$$ That’s because $$h \wedge k\vDash e$$ and $$h \wedge k^*\vDash e^*.$$ But of course, to have $$h$$ confirmed, sampling ravens and finding a black one is intuitively more significant than failing to find a raven while sampling the enormous set of the non-black objects. That is, it seems, because the latter is very likely to obtain anyway, whether or not $$h$$ is true, so that $$P(e^*\mid k^*)$$ is actually quite close to unity. Accordingly, (SP) implies that $$h$$ is indeed more strongly confirmed by $$black(a)$$ given $$raven(a)$$ than it is by $$\neg raven(a)$$ given $$\neg black(a)$$—that is, $$C_{P}(h, e\mid k)\gt C_{P}(h, e^*\mid k^*)$$—as long as the assumption $$P(e\mid k)\lt P(e^*\mid k^*)$$ applies.
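
A quick numeric sketch of this quantitative resolution, with invented values: since $$h \wedge k \vDash e$$, Bayes’ theorem gives $$P(h\mid e \wedge k)/P(h\mid k) = 1/P(e\mid k)$$, so the confirmation boost is the reciprocal of how expected the evidence was.

```python
# A numeric sketch (invented values): when (h and k) entails e, Bayes gives
#   P(h | e,k) / P(h | k) = P(e | h,k) / P(e | k) = 1 / P(e | k).
p_e_given_k = 0.5            # chance a sampled raven is black, h not assumed
p_estar_given_kstar = 0.999  # chance a sampled non-black thing is a non-raven

boost_black_raven = 1 / p_e_given_k           # factor 2: substantial confirmation
boost_green_apple = 1 / p_estar_given_kstar   # barely above 1: negligible
print(boost_black_raven, boost_green_apple)
```

Both observations confirm $$h$$, but the apple does so negligibly, exactly as intuition demands.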

### Bootstrapping and relevance relations

In a pre-Bayesian attempt to solve the problem of the ravens, people developed some complicated and ultimately unconvincing theories.

SEP:

To overcome the latter difficulty, Clark Glymour (1980a) embedded a refined version of Hempelian confirmation by instances in his analysis of scientific reasoning. In Glymour’s revision, hypothesis h is confirmed by some evidence e even if appropriate auxiliary hypotheses and assumptions must be involved for e to entail the relevant instances of h. This important theoretical move turns confirmation into a three-place relation concerning the evidence, the target hypothesis, and (a conjunction of) auxiliaries. Originally, Glymour presented his sophisticated neo-Hempelian approach in stark contrast with the competing traditional view of so-called hypothetico-deductivism (HD). Despite his explicit intentions, however, several commentators have pointed out that, partly because of the due recognition of the role of auxiliary assumptions, Glymour’s proposal and HD end up being plagued by similar difficulties (see, e.g., Horwich 1983, Woodward 1983, and Worrall 1982).

## Falsificationism

“statements or systems of statements, in order to be ranked as scientific, must be capable of conflicting with possible, or conceivable observations” (Popper 1962, 39).

SEP:

For Popper […] the important point was not whatever confirmation successful prediction offered to the hypotheses but rather the logical asymmetry between such confirmations, which require an inductive inference, versus falsification, which can be based on a deductive inference. […]

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing.

Popper was clearly onto something, as in his critique of psychoanalysis:

Neither Freud nor Adler excludes any particular person’s acting in any particular way, whatever the outward circumstances. Whether a man sacrificed his life to rescue a drowning child (a case of sublimation) or whether he murdered the child by drowning him (a case of repression) could not possibly be predicted or excluded by Freud’s theory; the theory was compatible with everything that could happen.

But his stark asymmetry between logically disproving a theory and “corroborating” it was actually a mistake. And it led to many problems.

First, successful science often did not involve rejecting a theory as disproven when it failed an empirical test. SEP:

Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus which originally did not agree with Newtonian predictions, but the ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions.

Second, Popper’s idea of corroboration was intolerably vague. A theory is supposed to be well-corroborated if it stuck its neck out by being falsifiable, and has resisted falsification for a long time. But how, for instance, do we compare how well-corroborated two theories are? And how are we supposed to act in the meantime, when there are still several contending theories? The intuition is that well-tested theories should have higher probability, but Popper’s “corroboration” idea is ill-equipped to account for this.

Bayesianism dissolves these problems, but captures the grain of truth in falsificationism. I’ll just quote from the Arbital page on the Bayesian view of scientific virtues, which, despite its silly style, is excellent, and should probably be read in full.

In a Bayesian sense, we can see a hypothesis’s falsifiability as a requirement for obtaining strong likelihood ratios in favor of the hypothesis, compared to, e.g., the alternative hypothesis “I don’t know.”

Suppose you’re a very early researcher on gravitation, named Grek. Your friend Thag is holding a rock in one hand, about to let it go. You need to predict whether the rock will move downward to the ground, fly upward into the sky, or do something else. That is, you must say how your theory $$Grek$$ assigns its probabilities over $$up, down,$$ and $$other.$$

As it happens, your friend Thag has his own theory $$Thag$$ which says “Rocks do what they want to do.” If Thag sees the rock go down, he’ll explain this by saying the rock wanted to go down. If Thag sees the rock go up, he’ll say the rock wanted to go up. Thag thinks that the Thag Theory of Gravitation is a very good one because it can explain any possible thing the rock is observed to do. This makes it superior compared to a theory that could only explain, say, the rock falling down.

As a Bayesian, however, you realize that since $$up, down,$$ and $$other$$ are mutually exclusive and exhaustive possibilities, and something must happen when Thag lets go of the rock, the conditional probabilities $$\mathbb P(\cdot\mid Thag)$$ must sum to $$\mathbb P(up\mid Thag) + \mathbb P(down\mid Thag) + \mathbb P(other\mid Thag) = 1.$$

If Thag is “equally good at explaining” all three outcomes - if Thag’s theory is equally compatible with all three events and produces equally clever explanations for each of them - then we might as well call this $$1/3$$ probability for each of $$\mathbb P(up\mid Thag), \mathbb P(down\mid Thag),$$ and $$\mathbb P(other\mid Thag)$$. Note that Thag’s theory is isomorphic, in a probabilistic sense, to saying “I don’t know.”

But now suppose Grek make falsifiable prediction! Grek say, “Most things fall down!”

Then Grek not have all probability mass distributed equally! Grek put 95% of probability mass in $$\mathbb P(down\mid Grek)!$$ Only leave 5% probability divided equally over $$\mathbb P(up\mid Grek)$$ and $$\mathbb P(other\mid Grek)$$ in case rock behave like bird.

Thag say this bad idea. If rock go up, Grek Theory of Gravitation disconfirmed by false prediction! Compared to Thag Theory that predicts 1/3 chance of $$up,$$ will be likelihood ratio of 2.5% : 33% ~ 1 : 13 against Grek Theory! Grek embarrassed!

Grek say, she is confident rock does go down. Things like bird are rare. So Grek willing to stick out neck and face potential embarrassment. Besides, is more important to learn about if Grek Theory is true than to save face.

Thag let go of rock. Rock fall down.

This evidence with likelihood ratio of 0.95 : 0.33 ~ 3 : 1 favoring Grek Theory over Thag Theory.

“How you get such big likelihood ratio?” Thag demand. “Thag never get big likelihood ratio!”

Grek explain is possible to obtain big likelihood ratio because Grek Theory stick out neck and take probability mass away from outcomes $$up$$ and $$other,$$ risking disconfirmation if that happen. This free up lots of probability mass that Grek can put in outcome $$down$$ to make big likelihood ratio if $$down$$ happen.

Grek Theory win because falsifiable and make correct prediction! If falsifiable and make wrong prediction, Grek Theory lose, but this okay because Grek Theory not Grek.
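
The likelihood ratios in the story can be recomputed directly from the stated probabilities:

```python
# Recomputing the story's likelihood ratios (same numbers as in the quote).
p_down_grek = 0.95
p_up_grek = 0.05 / 2                 # remaining 5% split equally over up/other
p_each_thag = 1 / 3                  # "I don't know": uniform over 3 outcomes

lr_against_grek_if_up = p_each_thag / p_up_grek   # about 13 : 1 against Grek
lr_for_grek_if_down = p_down_grek / p_each_thag   # about 3 : 1 for Grek
print(round(lr_against_grek_if_up, 1), round(lr_for_grek_if_down, 2))
```

The asymmetry is the whole point: by concentrating probability mass on $$down$$, Grek risks a ~13 : 1 hit if wrong in exchange for a ~3 : 1 boost each time she is right.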

# The Quine-Duhem problem

SEP:

Duhem (he himself a supporter of the HD view) pointed out that in mature sciences such as physics most hypotheses or theories of real interest can not be contradicted by any statement describing observable states of affairs. Taken in isolation, they simply do not logically imply, nor rule out, any observable fact, essentially because (unlike “all ravens are black”) they involve the mention of unobservable entities and processes. So, in effect, Duhem emphasized that, typically, scientific hypotheses or theories are logically consistent with any piece of checkable evidence. […]

Let us briefly consider a classical case, which Duhem himself thoroughly analyzed: the wave vs. particle theories of light in modern optics. Across the decades, wave theorists were able to deduce an impressive list of important empirical facts from their main hypothesis along with appropriate auxiliaries, diffraction phenomena being only one major example. But many particle theorists’ reaction was to retain their hypothesis nonetheless and to reshape other parts of the “theoretical maze” (i.e., k; the term is Popper’s, 1963, p. 330) to recover those observed facts as consequences of their own proposal.

Quine took this idea to its radical conclusion with his confirmation holism. Wikipedia:

Duhem’s idea was, roughly, that no theory of any type can be tested in isolation but only when embedded in a background of other hypotheses, e.g. hypotheses about initial conditions. Quine thought that this background involved not only such hypotheses but also our whole web-of-belief, which, among other things, includes our mathematical and logical theories and our scientific theories. This last claim is sometimes known as the Duhem–Quine thesis. A related claim made by Quine, though contested by some (see Adolf Grünbaum 1962), is that one can always protect one’s theory against refutation by attributing failure to some other part of our web-of-belief. In his own words, “Any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system.”

Bayes or Bust p 73:

It makes a nice sound when it rolls off the tongue to say that our claims about the physical world face the tribunal of experience not individually but only as a corporate body. But scientists, no less than business executives, do not typically act as if they are at a loss as to how to distribute praise through the corporate body when the tribunal says yea, or blame when the tribunal says nay. This is not to say that there is always a single correct way to make the distribution, but it is to say that in many cases there are firm intuitions.

Howson and Urbach 2006 (p 108):

We shall illustrate the argument through a historical example that Lakatos (1970, pp. 138-140; 1968, pp. 174-75) drew heavily upon. In the early nineteenth century, William Prout (1815, 1816), a medical practitioner and chemist, advanced the idea that the atomic weight of every element is a whole-number multiple of the atomic weight of hydrogen, the underlying assumption being that all matter is built up from different combinations of some basic element. Prout believed hydrogen to be that fundamental building block. Now many of the atomic weights recorded at the time were in fact more or less integral multiples of the atomic weight of hydrogen, but some deviated markedly from Prout’s expectations. Yet this did not shake the strong belief he had in his hypothesis, for in such cases he blamed the methods that had been used to measure those atomic weights. Indeed, he went so far as to adjust the atomic weight of the element chlorine, relative to that of hydrogen, from the value 35.83, obtained by experiment, to 36, the nearest whole number. […]

Prout’s hypothesis t, together with an appropriate assumption a, asserting the accuracy (within specified limits) of the measuring techniques, the purity of the chemicals employed, and so forth, implies that the ratio of the measured atomic weights of chlorine and hydrogen will approximate (to a specified degree) a whole number. In 1815 that ratio was reported as 35.83 (call this the evidence e), a value judged to be incompatible with the conjunction of t and a. The posterior and prior probabilities of t and of a are related by Bayes’s theorem, as follows: […] Consider first the prior probabilities of $$t$$ and of $$a$$. J.S. Stas, a distinguished Belgian chemist whose careful atomic weight measurements were highly influential, gives us reason to think that chemists of the period were firmly disposed to believe in t. […] It is less easy to ascertain how confident Prout and his contemporaries were in the methods used to measure atomic weights, but their confidence was probably not great, in view of the many clear sources of error. […] On the other hand, the chemists of the time must have felt that their atomic weight measurements were more likely to be accurate than not, otherwise they would hardly have reported them. […] For these reasons, we conjecture that $$P(a)$$ was in the neighbourhood of 0.6 and that $$P(t)$$ was around 0.9, and these are the figures we shall work with. […]

We will follow Dorling in taking $$t$$ and $$a$$ to be independent, viz, $$P(a \mid t) = P(a)$$ and hence, $$P(\neg a \mid t) = P(\neg a)$$. As Dorling points out (1996), this independence assumption makes the calculations simpler but is not crucial to the argument. […]

Finally, Bayes’s theorem allows us to derive the posterior probabilities in which we are interested:

$$P(t\mid e) = 0.878$$ $$P(a\mid e) = 0.073$$

(Recall that $$P(t) = 0.9$$ and $$P(a) = 0.6$$.) We see then that the evidence provided by the measured atomic weight of chlorine affects Prout’s hypothesis and the set of auxiliary hypotheses very differently; for while the probability of the first is scarcely changed, that of the second is reduced to a point where it has lost all credibility.
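
Here is a sketch of this style of calculation in code. The priors are those from the text; the likelihoods are my own illustrative assumptions (not Howson and Urbach’s exact figures), chosen only to show the qualitative effect:

```python
# A sketch of the Dorling-style calculation. Priors are those in the text
# (P(t)=0.9, P(a)=0.6, t and a independent); the likelihoods P(e | t, a)
# below are my own illustrative assumptions, NOT Howson and Urbach's figures.
p_t, p_a = 0.9, 0.6
lik = {(True, True): 0.0,    # e contradicts (t and a)
       (True, False): 0.05,  # t true but measurement unreliable: e unsurprising
       (False, True): 0.01,  # t false, accurate measurement of one value among many
       (False, False): 0.05}

def prior(t, a):
    return (p_t if t else 1 - p_t) * (p_a if a else 1 - p_a)

joint = {ta: lik[ta] * prior(*ta) for ta in lik}
p_e = sum(joint.values())
p_t_given_e = (joint[(True, True)] + joint[(True, False)]) / p_e
p_a_given_e = (joint[(True, True)] + joint[(False, True)]) / p_e
print(round(p_t_given_e, 3), round(p_a_given_e, 3))
```

With these hypothetical likelihoods, $$t$$ barely moves while $$a$$ collapses, reproducing the qualitative pattern of the figures quoted above.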

# Uncertain judgements and value of information (resilience)

Crash course in state spaces and events: There is a set of states $$\Omega$$ which represents the ways the world could be. Sometimes $$\Omega$$ is described as the set of “possible worlds” (SEP). An event $$E$$ is a subset of $$\Omega$$. There are many states of the world where Labour wins the next election. The event “Labour wins the next election” is the set of these states.

Here is the important point: a single numerical probability for event $$E$$ is not just the probability you assign to one state of the world. It’s a sum over the probabilities assigned to states in $$E$$. We should think of ideal Bayesians as having probability distributions over the state space, not just scalar probabilities for events.
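
A minimal sketch of the point, with invented states and numbers:

```python
# A minimal sketch: the probability of an event is a sum over the states
# it contains. States and numbers are invented for illustration.
states = {
    ("labour", "recession"): 0.20,
    ("labour", "growth"):    0.25,
    ("tory", "recession"):   0.15,
    ("tory", "growth"):      0.40,
}

labour_wins = {s for s in states if s[0] == "labour"}  # the event, as a set of states
p_labour = sum(states[s] for s in labour_wins)
print(p_labour)  # the scalar "probability Labour wins" summarises two states
```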

This simple idea is enough to cut through many decades of confusion. SEP:

probability theory seems to impute much richer and more determinate attitudes than seems warranted. What should your rational degree of belief be that global mean surface temperature will have risen by more than four degrees by 2080? Perhaps it should be 0.75? Why not 0.75001? Why not 0.7497? Is that event more or less likely than getting at least one head on two tosses of a fair coin? It seems there are many events about which we can (or perhaps should) take less precise attitudes than orthodox probability requires. […] As far back as the mid-nineteenth century, we find George Boole saying:

It would be unphilosophical to affirm that the strength of that expectation, viewed as an emotion of the mind, is capable of being referred to any numerical standard. (Boole 1958 : 244)

People have long thought there is a distinction between risk (probabilities different from 0 or 1) and ambiguity (imprecise probabilities):

One classic example of this is the Ellsberg problem (Ellsberg 1961).

I have an urn that contains ninety marbles. Thirty marbles are red. The remainder are blue or yellow in some unknown proportion.

Consider the indicator gambles for various events in this scenario. Consider a choice between a bet that wins if the marble drawn is red (I), versus a bet that wins if the marble drawn is blue (II). You might prefer I to II since I involves risk while II involves ambiguity. A prospect is risky if its outcome is uncertain but its outcomes occur with known probability. A prospect is ambiguous if the outcomes occur with unknown or only partially known probabilities.

To deal with purported ambiguity, people developed models where the probability lies in some range. These probabilities were called “fuzzy” or “mushy”.

Evidence can be balanced because it is incomplete: there simply isn’t enough of it. Evidence can also be balanced if it is conflicted: different pieces of evidence favour different hypotheses. We can further ask whether evidence tells us something specific—like that the bias of a coin is 2/3 in favour of heads—or unspecific—like that the bias of a coin is between 2/3 and 1 in favour of heads.

Fuzzy probabilities gave rise to a number of problem cases, which, predictably, engendered a wide literature. The SEP article notes the problems of:

1. Dilation (Imprecise probabilists violate the reflection principle)
2. Belief inertia (How do we learn from an imprecise prior?)

3. Decision making (How should an imprecise probabilist act? Can she avoid Dutch books?)

A PhilPapers search indicates that at least 65 papers have been published on these topics.

The Bayesian solution is simply: when you are less confident, you have a flatter probability distribution, though it may have the same mean. Flatter distributions move more in response to evidence. They are less resilient. See Skyrms (2011) or Leitgeb (2014). It’s not surprising that single probabilities don’t adequately describe your evidential state, since they are summary statistics over a distribution.
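
Resilience can be illustrated with conjugate beta updating (my choice of model, for illustration): two agents assign the same mean credence 0.5 to heads, but the flatter distribution moves further on the same single observation.

```python
# Two priors over a coin's heads-bias with the same mean 0.5 but different
# resilience, modelled (illustratively) as beta distributions.
def beta_mean_after_heads(a, b, heads=1):
    """Posterior mean of Beta(a, b) after observing `heads` heads."""
    return (a + heads) / (a + b + heads)

flat_mean = beta_mean_after_heads(1, 1)      # Beta(1,1): little evidence behind it
peaked_mean = beta_mean_after_heads(10, 10)  # Beta(10,10): lots of evidence behind it

# Both started at mean 0.5; the flatter prior moves much further.
print(round(flat_mean, 3), round(peaked_mean, 3))
```

The single number “0.5” hides exactly the difference between these two agents; the full distribution does not.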

# Issues around Occam’s razor

SEP distinguishes three questions about simplicity:

(i) How is simplicity to be defined? [Definition]

(ii) What is the role of simplicity principles in different areas of inquiry? [Usage]

(iii) Is there a rational justification for such simplicity principles? [Justification]

The Bayesian solution to (i) is to formalise Occam’s razor as a statement about which priors are better than others. Occam’s razor is not, as many philosophers have thought, a rule of inference, but a constraint on prior belief. One should have a prior that assigns higher probability to simpler worlds. SEP:

Jeffreys argued that “the simpler laws have the greater prior probability,” and went on to provide an operational measure of simplicity, according to which the prior probability of a law is $$2^{−k}$$, where k = order + degree + absolute values of the coefficients, when the law is expressed as a differential equation (Jeffreys 1961, p. 47).

Since then, the definition of simplicity has been further formalised using algorithmic information theory. The very informal gloss is that we identify each hypothesis with the shortest computer program that fully describes it, and our prior weights each hypothesis by its simplicity ($$2^{-n}$$, where $$n$$ is the program length).
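
A toy sketch of such a prior, with made-up program lengths:

```python
# A toy simplicity prior: identify each hypothesis with its shortest program
# (the lengths below are made up) and weight it by 2^(-length), normalised.
lengths = {"h_simple": 3, "h_medium": 5, "h_complex": 9}

raw = {h: 2.0 ** -n for h, n in lengths.items()}
total = sum(raw.values())
prior = {h: w / total for h, w in raw.items()}

print({h: round(p, 3) for h, p in prior.items()})
assert prior["h_simple"] > prior["h_medium"] > prior["h_complex"]
```

Each extra bit of program length halves the prior weight, so simpler hypotheses dominate until the evidence says otherwise.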

This algorithmic formalisation, finally, sheds light on the limits of this understanding of simplicity, and provides an illuminating new interpretation of Goodman’s new riddle of induction. The key idea is that we can only formalise simplicity relative to a programming language (or relative to a universal Turing machine).

Hutter and Rathmanner 2011, Section 5.9 “Andrey Kolmogorov”:

Natural Turing Machines. The final issue is the choice of Universal Turing machine to be used as the reference machine. The problem is that there is still subjectivity involved in this choice since what is simple on one Turing machine may not be on another. More formally, it can be shown that for any arbitrarily complex string $$x$$ as measured against the UTM $$U$$ there is another UTM $$U'$$ for which $$x$$ has Kolmogorov complexity $$1$$. This result seems to undermine the entire concept of a universal simplicity measure but it is more of a philosophical nuisance which only occurs in specifically designed pathological examples. The Turing machine $$U'$$ would have to be absurdly biased towards the string $$x$$ which would require previous knowledge of $$x$$. The analogy here would be to hard-code some arbitrarily long complex number into the hardware of a computer system which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural but it is possible to argue for a reasonable and intuitive definition in this context.

Vallinder 2012, Section 4.1 “Language dependence”:

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why introduce universal Turing machines, incomputable functions, and other needlessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language get low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let $$K_U(x)$$ be the Kolmogorov complexity of $$x$$ relative to universal Turing machine $$U$$, and let $$K_T(x)$$ be the Kolmogorov complexity of $$x$$ relative to Turing machine $$T$$ (which needn’t be universal). We have that $$K_U(x) \leq K_T(x) + C_{TU}$$. That is: the difference in Kolmogorov complexity relative to $$U$$ and relative to $$T$$ is bounded by a constant $$C_{TU}$$ that depends only on these Turing machines, and not on $$x$$. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform $$U$$ infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string $$x$$ it is always possible to find a UTM $$T$$ such that $$K_T(x) = 1$$. If $$K_T(x) = 1$$, the corresponding Solomonoff prior $$M_T(x)$$ will be at least $$0.5$$. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to $$0.5$$. Thus some way of discriminating between universal Turing machines is called for.

1. Technically, the diachronic language “just before”/“just after” is a mistake. It fails to model cases of forgetting, or of loss of the discriminating power of evidence. This was shown by Arntzenius (2003).

March 31, 2018

# Philosophy success story IV: the formalisation of probability

Thus, joining the rigour of demonstrations in mathematics with the uncertainty of chance, and conciliating these apparently contradictory matters, it can, taking its name from both of them, with justice arrogate the stupefying name: The Mathematics of Chance (Aleae Geometria).

— Blaise Pascal, in an address to the Académie Parisienne de Mathématiques, 1654

Researchers in the field have wondered why the development of probability theory was so slow—especially why the apparently quite simple mathematical theory of dice throwing did not appear until the 1650s. The main part of the answer lies in appreciating just how difficult it is to make concepts precise.

— James Franklin, The Science of Conjecture

Wherefore in all great works are Clerkes so much desired? Wherefore are Auditors so richly fed? What causeth Geometricians so highly to be enhaunsed? Why are Astronomers so greatly advanced? Because that by number such things they finde, which else would farre excell mans minde.

— Robert Recorde, Arithmetic (1543)

This is part of my series on success stories in philosophy. See this page for an explanation of the project and links to other items in the series.

# How people were confused

## Degrees of belief

The first way to get uncertainty spectacularly wrong is given to us by Plato, who outright rejects non-certain reasoning (The Science of Conjecture: Evidence and Probability Before Pascal, James Franklin):

Plato has Socrates say to Theaetetus, “You are not offering any argument or proof, but relying on likelihood (eikoti). If Theodorus, or any other geometer, were prepared to rely on likelihood when doing geometry, he would be worth nothing. So you and Theodorus must consider whether, in matters as important as these, you are going to accept arguments from plausibility and likelihood (pithanologia te kai eikosi).”

## Probability as a binary property

One step in the right direction would be to accept that statements can fail to be definite truths, yet in some sense be “more likely” than definite falsehoods. On this view, such statements have the property of being “probable”. SEP writes:

Pre-modern probability was not a number or ratio, but mainly a binary property which a proposition either had or did not have.

In this vein, Cicero wrote:

That is probable which for the most part usually comes to pass, or which is a part of the ordinary beliefs of mankind, or which contains in itself some resemblance to these qualities, whether such resemblance be true or false. (Cicero, De inventione, I.29.46)

The quote not only displays the error of thinking of probability as binary. It also shows that Cicero mixed the most promising notion of probability (that which “for the most part usually comes to pass”) with the completely different notions of ordinary belief and opinion, resulting in a general mess of confusion. According to SEP: “Until the thirteenth century, the definitions of “probable” by Cicero and Boethius very much shaped the medieval understanding of probability”.

## Ordinal probability

Going further, one might realise that there are degrees of probability. With a solid helping of the principle of charity, Aristotle can be read as saying this:

Therefore it is not enough for the defendant to refute the accusation by proving that the charge is not bound to be true; he must do so by showing that it is not likely to be true. For this purpose his objection must state what is more usually true than the statement attacked.

Here is another quote:

Hence, in this proposal we have men and women, who at age 25 buy a life-long annuity for a price which they recover within eight years and although they can die within these eight years it is more probable that they live twice the time. In this way what happens more frequently and is more probable is to the advantage of the buyer. (Alexander of Alessandria, Tractatus de usuris, c. 72, Y f. 146r)

Aristotle did not realise that probabilities could be applied to chancy events, and nor did his medieval followers. According to A. Hall:

According to van Brake (1976) and Schneider (1980), Aristotle classified events into three types: (1) certain events that happen necessarily; (2) probable events that happen in most cases; and (3) unpredictable or unknowable events that happen by pure chance. Furthermore, he considered the outcomes of games of chance to belong to the third category and therefore not accessible to scientific investigation, and he did not apply the term probability to games of chance.

The cardinal notion of probability did not emerge before the seventeenth century.

## Stakes-sensitivity

One can find throughout history people grasping at the intuition that when the stakes are high, unlikely things can be important. In many cases, legal scholars were interested in what to do if no definite proof of innocence or guilt can be given. Unfortunately, they invariably got the details wrong. James Franklin writes:

In the Talmud itself, the demand for a high standard of evidence in criminal cases developed into a prohibition of any uncertainty in evidence:

Witnesses in capital charges were brought in and warned: perhaps what you say is based only on conjecture, or hearsay, or is evidence from the mouth of another witness, or even from the mouth of an untrustworthy person: perhaps you are unaware that ultimately we shall scrutinize your evidence by cross-examination and inquiry? Know then that capital cases are not like monetary cases. In civil suits, one can make restitution in money, and thereby make his atonement; but in capital cases one is held responsible for his blood and the blood of his descendants till the end of the world . . . whoever destroys a single soul of Israel, scripture imputes to him as though he had destroyed a whole world . . . Our Rabbis taught: What is meant by “based only on conjecture”?—He [the judge] says to them: Perhaps you saw him running after his fellow into a ruin, you pursued him, and found him sword in hand with blood dripping from it, whilst the murdered man was writhing. If this is what you saw, you saw nothing.

Thomas Aquinas wrote:

And yet the fact that in so many it is not possible to have certitude without fear of error is no reason why we should reject the certitude which can probably be had [quae probabiliter haberi potest] through two or three witnesses … (Thomas Aquinas, Summa theologiae, II-II, q. 70, 2, 1488)

James Franklin writes:

Further reflection on the kinds of evidence short of certainty led to a word that expressed the most significant and original idea of the Glossators for probabilistic argument: half-proof (semiplena probatio). In the 1190s, this word was invented for the class of items of evidence that were neither null nor full proof. The word expresses the natural thought that, if two witnesses are in theory full proof, then one witness must be half.

## The problem of points

By the renaissance, thinkers had sharpened these intuitions into a concrete problem. It took centuries of fallacies to arrive at the correct answer to this problem.

The problem of points concerns a game of chance with two players who have equal chances of winning each round. The players contribute equally to a prize pot, and agree in advance that the first player to have won a certain number of rounds $$s$$ will collect the entire prize. Now suppose that the game is interrupted by external circumstances before either player has achieved victory. Player 1 has won $$s_1<s$$ rounds and player 2 has won $$s_2<s$$ rounds. How does one then divide the pot fairly? (Wikipedia, The problem of points)

Before Pascal formalised the now-obvious concept of expected value, this problem was a matter of debate. The problem of points is especially clear-cut evidence that people were confused about probability, since they arrived at different numerical answers.

Anders Hald writes (Section 4.2, p. 35ff):

The division problem is presumably very old. It is first found in print by Pacioli (1494) for $$s$$ = 6, $$s_1 = 5$$, and $$s_2 = 2$$. Pacioli considers it as a problem in proportion and proposes to divide the stakes as $$s_1$$ to $$s_2$$. […] The next attempt to solve the problem is by Cardano (1539). He shows by example that Pacioli’s proposal is ridiculous [in a game interrupted after only one round, Pacioli’s method would award the entire pot to the player with the single point, even though the outcome would be far from certain] and proceeds to give a deeper analysis of the problem. We shall return to this after a discussion of some other, more primitive, proposals. Tartaglia (1556) criticizes Pacioli and is sceptical of the possibility of finding a mathematical solution. He thinks that the problem is a juridical one. Nevertheless, he proposes that if $$s_1$$ is larger than $$s_2$$, A should have his own stake plus the fraction $$(s_1 - s_2)/s$$ of B’s stake. Assuming that the stakes are equal, the division will be as $$s + s_1 - s_2$$ to $$s - s_1 + s_2$$. Forestani (1603) formulates the following rule: First A and B should each get a portion of the total stake determined by the number of games they have won in relation to the maximum duration of the play, i.e., the proportions $$s_1/(2s-1)$$ and $$s_2/(2s-1)$$, as also proposed by Pacioli. But then Forestani adds that the remainder should be divided equally between them, because Fortune in the next play may reverse the results. Hence the division will be as $$2s - 1 + s_1 - s_2$$ to $$2s - 1 - s_1 + s_2$$. Comparison with Tartaglia’s rule will show that $$s$$ has been replaced by $$2s - 1$$. Cardano (1539) is the first to realize that the division rule should not depend on $$(s,s_1,s_2)$$ but only on the number of games each player lacks in winning, $$a = s - s_1$$ and $$b = s - s_2$$, say.
He introduces a new play where A, starting from scratch, is the winner if he wins $$a$$ games before B wins $$b$$ games, and he asks what the stakes should be for the play to be fair. He then takes for a fair division rule in the stopped play the ratio of the stakes in this new play and concludes that the division should be as $$b(b + 1)$$ to $$a(a + 1)$$. His reasons for this result are rather obscure. Considering an example for $$a = 1$$ and $$b = 3$$ he writes:

He who shall win 3 games stakes 2 crowns; how much should the other stake. I say that he should stake 12 crowns for the following reasons. If he shall win only one game it would suffice that he stakes 2 crowns; and if he shall win 2 games he should stake three times as much because by winning two games he would win 4 crowns but he has had the risk of losing the second game after having won the first and therefore he ought to have a threefold compensation. And if he shall win three games his compensation should be sixfold because the difficulty is doubled, hence he should stake 12 crowns. It will be seen that Cardano uses an inductive argument. Setting B’s stake equal to 1, A’s stake becomes successively equal to $$1$$, $$1 +2=3$$, and $$1 + 2 + 3 = 6$$. Cardano then concludes that in general A’s stake should be $$1 + 2 + ... + b = b(b + 1)/2$$. He does not discuss how to go from the special case $$(1, b)$$ to the general case $$(a, b)$$, but presumably he has just used the symmetry between the players.1
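Cardano’s $$b(b+1)$$ to $$a(a+1)$$ rule can be checked against the Pascal–Fermat answer, obtained by imagining that the remaining $$a+b-1$$ rounds are always played out. A sketch, assuming fair 50/50 rounds (the function names are mine):

```python
from fractions import Fraction
from math import comb

def p_first_player_wins(a, b):
    """Probability that the player lacking `a` points wins before the
    player lacking `b` points, with fair rounds. Pascal-Fermat trick:
    pretend a+b-1 further rounds are always played; the first player
    wins iff he takes at least `a` of them."""
    n = a + b - 1
    return Fraction(sum(comb(n, k) for k in range(a, n + 1)), 2 ** n)

def cardano_ratio(a, b):
    """Cardano's (incorrect) proposal: divide stakes as b(b+1) : a(a+1)."""
    return Fraction(b * (b + 1), a * (a + 1))

# Cardano's own example: A lacks a=1 game, B lacks b=3 games.
p = p_first_player_wins(1, 3)
assert p == Fraction(7, 8)       # correct stakes are 7 : 1 in A's favour
assert cardano_ratio(1, 3) == 6  # Cardano's rule gives only 6 : 1
```

So even Cardano’s relatively sophisticated rule disagrees with the correct expected-value division.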

Note how different this type of disagreement is from mathematical disagreements. When people reach different solutions to a “toy” problem case, and muddle through with heuristics, they are not facing a recalcitrant mathematical puzzle. They are confused on a much deeper level. Newcomb’s problem might be a good analogy.

Anders Hald also has this interesting quote:

In view of the achievements of the Greeks in mathematics and science, it is surprising that they did not use the symmetry of games of chance or the stability of relative frequencies to create an axiomatic theory of probability analogous to their geometry. However, the symmetry and stability which is obvious to us may not have been noticed in ancient times because of the imperfections of the randomizers used. David (1955, 1962) has pointed out that instead of regular dice, astragali (heel bones of hooved animals) were normally used, and Samburski (1956) remarks that in a popular game with four astragali, a certain throw was valued higher than all the others despite the fact that other outcomes have smaller probabilities, which indicates that the Greeks had not noticed the magnitudes of the corresponding relative frequencies.

# Pascal and Fermat’s solution

Pascal and Fermat’s story is well known. In a famous correspondence in 1654, they developed the basic notions of probability and expected value.

Keith Devlin (2008):

Before we take a look at their exchange and the methods it contains, let’s look at a present-day solution of the simple version of the problem. In this version, the players, Blaise and Pierre, place equal bets on who will win the best of five tosses of a fair coin. We’ll suppose that on each round, Blaise chooses heads, Pierre tails. Now suppose they have to abandon the game after three tosses, with Blaise ahead 2 to 1. How do they divide the pot? The idea is to look at all possible ways the game might have turned out had they played all five rounds. Since Blaise is ahead 2 to 1 after round three, the first three rounds must have yielded two heads and one tail. The remaining two throws can yield

HH HT TH TT

Each of these four is equally likely. In the first (H H), the final outcome is four heads and one tail, so Blaise wins; in the second and the third (H T and T H), the final outcome is three heads and two tails, so again Blaise wins; in the fourth (T T), the final outcome is two heads and three tails, so Pierre wins. This means that in three of the four possible ways the game could have ended, Blaise wins, and in only one possible play does Pierre win. Blaise has a 3-to-1 advantage over Pierre when they abandon the game; therefore, the pot should be divided 3/4 for Blaise and 1/4 for Pierre. Many people, on seeing this solution, object, saying that the first two possible endings (H H and H T) are in reality the same one. They argue that if the fourth throw gives a head, then at that point, Blaise has his three heads and has won, so there would be no fifth throw. Accordingly, they argue, the correct way to think about the end of the game is that there are actually only three possibilities, namely

H TH TT

in which case, Blaise has a 2-to-1 advantage and the pot should be divided 2/3 for Blaise and 1/3 for Pierre, not 3/4 and 1/4. This reasoning is incorrect, but it took Pascal and Fermat some time to resolve this issue. Their colleagues, whom they consulted as they wrestled with the matter, had differing opinions. So if you are one of those people who finds this alternative argument appealing (or even compelling), take heart; you are in good company (though still wrong).

The issue behind the dilemma here is complex and lies at the heart of probability theory. The question is, What is the right way to think about the future (more accurately, the range of possible futures) and model it mathematically?
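Devlin’s counting argument is easy to verify mechanically. A sketch of the enumeration (the bookkeeping is mine, not Devlin’s):

```python
from itertools import product

# Blaise (heads) leads Pierre (tails) 2-1 after three of five tosses.
# Enumerate the 2^2 equally likely ways the last two tosses could go.
endings = list(product("HT", repeat=2))
assert len(endings) == 4  # HH, HT, TH, TT

# Blaise wins the match iff he reaches 3 heads in total.
blaise_wins = sum(1 for e in endings if 2 + e.count("H") >= 3)
assert blaise_wins == 3  # HH, HT, TH -> Blaise; only TT -> Pierre

# So the pot should be divided 3/4 to Blaise, 1/4 to Pierre.
```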

The key insight was one that Cardano had already flailingly grasped at, but it was difficult to understand even for Pascal:

As I observed earlier in this chapter, Cardano had already realized that the key was to look at the number of points each player would need in order to win, not the points they had already accumulated. In the second section of his letter to Fermat, Pascal acknowledged the tricky point we just encountered ourselves, that you have to look at all possible ways the game could have played out, ignoring the fact that the players would normally stop once one person had clearly won. But Pascal’s words make clear that he found this hard to grasp, and he accepted it only because the great Fermat had explained it in his previous letter.

Elsewhere, Keith Devlin writes:

Today, we would use the word probability to refer to the focus of Pascal and Fermat’s discussion, but that term was not introduced until nearly a century after the mathematicians’ deaths. Instead, they spoke of “hazards,” or number of chances. Much of their difficulty was that they did not yet have the notion of mathematical probability—because they were in the process of inventing it.

From our perspective, it is hard to understand just why they found it so difficult. But that reflects the massive change in human thinking that their work led to. Today, it is part of our very worldview that we see things in terms of probabilities.

# Extensions

## Handing over to mathematics

To solve a philosophical problem is to take it out of the realm of philosophy. Once the fundamental methodology is agreed upon, the question can be spun off into its own independent field.

The development of probability is often considered part of Pascal’s mathematical rather than philosophical work. But I think the mathematisation of probability is in an important sense philosophical. In another post, I write much more about why successful philosophy often looks like mathematics in retrospect.

After Pascal and Fermat’s breakthrough, things developed very fast, highlighting once again how crucial that initial step was.

Keith Devlin writes:

In 1654, Pascal had struggled hard to understand why Fermat counted endings of the unfinished game that would never have arisen in practice (“it is not a general method and it is good only in the case where it is necessary to play exactly a certain number of times”). Just fifteen years later, in 1669, Christian Huygens was using axiom-based abstract mathematics on top of statistically processed data tables to determine the probability that a sixteen-year-old young man would die before he reached thirty-six.

After the crucial first step of formalisation, probability was ripe to be handed over to mathematicians. SEP writes:

These early calculations [of Pascal, Fermat and Huygens] were considerably refined in the eighteenth century by the Bernoullis, Montmort, De Moivre, Laplace, Bayes, and others (Daston 1988; Hacking 2006; Hald 2003).

For example, the crucial idea of conditional probability was developed. According to MathOverflow, in the 1738 second edition of The Doctrine of Chances, de Moivre writes,

The Probability of the happening of two Events dependent, is the product of the Probability of the happening of one of them, by the Probability which the other will have of happening, when the first shall be consider’d as having happened; and the same Rule will extend to the happening of as many Events as may be assigned.
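De Moivre’s rule is the modern identity $$P(A \cap B) = P(A) \, P(B|A)$$. A sketch checking it exactly on a toy example (the two-dice events are my own choice, purely for illustration):

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair dice, 36 equally likely outcomes.
space = list(product(range(1, 7), repeat=2))

def p(event):
    """Exact probability of an event (a predicate on outcomes)."""
    return Fraction(sum(1 for o in space if event(o)), len(space))

A = lambda o: o[0] % 2 == 0        # first die is even
B = lambda o: o[0] + o[1] == 7     # the sum is seven (dependent on A's die)

# Conditional probability of B given A, by restricting the space to A:
p_B_given_A = Fraction(sum(1 for o in space if A(o) and B(o)),
                       sum(1 for o in space if A(o)))

# de Moivre: P(A and B) = P(A) * P(B|A)
assert p(lambda o: A(o) and B(o)) == p(A) * p_B_given_A
```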

People began to get it, philosophically speaking. We begin to see quotes that, unlike those of Cicero, sound decidedly modern. In his book Ars conjectandi (The Art of Conjecture, 1713), Jakob Bernoulli wrote:

To conjecture about something is to measure its probability. The Art of Conjecturing or the Stochastic Art is therefore defined as the art of measuring as exactly as possible the probabilities of things so that in our judgments and actions we can always choose or follow that which seems to be better, more satisfactory, safer and more considered.

Keith Devlin writes:

Within a hundred years of Pascal’s letter, life-expectancy tables formed the basis for the sale of life annuities in England, and London was the center of a flourishing marine insurance business, without which sea transportation would have remained a domain only for those who could afford to assume the enormous risks it entailed.

## Axiomatisation

Much later, probability theory was put on an unshakeable footing with Kolmogorov’s axioms.

# Counter-intuitive implications of probability theory

I’ve given many examples of how people used to be confused about probability. In case you find it hard to empathise with these past thinkers, I should remind you that even today probability theory can be hard to grasp intuitively.

## The conjunction fallacy

The most often-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman. Although the description and person depicted are fictitious, Amos Tversky’s secretary at Stanford was named Linda Covington, and he named the famous character in the puzzle after her.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.

The majority of those asked chose option 2. However, the probability of two events occurring together (in “conjunction”) is always less than or equal to the probability of either one occurring alone.
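The inequality follows directly from the probability axioms. A sketch on a toy finite space (the attributes and their equiprobability are hypothetical, purely for illustration):

```python
from fractions import Fraction
from itertools import product

# A tiny finite probability space over two hypothetical attributes:
# (is a bank teller, is active in the feminist movement).
outcomes = list(product([True, False], repeat=2))
weight = Fraction(1, len(outcomes))  # equiprobable, for illustration

p_teller = sum(weight for (t, f) in outcomes if t)
p_teller_and_feminist = sum(weight for (t, f) in outcomes if t and f)

# A conjunction can never be more probable than either conjunct:
assert p_teller_and_feminist <= p_teller
```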

## The Monty Hall problem

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Vos Savant’s response was that the contestant should switch to the other door (vos Savant 1990a). Under the standard assumptions, contestants who switch have a 2/3 chance of winning the car, while contestants who stick to their initial choice have only a 1/3 chance.

The given probabilities depend on specific assumptions about how the host and contestant choose their doors. A key insight is that, under these standard conditions, the host’s deliberate action reveals information about doors 2 and 3 that was not available at the beginning of the game, when the player chose door 1: it adds value to the door the host chose not to eliminate, but not to the door originally chosen by the contestant.
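The 2/3 versus 1/3 split can be confirmed by simulation under the standard assumptions (a sketch; the helper function and seed are mine):

```python
import random

def play(switch, rng):
    """One round under the standard assumptions: the car is placed
    uniformly, and the host always opens a goat door the contestant
    did not pick."""
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
switch_wins = sum(play(True, rng) for _ in range(n)) / n
stay_wins = sum(play(False, rng) for _ in range(n)) / n

# Switching wins about 2/3 of the time, staying about 1/3:
assert 0.64 < switch_wins < 0.69
assert 0.31 < stay_wins < 0.36
```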

## The mammography problem

Yudkowsky:

1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?

What do you think the answer is? If you haven’t encountered this kind of problem before, please take a moment to come up with your own answer before continuing.

Next, suppose I told you that most doctors get the same wrong answer on this problem - usually, only around 15% of doctors get it right. (“Really? 15%? Is that a real number, or an urban legend based on an Internet poll?” It’s a real number. See Casscells, Schoenberger, and Grayboys 1978; Eddy 1982; Gigerenzer and Hoffrage 1995; and many other studies. It’s a surprising result which is easy to replicate, so it’s been extensively replicated.)

Most doctors estimate the probability to be between 70% and 80%. The correct answer is 7.8%.
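The correct answer drops straight out of Bayes’ theorem, applied to the numbers quoted above (a sketch; variable names are mine):

```python
# Bayes' theorem on the quoted numbers.
p_cancer = 0.01                # prior: 1% of women screened have cancer
p_pos_given_cancer = 0.80      # sensitivity: 80% true-positive rate
p_pos_given_healthy = 0.096    # 9.6% false-positive rate

# Total probability of a positive mammography:
p_pos = (p_cancer * p_pos_given_cancer
         + (1 - p_cancer) * p_pos_given_healthy)

# Posterior probability of cancer given a positive result:
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos

assert abs(p_cancer_given_pos - 0.078) < 0.001  # ~7.8%, not 70-80%
```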

1. More on Cardano, in Section 4.3 of Hald:

[Cardano’s] De Ludo Aleae is a treatise on the moral, practical, and theoretical aspects of gambling, written in colorful language and containing some anecdotes on Cardano’s own experiences. Most of the theory in the book is given in the form of examples from which general principles are or may be inferred. In some cases Cardano arrives at the solution of a problem through trial and error, and the book contains both the false and the correct solutions. He also tackles some problems that he cannot solve and then tries to give approximate solutions. […] In Chap. 14, he defines the concept of a fair game in the following terms:

So there is one general rule, namely, that we should consider the whole circuit [the total number of equally possible cases], and the number of those casts which represents in how many ways the favorable result can occur, and compare that number to the remainder of the circuit, and according to that proportion should the mutual wagers be laid so that one may contend on equal terms.

March 31, 2018

# Philosophy success story III: possible worlds semantics

This is part of my series on success stories in philosophy. See this page for an explanation of the project and links to other items in the series.

# Intensions rescued from darkness

Grant that “animals with a kidney” and “animals with a heart” designate the same set. They have the same extension. Yet their meaning is clearly different.1 In On Sense and Reference, (“Über Sinn und Bedeutung”, 1892) Frege had already noticed this.

Classical predicate logic’s achievement was to give a precise and universal account of how the designation of a sentence depends on the designation of its parts. It was a powerful tool for both deduction and clarification, revealing the ambiguity of ordinary language. I discuss this in detail in the first success story.

Classical logic was developed to model the reasoning needed in mathematics, where the difference between meaning and designation is unimportant. Outside of mathematics, where meaning and designation can come apart, classical logic was inadequate. A formal account of meaning was lacking. Frege called it sense (“Sinn”). According to Sam Cumming, “Frege left his notion of sense somewhat obscure”. Frege appeared to endorse the criterion of difference for senses:

Two sentences S and S* differ in sense if and only if some rational agent who understood both could, on reflection, judge that S is true without judging that S* is true.

This is not adequately formal. Letting meaning depend on the conclusions of some “rational agent” leaves it at the level of intuition. The criterion does not even attempt to give a formal model of meaning; it simply gives a condition for meanings to differ.

Meaning began to seem metaphysically suspect, like a ghostly “extra” property tacked on to every predicate. SEP tells us:

Intensional entities have of course featured prominently in the history of philosophy since Plato and, in particular, have played natural explanatory roles in the analysis of intentional attitudes like belief and mental content. For all their prominence and importance, however, the nature of these entities has often been obscure and controversial and, indeed, as a consequence, they were easily dismissed as ill-understood and metaphysically suspect “creatures of darkness”2 (Quine 1956, 180) by the naturalistically oriented philosophers of the early- to mid-20th century.

The contribution of possible worlds semantics was to give a precise formal description of these “creatures of darkness”, bringing them into the realm of respectability.

Simply: intensions are extensions across possible worlds.

Sider (Logic for Philosophy p.290) writes:

we relativize the interpretation of predicates to possible worlds. The interpretation of a two-place predicate, for example, was in nonmodal predicate logic a set of ordered pairs of members of the domain; now it is a set of ordered triples, two members of which are in the domain, and one member of which is a possible world. When $$\langle u_1, u_2, w \rangle$$ is in the interpretation of a two-place predicate $$R$$, that represents $$R$$’s applying to $$u_1$$ and $$u_2$$ in possible world $$w$$. This relativization makes intuitive sense: a predicate can apply to some objects in one possible world but fail to apply to those same objects in some other possible world. These predicate-interpretations are known as “intensions”. The name emphasizes the analogy with extensions, which are the interpretations of predicates in nonmodal predicate logic. The analogy is this: the intension of a predicate can be thought of as determining an extension within each possible world.
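Sider’s picture can be sketched directly: an intension is an assignment of an extension to each possible world. The worlds, animals, and valuations below are invented for illustration, but they mirror the kidney/heart example above:

```python
# Sketch: model an intension as a mapping from possible worlds to
# extensions (sets of individuals). "w_actual" is the actual world;
# "w_other" is a hypothetical world where a swan has a heart but no kidney.

has_heart = {
    "w_actual": {"dog", "swan"},
    "w_other": {"dog", "swan"},
}
has_kidney = {
    "w_actual": {"dog", "swan"},
    "w_other": {"dog"},
}

# Same extension at the actual world (co-extensive predicates)...
assert has_heart["w_actual"] == has_kidney["w_actual"]

# ...but different intensions, capturing the difference in meaning:
assert has_heart != has_kidney
```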

# Applications

## Future contingents

Aristotle famously used the case of a sea-battle to (seemingly) argue against the law of the excluded middle:

Let me illustrate. A sea-fight must either take place to-morrow or not, but it is not necessary that it should take place to-morrow, neither is it necessary that it should not take place, yet it is necessary that it either should or should not take place to-morrow. Since propositions correspond with facts, it is evident that when in future events there is a real alternative, and a potentiality in contrary directions, the corresponding affirmation and denial have the same character.

This is the case with regard to that which is not always existent or not always nonexistent. One of the two propositions in such instances must be true and the other false, but we cannot say determinately that this or that is false, but must leave the alternative undecided. One may indeed be more likely to be true than the other, but it cannot be either actually true or actually false. It is therefore plain that it is not necessary that of an affirmation and a denial one should be true and the other false. For in the case of that which exists potentially, but not actually, the rule which applies to that which exists actually does not hold good. The case is rather as we have indicated.

People appear to have been confused about this for many centuries. It doesn’t help that Aristotle wrote very ambiguously. Colin Strang (1960) tells us:

VERY briefly, what Aristotle is saying in De Interpretatione, chapter ix is this: if of two contradictory propositions it is necessary that one should be true and the other false, then it follows that everything happens of necessity; but in fact not everything happens of necessity; therefore it is not the case that of two contradictory propositions it is necessary that one should be true and the other false; the propositions for which this does not hold are certain particular propositions about the future.

The reader is warned that what Aristotle is saying is ambiguous (cf. Miss Anscombe, loc. cit. p. 1).

SEP tells us:

The interpretative problems regarding Aristotle’s logical problem about the sea-battle tomorrow are by no means simple. Over the centuries, many philosophers and logicians have formulated their interpretations of the Aristotelian text (see Øhrstrøm and Hasle 1995, p. 10 ff.).

The SEP article is very long, and features Leibniz and some pretty funky-looking graphs. I recommend it if you want to experience some confusion.

Aristotle could be taken to reason thus:

1. If Battle, then it cannot be that No Battle
2. If it cannot be that No Battle, then necessarily Battle
3. Therefore, if Battle, then necessarily Battle

But this is an obvious modal fallacy, trading on the ambiguity of (1) between

• The true statement $$\Box (B \lor \neg B)$$ which implies $$\Box(B \rightarrow \neg\neg B)$$
• The false statement $$(B \rightarrow \Box \neg\neg B) \iff (B \rightarrow \Box B)$$

Philosophy is littered with variations on this confusion between necessity of the consequence and necessity of the consequent.
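The fallacy can be made vivid with a toy Kripke-style model (my own construction, assuming a universal accessibility relation, as in S5): in a two-world model where the battle happens only in $$w_1$$, the necessity-of-the-consequence reading $$\Box(B \rightarrow \neg\neg B)$$ holds everywhere, while the necessity-of-the-consequent reading $$B \rightarrow \Box B$$ fails at $$w_1$$.

```python
# Propositions are modeled as sets of worlds where they hold.

WORLDS = {"w1", "w2"}
B = {"w1"}  # worlds where a sea battle takes place

def box(prop_worlds):
    """Box P: the worlds from which P holds everywhere
    (universal accessibility)."""
    return WORLDS if prop_worlds == WORLDS else set()

def implies(p, q):
    """P -> Q holds at a world unless P holds and Q fails there."""
    return (WORLDS - p) | q

# Necessity of the consequence: Box(B -> ~~B), where ~~B is just B.
consequence = box(implies(B, B))   # holds at every world
# Necessity of the consequent: B -> Box(B).
consequent = implies(B, box(B))    # fails at w1, where B is true but not necessary

print(consequence == WORLDS)  # True
print("w1" in consequent)     # False
```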

## Modality de dicto vs modality de re

As the SEP page on Medieval theories of modality will amply demonstrate, confusion reigned long after Aristotle’s day. Quine (Word and Object) was baffled by talk of a difference between necessary and contingent attributes of an object, and used some quite fallacious arguments in attacking that difference:

Perhaps I can evoke the appropriate sense of bewilderment as follows. Mathematicians may conceivably be said to be necessarily rational and not necessarily two-legged; and cyclists necessarily two-legged and not necessarily rational. But what of an individual who counts among his eccentricities both mathematics and cycling? Is this concrete individual necessarily rational and contingently two-legged or vice versa? Just insofar as we are talking referentially of the object, with no special bias towards a background grouping of mathematicians as against cyclists or vice versa, there is no semblance of sense in rating some of his attributes as necessary and others as contingent. Some of his attributes count as important and others as unimportant, yes, some as enduring and others as fleeting; but none as necessary or contingent.

SEP writes: “Most philosophers are now convinced, however, that Quine’s “mathematical cyclist” argument has been adequately answered by Saul Kripke (1972), Alvin Plantinga (1974) and various other defenders of modality de re.”

And elsewhere:

(15) Algol is a dog essentially: $$\Box (\exists x \, (x = a) \rightarrow Da)$$

Sentences like (15) in which properties are ascribed to a specific individual in a modal context are said to exhibit modality de re (modality of the thing). Modal sentences that do not, like

Necessarily, all dogs are mammals: $$\Box \forall x (Dx \rightarrow Mx)$$, are said to exhibit modality de dicto (roughly, modality of the proposition).

As Plantinga writes, Quine has us confused:

The essentialist, Quine thinks, will presumably accept (35) Mathematicians are necessarily rational but not necessarily bipedal and (36) Cyclists are necessarily bipedal but not necessarily rational.

But now suppose that (37) Paul J. Swiers is both a cyclist and a mathematician. From these we may infer both (38) Swiers is necessarily rational but not necessarily bipedal and (39) Swiers is necessarily bipedal but not necessarily rational

which appear to contradict each other twice over. This argument is unsuccessful as a refutation of the essentialist. For clearly enough the inference of (39) from (36) and (37) is sound only if (36) is read de re; but, read de re, there is not so much as a ghost of a reason for thinking that the essentialist will accept it.

But possible worlds semantics also illuminates the intuition that was likely behind Quine’s dismissal of de re modality. SEP:

Possible world semantics provides an illuminating analysis of the key difference between [modality de re and modality de dicto]: The truth conditions for both modalities involve a commitment to possible worlds; however, the truth conditions for sentences exhibiting modality de re involve in addition a commitment to the meaningfulness of transworld identity, the thesis that, necessarily, every individual (typically, at any rate) exists and exemplifies (often very different) properties in many different possible worlds.

Beautiful.
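The de dicto / de re contrast can also be illustrated with a toy model (my own, purely illustrative, reusing the world-relativized interpretations from above): transworld identity lets us fix an individual in the actual world and follow that same individual across worlds.

```python
# Each world assigns properties to the same individuals (transworld identity).

WORLDS = ["w1", "w2"]

# Which individuals satisfy each predicate in each world.
dog    = {"w1": {"algol"}, "w2": set()}   # Algol is a dog in w1 only
mammal = {"w1": {"algol"}, "w2": set()}

# De dicto: Box ∀x (Dx -> Mx) -- check the implication inside each world.
de_dicto = all(all(x in mammal[w] for x in dog[w]) for w in WORLDS)

# De re: ∀x (Dx -> Box Dx) -- fix each dog in the actual world w1,
# then ask whether *that same individual* is a dog in every world.
de_re = all(all(x in dog[w] for w in WORLDS) for x in dog["w1"])

print(de_dicto)  # True: in every world, every dog there is a mammal
print(de_re)     # False: Algol fails to be a dog in w2
```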

1. Ordinary-language predicates can be ambiguous between sense and reference. Ordinary-language names can also be ambiguous in the same way, as with “Hesperus = Phosphorus”. But Kripke himself (!) didn’t appear to see this, and it took the development of two-dimensional semantics (Stanford, see also Sider’s Logic for Philosophy, chapter 10, and Chalmers) to make this clear. I don’t count this as a success story because 2D semantics has yet to gain consensus approval.

2. In Quantifiers and Propositional Attitudes (1956) Quine wrote: “Intensions are creatures of darkness, and I shall rejoice with the reader when they are exorcised, but first I want to make certain points with help of them.” My understanding is that Quine had a pre-possible worlds understanding of “intensions”, equivalent to Frege’s senses and hence still informal. So in today’s usage the quote would be rendered as “Meanings are creatures of darkness”. Quine was writing in 1956. Kripke published Semantical Considerations on Modal Logic in 1963.

March 31, 2018