# How I budgeted my time for Oxford final exams

PPE finals are eight high-stakes examinations covering two years’ worth of material, one examination per module. Everyone dedicates at least their last term to revision, during which no new modules are added. Many PPE students even finish their eight modules two terms before exams (at the end of Michaelmas term of their third year), which theoretically leaves six months between the time they stop learning new material and the time they are examined.

How much explicit planning should go into finals revision? I am generally wary of over-planning, especially with rigid, brittle plans over long time horizons. The plan often ends up being inadequate, not following the plan causes guilt, and constantly revising it costs a lot of effort¹. On the other hand, when the stakes and the risk of akrasia are both high, planning could have outsize returns.

There are many aspects of planning for exams. Here I focus on just one: budgeting time between different topics for revision. This is especially difficult to do with raw intuition: it always feels like you’ve got ages left until exams, until you don’t. It’s typical for people to take a leisurely stroll through material that they enjoy, deepening their understanding, which leaves little time for the more difficult and aversive stuff. (Given how easily I get nerd-sniped by my favourite topics, this is an especially worrying pitfall for me.)

To budget my time, I used this spreadsheet, which you can copy and adapt². I’ll discuss some of its features now.

# Dividing the remaining time

The most basic feature is a simple reality check: how many days until exams? (This is calculated dynamically using =TODAY().) Dividing by eight, how many per module? This simple calculation could be enough to snap you out of the vague feeling that there is “a lot” of time left. Maybe there really is ample time. Why not find out exactly how much, so you can use it best? Maybe once you do the maths you realise there isn’t. In that case this simple division provides a salutary wake-up call.

Further adjustments could be useful. I’d recommend planning some days off, like one day a week, and certainly a few full days off just before exams (cramming is counter-productive). If you’re travelling or doing any projects, subtract those days explicitly from the total.

If your modules themselves have a modular structure, like independent chapters of a textbook, you might consider further subdividing the time between them. Maybe you’ll get something like 0.12 days per chapter, which at five hours a day is 36 minutes, a number concrete enough that your System 1 might actually be able to process it. In my experience it’s pretty rare for intellectual material to actually have such independent chunks when you zoom in all the way, even though it might superficially be organised into discrete topics.
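For the numerically inclined, the whole reality check fits in a few lines. A minimal sketch in Python, with made-up dates and a hypothetical 12-chapter module (the function name and all numbers are illustrative, not from my actual plan):

```python
from datetime import date

def revision_budget(exam_date, today, modules=8,
                    days_off_per_week=1, buffer_days=3):
    """Split the remaining calendar days equally between modules,
    after subtracting planned days off and a pre-exam buffer."""
    total_days = (exam_date - today).days
    weeks = total_days // 7
    working_days = total_days - buffer_days - weeks * days_off_per_week
    return total_days, working_days, working_days / modules

# Hypothetical dates, for illustration only.
total_days, working_days, per_module = revision_budget(
    exam_date=date(2018, 6, 7), today=date(2018, 4, 1))

# At five hours of revision a day, a module split into 12 chapters
# gets this many minutes per chapter.
minutes_per_chapter = per_module / 12 * 5 * 60
```

The point is not precision but the jolt of seeing a small, concrete number.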

# Allocating time between modules in proportion to their variance

If you’re trying to maximise your expected total mark, allocating more time to higher-variance papers makes sense. To be precise, if your mark in each paper equals that paper’s standard deviation times the square root of the time allocated to it, the sum of the marks is maximised by allocating time in proportion to each paper’s variance.
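The optimisation is a one-line Lagrange argument: maximising $$\sum_i \sigma_i \sqrt{t_i}$$ subject to $$\sum_i t_i = T$$ requires the marginal returns $$\sigma_i / (2\sqrt{t_i})$$ to be equal across papers, i.e. $$t_i \propto \sigma_i^2$$. A quick numerical check, with made-up standard deviations:

```python
import math

def total_marks(sigmas, times):
    """Sum of marks when mark_i = sigma_i * sqrt(t_i)."""
    return sum(s * math.sqrt(t) for s, t in zip(sigmas, times))

sigmas = [4.0, 7.0, 10.0, 13.0]   # made-up standard deviations
T = 100.0                          # total time budget

# Allocate in proportion to variance (sigma squared).
var_total = sum(s * s for s in sigmas)
optimal = [T * s * s / var_total for s in sigmas]

# The variance-proportional rule beats an equal split,
# and achieves exactly sqrt(T * sum of variances).
equal = [T / len(sigmas)] * len(sigmas)
assert total_marks(sigmas, optimal) > total_marks(sigmas, equal)
assert abs(total_marks(sigmas, optimal) - math.sqrt(T * var_total)) < 1e-6
```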

I looked up the variance in marks for each paper since 2015, the first year this information was available in the PPE examiner’s reports. I then averaged the three variances³ for each paper. For a PPE student taking my eight papers, I computed how much of the total variance has historically been contributed by each paper.

The results are pretty striking. For example Game Theory contributed more than four times as much variance as Knowledge and Reality. I think these fractions are a better starting point for time budgeting than allocating time equally between the modules.

But allocating time purely by the variance has some pretty obvious flaws. For starters, there are strong selection effects: some papers are mandatory for everyone, while others select for the most capable and interested students. For instance, if all but the nerdiest of nerds avoid econometrics, we should expect the variance to be artificially low for that paper. Then there is the fact that some modules build on each other while others do not: econometrics is basically an advanced version of quantitative economics, so there is little point doing quantitative economics-specific revision over and above what I do for econometrics. And finally you need to adjust for factors idiosyncratic to you: I basically snoozed through Macroeconomics last year while I was busy with an unrelated research project. I ended up with target allocations reflecting all of these adjustments; the percentages are preserved in the spreadsheet.

# Planning for humans: built-in updating

A crucial feature of a good revision plan is that it adapts gracefully when you don’t get as much done as you hoped. You shouldn’t have to scramble to adjust your plan after the fact, cursing your weakness of will. It should be baked into the design from the start.

On one view of plans, they are what you should do, and a feeling of guilt when you don’t follow them is not only natural but appropriate. A view that I’ve often found more fruitful is to treat plans as just another tool of instrumental rationality. This may seem like an obvious point, but for many people, myself included, it’s much harder to grasp on an intuitive level, and harder still to implement. On this topic I highly recommend reading Nate Soares’ replacing guilt series.

When you fall behind your initial plan, it can be tempting to think you can accelerate to make up for lost time. But I think this is rarely realistic. When you don’t work as hard as you had planned, this constitutes evidence that your plan is too ambitious for the future as well as the past. I have often ignored this evidence and paid the price for it. Accelerating is an especially bad idea when you need to allocate your effort over weeks rather than days or hours. Like many beginning endurance runners, if you run that fast you’ll end up collapsing before the finish line.

I used the Pomodoro technique: 25-minute segments of focused work followed by a five-minute break. Apart from its other benefits, this technique provides a nice and tangible unit to measure time. At any point in time I want to be solving this equation:

$\frac{A_i+S_i}{P+\sum_i S_i} = T_i$

Where $$P$$ is the number of pomodoros left until exams, $$S_i$$ is the number of pomodoros spent on module $$i$$ so far, $$T_i$$ is the overall target allocation for module $$i$$, and $$A_i$$ is the allocation (out of $$P$$) to now be spent on module $$i$$. For this, I need to manually keep track of an additional variable, $$S_i$$. I do this on Sheet2.

This graph shows $$A_i/S_i$$:

We can do interesting things with $$P$$. The simplest estimate of $$P$$ is a constant number of pomodoros per day times the number of days left until exams (running the entire marathon at the same speed). This is sufficient to give you the first feature of the equation: automatic adjustment to the passing of time.

Tracking $$S_i$$ enables a second nice feature. By computing your average number of pomos per day ($$\sum_i S_i$$ divided by the number of days since you started revising), and extrapolating it, you obtain a realistic estimate of $$P$$. This gives you automatic adjustment for your actual capacity to do work. You needn’t slavishly extrapolate this average. But it should feed into your estimate of $$P$$. If you plan to work 10 pomos a day but so far you’ve only done an average of 4 pomos a day, that should raise a red flag.
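Solving the target equation for $$A_i$$ gives $$A_i = T_i\,(P + \sum_j S_j) - S_i$$, and extrapolating your measured pace gives the realistic $$P$$. A sketch in Python with invented numbers (the variable and function names are mine, not from the spreadsheet):

```python
def remaining_allocation(P, spent, targets):
    """Solve (A_i + S_i) / (P + sum_j S_j) = T_i for each A_i,
    the number of the remaining P pomodoros to give module i."""
    total = P + sum(spent)
    return [t * total - s for t, s in zip(targets, spent)]

def realistic_P(spent, days_so_far, days_left):
    """Extrapolate your actual pace rather than the planned one."""
    pace = sum(spent) / days_so_far      # average pomodoros per day
    return pace * days_left

# Invented numbers, for illustration.
spent = [40, 25, 10, 5]                  # S_i per module so far
targets = [0.4, 0.3, 0.2, 0.1]           # T_i, summing to 1
P = realistic_P(spent, days_so_far=20, days_left=30)   # pace of 4/day
alloc = remaining_allocation(P, spent, targets)

# Sanity check: if the targets sum to 1, the allocations sum to P.
assert abs(sum(alloc) - P) < 1e-9
```

A negative $$A_i$$ means you have already overshot the target for module $$i$$ and can simply stop revising it.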

Diminishing returns suggest that it’s best to always work on the module for which $$T_i/S_i$$ (depicted below) is largest.

# The budget as a part of a larger decision-making procedure

I often disregarded the numbers based on hunches and intuitions, especially when I felt that my intuitions were capturing some unmodeled factor (for instance when I felt confident about a topic without having spent much time revising it, or when I postponed revision on exams which came later).

I fully expected that I would do this. Getting up every day and following the dictates of the spreadsheet would have been an instance of Spock-like “straw vulcan” rationality. Instead, I viewed the model and the intuitive view as two different tools at my disposal, or as two advisors, each with her own bias.

I thought of the division of labour in something like the following way. The whole case for making an explicit, numerical budget is that the intuitive System 1 is about as good at long-term planning as a toddler in a casino. The spreadsheet is excellent at remembering what you did, keeping track of the long-term goals, feeding historical data into your decision making process, and most importantly, it does not self-delude. However, it is a woefully simplified model of the actual task of taking eight exams while trapped in a human body with a fleshy brain. Millions of variables are boiled down to a handful. The model is computationally puny next to the awesome power of your System 1, whose inclinations are based on a great deal of contextual information which your brain constantly gobbles up. System 1 shines at taking in a huge amount of relevant information and boiling it down to an up-or-down judgement: Game Theory today, yea or nay? The spreadsheet is good at correcting some of the biases of System 1 and at giving enough weight to the data on variances, which is crucial but not at all salient.

# How useful was all this planning?

Looking back, how much did the budget actually change my decisions? I ended up using the model mostly as a guardrail, reminding me to allocate more time to a module when its ratio $$A_i/S_i$$ became something outrageous like 300%. I didn’t pay much attention to the exact numbers on a day-to-day basis. But averaging over the long run, I think the budget substantially affected my decisions. In particular, it made me spend much less time on low-variance philosophy papers and more on game theory and microeconomics.

Another intended purpose of the spreadsheet was to help me smooth my effort more over time. Looking back, I’m a bit disappointed by how much harder I worked when finals were approaching than when they were more distant. But it probably would have been even worse with less planning. My best guess is that the spreadsheet had a minor positive impact in this respect. To be fair, effort and consumption smoothing is, in general, a very difficult task for human motivation.

My final allocations turned out to be pretty close to the targets even though I didn’t strongly intend them to.

Given the high stakes and the relatively low time cost, I think this project was amply worth it. Setting up the spreadsheet took only a few hours. The main cost was the hassle of keeping track of daily time use. The single biggest win was to do the research on the variances. For this insight I thank my friend Rune Tybirk Kvist, who completed finals one year before me.

1. That’s why I like Complice, which makes you choose fresh, relevant actions every day and prohibits pile-ups of unfinished tasks.

2. I’ve removed most of the data to protect my privacy, but I’ve kept the percentages for illustration in Sheet1_hardcoded_data. You’ll also come across some negative numbers because the beginning of exams is now in the past. Data input cells are in orange. Sheets 2015, 2016, and 2017 are where the data from examiner’s reports is stored.

3. Not weighted by number of candidates each year, too much of a hassle for something I expect would make little difference.

June 11, 2018

# Philosophy frivolity

Oh love
why are you
so elusive?
This morning
you disappeared
from my ontology.
I too can be reduced
to tears.

— Oxford, September 2017

I met a Dutch gambler in a dream;
she had a raw beauty, Savage and mischievous.

We bet and all her wagers did seem
so full of promise, innocent and auspicious.

But no matter the twists and turns of chance

Oh I fell for her, but what I felt for her
was only love.

For even the reasoner’s art
bends to the treasonous heart.

— Oxford, May 2018

May 29, 2018

# Mindful tech: 22 concrete tips

I often spend more time on distracting websites than I would like. Far beyond a mere productivity issue, aimless browsing and scrolling has become a major source of unhappiness for me.

If you want to do something about this, it’s important to know the forces you’re up against. These apps deliver stimuli that have been optimised more than almost anything in human history. Billions of dollars of incentives and powerful A/B testing have conspired to produce what is perhaps literally the most addictive possible set of 700x1200 pixels to display on your phone.

Over a year ago now, I started thinking about how to regain control. Let me share some of my tricks. (I also endorse everything on this list.) Some of these will be more extreme than others. You could view them as a menu of options to consider; I hope there will be something for everyone. Perhaps you’re just looking to waste a bit less time every day on social media. Or maybe you want to develop full-fledged counter-measures to the onslaught of the attention industry.

1. Mindfulness
2. Android
3. Facebook
4. Browser
5. Windows

# Mindfulness

The first thing to say is that these tools and techniques best go hand in hand with broader mindfulness training. These tricks can make it a little bit easier to resist the pull of social media on a distracted or anxious mind. But as long as the underlying distraction, anxiety, or craving is still there, the monkey in you will eventually find a way to evade the obstacles you have put in its way. Paul Christiano writes: “The monkey executes a set of reflexes trained to maximize a complex reward function, which was in turn tuned by evolution to maximize reproductive fitness.” If what your monkey really wants is to escape the present moment, it will ultimately have its way: “No matter how “dumb” the monkey is, if it is unbiased then there is no free lunch”.

I use Headspace for mindfulness training. To avoid getting distracted right when I want to meditate, I actually have a separate phone (my old one) which I use exclusively for Headspace. Unlike my primary phone, I don’t mind keeping my Headspace phone on my bedside table.

To some extent, of course, Headspace is also optimised for engagement. Sometimes, you can fight fire with fire.

# Android

## Don’t get a new phone

Don’t get a new, faster, shinier phone. Your unappealing, three-year-old Android is your best friend here, as long as it works well for essential uses like Google Maps.

## Constant do not disturb

That thing when your phone buzzes or beeps? I haven’t experienced it in a few years, and it’s been great. You still want alarms though. And the good thing about nobody calling each other any more is that if you do get a call it’s probably important. So set do not disturb to Priority only and make it permanent with Until you turn off Do not disturb. Then in the advanced settings for Do not disturb, set Priority only allows to allow All callers. (Don’t only allow calls from contacts. I’ve missed some important and time-sensitive calls because of that.)

Disabling notifications can be a double-edged sword. You might remove one potential distraction. Or you might end up doing more mindless clicking because now you open the apps regularly because there might be something there. Experiment with both and see what happens.

## Blocking websites

You can use the Trend Micro Security app to block websites. I only weakly recommend it, because the interface is crap and the blocking is pretty shallow: you can always go and edit your settings. You could install another app like AppBlock to block Trend Micro and itself in turn. But apps can be uninstalled very easily and quickly on Android, so you’d probably just uninstall them both if you got a big craving. At some point I had a clever scheme involving blocking the Android system app that uninstalls apps, but it became too brittle. Still, Trend Micro’s blocking can give you a couple of seconds to reconsider whether you really want to open Twitter for the thirty-second time today.

## Home Screen

### White wallpaper

Choose a pure white background. This should make your phone a little bit less appealing.

White wallpaper, before and after (with Nova Launcher)

### Remove shortcuts

Remove all but the least distracting apps from your home screen.

Home screen, before and after

### Nova launcher

Use this custom Android launcher¹ to

• remove the left-hand-side Google feed from the homescreen
• remove the Google search bar
• set the Drawer App Grid to 2x2 (this displays fewer apps at once, and makes it more laborious to find an app)

Removing the search bar, before and after

Setting the drawer app grid, before and after

## Black and white

You can manually set your phone to black and white in the following way. Go to developer options, then Hardware-accelerated rendering and set Simulate colour space to Monochrome.

Black and white, before and after

But colour has genuine uses, like viewing photos. You may also want to keep colour for “virtuous” apps like Headspace. By using Tasker, you can actually automate when the grayscale setting is turned on or off.

• To create a task that sets your phone to grayscale, select Custom Setting, set the type to Secure, set the Name to accessibility_display_daltonizer_enabled and the Value to 1. Set it to 0 to go back to colour.
• Then you can create a Tasker Profile that disables grayscale when you open certain apps and enables it again when you close them.

You can import my setup by simply clicking this link from an Android device that has Tasker installed (XML backup).
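If you’d rather not involve Tasker, the same secure setting can be flipped from a computer. A sketch using adb (this assumes developer options and USB debugging are enabled on the phone, and uses the same `accessibility_display_daltonizer_enabled` key as above):

```shell
# Turn grayscale on (1) or off (0) via Android's secure settings store.
# Requires a device with USB debugging enabled and adb installed.
adb shell settings put secure accessibility_display_daltonizer_enabled 1
adb shell settings put secure accessibility_display_daltonizer_enabled 0
```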

## No animations

Go to Developer options, Drawing, and set all of

• Window animation scale
• Transition animation scale
• Animator duration scale

to Animation off. This will get rid of those smooth and pleasant animations, and make your phone feel more like Windows XP (in a good way).

Animations, before and after

## Track how long you can keep your phone locked

I’ve built a little project in Tasker that tracks how long it has been since you last unlocked your phone. If it’s been more than 30 minutes, you get a notification congratulating you. If you unlock the phone, it’s like losing your highscore.

Keeping track of how long your phone has been locked

This is extremely inexpertly built; if I remember correctly it just writes the date and time to a file every time you lock or unlock the phone. Writing to a file avoids resetting your counter when you turn off your phone. You can import the Tasker project by clicking this link from an Android device that has Tasker installed (XML backup).

I built another Tasker project that relies on AutoNotification to block some apps’ notifications (e.g. WhatsApp), and then unblock them again.

Blocking notifications can be counter-productive (you just end up checking the app all the time). So I don’t recommend it in general. It has only one use case for me: when I’m doing deep work, but I still want to use my phone to listen to music on Spotify.

Warning: this is some hacky shit, very brittle, I don’t really understand how it handles device reboots. If you’re brave enough to try: Magic Tasker link, XML backup.

# Facebook

Facebook is very harmful in some ways, but beneficial in others. Here’s what I like to use. (Many of these tips apply to most social media services.)

## Delete the app from your phone

Duh. And if you find yourself using the mobile web interface, log out, set it to never remember your username or password, and pick a really long and annoying password. Because you can sync Facebook events to your Google calendar, I’ve found I honestly never have a legitimate need for Facebook on my phone.

## In the browser

Here’s their combined effect:

## Buffer

If you just want to post a status update, do it through Buffer instead of going to Facebook and giving it another opportunity to suck you in. Web and Mobile.

## Messenger Lite

I loathe “My Day”. Thankfully there’s Messenger Lite, which removes “My Day” and a bunch of other crap.

## Turn off ‘Active now’

In messenger, it’s in Settings, then Availability.

# Browser

## Grayscale the Web

Install Grayscale the Web. Navigate to the distracting website, and click Save Site. (You have to do this manually for every offending domain).

Grayscale the web, before and after

## Delayed gratification

Install Delayed Gratification for distracting websites. The key thing here is that the 15-30 second delay gives you a chance to reconsider and close the tab, but since it’s only a delay you’re not tempted to circumvent the tool.

Delayed Gratification

## Ascetic monk mode

This stylish extension immediately turns you into an Enlightened One, or comes close.

Ascetic monk mode

More generally, with custom Stylish styles the sky is the limit for customising the web, but that requires some fiddling with CSS. Consider using my fork of ascetic monk mode.

# Windows

## Block distracting services

This is where most of the action for this section will happen. Use a programme like Cold Turkey (Windows) or Self-Control (Mac) to block distractions. This is some deep-level blocking. You can’t circumvent it short of reinstalling the operating system, I think. So start with small experiments, and work your way up.

I’ll give more detail only about Cold Turkey, since that’s what I know. Cold Turkey can block both .exe programmes and websites. You can set different block lists and schedule your lists to automatically activate during certain times. I’ve found it useful to have three lists:

• Quit entirely, for websites that provide no conceivable value
• Distractions, for sites that have some value but tend to suck me in, blocked only during scheduled hours
• Deep work. When doing a deep work session on a single project, I sometimes use this to block everything except a few project-relevant websites for the duration of the session.

For Quit entirely, go to Timers and block the list until 2021 or something.

For Distractions, go to Schedule and block them for a few hours a day at first. Remember, it’s a negotiation between you and the monkey, not an all-out war. These are seriously addictive products. If you block them totally, you might actually re-install the whole OS, or more likely find another device to log in from. What worked for me is to slowly increase the daily window of time during which this list is blocked.

Block Deep work for a few hours as needed.

Don’t forget to go to Settings to lock the schedule.

Cold Turkey

## Remove shortcuts and tiles

Remove shortcuts and tiles for all but the least distracting apps. I’ve removed all shortcuts except Chrome, and all tiles except the calendar and weather tiles.

## Single-use instances for webapps

When you use webapps like Gmail or Google Calendar in a normal browser window, there are several visual cues encouraging you to get distracted: the new tab button, and the address bar (which likely suggests Facebook when you type the single letter f). Instead, use a dedicated window. In Chrome, you need to go to the Settings menu, and then More tools and Add to desktop. Don’t forget to delete the shortcut from your desktop (use search instead). (Personally, I’ve created my own Electron wrappers instead of using Chrome, which eats a lot of memory if you have many extensions.)

Google Calendar, in the browser and as a single-use instance

If you’re still here, congratulations on making it to the end! I wish you contentment and calm.

1. I’ve been told there’s also another launcher, called Siempo, which is designed for mindful use. It’s in Beta but looks like it has some cool features.

April 17, 2018

# Philosophy success story V: Bayesianism

This is part of my series on success stories in philosophy. See this page for an explanation of the project and links to other items in the series.

# Contents

1. Bayesianism: the correct theory of rational inference
   1. Probabilism
   2. Conditionalisation
   3. Justifications for probabilism and conditionalisation
2. Science as a special case of rational inference
3. Previous theories of science
4. The Quine-Duhem problem
5. Uncertain judgements and value of information (resilience)
6. Issues around Occam’s razor

# Bayesianism: the correct theory of rational inference

Unless specified otherwise, by “Bayesianism” I mean normative claims constraining rational credences (degrees of belief), not any descriptive claim. Bayesianism so understood has, I claim, consensus support among philosophers. It has two core claims: probabilism and conditionalisation.

## Probabilism

What is probabilism? (Teruji Thomas, Degrees of Belief, Part I: degrees of belief and their structure.)

Suppose that Clara has some confidence that $$P$$ is true. Then, in so far as Clara is rational:

1. We can quantify credences: we can represent Clara’s credence in $$P$$ by a number, $$Cr(P)$$. The higher the number, the more confident Clara is that $$P$$ is true.
2. More precisely, we can choose these numbers to fit together in a certain way: they satisfy the probability axioms, that is, they behave like probabilities do: (a) $$Cr(P)$$ is always between 0 and 1. (b) $$Cr(\neg P) = 1−Cr(P)$$ (c) $$Cr(P \lor Q) = Cr(P)+Cr(Q)−Cr(P \land Q)$$.

## Conditionalisation

Suppose you gain evidence $$E$$. Let $$Cr$$ be your credences just before and $$Cr_{\text{new}}$$ your credences just afterwards. Then, insofar as you are rational, for any proposition $$P$$: $$Cr_{\text{new}} (P) = \frac{Cr(P \wedge E)}{Cr(E)} \stackrel{\text{def}}{=} Cr(P|E)$$.¹
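On a toy finite space of “worlds”, conditionalisation is just renormalisation over the worlds where the evidence holds. A minimal sketch (the credence numbers are invented):

```python
# Worlds are pairs (is P true?, is E true?) with prior credences.
prior = {
    (True, True): 0.2,
    (True, False): 0.3,
    (False, True): 0.1,
    (False, False): 0.4,
}

# Learning E: renormalise over the worlds where E is true.
cr_E = sum(cr for (_, e), cr in prior.items() if e)              # Cr(E)
cr_P_and_E = sum(cr for (p, e), cr in prior.items() if p and e)  # Cr(P & E)
cr_new_P = cr_P_and_E / cr_E                                     # Cr(P | E)
assert abs(cr_new_P - 2/3) < 1e-9   # 0.2 / 0.3
```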

## Justifications for probabilism and conditionalisation

### Dutch book arguments

The basic idea: an agent whose credences violate probabilism or conditionalisation can be made to accept a series of bets that will lead to a sure loss (such a series of bets is called a Dutch book).

I won’t go into detail here, as this has been explained very well in many places. See for instance, Teruji Thomas, Degrees of Belief II or Earman, Bayes or Bust Chapter 2.
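To make the idea concrete, here is a toy Dutch book against an agent whose credences violate axiom (b), $$Cr(\neg P) = 1 - Cr(P)$$ (the numbers are invented):

```python
# The agent has Cr(P) = 0.6 and Cr(not-P) = 0.6, so her credences
# sum to 1.2. She regards a bet paying 1 if X obtains, priced at
# Cr(X), as fair, so she buys both bets for a total of 1.2.
cr_P, cr_not_P = 0.6, 0.6
cost = cr_P + cr_not_P

# Exactly one of P, not-P obtains, so the bets pay out 1 in total
# in every possible world: a guaranteed loss of 0.2.
for P_true in (True, False):
    payout = (1 if P_true else 0) + (1 if not P_true else 0)
    assert payout - cost < 0
```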

### Cox’s theorem

Bayes or Bust, Chapter 2, p 45:

Jaynes (2011, 1.7 p.17) thinks the axioms formalise “qualitative correspondence with common sense” — but his argument is sketchy and I rather agree with Earman that the assumptions of Cox’s theorem do not recommend themselves with overwhelming force.

### Obviousness argument

Dutch books and Cox’s theorem aside, there’s something to be said for the sheer intuitive plausibility of probabilism and conditionalisation. If you want to express your beliefs as a number between 0 and 1, it just seems obvious that they should behave like probabilities. To me, accepting probabilism and conditionalisation outright feels more compelling than the premises of Cox’s theorem do. “Degrees of belief should behave like probabilities” seems near-tautological.

# Science as a special case of rational inference

Philosophers have long realised that science was extremely successful: predicting the motions of the heavenly bodies, building aeroplanes, producing vaccines, and so on. There must be a core principle underlying the disparate activities of scientists — measuring, experimenting, writing equations, going to conferences, etc. So they set about trying to find this core principle, in order to explain the success of science (the descriptive project) and to apply the core principle more accurately and more generally (normative project). This was philosophy of science.

Scientists are prestigious people in universities. Science, lab coats and all, seems like a specific activity separate from normal life. So it seemed natural that there should be a philosophy of science. This turned out to be a blind alley. The solution to philosophy of science was to come from a far more general theory — the theory of rational inference. This would reveal science as merely a watered-down special case of rational inference.

We will now see how Bayesianism solves most of the problems philosophers of science were preoccupied with. As far as I can tell, this view has wide acceptance among philosophers.

Now let’s review how people were confused and how Bayesianism dissolved the confusion.

# Previous theories of science

## Hypothetico-deductivism

SEP:

In a seminal essay on induction, Jean Nicod (1924) offered the following important remark:

Consider the formula or the law: F entails G. How can a particular proposition, or more briefly, a fact affect its probability? If this fact consists of the presence of G in a case of F, it is favourable to the law […]; on the contrary, if it consists of the absence of G in a case of F, it is unfavourable to this law. (219, notation slightly adapted)

SEP:

The central idea of hypothetico-deductive (HD) confirmation can be roughly described as “deduction-in-reverse”: evidence is said to confirm a hypothesis in case the latter, while not entailed by the former, is able to entail it, with the help of suitable auxiliary hypotheses and assumptions. The basic version (sometimes labelled “naïve”) of the HD notion of confirmation can be spelled out thus:

For any $$h, e, k$$ such that $$h\wedge k$$ is consistent:

• $$e$$ HD-confirms $$h$$ relative to $$k$$ if and only if $$h\wedge k \vDash e$$ and $$k \not\vDash e$$;

• $$e$$ HD-disconfirms $$h$$ relative to $$k$$ if and only if $$h\wedge k \vDash \neg e$$, and $$k \not\vDash \neg e$$;

• $$e$$ is HD-neutral for hypothesis $$h$$ relative to $$k$$ otherwise.

### Hypothetico-deductivism and the problem of irrelevant conjunction

SEP:

The irrelevant conjunction paradox. Suppose that $$e$$ confirms $$h$$ relative to (possibly empty) $$k$$. Let statement $$q$$ be logically consistent with $$e\wedge h\wedge k$$, but otherwise entirely irrelevant for all of those conjuncts. Does $$e$$ confirm $$h\wedge q$$ (relative to $$k$$) as it does with $$h$$? One would want to say no, and this implication can be suitably reconstructed in Hempel’s theory. HD-confirmation, on the contrary, can not draw this distinction: it is easy to show that, on the conditions specified, if the HD clause for confirmation is satisfied for $$e$$ and $$h$$ (given $$k$$), so it is for $$e$$ and $$h\wedge q$$ (given $$k$$). (This is simply because, if $$h\wedge k \vDash e$$, then $$h\wedge q\wedge k \vDash e$$, too, by the monotonicity of classical logical entailment.)

The Bayesian solution:

In the statement below, indicating this result, the irrelevance of $$q$$ for hypothesis $$h$$ and evidence $$e$$ (relative to $$k$$) is meant to amount to the probabilistic independence of $$q$$ from $$h, e$$ and their conjunction (given $$k$$), that is, to $$P(h \wedge q\mid k) = P(h\mid k)P(q\mid k),$$ $$P(e \wedge q\mid k) = P(e\mid k)P(q\mid k)$$, and $$P(h \wedge e \wedge q\mid k) = P(h \wedge e\mid k)P(q\mid k)$$, respectively.

Confirmation upon irrelevant conjunction (ordinal solution) (CIC)
For any $$h, e, q, k$$ and any $$P$$: if $$e$$ confirms $$h$$ relative to $$k$$ and $$q$$ is irrelevant for $$h$$ and $$e$$ relative to $$k$$, then

$C_{P}(h, e\mid k) \gt C_{P}(h \wedge q, e\mid k).$

So, even in case it is qualitatively preserved across the tacking of $$q$$ onto $$h$$, the positive confirmation afforded by $$e$$ is at least bound to quantitatively decrease thereby.
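The inequality is easy to verify numerically for a concrete confirmation measure. A sketch using the difference measure $$d(h,e) = P(h\mid e) - P(h)$$, with invented probabilities and $$k$$ empty:

```python
# e confirms h: P(h|e) = 0.5 > P(h) = 0.3.
p_h, p_e, p_he = 0.3, 0.4, 0.2     # P(h), P(e), P(h & e)
p_q = 0.5                          # q independent of h, e, and h & e

p_h_given_e = p_he / p_e
d_h = p_h_given_e - p_h            # confirmation of h by e

# By independence, P(h&q) = P(h)P(q) and P(h&q|e) = P(q)P(h|e),
# so tacking q onto h scales the confirmation down by P(q).
d_hq = p_q * p_h_given_e - p_q * p_h

assert d_h > 0          # e confirms h
assert 0 < d_hq < d_h   # but confirms h & q strictly less
```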

## Instance confirmation

Bayes or Bust (p. 63):

When Carl Hempel published his seminal “Studies in the Logic of Confirmation” (1945), he saw his essay as a contribution to the logical empiricists’ program of creating an inductive logic that would parallel and complement deductive logic. The program, he thought, was best carried out in three stages: the first stage would provide an explication of the qualitative concept of confirmation (as in ‘E confirms H’); the second stage would tackle the comparative concept (as in ‘E confirms H more than E′ confirms H′’); and the final stage would concern the quantitative concept (as in ‘E confirms H to degree r’). In hindsight it seems clear (at least to Bayesians) that it is best to proceed the other way around: start with the quantitative concept and use it to analyze the comparative and qualitative notions. […]

Hempel’s basic idea for finding a definition of qualitative confirmation satisfying his adequacy conditions was that a hypothesis is confirmed by its positive instances. This seemingly simple and straightforward notion turns out to be notoriously difficult to pin down. Hempel’s own explication utilized the notion of the development of a hypothesis for a finite set $$I$$ of individuals. Intuitively, $$dev_I (H)$$ is what $$H$$ asserts about a domain consisting of just the individuals in $$I$$. Formally, $$dev_I (H)$$ for a quantified $$H$$ is arrived at by peeling off universal quantifiers in favor of conjunctions over $$I$$ and existential quantifiers in favor of disjunctions over $$I$$. Thus, for example, if $$I = \{a,b\}$$ and $$H$$ is $$\forall x \exists y Lxy$$ (e.g., “Everybody loves somebody”), $$dev_I (H)$$ is $$(Laa \lor Lab) \land (Lbb \lor Lba)$$. We are now in a position to state the main definition[] that constitute[s] Hempel’s account:

• E directly Hempel-confirms H iff $$E \vDash dev_I(H)$$, where $$I$$ is the class of individuals mentioned in $$E$$.
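To make the development operation concrete, here is a small sketch of the worked example above. The list-of-clauses representation is my own invention, and the entailment check crudely reads the evidence report as specifying exactly which atoms are true:

```python
# Hempel's development dev_I(∀x ∃y Lxy): restrict the hypothesis to the
# finite domain I by replacing the universal quantifier with a conjunction
# and the existential quantifier with a disjunction.

I = ["a", "b"]

def dev_forall_exists(domain):
    """dev_I(∀x ∃y Lxy): a conjunction (outer list) of disjunctions (inner lists)."""
    return [[f"L{x}{y}" for y in domain] for x in domain]

clauses = dev_forall_exists(I)
assert clauses == [["Laa", "Lab"], ["Lba", "Lbb"]]  # (Laa ∨ Lab) ∧ (Lba ∨ Lbb)

def satisfied(clauses, true_atoms):
    """Crude stand-in for E ⊨ dev_I(H): treat the report E as listing
    exactly the true atoms, and check that every clause is witnessed."""
    return all(any(atom in true_atoms for atom in clause) for clause in clauses)

assert satisfied(clauses, {"Lab", "Lba"})   # everybody loves somebody here
assert not satisfied(clauses, {"Lab"})      # b loves nobody in this report
```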

It’s easy to check that Hempel’s instance confirmation, like Bayesianism, successfully avoids the paradox of irrelevant conjunction. But it’s famously vulnerable to the following problem case.

### Instance confirmation and the paradox of the ravens

The ravens paradox (Hempel 1937, 1945). Consider the following statements:

• $$h = \forall x(raven(x) \rightarrow black(x))$$, i.e., all ravens are black;

• $$e = raven(a) \wedge black(a)$$, i.e., $$a$$ is a black raven;

• $$e^* = \neg black(a^*) \wedge \neg raven(a^*)$$, i.e., $$a^*$$ is a non-black non-raven (say, a green apple).

Is hypothesis $$h$$ confirmed by $$e$$ and $$e^*$$ alike? One would want to say no, but Hempel’s theory is unable to draw this distinction. Let’s see why.

As we know, $$e$$ (directly) Hempel-confirms $$h$$, according to Hempel’s reconstruction of Nicod. By the same token, $$e^*$$ (directly) Hempel-confirms the hypothesis that all non-black objects are non-ravens, i.e., $$h^* = \forall x(\neg black(x) \rightarrow \neg raven(x))$$. But $$h^* \vDash h$$ ($$h$$ and $$h^*$$ are just logically equivalent). So, $$e^*$$ (the observation report of a non-black non-raven), like $$e$$ (black raven), does (indirectly) Hempel-confirm $$h$$ (all ravens are black). Indeed, as $$\neg raven(a)$$ entails $$raven(a) \rightarrow black(a)$$, it can be shown that $$h$$ is (directly) Hempel-confirmed by the observation of any object that is not a raven (an apple, a cat, a shoe, or whatever), apparently disclosing puzzling “prospects for indoor ornithology” (Goodman 1955, 71).

Just as with HD, Bayesian relevance confirmation directly implies that $$e = black(a)$$ confirms $$h$$ given $$k = raven(a)$$ and $$e^* =\neg raven(a)$$ confirms $$h$$ given $$k^* =\neg black(a)$$ (provided, as we know, that $$P(e\mid k)\lt 1$$ and $$P(e^*\mid k^*)\lt 1).$$ That’s because $$h \wedge k\vDash e$$ and $$h \wedge k^*\vDash e^*.$$ But of course, to have $$h$$ confirmed, sampling ravens and finding a black one is intuitively more significant than failing to find a raven while sampling the enormous set of the non-black objects. That is, it seems, because the latter is very likely to obtain anyway, whether or not $$h$$ is true, so that $$P(e^*\mid k^*)$$ is actually quite close to unity. Accordingly, (SP) implies that $$h$$ is indeed more strongly confirmed by $$black(a)$$ given $$raven(a)$$ than it is by $$\neg raven(a)$$ given $$\neg black(a)$$—that is, $$C_{P}(h, e\mid k)\gt C_{P}(h, e^*\mid k^*)$$—as long as the assumption $$P(e\mid k)\lt P(e^*\mid k^*)$$ applies.
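A toy model makes the point vivid. All the numbers below are invented: a world of 10 ravens and 990 non-ravens, a rival hypothesis on which only 9 of 10 ravens are black, and the difference $$P(h\mid e, k) - P(h)$$ as the measure of confirmation:

```python
# Invented toy world: 10 ravens, 990 non-ravens (all non-ravens assumed
# non-black, for simplicity). Under h every raven is black; under the
# rival hypothesis only 9 of 10 are. Equal priors on the two hypotheses.

p_h = 0.5  # prior on "all ravens are black"

# Sample a raven (k) and find it black (e):
p_e_given_h, p_e_given_not_h = 1.0, 0.9
p_e = p_h * p_e_given_h + (1 - p_h) * p_e_given_not_h
conf_raven = p_h * p_e_given_h / p_e - p_h            # ≈ 0.026

# Sample a non-black object (k*) and find it a non-raven (e*). Under ¬h
# the non-black objects are 990 non-ravens plus 1 non-black raven, so e*
# was overwhelmingly likely to obtain anyway:
p_estar_given_h, p_estar_given_not_h = 1.0, 990 / 991
p_estar = p_h * p_estar_given_h + (1 - p_h) * p_estar_given_not_h
conf_nonraven = p_h * p_estar_given_h / p_estar - p_h  # ≈ 0.00025

assert conf_raven > conf_nonraven > 0  # both confirm h; the raven far more
```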

### Bootstrapping and relevance relations

In a pre-Bayesian attempt to solve the problem of the ravens, people developed some complicated and ultimately unconvincing theories.

SEP:

To overcome the latter difficulty, Clark Glymour (1980a) embedded a refined version of Hempelian confirmation by instances in his analysis of scientific reasoning. In Glymour’s revision, hypothesis h is confirmed by some evidence e even if appropriate auxiliary hypotheses and assumptions must be involved for e to entail the relevant instances of h. This important theoretical move turns confirmation into a three-place relation concerning the evidence, the target hypothesis, and (a conjunction of) auxiliaries. Originally, Glymour presented his sophisticated neo-Hempelian approach in stark contrast with the competing traditional view of so-called hypothetico-deductivism (HD). Despite his explicit intentions, however, several commentators have pointed out that, partly because of the due recognition of the role of auxiliary assumptions, Glymour’s proposal and HD end up being plagued by similar difficulties (see, e.g., Horwich 1983, Woodward 1983, and Worrall 1982).

## Falsificationism

“statements or systems of statements, in order to be ranked as scientific, must be capable of conflicting with possible, or conceivable observations” (Popper 1962, 39).

SEP:

For Popper […] the important point was not whatever confirmation successful prediction offered to the hypotheses but rather the logical asymmetry between such confirmations, which require an inductive inference, versus falsification, which can be based on a deductive inference. […]

Popper stressed that, regardless of the amount of confirming evidence, we can never be certain that a hypothesis is true without committing the fallacy of affirming the consequent. Instead, Popper introduced the notion of corroboration as a measure for how well a theory or hypothesis has survived previous testing.

Popper was clearly onto something, as in his critique of psychoanalysis:

Neither Freud nor Adler excludes any particular person’s acting in any particular way, whatever the outward circumstances. Whether a man sacrificed his life to rescue a drowning child (a case of sublimation) or whether he murdered the child by drowning him (a case of repression) could not possibly be predicted or excluded by Freud’s theory; the theory was compatible with everything that could happen.

But his stark asymmetry between logically disproving a theory and “corroborating” it was actually a mistake. And it led to many problems.

First, successful science often did not involve rejecting a theory as disproven when it failed an empirical test. SEP:

Originally, Popper thought that this meant the introduction of ad hoc hypotheses only to save a theory should not be countenanced as good scientific method. These would undermine the falsifiability of a theory. However, Popper later came to recognize that the introduction of modifications (immunizations, he called them) was often an important part of scientific development. Responding to surprising or apparently falsifying observations often generated important new scientific insights. Popper’s own example was the observed motion of Uranus which originally did not agree with Newtonian predictions, but the ad hoc hypothesis of an outer planet explained the disagreement and led to further falsifiable predictions.

Second, Popper’s idea of corroboration was intolerably vague. A theory is supposed to be well-corroborated if it stuck its neck out by being falsifiable, and has resisted falsification for a long time. But how, for instance, do we compare how well-corroborated two theories are? And how are we supposed to act in the meantime, when there are still several contending theories? The intuition is that well-tested theories should have higher probability, but Popper’s “corroboration” idea is ill-equipped to account for this.

Bayesianism dissolves these problems, but captures the grain of truth in falsificationism. I’ll just quote from the Arbital page on the Bayesian view of scientific virtues, which, despite its silly style, is excellent and should probably be read in full.

In a Bayesian sense, we can see a hypothesis’s falsifiability as a requirement for obtaining strong likelihood ratios in favor of the hypothesis, compared to, e.g., the alternative hypothesis “I don’t know.”

Suppose you’re a very early researcher on gravitation, named Grek. Your friend Thag is holding a rock in one hand, about to let it go. You need to predict whether the rock will move downward to the ground, fly upward into the sky, or do something else. That is, you must say how your theory $$Grek$$ assigns its probabilities over $$up, down,$$ and $$other.$$

As it happens, your friend Thag has his own theory $$Thag$$ which says “Rocks do what they want to do.” If Thag sees the rock go down, he’ll explain this by saying the rock wanted to go down. If Thag sees the rock go up, he’ll say the rock wanted to go up. Thag thinks that the Thag Theory of Gravitation is a very good one because it can explain any possible thing the rock is observed to do. This makes it superior compared to a theory that could only explain, say, the rock falling down.

As a Bayesian, however, you realize that since $$up, down,$$ and $$other$$ are mutually exclusive and exhaustive possibilities, and something must happen when Thag lets go of the rock, the conditional probabilities $$\mathbb P(\cdot\mid Thag)$$ must sum to $$\mathbb P(up\mid Thag) + \mathbb P(down\mid Thag) + \mathbb P(other\mid Thag) = 1.$$

If Thag is “equally good at explaining” all three outcomes - if Thag’s theory is equally compatible with all three events and produces equally clever explanations for each of them - then we might as well call this $$1/3$$ probability for each of $$\mathbb P(up\mid Thag), \mathbb P(down\mid Thag),$$ and $$\mathbb P(other\mid Thag)$$. Note that Thag’s theory is isomorphic, in a probabilistic sense, to saying “I don’t know.”

But now suppose Grek make falsifiable prediction! Grek say, “Most things fall down!”

Then Grek not have all probability mass distributed equally! Grek put 95% of probability mass in $$\mathbb P(down\mid Grek)!$$ Only leave 5% probability divided equally over $$\mathbb P(up\mid Grek)$$ and $$\mathbb P(other\mid Grek)$$ in case rock behave like bird.

Thag say this bad idea. If rock go up, Grek Theory of Gravitation disconfirmed by false prediction! Compared to Thag Theory that predicts 1/3 chance of $$up,$$ will be likelihood ratio of 2.5% : 33% ~ 1 : 13 against Grek Theory! Grek embarrassed!

Grek say, she is confident rock does go down. Things like bird are rare. So Grek willing to stick out neck and face potential embarrassment. Besides, is more important to learn about if Grek Theory is true than to save face.

Thag let go of rock. Rock fall down.

This evidence with likelihood ratio of 0.95 : 0.33 ~ 3 : 1 favoring Grek Theory over Thag Theory.

“How you get such big likelihood ratio?” Thag demand. “Thag never get big likelihood ratio!”

Grek explain is possible to obtain big likelihood ratio because Grek Theory stick out neck and take probability mass away from outcomes $$up$$ and $$other,$$ risking disconfirmation if that happen. This free up lots of probability mass that Grek can put in outcome $$down$$ to make big likelihood ratio if $$down$$ happen.

Grek Theory win because falsifiable and make correct prediction! If falsifiable and make wrong prediction, Grek Theory lose, but this okay because Grek Theory not Grek.
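The whole exchange is a small Bayes-theorem calculation. A sketch using exactly the probabilities from the story:

```python
# The Grek/Thag episode as a Bayesian update: equal priors, Grek puts
# 0.95 on "down", Thag spreads 1/3 over each outcome.

priors = {"Grek": 0.5, "Thag": 0.5}
likelihood = {
    "Grek": {"down": 0.95, "up": 0.025, "other": 0.025},
    "Thag": {"down": 1 / 3, "up": 1 / 3, "other": 1 / 3},
}

def posterior(outcome):
    joint = {t: priors[t] * likelihood[t][outcome] for t in priors}
    total = sum(joint.values())
    return {t: p / total for t, p in joint.items()}

# Rock falls down: likelihood ratio 0.95 : 1/3, roughly 3 : 1 for Grek.
post = posterior("down")
assert abs(post["Grek"] / post["Thag"] - 2.85) < 0.01

# Had the rock gone up, the ratio would have been 0.025 : 1/3, about
# 1 : 13 against Grek -- the flip side of sticking one's neck out.
```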

# The Quine-Duhem problem

SEP:

Duhem (he himself a supporter of the HD view) pointed out that in mature sciences such as physics most hypotheses or theories of real interest can not be contradicted by any statement describing observable states of affairs. Taken in isolation, they simply do not logically imply, nor rule out, any observable fact, essentially because (unlike “all ravens are black”) they involve the mention of unobservable entities and processes. So, in effect, Duhem emphasized that, typically, scientific hypotheses or theories are logically consistent with any piece of checkable evidence. […]

Let us briefly consider a classical case, which Duhem himself thoroughly analyzed: the wave vs. particle theories of light in modern optics. Across the decades, wave theorists were able to deduce an impressive list of important empirical facts from their main hypothesis along with appropriate auxiliaries, diffraction phenomena being only one major example. But many particle theorists’ reaction was to retain their hypothesis nonetheless and to reshape other parts of the “theoretical maze” (i.e., k; the term is Popper’s, 1963, p. 330) to recover those observed facts as consequences of their own proposal.

Quine took this idea to its radical conclusion with his confirmation holism. Wikipedia:

Duhem’s idea was, roughly, that no theory of any type can be tested in isolation but only when embedded in a background of other hypotheses, e.g. hypotheses about initial conditions. Quine thought that this background involved not only such hypotheses but also our whole web-of-belief, which, among other things, includes our mathematical and logical theories and our scientific theories. This last claim is sometimes known as the Duhem–Quine thesis. A related claim made by Quine, though contested by some (see Adolf Grünbaum 1962), is that one can always protect one’s theory against refutation by attributing failure to some other part of our web-of-belief. In his own words, “Any statement can be held true come what may, if we make drastic enough adjustments elsewhere in the system.”

Bayes or Bust (p. 73):

It makes a nice sound when it rolls off the tongue to say that our claims about the physical world face the tribunal of experience not individually but only as a corporate body. But scientists, no less than business executives, do not typically act as if they are at a loss as to how to distribute praise through the corporate body when the tribunal says yea, or blame when the tribunal says nay. This is not to say that there is always a single correct way to make the distribution, but it is to say that in many cases there are firm intuitions.

Howson and Urbach 2006 (p. 108):

We shall illustrate the argument through a historical example that Lakatos (1970, pp. 138-140; 1968, pp. 174-75) drew heavily upon. In the early nineteenth century, William Prout (1815, 1816), a medical practitioner and chemist, advanced the idea that the atomic weight of every element is a whole-number multiple of the atomic weight of hydrogen, the underlying assumption being that all matter is built up from different combinations of some basic element. Prout believed hydrogen to be that fundamental building block. Now many of the atomic weights recorded at the time were in fact more or less integral multiples of the atomic weight of hydrogen, but some deviated markedly from Prout’s expectations. Yet this did not shake the strong belief he had in his hypothesis, for in such cases he blamed the methods that had been used to measure those atomic weights. Indeed, he went so far as to adjust the atomic weight of the element chlorine, relative to that of hydrogen, from the value 35.83, obtained by experiment, to 36, the nearest whole number. […]

Prout’s hypothesis $$t$$, together with an appropriate assumption $$a$$, asserting the accuracy (within specified limits) of the measuring techniques, the purity of the chemicals employed, and so forth, implies that the ratio of the measured atomic weights of chlorine and hydrogen will approximate (to a specified degree) a whole number. In 1815 that ratio was reported as 35.83 (call this the evidence $$e$$), a value judged to be incompatible with the conjunction of $$t$$ and $$a$$. The posterior and prior probabilities of $$t$$ and of $$a$$ are related by Bayes’s theorem, as follows:

[…] Consider first the prior probabilities of $$t$$ and of $$a$$. J.S. Stas, a distinguished Belgian chemist whose careful atomic weight measurements were highly influential, gives us reason to think that chemists of the period were firmly disposed to believe in $$t$$. […] It is less easy to ascertain how confident Prout and his contemporaries were in the methods used to measure atomic weights, but their confidence was probably not great, in view of the many clear sources of error. […] On the other hand, the chemists of the time must have felt that their atomic weight measurements were more likely to be accurate than not, otherwise they would hardly have reported them. […] For these reasons, we conjecture that $$P(a)$$ was in the neighbourhood of 0.6 and that $$P(t)$$ was around 0.9, and these are the figures we shall work with. […]

We will follow Dorling in taking $$t$$ and $$a$$ to be independent, viz, $$P(a \mid t) = P(a)$$ and hence, $$P(\neg a \mid t) = P(\neg a)$$. As Dorling points out (1996), this independence assumption makes the calculations simpler but is not crucial to the argument. […]

Finally, Bayes’s theorem allows us to derive the posterior probabilities in which we are interested:

$$P(t\mid e) = 0.878$$ $$P(a\mid e) = 0.073$$

(Recall that $$P(t) = 0.9$$ and $$P(a) = 0.6$$.) We see then that the evidence provided by the measured atomic weight of chlorine affects Prout’s hypothesis and the set of auxiliary hypotheses very differently; for while the probability of the first is scarcely changed, that of the second is reduced to a point where it has lost all credibility.
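Dorling’s calculation is easy to reproduce. Since $$e$$ contradicts $$t \wedge a$$, $$P(e\mid t \wedge a) = 0$$; the remaining likelihoods below are assumptions, chosen (following roughly Dorling’s figures) to match the posteriors quoted above:

```python
# Reproducing the Prout calculation: priors P(t) = 0.9, P(a) = 0.6,
# t and a independent. The non-zero likelihoods are assumed values.

p_t, p_a = 0.9, 0.6
likelihood = {
    (True, True): 0.0,     # P(e | t ∧ a): e contradicts t ∧ a
    (True, False): 0.02,   # P(e | t ∧ ¬a)  (assumption)
    (False, True): 0.01,   # P(e | ¬t ∧ a)  (assumption)
    (False, False): 0.01,  # P(e | ¬t ∧ ¬a) (assumption)
}

def prior(t, a):
    return (p_t if t else 1 - p_t) * (p_a if a else 1 - p_a)

cells = [(t, a) for t in (True, False) for a in (True, False)]
p_e = sum(prior(t, a) * likelihood[(t, a)] for t, a in cells)
p_t_given_e = sum(prior(t, a) * likelihood[(t, a)] for t, a in cells if t) / p_e
p_a_given_e = sum(prior(t, a) * likelihood[(t, a)] for t, a in cells if a) / p_e

assert round(p_t_given_e, 3) == 0.878  # barely moved from 0.9
assert round(p_a_given_e, 3) == 0.073  # collapsed from 0.6
```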

# Uncertain judgements and value of information (resilience)

Crash course in state spaces and events: There is a set of states $$\Omega$$ which represents the ways the world could be. Sometimes $$\Omega$$ is described as the set of “possible worlds” (SEP). An event $$E$$ is a subset of $$\Omega$$. There are many states of the world where Labour wins the next election. The event “Labour wins the next election” is the set of these states.

Here is the important point: a single numerical probability for event $$E$$ is not just the probability you assign to one state of the world. It’s a sum over the probabilities assigned to states in $$E$$. We should think of ideal Bayesians as having probability distributions over the state space, not just scalar probabilities for events.
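In code the point is almost trivial, which is rather the point. A tiny invented state space:

```python
# The probability of an event is a sum over the states it contains, not a
# primitive scalar. The state space and numbers here are invented.

omega = {
    "labour_majority": 0.20,
    "labour_minority": 0.15,
    "tory_majority":   0.40,
    "tory_minority":   0.25,
}
assert abs(sum(omega.values()) - 1.0) < 1e-12  # a proper distribution over Ω

labour_wins = {"labour_majority", "labour_minority"}  # the event E ⊆ Ω
p_labour_wins = sum(p for s, p in omega.items() if s in labour_wins)

assert abs(p_labour_wins - 0.35) < 1e-9
```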

This simple idea is enough to cut through many decades of confusion. SEP:

probability theory seems to impute much richer and more determinate attitudes than seems warranted. What should your rational degree of belief be that global mean surface temperature will have risen by more than four degrees by 2080? Perhaps it should be 0.75? Why not 0.75001? Why not 0.7497? Is that event more or less likely than getting at least one head on two tosses of a fair coin? It seems there are many events about which we can (or perhaps should) take less precise attitudes than orthodox probability requires. […] As far back as the mid-nineteenth century, we find George Boole saying:

It would be unphilosophical to affirm that the strength of that expectation, viewed as an emotion of the mind, is capable of being referred to any numerical standard. (Boole 1958 [1854]: 244)

People have long thought there is a distinction between risk (probabilities different from 0 or 1) and ambiguity (imprecise probabilities):

One classic example of this is the Ellsberg problem (Ellsberg 1961).

I have an urn that contains ninety marbles. Thirty marbles are red. The remainder are blue or yellow in some unknown proportion.

Consider the indicator gambles for various events in this scenario. Consider a choice between a bet that wins if the marble drawn is red (I), versus a bet that wins if the marble drawn is blue (II). You might prefer I to II since I involves risk while II involves ambiguity. A prospect is risky if its outcome is uncertain but its outcomes occur with known probability. A prospect is ambiguous if the outcomes occur with unknown or only partially known probabilities.

To deal with purported ambiguity, people developed models where the probability lies in some range. These probabilities were called “fuzzy” or “mushy”.

Evidence can be balanced because it is incomplete: there simply isn’t enough of it. Evidence can also be balanced if it is conflicted: different pieces of evidence favour different hypotheses. We can further ask whether evidence tells us something specific—like that the bias of a coin is 2/3 in favour of heads—or unspecific—like that the bias of a coin is between 2/3 and 1 in favour of heads.

Fuzzy probabilities gave rise to a number of problem cases, which, predictably, engendered a wide literature. The SEP article notes the problems of:

1. Dilation (imprecise probabilists violate the reflection principle)
2. Belief inertia (how do we learn from an imprecise prior?)
3. Decision making (how should an imprecise probabilist act? Can she avoid Dutch books?)

A PhilPapers search indicates that at least 65 papers have been published on these topics.

The Bayesian solution is simply: when you are less confident, you have a flatter probability distribution, though it may have the same mean. Flatter distributions move more in response to evidence. They are less resilient. See Skyrms (2011) or Leitgeb (2014). It’s not surprising that single probabilities don’t adequately describe your evidential state, since they are summary statistics over a distribution.
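Conjugate Beta priors give a minimal illustration: two priors over a coin’s bias with the same mean but different flatness respond very differently to a single observation. (The update Beta(a, b) → Beta(a+1, b) on observing heads is the standard conjugate rule.)

```python
# Two Beta priors over a coin's bias, both with mean 0.5: Beta(2, 2) is
# flat, Beta(20, 20) is peaked. One observed head shifts the flat prior's
# mean far more: the flat prior is less resilient.

def beta_mean(a, b):
    return a / (a + b)

flat, peaked = (2, 2), (20, 20)
assert beta_mean(*flat) == beta_mean(*peaked) == 0.5  # same summary statistic

shift_flat = beta_mean(flat[0] + 1, flat[1]) - 0.5        # 3/5 - 1/2 = 0.1
shift_peaked = beta_mean(peaked[0] + 1, peaked[1]) - 0.5  # 21/41 - 1/2 ≈ 0.012

assert shift_flat > shift_peaked > 0  # the flatter prior moves more
```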

# Issues around Occam’s razor

SEP distinguishes three questions about simplicity:

(i) How is simplicity to be defined? [Definition]

(ii) What is the role of simplicity principles in different areas of inquiry? [Usage]

(iii) Is there a rational justification for such simplicity principles? [Justification]

The Bayesian solution to (i) is to formalise Occam’s razor as a statement about which priors are better than others. Occam’s razor is not, as many philosophers have thought, a rule of inference, but a constraint on prior belief. One should have a prior that assigns higher probability to simpler worlds. SEP:

Jeffreys argued that “the simpler laws have the greater prior probability,” and went on to provide an operational measure of simplicity, according to which the prior probability of a law is $$2^{−k}$$, where k = order + degree + absolute values of the coefficients, when the law is expressed as a differential equation (Jeffreys 1961, p. 47).

Since then, the definition of simplicity has been further formalised using algorithmic information theory. The very informal gloss is that we formalise hypotheses as the shortest computer program that can fully describe them, and our prior weights each hypothesis by its simplicity ($$2^{-n}$$, where $$n$$ is the program length).
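A sketch of the idea, with invented “programs” standing in for real descriptions: each hypothesis gets prior weight $$2^{-n}$$ for an $$n$$-bit description, so shorter descriptions dominate.

```python
# Description-length prior, sketched. Each hypothesis is paired with a
# made-up binary program that would describe it; prior weight is 2^(-n)
# for an n-bit program, so simpler hypotheses get more mass.

programs = {
    "all ravens are black": "0110",                        # 4 bits: simple
    "all ravens are black except raven #7": "1110011010",  # 10 bits: gerrymandered
}

prior = {h: 2.0 ** -len(code) for h, code in programs.items()}

# With a prefix-free code, Kraft's inequality guarantees the weights sum
# to at most 1, so they can be normalised into a proper prior.
assert prior["all ravens are black"] == 2 ** -4
assert sum(prior.values()) <= 1
```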

This algorithmic formalisation, finally, sheds light on the limits of this understanding of simplicity, and provides an illuminating new interpretation of Goodman’s new riddle of induction. The key idea is that we can only formalise simplicity relative to a programming language (or relative to a universal Turing machine).

Hutter and Rathmanner 2011, Section 5.9 “Andrey Kolmogorov”:

Natural Turing Machines. The final issue is the choice of Universal Turing machine to be used as the reference machine. The problem is that there is still subjectivity involved in this choice since what is simple on one Turing machine may not be on another. More formally, it can be shown that for any arbitrarily complex string $$x$$ as measured against the UTM $$U$$ there is another UTM machine $$U ′$$ for which $$x$$ has Kolmogorov complexity $$1$$. This result seems to undermine the entire concept of a universal simplicity measure but it is more of a philosophical nuisance which only occurs in specifically designed pathological examples. The Turing machine $$U ′$$ would have to be absurdly biased towards the string $$x$$ which would require previous knowledge of $$x$$. The analogy here would be to hard-code some arbitrary long complex number into the hardware of a computer system which is clearly not a natural design. To deal with this case we make the soft assumption that the reference machine is natural in the sense that no such specific biases exist. Unfortunately there is no rigorous definition of natural but it is possible to argue for a reasonable and intuitive definition in this context.

Vallinder 2012, Section 4.1 “Language dependence”:

In section 2.4 we saw that Solomonoff’s prior is invariant under both reparametrization and regrouping, up to a multiplicative constant. But there is another form of language dependence, namely the choice of a universal Turing machine.

There are three principal responses to the threat of language dependence. First, one could accept it flat out, and admit that no language is better than any other. Second, one could admit that there is language dependence but argue that some languages are better than others. Third, one could deny language dependence, and try to show that there isn’t any.

For a defender of Solomonoff’s prior, I believe the second option is the most promising. If you accept language dependence flat out, why intro- duce universal Turing machines, incomputable functions, and other need- lessly complicated things? And the third option is not available: there isn’t any way of getting around the fact that Solomonoff’s prior depends on the choice of universal Turing machine. Thus, we shall somehow try to limit the blow of the language dependence that is inherent to the framework. Williamson (2010) defends the use of a particular language by saying that an agent’s language gives her some information about the world she lives in. In the present framework, a similar response could go as follows. First, we identify binary strings with propositions or sensory observations in the way outlined in the previous section. Second, we pick a UTM so that the terms that exist in a particular agent’s language gets low Kolmogorov complexity.

If the above proposal is unconvincing, the damage may be limited somewhat by the following result. Let $$K_U(x)$$ be the Kolmogorov complexity of $$x$$ relative to universal Turing machine $$U$$, and let $$K_T(x)$$ be the Kolmogorov complexity of $$x$$ relative to Turing machine $$T$$ (which needn’t be universal). We have that $$K_U(x) \leq K_T(x) + C_{TU}.$$ That is: the difference in Kolmogorov complexity relative to $$U$$ and relative to $$T$$ is bounded by a constant $$C_{TU}$$ that depends only on these Turing machines, and not on $$x$$. (See Li and Vitanyi (1997, p. 104) for a proof.) This is somewhat reassuring. It means that no other Turing machine can outperform $$U$$ infinitely often by more than a fixed constant. But we want to achieve more than that. If one picks a UTM that is biased enough to start with, strings that intuitively seem complex will get a very low Kolmogorov complexity. As we have seen, for any string $$x$$ it is always possible to find a UTM $$T$$ such that $$K_T(x) = 1$$. If $$K_T(x) = 1$$, the corresponding Solomonoff prior $$M_T(x)$$ will be at least $$0.5$$. So for any binary string, it is always possible to find a UTM such that we assign that string prior probability greater than or equal to $$0.5$$. Thus some way of discriminating between universal Turing machines is called for.

1. Technically, the diachronic language “just before”/”just after” is a mistake. It fails to model cases of forgetting, or loss of discriminating power of evidence. This was shown by Arntzenius (2003).

March 31, 2018

# Philosophy success story IV: the formalisation of probability

Thus, joining the rigour of demonstrations in mathematics with the uncertainty of chance, and conciliating these apparently contradictory matters, it can, taking its name from both of them, with justice arrogate the stupefying name: The Mathematics of Chance (Aleae Geometria).

— Blaise Pascal, in an address to the Académie Parisienne de Mathématiques, 1654

Researchers in the field have wondered why the development of probability theory was so slow—especially why the apparently quite simple mathematical theory of dice throwing did not appear until the 1650s. The main part of the answer lies in appreciating just how difficult it is to make concepts precise.

— James Franklin, The Science of Conjecture

Wherefore in all great works are Clerkes so much desired? Wherefore are Auditors so richly fed? What causeth Geometricians so highly to be enhaunsed? Why are Astronomers so greatly advanced? Because that by number such things they finde, which else would farre excell mans minde.

— Robert Recorde, Arithmetic (1543)

This is part of my series on success stories in philosophy. See this page for an explanation of the project and links to other items in the series.

# How people were confused

## Degrees of belief

The first way to get uncertainty spectacularly wrong is given to us by Plato, who outright rejects non-certain reasoning (The Science of Conjecture: Evidence and Probability Before Pascal, James Franklin):

Plato has Socrates say to Theaetetus, “You are not offering any argument or proof, but relying on likelihood (eikoti). If Theodorus, or any other geometer, were prepared to rely on likelihood when doing geometry, he would be worth nothing. So you and Theodorus must consider whether, in matters as important as these, you are going to accept arguments from plausibility and likelihood (pithanologia te kai eikosi).”

## Probability as a binary property

One step in the right direction would be to accept that statements can fail to be definite truths, yet in some sense be “more likely” than definite falsehoods. On this view, such statements have the property of being “probable”. SEP writes:

Pre-modern probability was not a number or ratio, but mainly a binary property which a proposition either had or did not have.

In this vein, Cicero wrote:

That is probable which for the most part usually comes to pass, or which is a part of the ordinary beliefs of mankind, or which contains in itself some resemblance to these qualities, whether such resemblance be true or false. (Cicero, De inventione, I.29.46)

The quote not only displays the error of thinking of probability as binary. It also shows that Cicero mixed the most promising notion of probability (that which “for the most part usually comes to pass”) with the completely different notions of ordinary belief and opinion, resulting in a general mess of confusion. According to SEP: “Until the thirteenth century, the definitions of “probable” by Cicero and Boethius very much shaped the medieval understanding of probability”.

## Ordinal probability

Going further, one might realise that there are degrees of probability. With a solid helping of the principle of charity, Aristotle can be read as saying this:

Therefore it is not enough for the defendant to refute the accusation by proving that the charge is not bound to be true; he must do so by showing that it is not likely to be true. For this purpose his objection must state what is more usually true than the statement attacked.

Here is another quote:

Hence, in this proposal we have men and women, who at age 25 buy a life-long annuity for a price which they recover within eight years and although they can die within these eight years it is more probable that they live twice the time. In this way what happens more frequently and is more probable is to the advantage of the buyer. (Alexander of Alessandria, Tractatus de usuris, c. 72, Y f. 146r)

Aristotle did not realise that probabilities could be applied to chancy events, and nor did his medieval followers. According to A. Hall:

According to van Brake (1976) and Schneider (1980), Aristotle classified events into three types: (1) certain events that happen necessarily; (2) probable events that happen in most cases; and (3) unpredictable or unknowable events that happen by pure chance. Furthermore, he considered the outcomes of games of chance to belong to the third category and therefore not accessible to scientific investigation, and he did not apply the term probability to games of chance.

The cardinal notion of probability did not emerge before the seventeenth century.

## Stakes-sensitivity

One can find throughout history people grasping at the intuition that when the stakes are high, unlikely things can be important. In many cases, legal scholars were interested in what to do if no definite proof of innocence or guilt can be given. Unfortunately, they invariably get the details wrong. James Franklin writes:

In the Talmud itself, the demand for a high standard of evidence in criminal cases developed into a prohibition of any uncertainty in evidence:

Witnesses in capital charges were brought in and warned: perhaps what you say is based only on conjecture, or hearsay, or is evidence from the mouth of another witness, or even from the mouth of an untrustworthy person: perhaps you are unaware that ultimately we shall scrutinize your evidence by cross-examination and inquiry? Know then that capital cases are not like monetary cases. In civil suits, one can make restitution in money, and thereby make his atonement; but in capital cases one is held responsible for his blood and the blood of his descendants till the end of the world . . . whoever destroys a single soul of Israel, scripture imputes to him as though he had destroyed a whole world . . . Our Rabbis taught: What is meant by “based only on conjecture”?—He [the judge] says to them: Perhaps you saw him running after his fellow into a ruin, you pursued him, and found him sword in hand with blood dripping from it, whilst the murdered man was writhing. If this is what you saw, you saw nothing.

Thomas Aquinas wrote:

And yet the fact that in so many it is not possible to have certitude without fear of error is no reason why we should reject the certitude which can probably be had [quae probabiliter haberi potest] through two or three witnesses … (Thomas Aquinas, Summa theologiae, II-II, q. 70, 2, 1488)

James Franklin writes:

Further reflection on the kinds of evidence short of certainty led to a word that expressed the most significant and original idea of the Glossators for probabilistic argument: half-proof (semiplena probatio). In the 1190s, this word was invented for the class of items of evidence that were neither null nor full proof. The word expresses the natural thought that, if two witnesses are in theory full proof, then one witness must be half.

## The problem of points

By the Renaissance, thinkers had sharpened these intuitions into a concrete problem. It took centuries of fallacies to arrive at the correct answer to this problem.

The problem of points concerns a game of chance with two players who have equal chances of winning each round. The players contribute equally to a prize pot, and agree in advance that the first player to have won a certain number of rounds $$s$$ will collect the entire prize. Now suppose that the game is interrupted by external circumstances before either player has achieved victory. Player 1 has won $$s_1<s$$ rounds and player 2 has won $$s_2<s$$ rounds. How does one then divide the pot fairly? (Wikipedia, The problem of points)

Before Pascal formalised the now-obvious concept of expected value, this problem was a matter of debate. The problem of points is especially clear-cut evidence that people were confused about probability, since they arrived at different numerical answers.

Anders Hald writes (Section 4.2, p. 35ff):

The division problem is presumably very old. It is first found in print by Pacioli (1494) for $$s = 6$$, $$s_1 = 5$$, and $$s_2 = 2$$. Pacioli considers it as a problem in proportion and proposes to divide the stakes as $$s_1$$ to $$s_2$$. […] The next attempt to solve the problem is by Cardano (1539). He shows by example that Pacioli’s proposal is ridiculous [in a game interrupted after only one round, Pacioli’s method would award the entire pot to the player with the single point, even though the outcome would be far from certain] and proceeds to give a deeper analysis of the problem. We shall return to this after a discussion of some other, more primitive, proposals. Tartaglia (1556) criticizes Pacioli and is sceptical of the possibility of finding a mathematical solution. He thinks that the problem is a juridical one. Nevertheless, he proposes that if $$s_1$$ is larger than $$s_2$$, A should have his own stake plus the fraction $$(s_1 - s_2)/s$$ of B’s stake. Assuming that the stakes are equal, the division will be as $$s + s_1 - s_2$$ to $$s - s_1 + s_2$$. Forestani (1603) formulates the following rule: First A and B should each get a portion of the total stake determined by the number of games they have won in relation to the maximum duration of the play, i.e., the proportions $$s_1/(2s-1)$$ and $$s_2/(2s-1)$$, as also proposed by Pacioli. But then Forestani adds that the remainder should be divided equally between them, because Fortune in the next play may reverse the results. Hence the division will be as $$2s - 1 + s_1 - s_2$$ to $$2s - 1 - s_1 + s_2$$. Comparison with Tartaglia’s rule will show that $$s$$ has been replaced by $$2s - 1$$. Cardano (1539) is the first to realize that the division rule should not depend on $$(s, s_1, s_2)$$ but only on the number of games each player lacks in winning, $$a = s - s_1$$ and $$b = s - s_2$$, say.
He introduces a new play where A, starting from scratch, is the winner if he wins $$a$$ games before B wins $$b$$ games, and he asks what the stakes should be for the play to be fair. He then takes for a fair division rule in the stopped play the ratio of the stakes in this new play and concludes that the division should be as $$b(b + 1)$$ to $$a(a + 1)$$. His reasons for this result are rather obscure. Considering an example for $$a = 1$$ and $$b = 3$$ he writes:

He who shall win 3 games stakes 2 crowns; how much should the other stake. I say that he should stake 12 crowns for the following reasons. If he shall win only one game it would suffice that he stakes 2 crowns; and if he shall win 2 games he should stake three times as much because by winning two games he would win 4 crowns but he has had the risk of losing the second game after having won the first and therefore he ought to have a threefold compensation. And if he shall win three games his compensation should be sixfold because the difficulty is doubled, hence he should stake 12 crowns. It will be seen that Cardano uses an inductive argument. Setting B’s stake equal to 1, A’s stake becomes successively equal to $$1$$, $$1 + 2 = 3$$, and $$1 + 2 + 3 = 6$$. Cardano then concludes that in general A’s stake should be $$1 + 2 + ... + b = b(b + 1)/2$$. He does not discuss how to go from the special case $$(1, b)$$ to the general case $$(a, b)$$, but presumably he has just used the symmetry between the players.1

Note how different this type of disagreement is from mathematical disagreements. When people reach different solutions to a “toy” problem and muddle through with heuristics, they are not facing a recalcitrant mathematical puzzle. They are confused on a much deeper level. Newcomb’s problem might be a good analogy.
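To see how far off these early rules were, we can compare Cardano’s division with the one expected-value reasoning gives. The sketch below (my own illustration, not from Hald) computes each player’s probability of winning the interrupted game by recursion on the number of games each still lacks, using Cardano’s own example of $$a = 1$$ and $$b = 3$$:

```python
from fractions import Fraction

def p_wins(a, b):
    """Probability that player A wins a fair coin-flip race,
    where A still needs `a` wins and B still needs `b` wins."""
    if a == 0:
        return Fraction(1)  # A has already won
    if b == 0:
        return Fraction(0)  # B has already won
    # Each round is a fair coin flip: it goes to A or B with probability 1/2.
    return Fraction(1, 2) * (p_wins(a - 1, b) + p_wins(a, b - 1))

# Cardano's example: A lacks a = 1 game, B lacks b = 3 games.
a, b = 1, 3
correct = p_wins(a, b)  # A's fair share of the pot by expected value
# Cardano's rule: divide as b(b+1) to a(a+1), i.e. A's share below.
cardano = Fraction(b * (b + 1), b * (b + 1) + a * (a + 1))

print(correct)  # 7/8
print(cardano)  # 6/7
```

A wins unless B wins three rounds in a row, so A’s fair share is $$1 - (1/2)^3 = 7/8$$; Cardano’s rule gives the close but incorrect $$12/14 = 6/7$$.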

Anders Hald also has this interesting quote:

In view of the achievements of the Greeks in mathematics and science, it is surprising that they did not use the symmetry of games of chance or the stability of relative frequencies to create an axiomatic theory of probability analogous to their geometry. However, the symmetry and stability which is obvious to us may not have been noticed in ancient times because of the imperfections of the randomizers used. David (1955, 1962) has pointed out that instead of regular dice, astragali (heel bones of hooved animals) were normally used, and Samburski (1956) remarks that in a popular game with four astragali, a certain throw was valued higher than all the others despite the fact that other outcomes have smaller probabilities, which indicates that the Greeks had not noticed the magnitudes of the corresponding relative frequencies.

# Pascal and Fermat’s solution

Pascal and Fermat’s story is well known. In a famous correspondence in 1654, they developed the basic notions of probability and expected value.

Keith Devlin (2008):

Before we take a look at their exchange and the methods it contains, let’s look at a present-day solution of the simple version of the problem. In this version, the players, Blaise and Pierre, place equal bets on who will win the best of five tosses of a fair coin. We’ll suppose that on each round, Blaise chooses heads, Pierre tails. Now suppose they have to abandon the game after three tosses, with Blaise ahead 2 to 1. How do they divide the pot? The idea is to look at all possible ways the game might have turned out had they played all five rounds. Since Blaise is ahead 2 to 1 after round three, the first three rounds must have yielded two heads and one tail. The remaining two throws can yield

HH HT TH TT

Each of these four is equally likely. In the first (H H), the final outcome is four heads and one tail, so Blaise wins; in the second and the third (H T and T H), the final outcome is three heads and two tails, so again Blaise wins; in the fourth (T T), the final outcome is two heads and three tails, so Pierre wins. This means that in three of the four possible ways the game could have ended, Blaise wins, and in only one possible play does Pierre win. Blaise has a 3-to-1 advantage over Pierre when they abandon the game; therefore, the pot should be divided 3/4 for Blaise and 1/4 for Pierre. Many people, on seeing this solution, object, saying that the first two possible endings (H H and H T) are in reality the same one. They argue that if the fourth throw gives a head, then at that point, Blaise has his three heads and has won, so there would be no fifth throw. Accordingly, they argue, the correct way to think about the end of the game is that there are actually only three possibilities, namely

H TH TT

in which case, Blaise has a 2-to-1 advantage and the pot should be divided 2/3 for Blaise and 1/3 for Pierre, not 3/4 and 1/4. This reasoning is incorrect, but it took Pascal and Fermat some time to resolve this issue. Their colleagues, whom they consulted as they wrestled with the matter, had differing opinions. So if you are one of those people who finds this alternative argument appealing (or even compelling), take heart; you are in good company (though still wrong).
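Devlin’s enumeration argument can be checked mechanically. The sketch below (my own illustration of the reasoning, not Devlin’s code) lists all four equally likely completions of the remaining two tosses and counts the ones in which Blaise, who needs only one more head, wins:

```python
from itertools import product

# Best of five, abandoned at 2-1 to Blaise (heads): two tosses remain.
# Enumerate all 2**2 equally likely completions of the game.
completions = list(product("HT", repeat=2))  # HH, HT, TH, TT

# Blaise wins the match if at least one of the remaining tosses is a head.
blaise_wins = sum(1 for toss in completions if "H" in toss)

print(blaise_wins, "of", len(completions))  # 3 of 4, so Blaise's share is 3/4
```

The “three endings” objection fails because H, TH, and TT are not equally likely: the single-toss ending H stands in for two equally likely full-length endings (HH and HT), which is exactly why counting fixed-length completions gives the right answer.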

The issue behind the dilemma here is complex and lies at the heart of probability theory. The question is, What is the right way to think about the future (more accurately, the range of possible futures) and model it mathematically?

The key insight was one that Cardano had already flailingly grasped at, but it was difficult to understand even for Pascal:

As I observed earlier in this chapter, Cardano had already realized that the key was to look at the number of points each player would need in order to win, not the points they had already accumulated. In the second section of his letter to Fermat, Pascal acknowledged the tricky point we just encountered ourselves, that you have to look at all possible ways the game could have played out, ignoring the fact that the players would normally stop once one person had clearly won. But Pascal’s words make clear that he found this hard to grasp, and he accepted it only because the great Fermat had explained it in his previous letter.

Elsewhere, Keith Devlin writes:

Today, we would use the word probability to refer to the focus of Pascal and Fermat’s discussion, but that term was not introduced until nearly a century after the mathematicians’ deaths. Instead, they spoke of “hazards,” or number of chances. Much of their difficulty was that they did not yet have the notion of mathematical probability—because they were in the process of inventing it.

From our perspective, it is hard to understand just why they found it so difficult. But that reflects the massive change in human thinking that their work led to. Today, it is part of our very worldview that we see things in terms of probabilities.

# Extensions

## Handing over to mathematics

To solve a philosophical problem is to take it out of the realm of philosophy. Once the fundamental methodology is agreed upon, the question can be spun off into its own independent field.

The development of probability is often considered part of Pascal’s mathematical rather than philosophical work. But I think the mathematisation of probability is in an important sense philosophical. In another post, I write much more about why successful philosophy often looks like mathematics in retrospect.

After Pascal and Fermat’s breakthrough, things developed very fast, highlighting once again the significance of that initial step.

Keith Devlin writes:

In 1654, Pascal had struggled hard to understand why Fermat counted endings of the unfinished game that would never have arisen in practice (“it is not a general method and it is good only in the case where it is necessary to play exactly a certain number of times”). Just fifteen years later, in 1669, Christiaan Huygens was using axiom-based abstract mathematics on top of statistically processed data tables to determine the probability that a sixteen-year-old young man would die before he reached thirty-six.

After the crucial first step for formalisation, probability was ripe to be handed over to mathematicians. SEP writes:

These early calculations [of Pascal, Fermat and Huygens] were considerably refined in the eighteenth century by the Bernoullis, Montmort, De Moivre, Laplace, Bayes, and others (Daston 1988; Hacking 2006; Hald 2003).

For example, the crucial idea of conditional probability was developed. According to MathOverflow, in the 1738 second edition of The Doctrine of Chances, de Moivre writes,

The Probability of the happening of two Events dependent, is the product of the Probability of the happening of one of them, by the Probability which the other will have of happening, when the first shall be consider’d as having happened; and the same Rule will extend to the happening of as many Events as may be assigned.
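De Moivre’s rule is what we now write as $$P(A \cap B) = P(A) \cdot P(B \mid A)$$. A minimal sketch, using my own example rather than de Moivre’s: the chance of drawing two aces in a row from a shuffled 52-card deck without replacement.

```python
from fractions import Fraction

# de Moivre's product rule for dependent events: P(A and B) = P(A) * P(B | A).
p_first_ace = Fraction(4, 52)                # P(A): 4 aces among 52 cards
p_second_ace_given_first = Fraction(3, 51)   # P(B | A): 3 aces among 51 remaining
p_both = p_first_ace * p_second_ace_given_first

print(p_both)  # 1/221
```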

People began to get it, philosophically speaking. We begin to see quotes that, unlike those of Cicero, sound decidedly modern. In his book Ars conjectandi (The Art of Conjecture, 1713), Jakob Bernoulli wrote:

To conjecture about something is to measure its probability. The Art of Conjecturing or the Stochastic Art is therefore defined as the art of measuring as exactly as possible the probabilities of things so that in our judgments and actions we can always choose or follow that which seems to be better, more satisfactory, safer and more considered.

Keith Devlin writes:

Within a hundred years of Pascal’s letter, life-expectancy tables formed the basis for the sale of life annuities in England, and London was the center of a flourishing marine insurance business, without which sea transportation would have remained a domain only for those who could afford to assume the enormous risks it entailed.

## Axiomatisation

Much later, probability theory was put on an unshakeable footing, with Kolmogorov’s axioms.

# Counter-intuitive implications of probability theory

I’ve given many examples of how people used to be confused about probability. In case you find it hard to empathise with these past thinkers, I should remind you that even today probability theory can be hard to grasp intuitively.

## The conjunction fallacy

The most often-cited example of this fallacy originated with Amos Tversky and Daniel Kahneman. Although the description and person depicted are fictitious, Amos Tversky’s secretary at Stanford was named Linda Covington, and he named the famous character in the puzzle after her.

Linda is 31 years old, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice, and also participated in anti-nuclear demonstrations.

Which is more probable?

1. Linda is a bank teller.
2. Linda is a bank teller and is active in the feminist movement.

The majority of those asked chose option 2. However, the probability of two events occurring together (in “conjunction”) is always less than or equal to the probability of either one occurring alone.

## The Monty Hall problem

Suppose you’re on a game show, and you’re given the choice of three doors: Behind one door is a car; behind the others, goats. You pick a door, say No. 1, and the host, who knows what’s behind the doors, opens another door, say No. 3, which has a goat. He then says to you, “Do you want to pick door No. 2?” Is it to your advantage to switch your choice?

Vos Savant’s response was that the contestant should switch to the other door (vos Savant 1990a). Under the standard assumptions, contestants who switch have a 2/3 chance of winning the car, while contestants who stick to their initial choice have only a 1/3 chance.

The given probabilities depend on specific assumptions about how the host and contestant choose their doors. A key insight is that, under these standard conditions, the contestant gains information about doors 2 and 3 that was not available at the beginning of the game, when door 1 was chosen: the host’s deliberate action adds value to the door he did not choose to eliminate, but not to the one the contestant chose originally.
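If the argument still feels slippery, a simulation settles it empirically. A minimal sketch under the standard assumptions (the host always opens an unchosen door hiding a goat; when he has a choice of goat doors, which one he opens does not affect the switching strategy’s win rate):

```python
import random

def play(switch, trials=100_000, seed=0):
    """Simulate Monty Hall; return the fraction of games won."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)   # door hiding the car
        pick = rng.randrange(3)  # contestant's initial choice
        # Host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining closed door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(play(switch=True))   # close to 2/3
print(play(switch=False))  # close to 1/3
```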

## The mammography problem

Yudkowsky:

1% of women at age forty who participate in routine screening have breast cancer. 80% of women with breast cancer will get positive mammographies. 9.6% of women without breast cancer will also get positive mammographies. A woman in this age group had a positive mammography in a routine screening. What is the probability that she actually has breast cancer?

What do you think the answer is? If you haven’t encountered this kind of problem before, please take a moment to come up with your own answer before continuing.

Next, suppose I told you that most doctors get the same wrong answer on this problem - usually, only around 15% of doctors get it right. (“Really? 15%? Is that a real number, or an urban legend based on an Internet poll?” It’s a real number. See Casscells, Schoenberger, and Grayboys 1978; Eddy 1982; Gigerenzer and Hoffrage 1995; and many other studies. It’s a surprising result which is easy to replicate, so it’s been extensively replicated.)

Most doctors estimate the probability to be between 70% and 80%. The correct answer is 7.8%.
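The correct answer falls out of Bayes’ theorem directly. A short sketch plugging in Yudkowsky’s numbers:

```python
# Bayes' theorem on the mammography numbers above.
p_cancer = 0.01               # prior: 1% of screened women have breast cancer
p_pos_given_cancer = 0.80     # 80% of women with cancer test positive
p_pos_given_healthy = 0.096   # 9.6% of women without cancer test positive

# Total probability of a positive test, over both groups.
p_pos = p_cancer * p_pos_given_cancer + (1 - p_cancer) * p_pos_given_healthy

# Posterior: P(cancer | positive test).
p_cancer_given_pos = p_cancer * p_pos_given_cancer / p_pos

print(round(p_cancer_given_pos, 3))  # 0.078
```

The intuition behind the low answer: true positives (0.8% of all women) are swamped by false positives (about 9.5% of all women), so a positive test raises the probability of cancer only from 1% to about 7.8%.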

1. More on Cardano, in Section 4.3 of Hald:

[Cardano’s] De Ludo Aleae is a treatise on the moral, practical, and theoretical aspects of gambling, written in colorful language and containing some anecdotes on Cardano’s own experiences. Most of the theory in the book is given in the form of examples from which general principles are or may be inferred. In some cases Cardano arrives at the solution of a problem through trial and error, and the book contains both the false and the correct solutions. He also tackles some problems that he cannot solve and then tries to give approximate solutions. […] In Chap. 14, he defines the concept of a fair game in the following terms:

So there is one general rule, namely, that we should consider the whole circuit [the total number of equally possible cases], and the number of those casts which represents in how many ways the favorable result can occur, and compare that number to the remainder of the circuit, and according to that proportion should the mutual wagers be laid so that one may contend on equal terms.

March 31, 2018