Debugging surprising behavior in SciPy numerical integration

I wrote a Python app to apply Bayes’ rule to continuous distributions.
I’m learning a lot about numerical analysis from this project. The basic idea is simple:

import numpy as np
from scipy import integrate

# prior and likelihood are frozen SciPy distributions, e.g. stats.norm(5, 1)
def unnormalized_posterior_pdf(x):
    return prior.pdf(x)*likelihood.pdf(x)

# integrate unnormalized_posterior_pdf over the reals
normalization_constant = integrate.quad(unnormalized_posterior_pdf,-np.inf,np.inf)[0]

def posterior_pdf(x):
    return unnormalized_posterior_pdf(x)/normalization_constant

However, when testing my code on complicated distributions, I ran into some interesting puzzles.

A first set of problems was caused by the SciPy numerical integration routines that my program relies on. They sometimes returned incorrect results or RuntimeErrors. These problems appeared when the integration routines had to deal with ‘extreme’ values: small normalization constants or large inputs to the cdf function. I eventually learned to hold the integration algorithm’s hand a little and show it where to go.

A second set of challenges had to do with how long my program took to run: sometimes 30 seconds to return the percentiles of the posterior distribution. While 30 seconds might be acceptable for someone who desperately needed that Bayesian update, I didn’t want my tool to feel like a punch-card mainframe. I eventually managed to make the program more than 10 times faster. The tricks I used all followed the same strategy: to make it less expensive to repeatedly evaluate the posterior’s cdf by numerical integration, I looked for ways to narrow the interval being integrated.

You can follow along with all the tests described in this post using this file, whereas the code doing the calculations for the webapp is here.

Small normalization constants


When the prior and likelihood are far apart, the unnormalized posterior takes tiny values.

It turns out that SciPy’s integration routine, integrate.quad, (incidentally, written in actual Fortran!) has trouble integrating such a low-valued pdf.

prior = stats.lognorm(s=.5,scale=math.exp(.5)) # a lognormal(.5,.5) in SciPy notation
likelihood = stats.norm(20,1)

class Posterior_scipyrv(stats.rv_continuous):
    def __init__(self,d1,d2):
        super(Posterior_scipyrv, self).__init__()
        self.d1 = d1
        self.d2 = d2

        self.normalization_constant = integrate.quad(self.unnormalized_pdf,-np.inf,np.inf)[0]

    def unnormalized_pdf(self,x):
        return self.d1.pdf(x) * self.d2.pdf(x)

    def _pdf(self,x):
        return self.unnormalized_pdf(x)/self.normalization_constant

posterior = Posterior_scipyrv(prior,likelihood)

print('normalization constant:',posterior.normalization_constant)
print("CDF values:")
for i in range(30):
    print(i, posterior.cdf(i))

The cdf converges to… 52,477. This is not so good.

Because the cdf does converge, but to an incorrect value, we can conclude that the normalization constant is to blame. Because the cdf converges to a number greater than 1, posterior.normalization_constant, about 3e-12, is an underestimate of the true value.

If we shift the likelihood distribution just a little bit to the left, to likelihood = stats.norm(18,1), the cdf converges correctly, and we get a normalization constant of about 6e-07. Obviously, the normalization constant should not jump five orders of magnitude from 6e-07 to 3e-12 as a result of this small shift.

The program is not integrating the unnormalized pdf correctly.

Difficulties with integration usually have to do with the shape of the function. If your integrand zig-zags up and down a lot, the algorithm may miss some of the peaks. But here, the shape of the posterior is almost the same whether we use stats.norm(18,1) or stats.norm(20,1)1. So the problem really seems to occur once we are far enough in the tails of the prior that the unnormalized posterior pdf takes values below a certain absolute (rather than relative) threshold. I don’t yet understand why. Perhaps some of the values are becoming too small to be represented with standard floating point numbers.
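As a rough illustration of how small the integrand gets (the exact number is beside the point), we can evaluate it at the likelihood’s peak, where it is largest:

```python
import math
from scipy import stats

prior = stats.lognorm(s=.5, scale=math.exp(.5))
likelihood = stats.norm(20, 1)

# even at the likelihood's peak, the unnormalized posterior is tiny
peak_value = prior.pdf(20) * likelihood.pdf(20)
print(peak_value)  # well below 1e-6
```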

This seems rather bizarre, but here’s a piece of evidence that really demonstrates that low absolute values are what’s tripping up the integration routine that calculates the normalization constant. We just multiply the unnormalized pdf by 10000 (which will cancel out once we normalize).

def unnormalized_pdf(self,x):
    return 10000*self.d1.pdf(x) * self.d2.pdf(x)

Now the cdf converges to 1 perfectly (??!).

Large inputs into cdf

We take a prior and likelihood that are unproblematically close together:

prior = stats.lognorm(s=.5,scale=math.exp(.5))# a lognormal(.5,.5) in SciPy notation
likelihood = stats.norm(5,1)
posterior = Posterior_scipyrv(prior,likelihood)

for i in range(100):
    print(i, posterior.cdf(i))

At first, the cdf goes to 1 as expected, but suddenly all hell breaks loose and the cdf decreases to some very tiny values:

22 1.0000000000031484
23 1.0000000000095246
24 1.0000000000031442
25 2.4520867144186445e-09
26 2.7186998869943613e-12
27 1.1495658559228458e-15

What’s going on? When asked to integrate the pdf from minus infinity up to some large value like 25, quad doesn’t know where to look for the probability mass. When the upper bound of the integral lies in a region that still has enough probability mass, like 23 or 24 in this example, quad finds its way to the mass. But if you ask it to find a peak very far away, it fails.

A piece of confirmatory evidence is that if we make the peak spikier and harder to find, by setting the likelihood’s standard deviation to 0.5 instead of 1, the cdf fails earlier:

22 1.000000000000232
23 2.9116983489798973e-12

We need to hold the integration algorithm’s hand and show it where on the real line the peak of the distribution is located. In SciPy’s quad, you can supply the points argument to point out places ‘where local difficulties of the integrand may occur’, but only when the integration interval is finite. The solution I came up with is to split the interval into two halves.

def split_integral(f,splitpoint,integrate_to):
    a = -np.inf
    if integrate_to < splitpoint:
        # just return the integral normally
        return integrate.quad(f,a,integrate_to)[0]
    else:
        integral_left = integrate.quad(f, a, splitpoint)[0]
        integral_right = integrate.quad(f, splitpoint, integrate_to)[0]
        return integral_left + integral_right

This definitely won’t work for every difficult integral, but should help for many cases where most of the probability mass is not too far from the splitpoint.
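As a sanity check, here is a self-contained sketch (restating the function with the two branches made explicit) applied to a normal pdf whose peak lies far below the upper integration bound:

```python
import numpy as np
from scipy import stats, integrate

def split_integral(f, splitpoint, integrate_to):
    # integrate f over (-inf, integrate_to], splitting at splitpoint
    if integrate_to < splitpoint:
        return integrate.quad(f, -np.inf, integrate_to)[0]
    left = integrate.quad(f, -np.inf, splitpoint)[0]
    right = integrate.quad(f, splitpoint, integrate_to)[0]
    return left + right

f = stats.norm(5, 1).pdf
print(split_integral(f, 5, 25))  # ≈ 1.0
```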

For splitpoint, a simple choice is the average of the prior and likelihood’s expected values.

class Posterior_scipyrv(stats.rv_continuous):
    def __init__(self,d1,d2):
        super(Posterior_scipyrv, self).__init__()
        self.splitpoint = (d1.expect()+d2.expect())/2

We can now override the built-in cdf method, and specify our own method that uses split_integral:

class Posterior_scipyrv(stats.rv_continuous):
    def _cdf(self,x):
        return split_integral(self.pdf,self.splitpoint,x)

Now things run correctly:

22 1.0000000000000198
23 1.0000000000000198
24 1.0000000000000198
25 1.00000000000002
26 1.0000000000000202
...
98 1.0000000000000198
99 1.0000000000000193

Defining support of posterior

So far I’ve only talked about problems that cause the program to return the wrong answer. This section is about a problem that only causes inefficiency, at least when it isn’t combined with other problems.

If you don’t specify the support of a continuous random variable in SciPy, it defaults to the entire real line. This leads to inefficiency when querying quantiles of the distribution. If I want to know the 50th percentile of my distribution, I call ppf(0.5). As I described previously, ppf works by numerically solving the equation \(cdf(x)=0.5\). The ppf method automatically passes the support of the distribution into the equation solver and tells it to only look for solutions inside the support. When a distribution’s support is a subset of the reals, searching over the entire reals is inefficient.

To remedy this, we can define the support of the posterior as the intersection of the prior and likelihood’s support. For this we need a small function that calculates the intersection of two intervals.

def intersect_intervals(two_tuples):
    d1, d2 = two_tuples

    d1_left, d1_right = d1[0], d1[1]
    d2_left, d2_right = d2[0], d2[1]

    if d1_right < d2_left or d2_right < d1_left:
        raise ValueError("the distributions have no overlap")
    intersect_left, intersect_right = max(d1_left, d2_left), min(d1_right, d2_right)

    return intersect_left, intersect_right

We can then call this function:

class Posterior_scipyrv(stats.rv_continuous):
    def __init__(self,d1,d2):
        super(Posterior_scipyrv, self).__init__()
        a1, b1 = d1.support()
        a2, b2 = d2.support()

        # 'a' and 'b' are scipy's names for the bounds of the support
        self.a, self.b = intersect_intervals([(a1,b1),(a2,b2)])
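A quick check of the interval helper (restated here so the snippet is self-contained):

```python
def intersect_intervals(two_tuples):
    d1, d2 = two_tuples
    d1_left, d1_right = d1
    d2_left, d2_right = d2
    if d1_right < d2_left or d2_right < d1_left:
        raise ValueError("the distributions have no overlap")
    return max(d1_left, d2_left), min(d1_right, d2_right)

print(intersect_intervals([(0.0, 1.0), (-3.0, 0.5)]))  # (0.0, 0.5)
```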

To test this, let’s use a beta distribution, which is defined on \([0,1]\):

prior = stats.beta(1,1)
likelihood = stats.norm(1,3)

We know that the posterior will also be defined on \([0,1]\). By defining the support of the posterior inside the __init__ method of Posterior_scipyrv, we give SciPy access to this information.

We can time the resulting speedup in calculating posterior.ppf(0.99):

s = time.time()
posterior.ppf(0.99)
e = time.time()
print(e-s,'seconds to evaluate ppf')

support: (-inf, inf)
result: 0.9901821216897447
3.8804399967193604 seconds to evaluate ppf

support: (0.0, 1.0)
result: 0.9901821216904315
0.40013647079467773 seconds to evaluate ppf

We’re able to achieve an almost 10x speedup, with very meaningful impact on user experience. For less extreme quantiles, like posterior.ppf(0.5), I still get a 2x speedup.

The lack of properly defined support causes only inefficiency if we continue to use split_integral to calculate the cdf. But if we leave the cdf problem unaddressed, it can combine with the too-wide support to produce outright errors.

For example, suppose we use a beta distribution again for the prior, but we don’t use the split integral for the cdf, nor do we define the support of the posterior as \([0,1]\) instead of \({\rm I\!R}\).

prior = stats.beta(1,1)
likelihood = stats.norm(1,3)

class Posterior_scipyrv(stats.rv_continuous):
    def __init__(self,d1,d2):
        super(Posterior_scipyrv, self).__init__()
        self.d1 = d1
        self.d2 = d2

        self.normalization_constant = integrate.quad(self.unnormalized_pdf,-np.inf,np.inf)[0]
    def unnormalized_pdf(self,x):
        return self.d1.pdf(x) * self.d2.pdf(x)

    def _pdf(self,x):
        return self.unnormalized_pdf(x)/self.normalization_constant

posterior = Posterior_scipyrv(prior,likelihood)

print("cdf values:")
for i in range(20):
    print(i/5, posterior.cdf(i/5))

The cdf fails quickly now:

3.2 0.9999999999850296
3.4 0.0
3.6 0.0

When the integration algorithm is looking over all of \((-\infty,3.4]\), it has no way of knowing that all the probability mass is in \([0,1]\). The posterior distribution has only one big bump in the middle, so it’s not surprising that the algorithm misses it.

If we now ask the equation solver in ppf to find quantiles, without telling it that all the solutions are in \([0,1]\), it will try to evaluate points like cdf(4), which return 0 – but ppf is assuming that the cdf is increasing. This leads to catastrophe. Running posterior.ppf(0.5) gives a RuntimeError: Failed to converge after 100 iterations. At first I wondered why beta distributions would always give me RuntimeErrors…

Optimization: CDF memoization

When we call ppf, the equation solver calls cdf for the same distribution many times. This suggests we could optimize things further by storing known cdf values, and only doing the integration from the closest known value to the desired value. This will result in the same number of integration calls, but each will be over a smaller interval (except the first). This is a form of memoization.

We can also squeeze out some additional speedup by considering the cdf to be 1 forevermore once it reaches values close to 1.

class Posterior_scipyrv(stats.rv_continuous):
    # __init__ (not shown) also initializes self.cdf_lookup = {}
    def _cdf(self,x):
        # exploit considering the cdf to be 1
        # forevermore once it reaches values close to 1
        for x_lookup in self.cdf_lookup:
            if x_lookup < x and np.around(self.cdf_lookup[x_lookup],5)==1.0:
                return 1

        # check lookup table for largest integral already computed below x
        sortedkeys = sorted(self.cdf_lookup ,reverse=True)
        for key in sortedkeys:
            #find the greatest key less than x
            if key<x:
                ret = self.cdf_lookup[key]+integrate.quad(self.pdf,key,x)[0]
                self.cdf_lookup[float(x)] = ret
                return ret
        # Initial run
        ret = split_integral(self.pdf,self.splitpoint,x)
        self.cdf_lookup[float(x)] = ret
        return ret
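The same caching idea can be sketched in isolation, for a standard normal rather than the posterior class (the names here are illustrative, not the webapp’s):

```python
import numpy as np
from scipy import stats, integrate

class MemoizedCDF:
    """Cache cdf values; integrate only from the nearest cached point below x."""
    def __init__(self, pdf, left=-np.inf):
        self.pdf = pdf
        self.left = left
        self.lookup = {}

    def __call__(self, x):
        below = [k for k in self.lookup if k < x]
        if below:
            key = max(below)  # closest known value below x
            val = self.lookup[key] + integrate.quad(self.pdf, key, x)[0]
        else:
            val = integrate.quad(self.pdf, self.left, x)[0]
        self.lookup[x] = val
        return val

cdf = MemoizedCDF(stats.norm(0, 1).pdf)
cdf(0.0)  # full integral from -inf
cdf(1.0)  # only integrates over [0, 1]
```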

If we return to our earlier prior and likelihood

prior = stats.lognorm(s=.5,scale=math.exp(.5)) # a lognormal(.5,.5) in SciPy notation
likelihood = stats.norm(5,1)

and make calls to ppf([0.1, 0.9, 0.25, 0.75, 0.5]), the memoization gives us about a 5x speedup:

memoization False
[2.63571613 5.18538207 3.21825988 4.56703016 3.88645864]
length of lookup table: 0
2.1609253883361816 seconds to evalute ppf

memoization True
[2.63571613 5.18538207 3.21825988 4.56703016 3.88645864]
length of lookup table: 50
0.4501194953918457 seconds to evalute ppf

These speed gains again occur over a range that makes quite a difference to user experience: going from multiple seconds to a fraction of a second.

Optimization: ppf with bounds

In my webapp, I give the user some standard percentiles: 0.1, 0.25, 0.5, 0.75, 0.9.

Given that ppf works by numerical equation solving on the cdf, if we give the solver a smaller domain in which to look for the solutions, it should find them more quickly. When we calculate multiple percentiles, each percentile we calculate helps us close in on the others. If the 0.1 percentile is 12, we have a lower bound of 12 for any percentile \(p>0.1\). If we have already calculated a percentile on each side, we have both a lower and an upper bound.

We can’t directly pass the bounds to ppf, so we have to wrap the method, which is found here in the source code. (To help us focus, I give a simplified presentation below that cuts out some code designed to deal with unbounded supports. The code below will not run correctly).

class Posterior_scipyrv(stats.rv_continuous):
    def ppf_with_bounds(self, q, leftbound, rightbound):
        left, right = self._get_support()

        # SciPy ppf code to deal with case where left or right are infinite.
        # Omitted for simplicity.

        if leftbound is not None:
            left = leftbound
        if rightbound is not None:
            right = rightbound

        # brentq is the equation solver (from Brent 1973)
        # _ppf_to_solve is simply cdf(x)-q, since brentq
        # finds points where a function equals 0
        return optimize.brentq(self._ppf_to_solve, left, right, args=q)

To get some bounds, we run the extreme percentiles first, narrowing in on the middle percentiles from both sides. For example in 0.1, 0.25, 0.5, 0.75, 0.9, we want to evaluate them in this order: 0.1, 0.9, 0.25, 0.75, 0.5. We store each of the answers in result.
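The reordering can be done with a zip trick; here is a minimal sketch of what it produces in isolation:

```python
percentiles_list = [0.1, 0.25, 0.5, 0.75, 0.9]
# pair each percentile with its mirror image, flatten, and truncate
reordered = sum(zip(percentiles_list, reversed(percentiles_list)), ())[:len(percentiles_list)]
print(reordered)  # (0.1, 0.9, 0.25, 0.75, 0.5)
```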

class Posterior_scipyrv(stats.rv_continuous):
    def compute_percentiles(self, percentiles_list):
        result = {}

        # put percentiles in the order they should be computed
        percentiles_reordered = sum(zip(percentiles_list,reversed(percentiles_list)), ())[:len(percentiles_list)]

        def get_bounds(computed, p):
            # get bounds (if any) from percentiles already in `computed`
            keys = sorted(list(computed.keys()) + [p])
            i = keys.index(p)
            if i != 0:
                leftbound = computed[keys[i - 1]]
            else:
                leftbound = None
            if i != len(keys) - 1:
                rightbound = computed[keys[i + 1]]
            else:
                rightbound = None
            return leftbound, rightbound

        for p in percentiles_reordered:
            leftbound , rightbound = get_bounds(result,p)
            res = self.ppf_with_bounds(p,leftbound,rightbound)
            result[p] = np.around(res,2)

        sorted_result = {key:value for key,value in sorted(result.items())}
        return sorted_result

The speedup is relatively minor when calculating just 5 percentiles.

Using ppf bounds? True
total time to compute percentiles: 3.1997928619384766 seconds

Using ppf bounds? False
total time to compute percentiles: 3.306936264038086 seconds

It grows a little bit with the number of percentiles, but calculating a large number of percentiles would just lead to information overload for the user.

This was surprising to me. Using the bounds dramatically cuts the width of the interval for equation solving, but leads to only a minor speedup. Using full_output=True in optimize.brentq, we can see the number of function evaluations that brentq uses. This shows that the number of evaluations needed by brentq is highly non-linear in the width of the interval. The solver gets quite close to the solution very quickly, so giving it a narrow interval hardly helps.

Using ppf bounds? True
brentq looked between 0.0 10.0 and took 11 iterations
brentq looked between 0.52 10.0 and took 13 iterations
brentq looked between 0.52 2.24 and took 8 iterations
brentq looked between 0.81 2.24 and took 9 iterations
brentq looked between 0.81 1.73 and took 7 iterations
total time to compute percentiles: 3.1997928619384766 seconds

Using ppf bounds? False
brentq looked between 0.0 10.0 and took 11 iterations
brentq looked between 0.0 10.0 and took 10 iterations
brentq looked between 0.0 10.0 and took 10 iterations
brentq looked between 0.0 10.0 and took 10 iterations
brentq looked between 0.0 10.0 and took 9 iterations
total time to compute percentiles: 3.306936264038086 seconds

Brent’s method is a very efficient equation solver.
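To inspect these iteration counts yourself, full_output on optimize.brentq returns a results object; a minimal self-contained example with a standard normal cdf (not the posterior class):

```python
from scipy import optimize, stats

def f(x, q=0.5):
    # cdf(x) - q: brentq finds the root, i.e. the q-th quantile
    return stats.norm.cdf(x) - q

root, info = optimize.brentq(f, -10.0, 10.0, full_output=True)
print(root, info.iterations, info.converged)
```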

  1. It has a very similar shape to the likelihood (because the likelihood has much lower variance than the prior). 

July 1, 2020

How long does it take to sample from a distribution?

Suppose a study comes out about the effect of a new medication and you want to precisely compute how to update your beliefs given this new evidence. You might use Bayes’ theorem for continuous distributions.

\[p(\theta | x) =\frac{p(x | \theta) p(\theta) }{p(x)}=\frac{p(x | \theta) p(\theta) }{\int_\Theta p(x | \theta) p(\theta) d \theta}\]

The normalization constant (the denominator of the formula) is an integral that is not too difficult to compute, as long as the distributions are one-dimensional.

For example, with:

import math
import numpy as np
from scipy import stats
from scipy import integrate

prior = stats.lognorm(scale=math.exp(1),s=1)
likelihood = stats.norm(loc=5,scale=20)

def unnormalized_posterior_pdf(x):
    return prior.pdf(x)*likelihood.pdf(x)

normalization_constant = integrate.quad(
    unnormalized_posterior_pdf,-np.inf,np.inf)[0]

the integration runs in less than 100 milliseconds on my machine. So we can get a PDF for an arbitrary 1-dimensional posterior very easily.

But taking a single sample from the (normalized) distribution takes about a second:

# Normalize unnormalized_posterior_pdf
# using the method above and return the posterior as a
# scipy.stats.rv_continuous object.
# This takes about 100 ms
posterior = update(prior,likelihood) 

# Take 1 random sample, this takes about 1 s
sample = posterior.rvs()

And this difference can be even starker for higher-variance posteriors (with s=4 in the lognormal prior, I get 250 ms for the normalization constant and almost 10 seconds for 1 random sample).

For a generic continuous random variable, rvs uses inverse transform sampling. It first generates a random number from the uniform distribution between 0 and 1, then passes this number to ppf, the percent point function, or more commonly quantile function, of the distribution. This function is the inverse of the CDF. For a given percentile, it tells you what value corresponds to that percentile of the distribution. Randomly selecting a percentile \(x\) and evaluating the \(x\) th percentile of the distribution is equivalent to randomly sampling from the distribution.
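Inverse transform sampling is only a couple of lines; a sketch with a known distribution:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
u = rng.uniform(size=5)            # random percentiles in (0, 1)
samples = stats.norm(5, 1).ppf(u)  # evaluate the quantile function at each
# round trip: the cdf of each sample recovers the percentile we drew
print(stats.norm(5, 1).cdf(samples))
```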

How is ppf evaluated? The CDF, which in general (and in fact most of the time1) has no explicit expression at all, is inverted by numerical equation solving, also known as root finding. For example, evaluating ppf(0.7) is equivalent to solving cdf(x)-0.7=0, which can be done with numerical methods. The simplest such method is the bisection algorithm, but more efficient ones have been developed (ppf uses Brent’s method). The interesting thing for the purposes of runtime is that the root finding algorithm must repeatedly call cdf in order to narrow in on the solution. Each call to cdf means an expensive integration of the PDF.
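The bisection version is easy to write down; a sketch using a standard normal cdf in place of an expensive numerically integrated one:

```python
from scipy import stats

def bisect_ppf(cdf, q, lo, hi, tol=1e-8):
    """Invert cdf by bisection; for a custom posterior, each cdf call is a quad integration."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if cdf(mid) < q:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

x = bisect_ppf(stats.norm.cdf, 0.7, -10.0, 10.0)
```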

The bisection algorithm to solve cdf(x)-0.7=0

An interesting corollary is that getting one random number is just as expensive as computing a chosen percentile of the distribution using ppf (assuming that drawing a random number between 0 and 1 takes negligible time). For approximately the cost of 10 random numbers, you could characterize the distribution by its deciles.

On the other hand, sampling from a distribution whose family is known (like the lognormal) is extremely fast with rvs. I’m getting 10,000 samples in a millisecond (prior.rvs(size=10000)). This is not because there exists an analytical expression for its inverse CDF, but because there are very efficient algorithms2 for sampling from these specific distributions3.

So far I have only spoken about 1-dimensional distributions. The difficulty of computing the normalization constant in multiple dimensions is often given as a reason for using numerical approximation methods like Markov chain Monte Carlo (MCMC). For example, here:

Although in low dimension [the normalization constant] can be computed without too much difficulties, it can become intractable in higher dimensions. In this last case, the exact computation of the posterior distribution is practically infeasible and some approximation techniques have to be used […]. Among the approaches that are the most used to overcome these difficulties we find Markov Chain Monte Carlo and Variational Inference methods.

However, the difficulty of sampling from a posterior distribution that isn’t in a familiar family could be a reason to use such techniques even in the one-dimensional case. This is true despite the fact that we can easily get an analytic expression for the PDF of the posterior.

For example, with the MCMC package emcee, I’m able to get 10,000 samples from the posterior in 8 seconds, less than a millisecond per sample and a 1,000x improvement over rvs!

ndim, nwalkers, nruns = 1, 20, 500

start = time.time()
def log_prob(x):
    if posterior.pdf(x) > 0:
        return math.log(posterior.pdf(x))
    return -np.inf
p0 = np.random.rand(nwalkers, ndim)  # starting samples
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_prob)
sampler.run_mcmc(p0, nruns)

These samples will only be drawn from a distribution approximating the posterior, whereas rvs is as precise as SciPy’s root-finding and integration algorithms. However, I think there are MCMC algorithms out there that converge well enough for this not to matter in practice.

Here’s the code for running the timings on your machine.

  1. “For a continuous distribution, however, we need to integrate the probability density function (PDF) of the distribution, which is impossible to do analytically for most distributions (including the normal distribution).” Wikipedia on Inverse transform sampling

  2. “For the normal distribution, the lack of an analytical expression for the corresponding quantile function means that other methods (e.g. the Box–Muller transform) may be preferred computationally. It is often the case that, even for simple distributions, the inverse transform sampling method can be improved on: see, for example, the ziggurat algorithm and rejection sampling. On the other hand, it is possible to approximate the quantile function of the normal distribution extremely accurately using moderate-degree polynomials, and in fact the method of doing this is fast enough that inversion sampling is now the default method for sampling from a normal distribution in the statistical package R.” Wikipedia on Inverse transform sampling

  3. The way it works in Python is that, in the definition of the class Lognormal (a subclass of the continuous random variable class), the generic inverse transform rvs method is overwritten with a more tailored sampling algorithm. SciPy will know to apply the more efficient method when rvs is called on an instance of class Lognormal. 

May 31, 2020

Hidden subsidies for cars

Personal vehicles are ubiquitous. They dominate cities. They are actually so entrenched that they can blend into the background, no longer rising to our attention. Having as many cars as we do can seem to be the ‘natural’ state of affairs.

Our level of car use could perhaps be called natural if it were the result of people’s preferences interacting in well-functioning markets. No reader of this blog, I take it, would believe such a claim. The negative externalities of cars are well-documented: pollution, congestion, noise, and so on.

The subsidies for cars are less obvious, but I think they’re also important.

In our relationship to cars in the urban environment, we’re almost like David Foster Wallace’s fish who asked ‘what the hell is water?’. I want to flip that perspective and point out some specific government policies that increase the number of cars in cities.

"Manhattan, 1964 by Evelyn Hofer"
“Manhattan, 1964” by Evelyn Hofer

Free or cheap street parking

Privately provided parking in highly desirable city centres can cost hundreds of dollars a month. But the government provides car storage on the side of the street for a fraction of that, often for free.1

The width of roads

Streets and sidewalks sit on large amounts of strategically placed land that is publicly owned. Most of that land is devoted to cars. On large thoroughfares, I’d guess cars take easily 70% of the space, leaving only thin slivers on each side for pedestrians.

This blogger estimates, apparently by eyeballing Google Maps, that streets take up 43% of the land in Washington DC, 25% in Paris, and 20% in Tokyo.

Space that is now used for parked cars or moving cars could be used, for example, by shops and restaurants, for bikeshare stations, to plant trees, for parklets, or even to add more housing. And if there was a market for this land I’m sure people would come up with many other clever uses.


Highways

Even if highways aren’t actually inside the city, they have important indirect effects on urban life. Whether the government pays for highways or train lines to connect cities to each other is a policy choice with clear effects on day-to-day life in the city, even for those who do not travel.

In the United States, this implicit subsidy for cars is large. According to the Department of Transportation, in 2018 $49 billion out of the department’s budget of $87 billion was spent on highways2.

In this post I don’t want to get into the very complicated question of how much governments should optimally spend on highways. For all I know the U.S. policy may be optimal. My point is only that any government spending on highways indirectly subsidises the presence of cars in cities. This is non-obvious and worth pointing out. When the government pays for a Metro in your city, the subsidy to Metros is plain to see. Meanwhile, the subsidy to cars via a huge network of roads across the country passes unnoticed by many.

To be fair, in the United States federal spending on highways is largely financed by taxes on vehicle fuel. So it’s not clear whether federal highway policy is a net subsidy to cars. However, the way highway spending is financed varies by country. For example, in Germany, “federal highways are funded by the federation through a combination of general revenue and receipts from tolls imposed on truck traffic”.

Minimum parking requirements

Many zoning codes require new buildings to include some fixed number of off-street parking spaces. This isn’t as much of a problem in the European cities I’m familiar with, but in the US, parking minimums are far beyond what the market would provide, and are a significant cost to developers. One paper estimated that the cost of parking in Los Angeles increases the cost of office space by 27-67%3.

Suburban sprawl

The United States built sprawling suburbs in the postwar period. I still remember the famous aerial view of Levittown, the prototypical prefabricated suburb, from my middle-school history book.

The growth of suburbia was aided by specific government policies that tipped the scales in favour of individual homes in the suburbs, and against apartments in cities. The growth of suburbia led to more cars in the city, because people who live in suburbs are much more likely to drive to work.

Devon Zuegel has an excellent exposition of how federal mortgage insurance subsidized suburbia4:

[The federal housing administration] provides insurance on mortgages that meet certain criteria, repaying the principal to lenders if borrowers default. […] Mortgages had to meet an opinionated set of criteria to qualify for the federal insurance. […] The ideal house had “sunshine, ventilation, scenic outlook, privacy, and safety”, and “effective landscaping and gardening” added to its worth. The guide recommended that houses should be set back at least 15 feet from the road, and well-tended lawns that matched the neighbors’ yards helped the rating. […] [The FHA manual] prescribed minimum street widths and other specific measurements.

The federal government was effectively prescribing how millions of Americans should live, down to their landscaping and gardening! I wonder if Khrushchev brought up this interesting fact about American life in his conversations with Eisenhower. ;)

Further reading

  • A study from the Canadian Victoria Transport Policy Institute, Transportation Land Valuation
  • Anything by Donald Shoup, an economist and urban planner
  • Some cool colour-coded maps of U.S. cities, showing the surface area devoted to surface parking, above-ground parking garages, and park space.
  • Barcelona’s superblocks
  1. If you want more on this topic, economist and urban planner Donald Shoup has a 733-page tome called The High Cost of Free Parking.

  2. See the supporting summary table on page 82 of this document. The sum of spending for the Federal Highway Administration, the Federal Motor Carrier Safety Administration, and the National Traffic Safety Administration comes to $49 billion. Thanks to Devin Jacob for the pointer. 

  3. Shoup 1999, The trouble with minimum parking requirements, in section 3.1, estimates that parking requirements in Los Angeles increase the cost of office space by 27% for aboveground parking, and 67% for underground parking. 

  4. Devon wrote a two-part series: Part 1, quoted above, deals with federal mortgage policy, and lays out a convincing case that it included large implicit subsidies. Part 2 is about “how suburban sprawl gets special treatment in our tax code”. It shows that owning and building homes is heavily subsidized, for example by the gargantuan mortgage interest deduction. I agree that this means people are encouraged to consume more housing, but I don’t see how it differentially encourages suburban housing. Devon quotes economist Edward Glaeser, who says that

    More than 85 percent of people in detached homes are owner-occupiers, in part because renting leads to home depreciation. More than 85 percent of people in larger buildings rent. Since ownership and structure type are closely connected, subsidizing homeownership encourages people to leave urban high-rises and move into suburban homes.

    So the key link in the argument is the connection between ownership and structure type. I’d like to see it spelled out and sourced better. Could the observed correlation just be due to a selection effect? If there’s a true causal effect, do large buildings have more renters because it’s genuinely more efficient that way, or is there some some market failure that prevents people from being apartment-owners in the city? 

December 6, 2019

Why scientific fraud is hard to catch

It’s nearly impossible to catch a scientific fraudster if they’re halfway competent.

Uri Simonsohn has become a minor nerd celeb by exposing fraudulent academic scientists who used fabricated data to get published. The Atlantic called him “the data vigilante”. I’ll describe two simple statistical techniques he has used – and why I’m pessimistic about the impact of such techniques.

If a parameter is measured with many significant digits, the last digit should be distributed uniformly over 0-9. In a study of an intervention to increase factory workers’ use of hand sanitizer, sanitizer use was measured with a scale sensitive to the 100th of a gram. But the data had an unusual prevalence of 6s, 7s and 9s in the last digit. Uri Simonsohn and colleagues conducted a chi-square test and rejected the hypothesis that the digits follow a uniform distribution, p=0.00000000000000001.1
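As a sketch of how such a test works — using synthetic data with made-up skew probabilities, not the actual study’s measurements:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Honest measurements: the last digit should be uniform over 0-9.
honest = rng.integers(0, 10, size=1000)

# Fabricated-style data: over-represent 6s, 7s and 9s
# (the probabilities here are invented for illustration).
p_fake = [0.05] * 6 + [0.20, 0.20, 0.05, 0.25]
fabricated = rng.choice(10, size=1000, p=p_fake)

# Chi-square goodness-of-fit test against the uniform distribution
# (scipy's default expected frequencies are uniform).
_, p_honest = stats.chisquare(np.bincount(honest, minlength=10))
_, p_fab = stats.chisquare(np.bincount(fabricated, minlength=10))

print(f"honest p-value:     {p_honest:.3g}")
print(f"fabricated p-value: {p_fab:.3g}")
```

With 1,000 observations, digits skewed this heavily produce an astronomically small p-value, while honest draws typically come nowhere near rejection.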

A second sign of fraudulent data is baseline means that are too similar between treatment groups. In one of the hand sanitizer studies, there were 40 participants, 20 in the control condition and 20 in the treatment condition. Simonsohn used a “bootstrapping” technique – randomly shuffling the 40 observations into two groups of 20, and repeating this millions of times – to estimate how often we would see such similar means if the data were truly drawn randomly (less than once in 100,000)2.
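The shuffling idea can be sketched like this. The group sizes match the study, but the numbers are invented, and I use 10,000 shuffles rather than millions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented baseline data: 20 control and 20 treatment observations whose
# means are suspiciously close - far closer than sampling noise would allow.
control = rng.normal(50.0, 10.0, size=20)
treatment = control + rng.normal(0.0, 0.01, size=20)

observed_gap = abs(control.mean() - treatment.mean())
pooled = np.concatenate([control, treatment])

# Re-shuffle the 40 observations into two random groups of 20 and count
# how often the group means end up at least this similar by chance.
n_reps = 10_000
hits = 0
for _ in range(n_reps):
    rng.shuffle(pooled)
    if abs(pooled[:20].mean() - pooled[20:].mean()) <= observed_gap:
        hits += 1

print(f"fraction of shuffles with means this similar: {hits / n_reps:.5f}")
```

When the observations have a standard deviation around 10, two random groups of 20 typically differ in mean by a few units, so a gap of a few thousandths almost never occurs by chance — which is exactly the red flag.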

There are other, more mathematically intense techniques for forensic data analysis3, but the common theme among them is to detect fraudsters creating suspiciously non-random data.

I want to tell these hand sanitizer people: come on, how hard can it be to use a random number generator? We know people are bad at producing randomness. In poker, it’s often optimal to play a mixed strategy, which requires randomising your play. But we have a strong natural tendency to play non-randomly, so poker players have developed ad hoc randomisation devices, like looking at your watch and playing call if you’re in the first half of the minute and fold if you’re in the second half. A similar incapacity to produce enough randomness seems to have befallen these amateurish scientific fakers. In order to produce data that violates the last-digit-uniformity law, you have to literally be writing the fake numbers by hand into a computer!

Savvier baddies would not shoot themselves in the foot in this way. It’s very easy to just draw some random numbers from a pre-specified distribution.

I can imagine that as you run more complex experiments, with multiple treatment arms and many potentially correlated parameters, it becomes difficult to create realistic fake data, even if you randomly draw it from a distribution. Some inconsistency could always escape your notice, and a sufficiently determined data sleuth might catch you.

But there’s a much easier solution: just run a legitimate experiment, and then add a constant of your choice to all observations in the treatment group. This data would look exactly like the real thing – the only lie would be that the “treatment” was you logging on to the computer in the middle of the night and changing the numbers. I can’t think of any way this misconduct could be detected statistically. And it has the additional benefit that you’re running an experiment, so people in your department won’t be wondering where you’re getting all that data from.

Statistical sleuthing is fun, but I suspect it’s powerless against the majority of fraud.

My broader hope is that we’ll see a rise in the norm of having multiple independent replications of a study. This single tide should wash away many of the problems with current science. If a study fails to replicate multiple times, the result will lose credibility – even if we never find out whether it was due to outright fraud or merely flawed science.

  1., Figure 2 

  2., Problem 4 

  3. see the “fake data” category of Simonsohn’s blog Data Colada, which by the way is excellent on many topics besides fraud. 

August 2, 2019

A shift in arguments for AI risk

Different arguments have been made for prioritising AI. In Superintelligence, we find a detailed argument with three features: (i) the alignment problem as the source of AI risk, (ii) the hypothesis that there will be a sharp, discontinuous jump in AI capabilities, and (iii) the resulting conclusion that an existential catastrophe is likely. Arguments that abandon some of these features have recently become prominent. Christiano and Grace drop the discontinuity hypothesis, but keep the focus on alignment. Even under more gradual scenarios, they argue, misaligned AI could cause human values to lose control of the future. Moreover, others have proposed AI risks that are unrelated to the alignment problem: for example, the risk that AI might be misused or could make war between great powers more likely. It would be beneficial to clarify which arguments actually motivate people who prioritise AI.1

Long summary

Many people now work on ensuring that advanced AI has beneficial consequences. But members of this community have made several quite different arguments for prioritising AI.

Early arguments, and in particular Superintelligence, identified the “alignment problem” as the key source of AI risk. In addition, the book relies on the hypothesis that superintelligent AI is likely to emerge through a discontinuous jump in the capabilities of an AI system, rather than through gradual progress. This premise is crucial to the argument that a single AI system could gain a “decisive strategic advantage”, that the alignment problem cannot be solved through trial and error, and that there is likely to be a “treacherous turn”. Hence, the discontinuity hypothesis underlies the book’s conclusion that existential catastrophe is a likely outcome.

The argument in Superintelligence combines three features: (i) a focus on the alignment problem, (ii) the discontinuity hypothesis, and (iii) the resulting conclusion that an existential catastrophe is likely.

Arguments that abandon some of these features have recently become prominent. They have also generally been made in less detail than the early arguments.

One line of argument, promoted by Paul Christiano and Katja Grace, drops the discontinuity hypothesis, but continues to view the alignment problem as the source of AI risk. Even under more gradual scenarios, they argue that, unless we solve the alignment problem before advanced AIs are widely deployed in the economy, these AIs will cause human values to eventually fade from prominence. They appear to be agnostic about whether these harms would warrant the label “existential risk”.

Moreover, others have proposed AI risks that are unrelated to the alignment problem. I discuss three of these: (i) the risk that AI might be misused, (ii) that it could make war between great powers more likely, and (iii) that it might lead to value erosion from competition. These arguments don’t crucially rely on a discontinuity, and the risks are rarely existential in scale.

It’s not always clear which of the arguments actually motivates members of the beneficial AI community. It would be useful to clarify which of these arguments (or yet other arguments) are crucial for which people. This could help with evaluating the strength of the case for prioritising AI, deciding which strategies to pursue within AI, and avoiding costly misunderstanding with sympathetic outsiders or sceptics.


  1. Long summary
  2. Early arguments: the alignment problem and discontinuity hypotheses
    1. Concerns about AI before Superintelligence
    2. Bostrom’s Superintelligence
      1. How a single AI system could obtain a decisive strategic advantage
      2. The impossibility of alignment by trial and error
      3. The treacherous turn
  3. The alignment problem without a discontinuity
    1. The basic picture
    2. The importance of competitive pressures
    3. Questions about this argument
  4. Arguments unrelated to the alignment problem
    1. Misuse risks
      1. The basic argument
        1. Questions about this argument
      2. Robust totalitarianism
        1. Questions about this argument
    2. Increased likelihood of great-power war
      1. Questions about this argument
    3. Value erosion from competition
      1. Questions about this argument
  5. People who prioritise AI risk should clarify which arguments are causing them to do so
    1. How crucial is the alignment problem?
    2. What is the attitude towards discontinuity hypotheses?
    3. Benefits of clarification
  6. Appendix: What I mean by “discontinuities”
    1. Discontinuities aren’t defined by absolute speed
    2. Discontinuities could happen before “human-level”

Early arguments: the alignment problem and discontinuity hypotheses

Concerns about AI before Superintelligence

Since the early days of the field of AI, people have expressed scattered concerns that AI might have a large-scale negative impact. In a 1959 lecture, Speculations on Perceptrons and other Automata, I.J. Good wrote that

whether [an intelligence explosion2] will lead to a Utopia or to the extermination of the human race will depend on how the problem is handled by the machines. The important thing will be to give them the aim of serving human beings.

Around the turn of the millennium, related concerns were being gestured at in Ray Kurzweil’s The Age of Spiritual Machines (1999) and in a popular essay by Bill Joy, Why the Future Doesn’t Need Us (2000). These concerns did not directly draw on I.J. Good’s concept of an intelligence explosion, but did suggest that progress in artificial intelligence could ultimately lead to human extinction. Joy’s essay emphasizes the idea that AI systems “would compete vigorously among themselves for matter, energy, and space,” suggesting this may cause their prices to rise “beyond human reach”, thereby causing biological humans to be “squeezed out of existence.”

As early as 1997, in How long before superintelligence?, Nick Bostrom highlighted the need to suitably “arrange the motivation systems of […] superintelligences”. In 2000, Eliezer Yudkowsky co-founded the Machine Intelligence Research Institute (MIRI), then named the Singularity Institute, with the goal of “sparking the Singularity” by creating a “transhuman AI.” From its inception, MIRI emphasized the importance of ensuring that advanced AI systems are “Friendly,” in the sense of being “beneficial to humans and humanity.” Over the following decade, MIRI’s aims shifted away from building the first superintelligent AI system and toward ensuring that the first such system – no matter who builds it – will be beneficial to humanity. In a series of essays, Yudkowsky produced the first extensive body of writing describing what is now known as the alignment problem: the problem of building powerful AI systems which reliably try to do what their operators want them to do. He argued that superintelligent AI is likely to come very suddenly, in a single event that leaves humans powerless; if we haven’t already solved the alignment problem by that time, the AI will cause an existential catastrophe.

In Facing the Intelligence Explosion (2013), Luke Muehlhauser, a former executive director of MIRI, gave a succinct account of this concern:

AI leads to intelligence explosion, and, because we don’t know how to give an AI benevolent goals, by default an intelligence explosion will optimize the world for accidentally disastrous ends. A controlled intelligence explosion, on the other hand, could optimize the world for good.

The intelligence explosion, where an AI rapidly and recursively self-improves to become superintelligent, features prominently in this picture. For this essay I find useful the broader notion of a discontinuity in AI capabilities. I’ll define a discontinuity as an improvement in the capabilities of powerful AI that happens much more quickly than what would be expected based on extrapolating past progress. (I further disambiguate this term in the appendix). An intelligence explosion is clearly sufficient, but isn’t necessary for there to be a discontinuity.

In Yudkowsky’s Artificial Intelligence as a Positive and Negative Factor in Global Risk (2008), he expands on the importance of discontinuities to his argument:

From the standpoint of existential risk, one of the most critical points about Artificial Intelligence is that an Artificial Intelligence might increase in intelligence extremely fast. […]

The possibility of sharp jumps in intelligence […] implies a higher standard for Friendly AI techniques. The technique cannot assume the programmers’ ability to monitor the AI against its will, rewrite the AI against its will, bring to bear the threat of superior military force; nor may the algorithm assume that the programmers control a “reward button” which a smarter AI could wrest from the programmers; et cetera.3

Bostrom’s Superintelligence

Superintelligence remains by far the most detailed treatment of the issue, and came to be viewed by many as the canonical statement of the case for prioritising AI. It retains some of the key features of the earlier writing by Bostrom, Yudkowsky, and Muehlhauser.

In particular, in the book we find:

  • the alignment problem as the key source of AI risk
  • discontinuities in AI trajectories as a premise4 for the argument that:
    • 1) a single AI system could gain a decisive strategic advantage5
    • 2) we cannot use trial and error to ensure that this AI is aligned
    • 3) the treacherous turn will make it much more difficult to react
  • the resulting conclusion that an existential catastrophe is likely

If a decisive strategic advantage were gained by an AI that is not aligned with human values, the result would likely be human extinction:

Taken together, these three points [decisive strategic advantage, the orthogonality thesis, and instrumental convergence] thus indicate that the first superintelligence may shape the future of Earth-originating life, could easily have non-anthropomorphic final goals, and would likely have instrumental reasons to pursue open-ended resource acquisition. If we now reflect that human beings consist of useful resources (such as conveniently located atoms) and that we depend for our survival and flourishing on many more local resources, we can see that the outcome could easily be one in which humanity quickly becomes extinct. (Chapter 8).

Let us now turn to the three ways in which the discontinuity hypothesis is a crucial premise in the argument.

How a single AI system could obtain a decisive strategic advantage

It is the discontinuity hypothesis that enables Bostrom to argue that a single AI system will gain a decisive strategic advantage over humans and other AI systems.

If there is no discontinuity, the AI frontrunner is unlikely to obtain far more powerful capabilities than its competitors. The first system that could be deemed superintelligent will emerge in a world populated by only slightly less powerful systems. On the other hand, if an AI system does make discontinuous progress, this progress would put it head and shoulders above the competition, and it could even gain a decisive strategic advantage.

Bostrom’s analysis of AI trajectories focuses on “takeoff”, the time between “human-level general intelligence” and “radical superintelligence”. A “fast takeoff” is one that occurs over minutes, hours, or days. Bostrom argues that “if and when a takeoff occurs, it will likely be explosive.”6

Notice that my definition of a discontinuity in AI capabilities does not exactly coincide with that of a “fast take-off”. This difference, which I explain in more detail in the appendix, is sometimes important. In Chapter 5, Bostrom writes that the frontrunner could “attain a decisive strategic advantage even if the takeoff is not fast”. However, he justifies this with reference to a scenario that involves a strong discontinuity7.

The impossibility of alignment by trial and error

The discontinuity removes the option of using trial and error to solve the alignment problem. The technical problem of aligning an AI with human interests remains regardless of the speed of AI development8. But if AI systems are developed more slowly, one might expect these problems to be solved by trial and error as the AI gains in capability and begins to cause real-world accidents. In a continuous scenario, AI remains at the same level of capability long enough for us to gain experience with deployed systems of that level, witness small accidents, and fix any misalignment. The slower the scenario, the easier it is to do this. In a moderately discontinuous scenario, there could be accidents that kill thousands of people. But it seems to me that a very strong discontinuity would be needed to get a single moment in which the AI causes an existential catastrophe.

The treacherous turn

A key concept in Bostrom’s argument is that of the treacherous turn:

The treacherous turn—While weak, an AI behaves cooperatively (increasingly so, as it gets smarter). When the AI gets sufficiently strong—without warning or provocation—it strikes, forms a singleton9, and begins directly to optimize the world according to the criteria implied by its final values.

The treacherous turn implies that:

  • the AI might gain a decisive strategic advantage without anyone noticing
  • the AI might hide the fact that it is misaligned

Bostrom explains that:

[A]n unfriendly AI of sufficient intelligence realizes that its unfriendly final goals will be best realized if it behaves in a friendly manner initially, so that it will be let out of the box. […] At some point, an unfriendly AI may become smart enough to realize that it is better off concealing some of its capability gains. It may underreport on its progress and deliberately flunk some of the harder tests, in order to avoid causing alarm before it has grown strong enough to attain a decisive strategic advantage. The programmers may try to guard against this possibility by secretly monitoring the AI’s source code and the internal workings of its mind; but a smart-enough AI would realize that it might be under surveillance and adjust its thinking accordingly.

In these scenarios, Bostrom is imagining an AI with the ability for very sophisticated deception. Crucially, the AI goes from being genuinely innocuous to being a cunning deceiver without passing through any intermediate steps: there are no small-scale accidents that could reveal the AI’s misaligned goals, nor does the AI ever make a botched attempt at deception that other actors can discover. This relies on the hypothesis of a very strong discontinuity in the AI’s abilities. The more continuous the scenario, the more experience people are likely to have with deployed systems of intermediate sophistication, and the lower the risk of a treacherous turn.

The alignment problem without a discontinuity

More recently, Paul Christiano and Katja Grace have argued that, even if there is no discontinuity, AI misalignment still poses a risk of negatively affecting the long-term trajectory10 of earth-originating intelligent life. According to this argument, once AIs do nearly all productive work, humans are likely to lose control of this trajectory to the AIs. Christiano and Grace argue that (i) solving the alignment problem and (ii) reducing competitive pressures to deploy AI would help ensure that human values continue to shape the future.

In terms of our three properties: Christiano and Grace drop the discontinuity hypothesis, but continue to view the alignment problem as the source of AI risk. It’s unclear whether the risks they have in mind would qualify as existential.

The arguments in this section and the next section (“arguments unrelated to the alignment problem”) have been made much more briefly than the early arguments. As a result, they leave a number of open questions which I’ll discuss for each argument in turn.

The basic picture

The argument appears to be essentially the following. When AIs become more capable than humans at economically useful tasks, they will be given increasingly more control over what happens. The goals programmed into AIs, rather than human values, will become the primary thing shaping the future. Once AIs make most of the decisions, it will become difficult to remove them or change the goals we have given them. So, unless we solve the alignment problem, we will lose (a large chunk of) the value of the future.

This story is most clearly articulated in the writings of Paul Christiano, a prominent member of the AI safety community who works in the safety team at OpenAI. In a 2014 blog post, Three Impacts of Machine Intelligence, he writes:

it becomes increasingly difficult for humans to directly control what happens in a world where nearly all productive work, including management, investment, and the design of new machines, is being done by machines. […] I think human management becomes increasingly implausible as the size of the world grows (imagine a minority of 7 billion humans trying to manage the equivalent of 7 trillion knowledge workers; then imagine 70 trillion), and as machines’ abilities to plan and decide outstrip humans’ by a widening margin. In this world, the AI’s that are left to do their own thing outnumber and outperform those which remain under close management of humans.

As a result, AI values, rather than human values, will become the primary thing shaping the future. The worry is that we might therefore get “a future where our descendants maximiz[e] some uninteresting values we happened to give them because they were easily specified and instrumentally useful at the time.”

In his interview on the 80,000 Hours podcast, Christiano explains that he sees two very natural categories of things that affect the long run trajectory of civilisation: extinction, which is sticky because we can never come back from it, and changes in the distribution of values among agents, which “can be sticky in the sense that if you create entities that are optimizing something, those entities can entrench themselves and be hard to remove”. The most likely way the distribution of values will change, according to him, is that as we develop AI, we’ll “pass the torch from humans, who want one set of things, to AI systems, that potentially want a different set of things.”

Katja Grace, the founder of AI Impacts, explicitly addresses the point about development trajectories (also on the 80,000 Hours podcast): “even if things happen very slowly, I expect the same problem to happen in the long run: AI being very powerful and not having human values.” She gives an example of this slow-moving scenario:

suppose you’re a company mining coal, and you make an AI that cares about mining coal. Maybe it knows enough about human values to not do anything terrible in the next ten years. But it’s a bunch of agents who are smarter than humans and better than humans in every way, and they just care a lot about mining coal. In the long run, the agents accrue resources and gain control over things, and make us move toward mining a lot of coal, and not doing anything that humans would have cared about.11

The importance of competitive pressures

There is likely to be a trade-off, when building an AI, between making it maximally competent at some instrumentally useful goal, and aligning it with human values.12

In the 80,000 Hours interview, Christiano said: “I think the competitive pressure to develop AI, in some sense, is the only reason there’s a problem”, because it takes away the option of slowing down AI development until we have a good solution to the alignment problem.

According to Christiano, there are therefore two ways to make a bad outcome less likely: coordinating to overcome the competitive pressure, or making technical progress to alleviate the trade-off.

Questions about this argument

This argument for prioritising AI has so far only been sketched out in a few podcast interviews and blog posts. It has also been made at a high level of abstraction, as opposed to relying on a concrete story of how things might go wrong. Some key steps in the argument have not yet been spelled out in detail. For example:

  • There is no detailed explanation yet of why misalignment at an early stage (e.g. of a coal-mining AI) couldn’t be reversed as the AI begins to do undesirable things. If AIs only gradually gain the upper hand on humanity, one might think there would be many opportunities to update the AIs’ values if they cease to be instrumentally useful.
  • In particular, competitive pressures explain why we would deploy AI faster than is prudent, but they don’t explain why relatively early misalignment should quickly become irreversible. If my AI system is accidentally messing up my country, and your AI system is accidentally messing up your country, we both still have strong incentives to figure out how to correct the problem in our own AI system.

Arguments unrelated to the alignment problem

Recently, people have given several new arguments for prioritising AI, including: (i) risks that AI might be misused by bad actors, (ii) that it might make great-power war more likely and (iii) value erosion from competition. These risks are unrelated to the alignment problem. Like those in the previous section, these new arguments have mostly been made briefly.

Misuse risks

The basic argument

The Open Philanthropy Project (OpenPhil) is a major funder in AI safety and governance. In OpenPhil’s main blog post on potential risks from advanced AI, their CEO Holden Karnofsky writes:

One of the main ways in which AI could be transformative is by enabling/accelerating the development of one or more enormously powerful technologies. In the wrong hands, this could make for an enormously powerful tool of authoritarians, terrorists, or other power-seeking individuals or institutions. I think the potential damage in such a scenario is nearly limitless (if transformative AI causes enough acceleration of a powerful enough technology), and could include long-lasting or even permanent effects on the world as a whole.13

Karnofsky’s argument (which does not crucially rely on discontinuities) seems to be the following:

  • AI will be a powerful tool
  • If AI will be a powerful tool, then AI presents severe bad-actor risks
  • The damage from bad-actor AI risks could be long-lasting or permanent

For a more detailed description of particular misuse risks, we might turn to the report titled The Malicious Use of Artificial Intelligence: Forecasting, Prevention, and Mitigation (2018). However, this report focuses on negative impacts that are below the level of a global catastrophic risk, for example: cyberattacks, adversarial examples and data poisoning, autonomous weapons, causing autonomous vehicles to crash, and similar.

Questions about this argument

  • Overall, the argument from the misuse risks discussed above seems to have only been briefly sketched out.
  • Karnofsky’s argument is very general, and doesn’t fully explain the focus on AI as opposed to other technologies
  • A similar argument to Karnofsky’s could be made for any potentially transformative technology (e.g. nanotechnology). Why focus on the misuse of AI? There are many potential reasons, for example:
    • AI is far more transformative than other technologies, and therefore far more dangerous in the wrong hands.
    • We are in a particularly good position to prevent misuse of AI, compared to misuse of other technologies.
  • The blog post does not say which such reasons are the crucial drivers of Karnofsky’s view that AI misuse risks are particularly deserving of attention.
  • The inference “If AI will be a powerful tool, then AI presents severe bad-actor risks” hasn’t been explained in detail.
    • A technology can be powerful without increasing bad-actor risks. Whether a given technology increases bad-actor risks seems to hinge on complicated questions about the relative efficacy of offensive vs. defensive applications, and the way in which capabilities will be distributed between different actors.
    • Even nuclear weapons have arguably decreased the risk of “bad actor” states initiating invasions or wars.
  • No-one has yet made a detailed case for why we should expect the risks discussed in this section to rise to the level of global catastrophic risks

Robust totalitarianism

One type of misuse risk that has been described in slightly more detail is that of totalitarian regimes using AI to entrench their power, possibly for the very long run. One of the four sources of catastrophic risk on the research agenda of the Center for the Governance of AI (GovAI) is “robust totalitarianism […] enabled by advanced lie detection, social manipulation, autonomous weapons, and ubiquitous physical sensors and digital footprints.” The research agenda states that “power and control could radically shift away from publics, towards elites and especially leaders, making democratic regimes vulnerable to totalitarian backsliding, capture, and consolidation.” The argument from totalitarianism does not crucially depend on discontinuity assumptions.14

According to this argument, AI technology has some specific properties, such that AI will shift the balance of power towards leaders, and facilitate totalitarian control.

Questions about this argument

  • No detailed case yet regarding the effects of AI on totalitarianism
    • It seems plausible that the technologies mentioned (“advanced lie detection, social manipulation, autonomous weapons, and ubiquitous physical sensors and digital footprints”) would be useful to totalitarians. But some applications of them surely push in the other direction. For example, lie detection could be applied to leaders to screen for people likely to abuse their power or turn away from democratic institutions.
    • In addition, it is conceivable that other AI-enabled technologies might push against totalitarianism.
    • As of yet, in the public literature, there has been no systematic examination of the overall effect of AI on the probability of totalitarianism.
  • Long-term significance has not been much argued for yet
    • Suppose that AI-facilitated totalitarianism is plausible. From a long-termist point of view, the important question is whether this state of affairs is both (i) relatively avoidable and (ii) stable for the very long term.15 Such points of leverage, where something could go one way or the other, but then “sticks” in a foreseeably good or bad way, are probably rare.
    • The only academic discussion of the topic I could find is Caplan 2008, “The Totalitarian Threat”. The article discusses risk factors for stable totalitarianism, including technological ones, but takes the view that improved surveillance technology is unlikely to make totalitarianism last longer.16

Increased likelihood of great-power war

The GovAI research agenda presents four sources of catastrophic risk from AI. One of these is the risk of “preventive, inadvertent, or unmanageable great-power (nuclear) war​.” The research agenda explains that:

Advanced AI could give rise to extreme first-strike advantages, power shifts, or novel destructive capabilities, each of which could tempt a great power to initiate a preventive war. Advanced AI could make crisis dynamics more complex and unpredictable, and enable faster escalation than humans could manage, increasing the risk of inadvertent war.17

Breaking this down, we have two risks, and for each risk, some reasons AI could heighten it:

  1. Preventive war
    1. First-strike advantages
    2. Power shifts
    3. Novel destructive capabilities
  2. Inadvertent war
    1. More complex and unpredictable crisis dynamics
    2. Faster escalation than humans can manage

This publication from the RAND Corporation summarises the conclusions from a series of workshops that brought together experts in AI and nuclear security to explore how AI might affect the risk of nuclear war by 2040. The authors discuss several illustrative cases, for example the possibility that AI might undermine second-strike capability by allowing better targeting and tracking of mobile missile launchers.18

Questions about this argument

  • Specificity to AI is still unclear
    • With the exception of point 2.2 (AIs enabling faster escalation than humans can manage), these arguments don’t seem very specific to AI.
    • Many technologies could lead to more complex crisis dynamics, or give rise to first-strike advantages, power shifts, or novel destructive capabilities.
    • It could still be legitimate to prioritise the AI-caused risks most highly. But it would require additional argument, which I haven’t seen made yet.
  • What is the long-termist significance of a great-power war?
    • Great-power nuclear war would lead to a nuclear winter, in which the burning of cities sends smoke into the upper atmosphere.
    • There is significant uncertainty about whether a nuclear winter would cause an existential catastrophe. My impression is that most people in the existential risk community believe that even if there were an all-out nuclear war, civilisation would eventually recover, but I haven’t carefully checked this claim.19
    • According to a blog post by Nick Beckstead, many long-termists believe that a catastrophic risk reduction strategy should be almost exclusively focused on reducing risks that would kill 100% of the world’s population, but Beckstead believes that sub-extinction catastrophic risks should also receive attention in a long-termist portfolio.
    • It has been suggested that great-power war could accelerate the development of new and potentially very dangerous technologies.
  • What are the practical implications of the argument? If great-power nuclear war were one of the main risks from AI, this might lead us to work directly on improving relations between great powers or reducing risks of nuclear war rather than prioritising AI.

Value erosion from competition

According to the GovAI research agenda, another source of catastrophic risk from AI is

systematic value erosion from competition, in which each actor repeatedly confronts a steep trade-off between pursuing their final values or pursuing the instrumental goal of adapting to the competition so as to have more power and wealth.

As stated, this is an extremely abstract concern. Loss of value due to competition rather than cooperation is ubiquitous, from geopolitics to advertising. Scott Alexander vividly describes the value that is destroyed in millions of suboptimal Nash equilibria throughout society.

Why might AI increase the risk of such value erosion to a catastrophic level?

In the publicly available literature, this risk has not been described in detail. But some works are suggestive of this kind of risk:

  • In The Age of Em, Robin Hanson speculates about a future in which AI is first achieved through emulations (“ems”) of human minds. He imagines this as a hyper-competitive economy in which, despite fantastic wealth from an economy that doubles every month or so, wages fall close to Malthusian levels and ems spend most of their existence working. However, they “need not suffer physical hunger, exhaustion, pain, sickness, grime, hard labor, or sudden unexpected death.” There is also a section in Superintelligence asking, “would maximally efficient work be fun?”
  • In Artificial Intelligence and Its Implications for Income Distribution and Unemployment (Section 6) Korinek and Stiglitz imagine an economy in which humans compete with much more productive AIs. AIs bid up the price of some scarce resource (such as land or energy) which is necessary to produce human consumption goods. Humans “lose the malthusian race” as growing numbers of them decide that given the prices they face, they prefer not to have offspring.20

Questions about this argument

This argument is highly abstract, and has not yet been written up in detail. I’m not sure I’ve given an accurate rendition of the intended argument. So far I see one key open question:

  • Collective action problems which we currently face typically erode some, but not all value. Why do we expect more of the value to be eroded once powerful AI is present?

People who prioritise AI risk should clarify which arguments are causing them to do so

How crucial is the alignment problem?

The early case for prioritising AI centered on the alignment problem. Now we are seeing arguments that focus on other features of AI; for example, AI’s possible facilitation of totalitarianism, or even just the fact that AI is likely to be a transformative technology. Different members of the broad beneficial AI community might view the alignment problem as more or less central.

What is the attitude towards discontinuity hypotheses?

For long-termists, I see three plausible attitudes21:

  • They prioritise AI because of arguments that rely on a discontinuity, and they think a discontinuous scenario is probable. The likelihood of a discontinuity is a genuine crux of their decision to prioritise AI.
  • They prioritise AI for reasons that do not rely on a discontinuity.
  • They prioritise AI because of the possibility of a discontinuity, but its likelihood is not a genuine crux, because they see no other plausible ways of affecting the long-term future.

Of course, these are three stylised attitudes. It’s likely that many people have an intermediate view that attaches some credence to each of these stories. Even if most people are somewhere in the middle, identifying these three extreme points on the spectrum can be a helpful starting point.

The third of these attitudes is really exclusive to long-termists. For more conventional ways of prioritising, there are many plausible contenders for the top priority, and the likelihood of a risk scenario should be crucial to the decision of whether to prioritise mitigating that risk. Non-long-termists could take either of the other two attitudes towards discontinuities.

Benefits of clarification

My view that people should clarify why they prioritise AI is mostly based on a heuristic that confusion is bad, and we should know why we make important decisions. I can also try to give some more specific reasons:

  • The motivating scenario should have strong implications about which activities to prioritise within AI. To take the most obvious example, technical work on the alignment problem is critical for the scenarios that center around misalignment, and unimportant otherwise. Preparing for a single important ‘deployment’ event only makes sense under discontinuous scenarios.22
  • Hopefully, the arguments that motivate people are better than the other arguments. So focusing on these should facilitate the process of evaluating the strength of the case for AI, and hence the optimal size of the investment in AI risk reduction.
  • Superintelligence remains the only highly detailed argument for prioritising AI. Other justifications have been brief or informal. Suppose we learned that one of the latter group of arguments is what actually motivates people. We would realise that the entire publicly available case for prioritising AI consists of a few blog posts and interviews.
  • Costly misunderstandings could be avoided, both with people who are sceptical of AI risk and with sympathetic people who are considering entering this space.
    • Many people are sceptical of AI risk. It may not currently be clear to everyone involved in the debate why some people prioritise AI risk. I would expect this to lead to unproductive or even conflictual conversations, which could be avoided with more clarification.
    • People who are considering entering this space might be confused by the diversity of arguments, and might be led to the wrong conclusion about whether their skills can be usefully applied.
  • If arguments which assume discontinuities are the true motivators, then the likelihood of discontinuities is plausibly a crux of the decision to prioritise AI. This would suggest that there is very high value of information in forecasting the likelihood of discontinuities.

Appendix: What I mean by “discontinuities”

By discontinuity I mean an improvement in the capabilities of powerful AI that happens much more quickly than what would be expected based on extrapolating past progress. This is obviously a matter of degree. In this document I apply the label “discontinuity” only to very large divergences from trend, roughly those that could plausibly lend themselves to a single party gaining a decisive strategic advantage.

If there is a discontinuity, then the first AI system to undergo this discontinuous progress will become much more capable than other parties. The sharper the discontinuity, the less likely it is that many different actors will experience the discontinuity at the same time and remain at comparable levels of capability.

Below I detail two ways in which this notion of discontinuity differs from Bostrom’s “fast take-off”.

Discontinuities aren’t defined by absolute speed

Bostrom defines a “fast take-off” as one that occurs over minutes, hours, or days.

The strategically relevant feature of the discontinuous scenarios is that a single AI system increases in capabilities much faster than other actors. (These actors could be other AIs, humans, or humans aided by AI tools). No actor can react quickly enough to ensure that the AI system is aligned; and no actor can prevent the AI system from gaining a decisive strategic advantage.

By defining a “fast take-off” with the absolute numerical values “minutes, hours, or days”, Bostrom is essentially making the prediction that such a “take-off” would indeed be fast in a strategically relevant sense. But this could turn out to be false. For example, Paul Christiano predicts that “in the worlds where AI radically increases the pace of technological progress […] everything is getting done by a complex ecology of interacting machines at unprecedented speed.”

The notion of discontinuities is about the shape of the “curve of AI progress” – specifically, how discontinuous or kinked it is – and is agnostic about absolute numerical values. In this way, I think it better tracks the strategically relevant feature.

Discontinuities could happen before “human-level”

Bostrom’s analysis of AI trajectories is focused on the “take-off” period, which he defines as the period of time that lies between the development of the first machine with “human-level general intelligence” and the development of the first machine that is “radically superintelligent”. There is little analysis of trajectories before “human-level general intelligence” is achieved.

One approach is to define a machine as having “human-level general intelligence” if it is at least as good as the average human at performing (or perhaps quickly learning) nearly any given cognitive task. But then it seems that many risky events could occur before human-level general intelligence. For example, one could imagine an AI system that is capable of running most of a country’s R&D efforts, but lacks the ability to engage in subtle forms of human interaction such as telling jokes.

The notion of discontinuity is not restricted in this way. A discontinuity could occur at any point during the development of powerful AI systems, even before “human-level”.

  1. This post was written in February 2019 while at the Governance of AI Programme, within the Future of Humanity Institute. I’m publishing it as it stood in February, since I’m starting a new job and anticipate I won’t have time to update it. I thank Markus Anderljung, Max Daniel, Jeffrey Ding, Eric Drexler, Carrick Flynn, Richard Ngo, Cullen O’Keefe, Stefan Schubert, Rohin Shah, Toby Shevlane, Matt van der Merwe and Remco Zwetsloot for help with previous versions of this document. Ben Garfinkel was especially generous with his time and many of the ideas in this document were originally his. 

  2. In an intelligence explosion, an AI rapidly and recursively self-improves to become superintelligent. 

  3. Yudkowsky does not explicitly say whether discontinuity hypotheses are a crux of his interest in AI risk. He merely remarks: “I tend to assume arbitrarily large potential jumps for intelligence because (a) this is the conservative assumption; (b) it discourages proposals based on building AI without really understanding it; and (c) large potential jumps strike me as probable-in-the-real-world.” In a 2016 Facebook post, reprinted by Bryan Caplan, Yudkowsky describes “rapid capability gain” as one of his three premises for viewing AI as a critical problem to be solved. If discontinuities imply “a higher standard for Friendly AI techniques”, this suggests that AI safety work would still be needed in more continuous scenarios, but would only need to meet a lower standard. But we are not told how low this standard would be, or whether it would still, in Yudkowsky’s view, justify prioritising AI. Regardless, Yudkowsky has not given any detailed argument for viewing AI as a catastrophic risk (let alone an existential one) if there are no discontinuities. 

  4. My claim is that discontinuities are a crucial premise for the conclusion that AI is likely to lead to an existential catastrophe. Strictly speaking, it’s incidental to my claim whether Bostrom assigns a high or low likelihood to a discontinuity. In fact, he says a discontinuity is “likely”. He also discusses multipolar scenarios that could result from more continuous trajectories (chapter 11), and some of these scenarios could arguably be sufficiently bad to warrant the label “existential risk” – but these scenarios are not the focus of the book and nor, in my view, did they seem to shape the priorities inspired by the book. 

  5. Defined by Bostrom as “a level of technological and other advantages sufficient to enable […] complete world domination”. 

  6. Bostrom gives detailed arguments for this claim in chapter 4, “the kinetics of an intelligence explosion”. I don’t discuss these arguments because they are incidental to this post. 

  7. Bostrom writes: “Consider the following medium takeoff scenario. Suppose it takes a project one year to increase its AI’s capability from the human baseline to a strong superintelligence, and that one project enters this takeoff phase with a six-month lead over the next most advanced project. The two projects will be undergoing a takeoff concurrently. It might seem, then, that neither project gets a decisive strategic advantage. But that need not be so. Suppose it takes nine months to advance from the human baseline to the crossover point, and another three months from there to strong superintelligence. The frontrunner then attains strong superintelligence three months before the following project even reaches the crossover point. This would give the leading project a decisive strategic advantage […]. Since there is an especially strong prospect of explosive growth just after the crossover point, when the strong positive feedback loop of optimization power kicks in, a scenario of this kind is a serious possibility, and it increases the chances that the leading project will attain a decisive strategic advantage even if the takeoff is not fast.” In this scenario, what enables the frontrunner to obtain a decisive strategic advantage is the existence of a crossover point just after which there is explosive growth. But that is precisely a discontinuity. 

  8. The paper Concrete Problems in AI Safety describes five sources of AI accidents. They stand on their own, separate from discontinuity considerations. 

  9. A singleton is “a world order in which there is at the global level a single decision-making agency”. 

  10. Here and in the rest of the document, I mean “long-term” in the sense of potentially many millions of years. Beckstead (2013), On the overwhelming importance of shaping the far future, articulated “long-termism”, the view that we should focus on the trajectory of civilisation over such very long time-scales. See here for a short introduction to long-termism. 

  11. This quote is lightly edited for clarity. 

  12. If we don’t know anything about alignment, the trade-off is maximally steep: we can either have unaligned AI or no AI. Technical progress on the alignment problem would partially alleviate the trade-off. In the limit of a perfect solution to the alignment problem, there would be no trade-off at all. 

  13. To be clear, in addition to misuse risks, OpenPhil is also interested in globally catastrophic accidents from AI. 

  14. Of course, AI trajectories might have some bearing on the argument. One might believe that civil society will be slow to push back against new AI-enabled totalitarian threats, while states and leaders will be quick to exploit AI for totalitarian purposes. If this is true, very fast AI development might slightly increase the risk of totalitarianism. 

  15. If it were the nearly unavoidable consequence of AI being developed, there would be no point trying to oppose it. If the totalitarian regime would eventually collapse (i.e. fail to be robust for the very long run), then, although an immeasurable tragedy from a normal perspective, its significance would be small from the long-termist point of view. 

  16. Caplan writes: “Orwell’s 1984 described how new technologies would advance the cause of totalitarianism. The most vivid was the “telescreen,” a two-way television set. Anyone watching the screen was automatically subject to observation by the Thought Police. Protagonist Winston Smith was only able to keep his diary of thought crimes because his telescreen was in an unusual position which allowed him to write without being spied upon. Improved surveillance technology like the telescreen would clearly make it easier to root out dissent, but is unlikely to make totalitarianism last longer. Even without telescreens, totalitarian regimes were extremely stable as long as their leaders remained committed totalitarians. Indeed, one of the main lessons of the post-Stalin era was that a nation can be kept in fear by jailing a few thousand dissidents per year.” 

  17. It’s worth noting that this set of risks is distinct from misuse risks. Misuse involves the intentional use of AI for bad purposes, whereas here, the argument is that AI might make war more likely, regardless of whether any party uses an AI system to directly harm an adversary. See this essay for an explanation of how some risks from AI arise neither from misuse nor from accidents. 

  18. Mobile missile launchers move regularly via road or rail. Many states use them because they are difficult to track and target, and therefore constitute a credible second-strike capability. The RAND publication states that “AI could make critical contributions to intelligence, surveillance, and reconnaissance (ISR) and analysis systems, upending these assumptions and making mobile missile launchers vulnerable to preemption.” 

  19. This post by Carl Schulman is relevant. 

  20. The details of the model are in Korinek (2017), a working paper called Humanity, Artificial Intelligence, and the Return of Malthus, which is not publicly available online. Here are slides from a talk about the working paper. 

  21. There are some other conceivable attitudes too. One could, for example, find a discontinuity probable, but still not focus on those scenarios, because one finds that we’re certainly doomed under such a scenario. 

  22. These are just some quick examples. I would be interested in a more systematic investigation of what chunks of the problem people should break off depending on what they believe the most important sources of risk are. 

May 25, 2019