Taking the voodoo out of multiple regression

January 10, 2018

Valerio Filoso (2013) writes:

Most econometrics textbooks limit themselves to providing the formula for the $\beta$ vector of the type

$$\hat\beta = (X'X)^{-1}X'y.$$

Although compact and easy to remember, this formulation is a sort of black box, since it hardly reveals anything about what really happens during the estimation of a multivariate OLS model. Furthermore, the link between the $\beta$s and the moments of the data distribution disappears, buried in the intricacies of matrix algebra. Luckily, an enlightening interpretation of the $\beta$s in the multivariate case exists and has relevant interpreting power. It was originally formulated more than seventy years ago by Frisch and Waugh (1933), revived by Lovell (1963), and recently brought to a new life by Angrist and Pischke (2009) under the catchy phrase regression anatomy. According to this result, given a model with $K$ independent variables, the coefficient $\beta_k$ for the $k$-th variable can be written as

$$\beta_k = \frac{\operatorname{Cov}(y_i, \tilde x_{ki})}{\operatorname{Var}(\tilde x_{ki})},$$

where $\tilde x_{ki}$ is the residual obtained by regressing $x_{ki}$ on all remaining $K - 1$ independent variables.

The result is striking since it establishes the possibility of breaking a multivariate model with $K$ independent variables into $K$ bivariate models, and it also sheds light on the machinery of multivariate OLS. This property of OLS does not depend on the underlying Data Generating Process or on its causal interpretation: it is a mechanical property of the estimator which holds because of the algebra behind it.

From $\beta_k = \operatorname{Cov}(y_i, \tilde x_{ki}) / \operatorname{Var}(\tilde x_{ki})$, it's easy to also show that

$$\beta_k = \frac{\operatorname{Cov}(\tilde y_i, \tilde x_{ki})}{\operatorname{Var}(\tilde x_{ki})},$$

where $\tilde y_i$ is the residual from regressing $y_i$ on the same $K - 1$ remaining independent variables.

I’ll stick to the first expression in what follows. (See Filoso sections 2-4 for a discussion of the two options. The second is the Frisch-Waugh-Lovell theorem, the first is what Angrist and Pischke call regression anatomy).

Multiple regression with more than one regressor (a constant and two or more variables) can feel a bit like voodoo at first. It is shrouded in phrases like “holding constant the effect of” and “controlling for”, which are veiled metaphors for the underlying mathematics. In particular, it's hard to see what “holding constant” has to do with minimising a loss function. A simple regression, on the other hand, has an appealingly intuitive 2D graphical representation, and its coefficients are ratios of familiar covariances.

This is why it's nice that you can break a model with $K$ variables into $K$ bivariate models involving the residuals $\tilde x_{ki}$. This is easiest to see in a model with $K = 2$: each $\tilde x_{ki}$ is the residual from a simple regression of one independent variable on the other. Hence a sequence of three simple regressions is sufficient to obtain the exact coefficients of the $K = 2$ regression (see figure 2 below, yellow boxes).
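Regression anatomy is easy to verify numerically. Here is a minimal sketch for the $K = 2$ case (the data-generating process and coefficient values are my own, invented for illustration): the residuals from simple regressions reproduce the multivariate OLS coefficients exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)            # regressors are correlated
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(size=n)

def simple_slope(v, w):
    """Slope of a simple regression of v on w (with constant): cov(v, w) / var(w)."""
    return np.cov(v, w)[0, 1] / np.var(w, ddof=1)

# Residualise each regressor on the other (two simple regressions).
x1_tilde = (x1 - x1.mean()) - simple_slope(x1, x2) * (x2 - x2.mean())
x2_tilde = (x2 - x2.mean()) - simple_slope(x2, x1) * (x1 - x1.mean())

# Regression anatomy: beta_k = cov(y, x~_k) / var(x~_k).
b1 = np.cov(y, x1_tilde)[0, 1] / np.var(x1_tilde, ddof=1)
b2 = np.cov(y, x2_tilde)[0, 1] / np.var(x2_tilde, ddof=1)

# Compare with the multivariate OLS fit.
X = np.column_stack([np.ones(n), x1, x2])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.allclose([b1, b2], beta[1:], atol=1e-6)
```

The agreement is exact (up to floating point), since the anatomy formula is an algebraic identity of the OLS estimator, not an approximation.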

Similarly, it's possible to arrive at the coefficients of a $K = 3$ regression by starting with only simple pairwise regressions of the original independent variables. I do this in figure 1. From these pairwise regressions (in black and grey¹), we work our way up to three regressions of one $x$-variable on the two others (orange boxes), by regressing each $x$-variable on the residuals obtained in the first step. We obtain expressions for each of the residuals $\tilde x_{ki}$. We regress $y$ on these (yellow box). Figure 1 also nicely shows that the number of pairwise regressions needed to compute multivariate regression coefficients grows with the square of $K$. According to this StackExchange answer, the total time complexity is $O(K^2 n)$, for $n$ observations.
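The figure-1 idea, building multivariate residuals out of nothing but simple pairwise regressions, can be sketched in code. This is my own recursive formulation (function names and the $K = 3$ data-generating process are invented for illustration): partial one variable out of everything with simple regressions, then recurse on the residuals.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + rng.normal(size=n)
x3 = -0.4 * x1 + 0.3 * x2 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 1.0 * x2 + 0.5 * x3 + rng.normal(size=n)

def simple_resid(v, w):
    """Residual of a simple regression of v on w (with constant)."""
    b = np.cov(v, w)[0, 1] / np.var(w, ddof=1)
    return (v - v.mean()) - b * (w - w.mean())

def resid_on(v, others):
    """Residual of v after partialling out `others`, built from simple
    regressions only, by repeated application of Frisch-Waugh-Lovell."""
    if len(others) == 1:
        return simple_resid(v, others[0])
    w, rest = others[0], others[1:]
    # Partial w out of v and out of the remaining regressors, then recurse.
    return resid_on(simple_resid(v, w), [simple_resid(r, w) for r in rest])

# Anatomy coefficient for x1, using only bivariate regressions throughout.
x1_tilde = resid_on(x1, [x2, x3])
b1 = np.cov(y, x1_tilde)[0, 1] / np.var(x1_tilde, ddof=1)

# It matches the coefficient from the full multivariate fit.
X = np.column_stack([np.ones(n), x1, x2, x3])
beta = np.linalg.lstsq(X, y, rcond=None)[0]
assert np.isclose(b1, beta[1], atol=1e-6)
```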

Figure 1:

Judd et al. (2017) have a nice detailed walk-through of the $K = 2$ case, pp. 107-116. Unfortunately, they use the more complicated Frisch-Waugh-Lovell method of regressing residuals on residuals. I show this method here (in green) and the method we've been using (in yellow), for $K = 2$. As you can see, the former method needs two superfluous base-level regressions (in dark blue). For this reason, that method quickly becomes intractable as $K$ grows. But the two methods are equivalent, hence I use the same coefficients in the yellow and green boxes.
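The equivalence of the two methods is quick to check numerically. A sketch (my own made-up data, $K = 2$): regressing $y$ on $\tilde x_1$ (anatomy, yellow) and regressing $\tilde y$ on $\tilde x_1$ (Frisch-Waugh-Lovell, green) give the same coefficient, because the part of $y$ removed by residualising is orthogonal to $\tilde x_1$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5_000
x1 = rng.normal(size=n)
x2 = 0.7 * x1 + rng.normal(size=n)
y = 1.0 + 2.0 * x1 - 3.0 * x2 + rng.normal(size=n)

def resid(v, w):
    """Residual of v regressed on w plus a constant."""
    W = np.column_stack([np.ones(len(v)), w])
    return v - W @ np.linalg.lstsq(W, v, rcond=None)[0]

x1_tilde = resid(x1, x2)   # both methods residualise x1 on x2
y_tilde = resid(y, x2)     # FWL additionally residualises y on x2

anatomy = np.cov(y, x1_tilde)[0, 1] / np.var(x1_tilde, ddof=1)       # y on x~_1
fwl = np.cov(y_tilde, x1_tilde)[0, 1] / np.var(x1_tilde, ddof=1)     # y~ on x~_1
assert np.isclose(anatomy, fwl, atol=1e-8)
```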

Figure 2:

I made this in PowerPoint, not knowing how to do it better. Here is the file.

  1. The grey ones are redundant and included for ease of notation. 

Diagrams of linear regression

January 10, 2018

I made a big diagram describing some assumptions (MLR1-6) that are used in linear regression. In my diagram, there are categories (in rectangles with dotted lines) of mathematical facts that follow from different subsets of MLR1-6. References in brackets are to Hayashi (2000).


A couple of comments about the diagram are in order.

  • $y$ and $u$ are vectors of random variables. $X$ may contain numbers or random variables. $\beta$ is a vector of numbers.
  • We measure: realisations of $y$ and (realisations of) $X$. We do not measure: $\beta$, $u$. We have one equation, $y = X\beta + u$, and two unknowns: we need additional assumptions on $u$.
  • We make a set of assumptions (MLR1-6) about the joint distribution of $(y, X, u)$. These assumptions imply some theorems relating the distribution of the estimator $\hat\beta$ to the true parameter $\beta$.
  • In the diagram, I stick to the brute mathematics, which is entirely independent of its (causal) interpretation.1
  • Note the difference between MLR4 and MLR4’. The point of using the stronger MLR4 is that, in some cases, provided MLR4 holds, MLR2 is not needed. To prove unbiasedness, we don’t need MLR2. For finite sample inference, we also don’t need MLR2. But whenever the law of large numbers is involved, we do need MLR2 as a standalone condition. Note also that, since MLR2 and MLR4’ together imply MLR4, clearly MLR2 and MLR4 are never both needed. But I follow standard practice (e.g. Hayashi) in including them both, for example in the asymptotic inference theorems.
  • Note that since $X'X$ is a symmetric square matrix, $X'X$ has full rank iff $X'X$ is positive definite; these are equivalent statements (see Wooldridge 2010, p. 57). Furthermore, if $X$ has full rank $K$, then $X'X$ has full rank $K$, so MLR3* is equivalent to MLR3 plus the fact that $\operatorname{plim}_{n\to\infty} \frac{1}{n}X'X$ is finite (i.e. actually converges).
  • Note that, given MLR2 and the law of large numbers, $\operatorname{plim}_{n\to\infty} \frac{1}{n}X'X$ could alternatively be written $\operatorname{E}[x_i x_i']$.
  • Note that whenever I write a $\operatorname{plim}$ and set it equal to some matrix, I am assuming the matrix is finite. Some treatments will explicitly say the matrix is finite, but I omit this.
  • Note that, by the magic of matrix inversion, the single expression $(X'X)^{-1}X'y$ partials every regressor out of every other one.²
  • Note that these expressions are equal: $(X'X)^{-1}X'y = \left(\frac{1}{n}\sum_i x_i x_i'\right)^{-1}\left(\frac{1}{n}\sum_i x_i y_i\right)$. Seeing this helps with intuition.
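Two of the facts above can be checked with a small simulation (a sketch; the data-generating process, sample size, and repetition count are my own choices, not from the diagram): under MLR1-4, the Monte Carlo mean of $\hat\beta$ sits on top of $\beta$ (unbiasedness), and a full-rank $X$ yields a positive-definite $X'X$.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 200, 2_000
beta = np.array([1.0, 2.0, -0.5])   # true coefficients

estimates = np.empty((reps, 3))
for r in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
    u = rng.normal(size=n)          # E[u | X] = 0, i.e. MLR4
    y = X @ beta + u                # MLR1: linear model
    estimates[r] = np.linalg.lstsq(X, y, rcond=None)[0]

# Unbiasedness: the Monte Carlo mean of beta-hat is close to beta.
print(estimates.mean(axis=0))

# Full-rank X  =>  X'X is symmetric positive definite (all eigenvalues > 0).
eigvals = np.linalg.eigvalsh(X.T @ X)
assert (eigvals > 0).all()
```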

The second diagram gives the asymptotic distribution of the IV and 2SLS estimators.3
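As a numerical companion to the IV diagram, here is a sketch with a made-up data-generating process (none of these numbers come from the diagram itself): when a regressor is correlated with the error, OLS is inconsistent, while the IV estimator $(Z'X)^{-1}Z'y$ recovers the true coefficient.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100_000
z = rng.normal(size=n)              # instrument: relevant and exogenous
e = rng.normal(size=n)
x = 0.8 * z + e                     # endogenous regressor: shares e with u
u = 0.9 * e + rng.normal(size=n)    # error correlated with x
y = 1.0 + 2.0 * x + u

X = np.column_stack([np.ones(n), x])
Z = np.column_stack([np.ones(n), z])

beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_iv = np.linalg.solve(Z.T @ X, Z.T @ y)   # (Z'X)^{-1} Z'y

print(beta_ols[1], beta_iv[1])   # OLS slope is biased upward; IV slope is ~2
```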


I made this in PowerPoint, not knowing how to do it better. Here is the file.

  1. But of course what really matters is the causal interpretation.

    As Pearl (2009) writes, “behind every causal claim there must lie some causal assumption that is not discernible from the joint distribution and, hence, not testable in observational studies”. If we wish to interpret $\beta$ (and hence $u$) causally, we must interpret MLR4 causally; it becomes a (strong) causal assumption.

    As far as I can tell, when econometricians give a causal interpretation it is typically done thus (they are rarely explicit about it):

    • MLR1 holds in every possible world (alternatively: it specifies not just actual, but all potential outcomes); hence $u$ is unobservable even in principle.
    • yet we make assumption MLR4 about $u$.

    This talk of the distribution of a fundamentally unobservable “variable” is a confusing device. Pearl’s method is more explicit: replace MLR1 with the causal graph below, where $:=$ is used to make it extra clear that the causation only runs one way. MLR1 corresponds to the expression for $y$ (and, redundantly, the two arrows towards $y$); MLR4 corresponds to the absence of arrows connecting $u$ and $X$. We thus avoid “hiding causal assumptions under the guise of latent variables” (Pearl). (Because of the confusing device, econometricians, to put it kindly, don’t always sharply distinguish the mathematics of the diagram from its (causal) interpretation. To see me rant about this, see here.)


  2. Think about it! This seems intuitive when you don’t think about it, mysterious when you think about it a little, and presumably becomes obvious again if you really understand matrix algebra. I haven’t reached the third stage. 

  3. For IV, it’s even clearer that the only reason to care is the causal interpretation. But I follow good econometrics practice and make only mathematical claims. 

The expected value of the long-term future, and existential risk

December 28, 2017

I wrote an article describing a simple model of the long-term future. Here it is:


A number of ambitious arguments have recently been proposed about the moral importance of the long-term future of humanity, on the scale of millions and billions of years. Several people have advanced arguments for a cluster of related views. Authors have variously claimed that shaping the trajectory along which our descendants develop over the very long run (Beckstead, 2013), or reducing extinction risk, or minimising existential risk (Bostrom, 2002), or reducing risks of severe suffering in the long-term future (Althaus and Gloor, 2016) are of huge or overwhelming importance. In this paper, I develop a simple model of the value of the long-term future, from a totalist, consequentialist, and welfarist (but not necessarily utilitarian) point of view. I show how the various claims can be expressed within the model, clarifying under which conditions the long-term becomes overwhelmingly important, and drawing tentative policy implications.

Relationships between the axiomatic systems of modal propositional logic

December 26, 2017

I made a diagram of this, based on Sider’s Logic for Philosophy. An orange arrow from system S to system S’ means that anything provable (and hence valid) in S is provable (and valid) in S’. I don’t add labels to the orange arrows since their meanings are clear. A green arrow from one axiom schema to another says that the second schema is provable from the first in a particular system, which I label.