How much of the fall in fertility could be explained by lower mortality?

[Figure: Our World in Data scatter plot of fertility rate vs. infant survival rate.]

Many people think that lower child mortality causes fertility to decline.

One prominent theory for this relationship, as described by Our World in Data1, is that “infant survival reduces the parents’ demand for children”2. (Infants are children under 1 year old.)

In this article, I want to look at how we can precisify that theory, and what magnitude the effect could possibly take. What fraction of the decline in birth rates could the theory explain?

Important. I don’t want to make claims here about how parents actually make fertility choices. I only want to examine the implications of various models, and specifically how much of the observed changes in fertility the models could explain.

Constant number of children

One natural interpretation of “increasing infant survival reduces the parents’ demand for children” is that parents are adjusting the number of births to keep the number of surviving children constant.

Looking at Our World in Data’s graph, we can see that in most of the countries depicted, the infant survival rate went from about 80% to essentially 100%, a factor of 1.25. Meanwhile, there were only 1/3 as many births. If parents were adjusting the number of births to keep the number of surviving children constant, the rise in infant survival would explain a fall in births by a factor of 1/1.25 = 0.8, a -0.2 change that is only 30% of the observed -2/3 change in births.

The basic mathematical reason this happens is that even when mortality is tragically high, the survival rate is still thankfully much closer to 1 than to 0, so even a very large proportional fall in mortality will only amount to a small proportional increase in survival.

Some children survive infancy but die later in childhood. Although Our World in Data’s quote focuses on infant mortality, it makes sense to consider older children too. I’ll look at under-5 mortality, which generally has better data than older age groups, and also captures a large fraction of all child mortality3.

England (1861-1951)

England is a country with an early demographic transition and good data available.

Doepke 2005 quotes the following numbers:

                       1861    1951
Infant mortality        16%      3%
1-5 yo mortality        13%    0.5%
0-5 yo mortality        27%    3.5%
Survival to 5 years     73%   96.5%
Fertility               4.9     2.1

Fertility fell by 57%, while survival to 5 years rose by 32%. Hence, if parents aim to keep the number of surviving children constant, the change in child survival can explain 43%4 of the actual fall in fertility. (It would have explained only 23% had we erroneously considered only the change in infant survival.)

Sub-Saharan Africa (1990-2017)

If we look now at sub-Saharan Africa data from the World Bank, the 1990-2017 change in fertility is from 6.3 to 4.8, a 25% decrease, whereas the 5-year survival rate went from 0.82 to 0.92, a 12% increase. So the fraction of the actual change in fertility that could be explained by the survival rate is 44%. (This would have been 23% had we looked only at infant survival).

Source data and calculations.
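
As a quick check on the arithmetic in the last few sections, here is a minimal Python sketch. The helper function is mine, and the inputs are the rounded figures quoted above, so the last result comes out slightly above the 44% mentioned.

# Share of the observed fertility decline that would be explained if parents
# scaled births to keep the number of surviving children constant.
def fraction_explained(fertility_before, fertility_after, survival_before, survival_after):
    actual_change = fertility_after / fertility_before - 1   # e.g. -0.57 for England
    implied_change = survival_before / survival_after - 1    # births scale by s_before / s_after
    return implied_change / actual_change

# Our World in Data graph: births fell to ~1/3, infant survival ~0.80 -> ~1.00
print(fraction_explained(3, 1, 0.80, 1.00))       # ~0.30
# England 1861-1951, survival to age 5 (Doepke 2005)
print(fraction_explained(4.9, 2.1, 0.73, 0.965))  # ~0.43
# England, infant survival only (0.84 -> 0.97)
print(fraction_explained(4.9, 2.1, 0.84, 0.97))   # ~0.23
# Sub-Saharan Africa 1990-2017, under-5 survival (World Bank)
print(fraction_explained(6.3, 4.8, 0.82, 0.92))   # ~0.46 with these rounded inputs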

So far, we have seen that this very simple theory of parental decision-making can explain 30-44% of the decline in fertility, while also noting that considering childhood mortality beyond infancy was important for giving the theory its full due.

However, in more sophisticated models of fertility choices, the theory looks worse.

A more sophisticated model of fertility decisions

Let us imagine that instead of holding it constant, parents treat the number of surviving children as one good among many in an optimization problem.

An increase in the child survival rate can be seen as a decrease in the cost of surviving children. Parents will then substitute away from other goods and increase their target number of surviving children. If your child is less likely to die as an infant, you may decide to aim to have more children: the risk of experiencing the loss of a child is lower.5

For a more formal analysis, we can turn to the Barro and Becker (1989) model of fertility. I’ll be giving a simplified version of the presentation in Doepke (2005).

In this model, parents care about their own consumption as well as their number of surviving children. The parents maximise6

\[U(c,n) = u(c) + n^\epsilon V\]

where

  • \(n\) is the number of surviving children and \(V\) is the value of a surviving child
  • \(\epsilon\) is a constant \(\in (0,1)\)
  • \(u(c)\) is the part of utility that depends on consumption7

The income of a parent is \(w\), and there is a cost per birth of \(p\) and an additional cost of \(q\) per surviving child8. The parents choose \(b\), the number of births. \(s\) is the probability of survival of a child, so that \(n=sb\).

Consumption is therefore \(c=w-(p+qs)b\) and the problem becomes \(\max_{b} U = u(w-(p+qs)b) + (sb)^\epsilon V\)

Letting \(b^{*}(s)\) denote the optimal number of births as a function of \(s\), what are its properties?

The simplest one is that \(sb^*(s)\), the number of surviving children, is increasing in \(s\). This is the substitution effect we described intuitively earlier in this section. This means that if \(s\) is multiplied by a factor \(x\) (say 1.25), \(b^*(s)\) will be multiplied more than \(1/x\) (more than 0.8).

When we looked at the simplest model, with a constant number of children, we guessed that it could explain 30-44% of the fall in fertility. That number is a strict upper bound on what the current model could explain.

What we really want to know, to answer the original question, is how \(b^*(s)\) itself depends on \(s\). To do this, we need to get a little bit more into the relative magnitude of the cost per birth \(p\) and the additional cost \(q\) per surviving child. As Doepke writes,

If a major fraction of the total cost of children accrues for every birth, fertility [i.e. \(b^*(s)\)] would tend to increase with the survival probability; the opposite holds if children are expensive only after surviving infancy9.

This tells us that falling mortality could actually cause fertility to increase rather than decrease.10
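
To see how this can go either way, here is a minimal numerical sketch of the maximisation problem above. This is not Doepke’s calibration: the parameter values and the crude grid search are arbitrary assumptions, chosen only to illustrate the role of the split between \(p\) and \(q\).

import numpy as np

# Grid search for b*(s) in max_b u(w - (p + q*s)*b) + (s*b)**eps * V,
# with u(c) = c**(1 - sigma) / (1 - sigma). All parameter values here are
# illustrative assumptions, not a calibration.
def optimal_births(s, p, q, w=1.0, V=1.0, eps=0.5, sigma=0.8):
    b = np.linspace(0.01, w / (p + q * s) - 0.01, 10_000)  # feasible numbers of births
    c = w - (p + q * s) * b                                # implied consumption
    utility = c**(1 - sigma) / (1 - sigma) + (s * b)**eps * V
    return b[np.argmax(utility)]

for p, q, label in [(0.18, 0.02, "cost mostly per birth"),
                    (0.02, 0.18, "cost mostly per surviving child")]:
    print(label, optimal_births(0.73, p, q), optimal_births(0.965, p, q))

Under these particular assumptions, the optimal number of births rises with \(s\) when the cost falls mostly on each birth, and falls with \(s\) when the cost falls mostly on each surviving child, in line with the quote above.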

To go further, we need to plug in actual values for the model parameters. Doepke does this, using numbers that reflect the child mortality situation of England in 1861 and 1951, but also what seem to be some pretty arbitrary assumptions about the parent’s preferences (the shape of \(u\) and the value of \(\epsilon\)).

With these assumptions, he finds that “the total fertility rate falls from 5.0 (the calibrated target) to 4.2 when mortality rates are lowered to the 1951 level”11, a 16% decrease. This is 28% of the actually observed fall in fertility to 2.1.

Extensions of Barro-Becker model

The paper then considers various extensions of the basic Barro-Becker model to see if they could explain the large decrease in fertility that we observe.

For example, it has been hypothesized that when there is uncertainty about whether a child will survive (hitherto absent from the models), parents want to avoid the possibility of ending up with zero surviving children. They therefore have many children as a precautionary measure. Declining mortality (which reduces uncertainty since survival rates are thankfully greater than 0.5) would have a strong negative impact on births.

However, Doepke also considers a third model, which incorporates not only stochastic mortality but also sequential fertility choice, where parents may condition their fertility decisions on the observed survival of children born previously. The sequential aspect reduces the uncertainty that parents face over the number of surviving children they will end up with.

The stochastic and sequential models make no clear-cut predictions based on theory alone. Using the England numbers, however, Doepke finds a robust conclusion. In the stochastic+sequential model, for almost all reasonable parameter values, the expected number of surviving children still increases with \(s\) (my emphasis):

To illustrate this point, let us consider the extreme case [where] utility from consumption is close to linear, while risk aversion with regards to the number of surviving children is high. … [W]hen we move (with the same parameters) to the more realistic sequential model, where parents can replace children who die early, … despite the high risk aversion with regards to the number of children, total fertility drops only to 4.0, and net fertility rises to 3.9, just as with the benchmark parameters. … Thus, in the sequential setup the conclusion that mortality decline raises net fertility is robust to different preference specifications, even if we deliberately emphasize the precautionary motive for hoarding children.

So even here, the fall in mortality would only explain 35% of the actually observed change in fertility. It seems that the ability to “replace” children who did not survive in the sequential model is enough to make its predictions pretty similar to the simple Barro-Becker model.

  1. The quote in context on Our World in Data’s child mortality page: “the causal link between infant [<1 year old] survival and fertility is established in both directions: Firstly, increasing infant survival reduces the parents’ demand for children. And secondly, a decreasing fertility allows the parents to devote more attention and resources to their children.” 

  2. As an aside, my impression is that if you asked an average educated person “Why do women in developing countries have more children?”, their first idea would be: “because child mortality is higher”. It’s almost a trope, and I feel that it’s often mentioned pretty glibly, without actually thinking about the decisions and trade-offs faced by the people concerned. That’s just an aside though – the theory clearly has prima facie plausibility, and is also cited in serious places like academia and Our World in Data. It deserves closer examination. 

  3. It should be possible to conduct the Africa analysis for different ages using IHME’s more granular data, but it’s a bit more work. (There appears to be no direct data on deaths per birth as opposed to per capita, and data on fertility is contained in a different dataset from the main Global Burden of Disease data.) 

  4. All things decay. Should this Google Sheets spreadsheet become inaccessible, you can download this .xlsx copy which is stored together with this blog. 

  5. In this light, we can see that the constant model is not really compatible with parents viewing additional surviving children as a (normal) good. Nor of course is it compatible with viewing children as a bad, for then parents would choose to have 0 children. Instead, it could for example be used to represent parents aiming for a socially normative number of surviving children. 

  6. I collapse Doepke’s \(\beta\) and \(V\) into a single constant \(V\), since they can be treated as such in Model A, the only model that I will present mathematically in this post. 

  7. Its actual expression, that I omit from the main presentation for simplicity, is \(u(c)=\frac{c^{1-\sigma}}{1-\sigma}\), the constant relative risk-aversion utility function. 

  8. There is nothing in the model that compels us to call \(p\) the “cost per birth”, this is merely for ease of exposition. The model itself only assumes that there are two periods for each child: in the first period, costing \(p\) to start, children face a mortality risk; and in the second period, those who survived the first face zero mortality risk and cost \(q\). 

  9. Once again, Doepke calls the model’s early period “infancy”, but this is not inherent in the model. 

  10. It’s difficult to speculate about the relative magnitude of \(p\) and \(q\), especially if, departing from Doepke, we make the early period of the model, say, the first 5 years of life. If the first period is only infancy, it seems plausible to me that \(q \gg p\), but then we also fail to capture any deaths after infancy. On the other hand, extending the early period to 5 incorrectly assumes that parents get no utility from children before they reach the age of 5. 

  11. The following additional context may be helpful to understand this quote:

    The survival parameters are chosen to correspond to the situation in England in 1861. According to Preston et al. (1972) the infant mortality rate (death rate until first birthday) was \(16\%\), while the child mortality rate (death rate between first and fifth birthday) was \(13\%\). Accordingly, I set \(s_{i}=0.84\) and \(s_{y}=0.87\) in the sequential model, and \(s=s_{i} s_{y}=0.73\) in the other models. Finally, the altruism factor \(\beta\) is set in each model to match the total fertility rate, which was \(4.9\) in 1861 (Chesnais 1992). Since fertility choice is discrete in Models B and C, I chose a total fertility rate of \(5.0\) as the target.

    Each model is thus calibrated to reproduce the relationship of fertility and infant and child mortality in 1861. I now examine how fertility adjusts when mortality rates fall to the level observed in 1951, which is \(3\%\) for infant mortality and \(0.5\%\) for child mortality. The results for fertility can be compared to the observed total fertility rate of \(2.1\) in 1951.

    In Model A (Barro-Becker with continuous fertility choice), the total fertility rate falls from \(5.0\) (the calibrated target) to \(4.2\) when mortality rates are lowered to the 1951 level. The expected number of surviving children increases from \(3.7\) to \(4.0\). Thus, there is a small decline in total fertility, but (as was to be expected given Proposition 1) an increase in the net fertility rate.

August 5, 2021

The special case of the normal likelihood function

Summary1: The likelihood function implied by an estimate \(b\) with standard deviation \(\sigma\) is the probability density function (PDF) of a \(\mathcal{N}(b,\sigma^2)\). Though this might sound intuitive, it’s actually a special case. If we don’t firmly grasp that it’s an exception, it can be confusing. In general, the likelihood function is not equal to any PDF.

Suppose that a study has the point estimator \(B\) for the parameter \(\Theta\). The study results are an estimate \(B=b\) (typically a regression coefficient), and an estimated standard deviation2 \(\hat{sd}(B)=s\).

To know how to combine this information with a prior over \(\Theta\) and update our beliefs, we need to know the likelihood function implied by the study. The likelihood function is the probability of observing the study data \(B=b\) given different values of \(\Theta\). It is formed from the probability of the observation \(B=b\) conditional on \(\Theta=\theta\), but viewed and used as a function of \(\theta\) only3:

\[\mathcal{L}: \theta \mapsto P(B =b \mid \Theta = \theta)\]

The event “\(B=b\)” is often shortened to just “\(b\)” when the meaning is clear from context, so that the function can be more briefly written \(\mathcal{L}: \theta \mapsto P(b \mid \theta)\).

So, what is \(\mathcal{L}\)? In a typical regression context, \(B\) is assumed to be approximately normally distributed around \(\Theta\), due to the central limit theorem. More precisely, \(\frac{B - \Theta}{sd(B)} \sim \mathcal{N}(0,1)\), and equivalently \(B\sim \mathcal{N}(\Theta,sd(B)^2)\).

\(sd(B)\) is seldom known, and is often replaced with its estimate \(s\), allowing us to write \(B\sim \mathcal{N}(\Theta,s^2)\), where only the parameter \(\Theta\) is unknown4.

We can plug this into the definition of the likelihood function:

\[\mathcal{L}: \theta \mapsto P(b\mid \theta)= \text{PDF}_{\mathcal{N}(\theta,s^2)}(b) = {\frac {1}{s\sqrt {2\pi }}}\exp \left(-{\frac {1}{2}}\left({\frac {b-\theta }{s }}\right)^{2} \right)\]

We could just leave it at that. \(\mathcal{L}\) is the function5 above, and that’s all we need to compute the posterior. But a slightly different expression for \(\mathcal{L}\) is possible. After factoring out the square,

\[\mathcal{L}: \theta \mapsto {\frac {1}{s {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2}} {\frac {(b-\theta)^2 }{s^2 }} \right),\]

we make use of the fact that \((b-\theta)^2 = (\theta-b)^2\) to rewrite \(\mathcal{L}\) with the positions of \(\theta\) and \(b\) flipped:

\[\mathcal{L}: \theta \mapsto {\frac {1}{s {\sqrt {2\pi }}}}\exp \left(-{\frac {1}{2}}\left({\frac {\theta-b }{s }}\right)^{2} \right).\]

We then notice that \(\mathcal{L}\) is none other than

\[\mathcal{L}: \theta \mapsto \text{PDF}_{\mathcal{N}(b,s^2)}(\theta)\]

So, for all \(b\) and all \(\theta\), \(\text{PDF}_{\mathcal{N}(\theta,s^2)}(b) = \text{PDF}_{\mathcal{N}(b,s^2)}(\theta)\).

The key thing to realise is that this is a special case due to the fact that the functional form of the normal PDF is invariant to substituting \(b\) and \(\theta\) for each other. For many other distributions of \(B\), we cannot apply this procedure.

This special case is worth commenting upon because it has personally led me astray in the past. I often encountered the case where \(B\) is normally distributed, and I used the equality above without deriving it and understanding where it comes from. It just had a vaguely intuitive ring to it. I would occasionally slip into thinking it was a more general rule, which always resulted in painful confusion.

To understand the result, let us first illustrate it with a simple numerical example. Suppose we observe an Athenian man \(b=200\) cm tall. For all \(\theta\), the likelihood of this observation if Athenian men’s heights followed an \(\mathcal{N}(\theta,10)\) is the same number as the density of observing an Athenian \(\theta\) cm tall if Athenian men’s heights followed a \(\mathcal{N}(200,10)\)6.

Graphical representation of \(\text{PDF}_{\mathcal{N}(\theta,10)}(200) = \text{PDF}_{\mathcal{N}(200,10)}(\theta)\)
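
This equality is easy to check numerically, for instance with scipy (a minimal sketch; I treat the second parameter of \(\mathcal{N}\) as the standard deviation here, purely for illustration):

from scipy.stats import norm

b = 200  # the observed height in cm
for theta in [180, 190, 200, 210]:
    lik = norm.pdf(b, loc=theta, scale=10)      # PDF of N(theta, 10^2) evaluated at b
    flipped = norm.pdf(theta, loc=b, scale=10)  # PDF of N(b, 10^2) evaluated at theta
    assert abs(lik - flipped) < 1e-12
    print(theta, lik)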

When encountering this equivalence, you might, like me, sort of nod along. But puzzlement would be a more appropriate reaction. To compute the likelihood of our 200 cm Athenian under different \(\Theta\)-values, we can substitute a totally different question: “assuming that \(\Theta=200\), what is the probability of seeing Athenian men of different sizes?”.

The puzzle is, I think, best resolved by viewing it as a special case, an algebraic curiosity that only applies to some distributions. Don’t even try to build an intuition for it, because it does not generalise.

To help understand this better, let’s look at a case where the procedure cannot be applied.

Suppose for example that \(B\) is binomially distributed, representing the number of successes among \(n\) independent trials with success probability \(\Theta\). We’ll write \(B \sim \text{Bin}(n, \theta)\).

\(B\)’s probability mass function is

\[g: k \mapsto \text{PMF}_{\text{Bin}(n, \theta)}(k) = {n \choose k} \theta^k (1-\theta)^{n-k}\]

Meanwhile, the likelihood function for the observation of \(b\) successes is

\[\mathcal{M}: \theta \mapsto \text{PMF}_{\text{Bin}(n, \theta)}(b) = {n \choose b} \theta^b (1-\theta)^{n-b}\]

To attempt to take the PMF \(g\), set its parameter \(\theta\) equal to \(b\), and obtain the likelihood function would not just give incorrect values, it would be a domain error. Regardless of how we set its parameters, \(g\) could never be equal to the likelihood function \(\mathcal{M}\), because \(g\) is defined on \(\{0,1,...,n\}\), whereas \(\mathcal{M}\) is defined on \([0,1]\).


The likelihood function \(\mathcal{Q}: P_H \mapsto P_H^2(1-P_H)\) for the binomial probability of a biased coin landing heads-up, given that we have observed \(\{Heads, Heads, Tails\}\). It is defined on \([0,1]\). (The constant factor \(3 \choose 2\) is omitted, a common practice with likelihood functions, because these constant factors have no meaning and make no difference to the posterior distribution.)
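
To make the domain contrast concrete, here is a minimal sketch with scipy, reusing the coin example’s \(n=3\) tosses and \(b=2\) observed heads (the particular value \(\theta=0.5\) below is an arbitrary choice, just to have a fixed parameter for the PMF):

import numpy as np
from scipy.stats import binom

n, b = 3, 2  # three tosses, two heads observed

# The PMF with a fixed parameter theta is a function of k in {0, 1, ..., n}.
theta = 0.5
pmf_over_k = [binom.pmf(k, n, theta) for k in range(n + 1)]

# The likelihood of the observation b is a function of theta in [0, 1].
likelihood_over_theta = [binom.pmf(b, n, t) for t in np.linspace(0, 1, 5)]

print(pmf_over_k)             # four values, one per possible outcome k
print(likelihood_over_theta)  # one value per candidate theta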

It’s hopefully now quite intuitive that the case where \(B\) is normally distributed was a special case.7

Let’s recapitulate.

The likelihood function is the probability of \(b\mid\theta\) viewed as a function of \(\theta\) only. It is absolutely not a density of \(\theta\).

In the special case where \(B\) is normally distributed, we have the confusing ability to express this function as if it were the density of \(\theta\) under a distribution that depends on \(b\).

I think it’s best to think of that ability as an algebraic coincidence, due to the functional form of the normal PDF. We should think of \(\mathcal{L}\) in the case where \(B\) is normally distributed as just another likelihood function.

Finally, I’d love to know if there is some way to view this special case as enlightening rather than just a confusing exception.

I believe that to say that \(\text{PDF}_{\theta,\Gamma}(b)=\text{PDF}_{b,\Gamma}(\theta)\) (where \(\text{PDF}_{\psi,\Gamma}\) denotes the PDF of a distribution with one parameter \(\psi\) that we wish to single out and a vector \(\Gamma\) of other parameters) is equivalent to saying that the PDF is symmetric around its singled-out parameter. For example, a \(\mathcal{N}(\mu,\sigma^2)\) is symmetric around its parameter \(\mu\). But this hasn’t seemed insightful to me. Please write to me if you know an answer to this.

  1. Thanks to Gavin Leech and Ben West for feedback on a previous version of this post. 

  2. I do not use the confusing term ‘standard error’, which I believe should mean \(sd(B)\) but is often also used to denote its estimate \(s\). 

  3. I use uppercase letters \(\Theta\) and \(B\) to denote random variables, and lower case \(\theta\) and \(b\) for particular values (realizations) these random variables could take. 

  4. A more sophisticated approach would be to let \(sd(B)\) be another unknown parameter over which we form a prior; we would then update our beliefs jointly about \(\Theta\) and \(sd(B)\). See for example Bolstad & Curran (2016), Chapter 17, “Bayesian Inference for Normal with Unknown Mean and Variance”

  5. I don’t like the term “likelihood distribution”, I prefer “likelihood function”. In formal parlance, mathematical distributions are a generalization of functions, so it’s arguably technically correct to call any likelihood function a likelihood distribution. But in many contexts, “distribution” is merely used as short for “probability distribution”. So “likelihood distribution” runs the risk of making us think of “likelihood probability distribution” – but the likelihood function is not generally a probability distribution. 

  6. We are here ignoring any inadequacies of the \(B\sim N(\Theta,s^2)\) assumption, including but not limited to the fact that one cannot observe men with negative heights. 

  7. Another simple reminder that the procedure couldn’t possibly work in general is that in general the likelihood function is not even a PDF at all. For example, a broken thermometer that always gives the temperature as 20 degrees has \(P(B=20 \mid \theta) = 1\) for all \(\theta\), which evidently does not integrate to 1 over all values of \(\theta\).

    To take a different tack, the fact that the likelihood function is invariant to reparametrization also illustrates that it is not a probability density of \(\theta\) (thanks to Gavin Leech for the link). 

July 31, 2021

How to circumvent Sci-Hub ISP block

In the UK, many internet service providers (ISPs) block Sci-Hub. However, a simple proxy is enough to circumvent this (you don’t even need a VPN). Routing requests through a suitable1 proxy lets you open Sci-Hub in your regular browser as if it weren’t blocked2.

Routing all your traffic through a proxy may come with privacy and security concerns, and will slow your connection a bit. We want to use our proxy only for accessing Sci-Hub.

You can use extensions like ProxySwitchy to tell your browser to automatically use certain proxies, or no proxy at all, for sets of websites that you define.

Unfortunately, this extension, and others like it, require permissions to insert arbitrary JavaScript into any page you visit (the web store accurately explains that the extension can “read and change all your data on the websites you visit”). That’s likely due to insufficiently granular permission definitions by Chrome, and is not the fault of the presumably well-intentioned extension authors. But it freaks me out a little bit (bad things have happened).

Luckily, we can achieve the same effect by writing our own proxy auto-configuration file. A proxy auto-configuration or PAC file contains just a single JavaScript function like this:

function FindProxyForURL(url, host) {

  // Sci-Hub requests
  if (shExpMatch(host, 'sci-hub.se') || shExpMatch(host, '*.sci-hub.se')) {
    // Your proxy address and port number
    return 'PROXY 123.456.789:9279';
  }

  // All other requests
  return 'DIRECT';
}

We can instruct the operating system to read this file. Search for instructions on Google (example).

When you use the proxy to access Sci-Hub for the first time in a browser session, the browser will ask you for the username and password to your proxy server. If you’re using Chrome, I’d recommend saving the credentials into the browser’s password manager to avoid having to enter them again.3

There are many free proxies on the Internet, but I find that using the services of an actual for-profit proxy company is well worth it, for the greater speed and reliability. Currently webshare.io (referral link) offers 1 GB per month free, which is quite a lot of Sci-Hub PDFs. After that you can get 250 GB for $2.99 per month.4

Step by step instructions

  1. Create an account on webshare.io (referral link)
  2. Choose a proxy from your list, and copy its address and port number into your PAC file, following the pattern above.
  3. Set your operating system to read its proxy settings from this PAC file5. Instructions for this are easy to Google (example).
  4. Open Sci-Hub in your browser. Enter your proxy username and password and optionally save these credentials in the browser. You can find the credentials in your webshare.io account.
  5. Don’t forget to only use Sci-Hub to look at really old papers that have lapsed into the public domain :)

  1. Obviously, the proxy must not itself be on a network that blocks Sci-Hub. I have not come across any proxy that blocks Sci-Hub in this way. 

  2. Changing your DNS resolver to a public one like Google’s instead of your ISP’s is not sufficient as of 2021, for two ISPs I’ve tested, and I suspect for all UK ISPs that implement blocking. (Many people believe changing the DNS resolver is sufficient. Probably ISPs used to implement simple DNS level blocking and have recently upped their game.) My guess is that instead of merely blocking the request to resolve sci-hub.se at the DNS resolver level, the ISPs are also doing a reverse lookup on every requested IP address to check whether it corresponds to a blacklisted domain. 

  3. You want to use the Chrome password manager because third-party password managers such as 1Password are not able to auto-fill credentials when logging in to a proxy server (as opposed to logging into a webpage). Note that if you have a third-party password manager extension installed this will disable the browser setting “Offer to save passwords”. I recommend that you temporarily disable your password manager extension, log in to your proxy server, save the password into Chrome, and then enable the extension again. 

  4. Their home page exemplifies a dark pattern by not showing the pricing by the GB; it just says you get ‘up to unlimited’ bandwidth. You’ll be able to see the actual pricing after you create an account. 

  5. One gotcha is that Windows 10 forces you to call your PAC file from a web server; it cannot be a local file (??!). To work around this, you can upload your file as a Gist and link to the /raw version of the file.

May 15, 2021

Modified respirator to shield myself and others from COVID

Summary: I have tried many types of masks and respirators during the 2020 pandemic. My recommendation is to use ‘elastomeric’ respirators common in industry, and to either filter or completely block off their exhalation valve. The result is a comfortable respirator that I believe offers a high level of protection against airborne diseases to myself and others. I am not an infectious disease expert.

Contents

  1. Elastomeric respirators
  2. CDC recommendation for exhalation valves
  3. Recommendation A: 3M 6500QL Series with KN95 and surgical mask
    1. Choice of respirator
    2. Choice of filters for a 3M respirator
  4. Recommendation B: Miller LPR-100 with tape and surgical mask
    1. How we need to modify the respirator
    2. Exhalation valve
    3. Inhalation valves
    4. Also use a surgical mask
    5. Choice of respirator
  5. Potential concerns
    1. Repeated use
    2. CDC guidelines
    3. Concerns specific to tape method
      1. CO2 rebreathing
      2. Exhaling through filter
      3. Discussion of tape technique by others
  6. Overall recommendation

Elastomeric respirators

The effectiveness of a mask can be broken down into two parts: how well the mask fits on your face, and the filtration efficiency of the mask. A further important consideration is comfort.

Surgical and cloth masks are comfortable but have poor fit and filtration efficiency. I believe it’s possible to do much better.

Respirators that meet the NIOSH N95 or N99 standards for filtration efficiency, such as the N95 respirators pictured below, are popular in healthcare settings.


3M N95 respirator (left), N95 respirator in a medical setting (right)

In my experience, these have two main downsides:

  • They may be scarce during pandemics. You should probably leave limited supplies for healthcare workers.
  • The tight elastic band that ensures a good fit also makes the respirators very uncomfortable for extended use.

KN95 and KF95 are respectively the Chinese and Korean-manufactured masks that claim to have the same efficacy as N95s. They come with ear loops rather than behind-the-head elastic bands, so they have a far looser seal than N95s. I suppose you could add your own elastic bands to them to improve the seal, but then they would be just as uncomfortable as N95s. Therefore, they are not a competitive option.


A KN95 mask

There also exist N95+ masks designed for industrial tasks that produce harmful airborne particles (such as welding or paint spraying). They are called elastomeric respirators, or sometimes industrial respirators.

High filtration efficiency elastomeric respirators for industrial use. Miller LPR-100 (left), 3M 6200 (right).

Compared to healthcare N95s, these respirators:

  • are more widely available
  • achieve superior fit by using elastomers shaped like a human face (there is no need to bend a metal nose bridge)
  • are much more comfortable, mainly because they:
    • spread the pressure onto a wider area of skin
    • come in multiple sizes
    • have adjustable straps
  • won’t fog up your glasses

A downside is that it’s more difficult to be audible through an elastomeric respirator than through an N95 or surgical mask. I am able to be understood by raising my voice, but smooth social interactions are not guaranteed. It’s probably not a great setup for spending time with your friends; you can use a KN95 for that.

The fatal flaw1 of these elastomerics when it comes to disease control is that they have an exhalation valve that allows unfiltered air to exit the mask. In PPE jargon, they do not provide source control. (This may be about to change in 2021, see this footnote2. I will try to keep this post updated.)

The exhalation valve opening on the Miller LPR-100

We can modify these respirators to filter their exhalation valve (recommendation A), or completely close it off (recommendation B).

(If infection through the mucosal lining of the eyes is an important concern to you, and you don’t wear glasses, you should also wear safety goggles.)

CDC recommendation for exhalation valves

During the 2020 pandemic, the US CDC issued the following recommendation, in a blog post from August 8 20203:

If only a respirator with an exhalation valve is available and source control is needed, cover the exhalation valve with a surgical mask, procedure mask, or a cloth face covering that does not interfere with the respirator fit.

Recommendation A: 3M 6500QL Series with KN95 and surgical mask

If you want to follow something similar to CDC guidance, I recommend:

  • A 3M 6500QL series respirator
  • A part of a KN95/KF95 mask tightly covering the exhalation valve


3M 6502QL. Unmodified (left), KN95 material covering valve (middle and right)

You’ll likely want to add a surgical mask on top of that:

  • as a backup
  • for the very small amount of additional filtration it provides
  • to avoid misunderstandings with strangers


3M 6502QL with KN95 material covering valve and a surgical mask on top

Surgical masks are not primarily designed to filter aerosols4. It seems clear to me that KN95s and KF95s are superior to a surgical mask for covering an exhalation valve (let alone a cloth mask). (There is a list of such respirators that have received an emergency use authorization from the FDA. There are probably many low-quality masks fraudulently marketed as KN95 and KF95 at the moment, so make sure you buy from an approved manufacturer.)

In the models I have seen, the material in KN95s is far more flexible than in N95s, allowing you to shape it so that it tightly covers an exhalation valve. It’s slightly fiddly but definitely possible with a bit of dexterity and perseverance. Using a thinner surgical mask would be easier, but the KN95’s extra protection for third parties is well worth it.

Here are the steps you should follow (see video):

  • cut a KN95 in half along the fold
  • cut one half to size further
  • use a rubber band and tape to attach the material over the respirator valve. This is better explained with a video than in words. The main thing to know is that you should use the two small ridges in the plastic below the valve to secure the rubber band.
  • add tape on the upper end of the KN95 material

Instructions

Unfortunately, for any valve covering approach, there is a trade-off between fit and the surface area available for filtering exhaled air. I have not been able to achieve a good fit when placing a KN95 or surgical mask more loosely over the valve, which would give more surface area. In my setup a small rectangle of KN95 has to do all the filtration, which likely lowers the efficiency. However, the N95 specification is for a flow rate of 85 liters per minute, which is many times the 6 liters per minute breathed by an individual at rest5, so I am not very concerned.


Ridges on 6502QL. Note that in the real setup the KN95 will go below the elastic band.


Location of the valve underneath the KN95 material. View from below the respirator.

Choice of respirator

I have tried two industrial respirator models, the 3M 6502QL and the Miller LPR-100. I prefer the build quality and aesthetics of the Miller (see below), but its shape makes it almost impossible to get a good seal if you attempt to cover the valve with a surgical mask or KN95. So for this technique, I recommend the 3M 6500QL series.

I am aware of three 3M half-facepiece reusable respirator groups, the 6000 series, the 6500 series, and the 7500 series.

3M half-facepiece reusable respirators (3M.com)

Since I have only tried a respirator of the 6500 series, I do not have a strong view on which is preferable. I would recommend the 6500, mostly because I have already demonstrated that it’s possible to cover the valve. The 6000 series does not have a downward-facing exhalation valve and may be harder to work with. I’m agnostic about the relative merits of the 7500.

The 6500 series has a quick latch version (difference explained here), which is the one I used. I’d recommend the quick latch 6500QL series, because it seems that the latch makes the fit of the KN95 material to the respirator more secure (see video). By the way, attaching a mask on top of the valve makes the quick-latch mechanism much less effective; I never use it.

Each series comes in three sizes, large, medium and small. I am a male with a medium-to-large head, and I use a medium (the 6502QL).

Regarding whether airlines will accept this setup, I have heard both some positive anecdotes and one negative anecdote.

Choice of filters for a 3M respirator

I use the 3M 2097 P100 filters.

You should use lightweight filters that are rated N100, R100 or P100. The “100” means that at least 99.97% of airborne particles are filtered out (as tested at the most penetrating particle size, around 0.3 micrometres). The letters N, R and P refer to whether the filter remains effective when exposed to oil-based aerosols; this should be irrelevant for our purposes.

The weight of the filters is a crucial determinant of comfort. I originally used the 3M respirator with the 3M 60926 cartridges, which filter gases and vapors as well as particles. This was a mistake, as filtering gases and vapors is irrelevant from the point of view of infectious disease, and these cartridges are much heavier than the 3M 2097 P100 filters. Switching to the lighter filters made a world of difference; now wearing the 3M doesn’t bother me at all.


The 3M 6502QL respirator weighs 395 g. with 3M 60926 cartridges, but only 128 g. with 3M 2097 filters, a 68% reduction.

Recommendation B: Miller LPR-100 with tape and surgical mask

I believe that, in expectation, the previous method offers slightly worse protection to third parties than a well-fit valveless medical N95, because our makeshift exhalation valve filter may not be entirely effective.

This section details another technique which may be able to achieve the best of both worlds: the comfort and availability of industrial masks, and the third-party protection offered by valveless masks.

How we need to modify the respirator

Let’s look at how the valves in industrial masks work. I’ll be using the Miller LPR-100, but the 3M is built similarly.

The exhalation valve is at the front. There are also two inhalation valves, one on each side between the mouth and the filter. These only allow air to come into the mask from outside, forcing all the exhaled air to go through the exhalation valve (instead of some of it going back through the filter).

Miller LPR-100 valves

We need to disable both the inhalation and exhalation valves:

  • The unfiltered exhalation valve should be completely sealed off.
  • In order to allow the user to exhale, the inhalation valves need to be turned into simple holes that allow two-way air circulation.

This will mean that both inhaled and exhaled air will go through the P100 filters.

Exhalation valve

We can seal off the exhalation valve from the outside with tape6. On the Miller respirator, there is a little plastic cage covering the valve, and this cage can be taped over. Note that tape sticks very poorly to the elastomer (the dark blue material on the Miller). This is why I only place tape on the plastic; this seems to be sufficient.

Tape on exhalation cage

I am using painter’s tape because it’s supposed to pull off without leaving a residue of glue. It’s possible that it would be better to use tape with a stronger adhesive. (A friend of mine commented: “Some ideas for sealing off the exhalation valve: (1) Butyl tape/self-vulcanizing tape. Not so much a sticky tape as a ribbon of moldable putty, so no adhesive residue. This stuff is pretty much unparalleled if you need to make a fully gas- and watertight seal around an irregularly shaped opening in a pinch without making a mess. The fact that it has no adhesive does put some constraints on the geometry of the part you’re sealing off, but I think it would work (better than painter’s tape, at least) on the Miller. (2) Vinyl tape/electrical tape. It’s relatively water-resistant and can be stretched to some extent. The adhesive also sticks to polymers pretty well (although it does leave a lot of residue after some time, but you can clean that off with a bit of IPA).”)

You can check the seal of your tape by pressing the mask onto your face and attempting to exhale (with the inhalation valves intact). Air should only be able to escape through the sides of the mask.

Inhalation valves

The inhalation valves are removable and can be pulled out. They are very thin and feel like they might be about to break when you pull them out, but I have been able to pull four of them out without a problem.

Touching a valve (left), a valve after it has been pulled out (right)

The two inhalation valves (left), the filter now visible through the holes (right)

Pushing the valves back in is easy.

The tape can be removed and the valves re-inserted, making my modification fully reversible.

Also use a surgical mask

Even if you’re using the tape technique, I recommend also covering the respirator with a surgical mask, since this has no downsides and might have some benefit. The seal on the exhalation valve might not be perfect and may get worse over time, so an extra layer of filtration, however imperfect, is a good backup.

It’s also beneficial because it makes what you’re doing legible to others. You don’t want to explain this weird tape business to strangers, even if it’s for their protection.

Miller LPR-100. Unmodified (left), with tape (middle), with tape and surgical mask (right)

Choice of respirator

For this technique, I recommend the Miller LPR-1007.

I recommend the Miller over the 3M because:

  • its build quality feels superior to me
  • it looks better
  • it blocks less of your field of view

Since the Miller is better than the 3M, and 3M is such a huge player in this market, I think there’s a decent chance that the Miller is in fact one of the very best options that exists.

The Miller weighs 139 g., a negligible difference from the 3M’s 128 g.

I also like the fact that you can buy a neat rigid case to hold the Miller respirator. The case is called the 283374.


Miller case, 283374

The Miller model comes with replaceable P100-rated filters, while the 3M can be used with many types of filters and cartridges.

If you want to implement this technique on the 3M, it should be possible; all steps will be similar.

Potential concerns

The 3M+KN95 method we discussed earlier can be seen as a simple adaptation of CDC guidelines, so I have fewer concerns about it.

However, the tape technique involves a more fundamental alteration. This might seem unwise. How do I know I haven’t messed up something crucial, endangering myself and others?

Before discussing the specific concerns, it’s useful to consider: what are the relevant alternatives to my recommendation?

My best guess is that constantly wearing a correctly fitted medical N95, with the really tight elastic bands, is very slightly safer for others in expectation than the tape method (due to risks of things going wrong, like the tape getting unstuck). However, it is not a likely alternative for everyday use. First, in my experience, N95s are more difficult to fit correctly than industrial masks. Second, for me, these respirators are prohibitively uncomfortable. I have seen few people use them. I think the realistic alternatives for most people are cloth and surgical masks. I am relatively confident that both of my techniques are an improvement on that, for both the user and third parties.

By the way, I am not an expert in disease control. I studied economics and philosophy and then worked as a researcher.

Repeated use

Healthcare N95s are supposed to be used only once before being decontaminated. However, I plan to use the same filters many times. Is this a problem?

Why are N95s supposed to be used once? According to this CDC guidance,

the most significant risk [of extended use and reuse] is of contact transmission from touching the surface of the contaminated respirator. … Respiratory pathogens on the respirator surface can potentially be transferred by touch to the wearer’s hands and thus risk causing infection through subsequent touching of the mucous membranes of the face. …

While studies have shown that some respiratory pathogens remain infectious on respirator surfaces for extended periods of time, in microbial transfer [touching the respirator] and reaerosolization [coughing or sneezing through the respirator] studies more than ~99.8% have remained trapped on the respirator after handling or following simulated cough or sneeze.

Since I plan to leave the respirator unused for hours or days between each use, and any viral dose on the exterior of the filters is likely to be very small, I don’t think this is a huge concern overall. I am very open to contrary evidence.

By the way, based on this guidance, it seems to me we should also worry less about reusing respirators and masks in general, even without decontamination. (Decontamination makes a lot more sense for health care workers who are exposed to COVID patients).

It’s good to remember to avoid touching the filters.

CDC guidelines

As explained above, the CDC recommends a surgical or cloth mask to cover the valve. There is no evidence that they considered either of the techniques I described above when issuing their blog post.

The tape method is a greater deviation from the CDC guidelines than the KN95-covering method, so if you care about following official guidance you could use the latter.

Concerns specific to tape method

I assign a relatively low chance that the tape method is worse than the CDC recommendation of covering the valve with a surgical mask (my views depend considerably on the tightness of the surgical mask seal), and a very low chance that it’s worse than a surgical mask alone. The probability mass I assign to harm is a combination of concerns about exhaling through the filter reducing its efficacy, and unknown unknowns.

CO2 rebreathing

Without the valves, part of the air you inhale will be air that you just exhaled, which contains more CO2. I have not personally noticed any effects from this.

Exhaling through filter

Could exhaling through the filter be a bad thing somehow? I wasn’t able to find any source making an explicit statement on this, but I think it’s unlikely to be a problem.

One reason to worry is that the founder of Narwall Mask has told me that, according to one filtration expert he spoke to, one-way airflow greatly prolongs the life of the filters compared to two-way airflow. However, based on my small amount of research, I don’t think the life of the filters would be affected to a degree that is practically important.

The MSA valveless elastomeric respirator that I mentioned in this footnote2 appears to have filters that can be used for more than 1 month of daily use during the workday; and moreover, we can see in the respirator’s brochure that these filters, with model number 815369, are the same as those that are used in MSA’s line of regular, valved elastomeric respirators (see here). From this I conclude that: two-way airflow through regular P100 filters was considered an acceptable design choice by MSA; and these filters can be used two-way for at least a month of hospital use.

In addition, healthcare N95s (without valves) are designed to be exhaled through. They are only rated for a day of use, but I believe this is not because the filter loses efficacy (see section on repeated use).

Exhaled air has a relative humidity close to 100%. Could exposure to humid air reduce the efficacy of the filters? In this study of N95 filters, penetration rose from around 2% to around 4% when relative humidity went from 10% to 80%, and this effect increased with the duration of continuous use. The flow rate was 85 L/min.


Combination of figures 3 and 5, Mahdavi et al.

Note that this study, which simulates inhalation of humid air, does not address (except very indirectly) the question of how the exhalation humidity affects the inhalation filtration.

Discussion of tape technique by others

  • This NIOSH study tested three modifications of valved respirators: covering the valve on the interior with surgical tape, covering the valve on the interior with an electrocardiogram (ECG) pad, and stretching a surgical mask over the exterior of the respirator.
    • They found that “penetration was 23% for the masked-over mitigation; penetration was 5% for the taped mitigation; penetration was 2% for the [ECG pad] mitigation”. I would be very interested in more discussion of why the ECG pad did so much better than the surgical tape, the authors don’t say much. One guess could be that the ECG pad has a more powerful adhesive, which would suggest that it’s important to choose a strongly adhesive tape if implementing my technique.
    • When discussing the choice of modification strategies, the authors wrote that “two concerns are that the adhesive could pull away from the surface, thereby not blocking airflow to the same degree over time, and that these adhesives could contain chemicals that have toxicological effects.” study
  • In an FAQ released by 3M, in response to the question of whether one should tape over the exhalation valve, they wrote “3M does not recommend that tape be placed over the exhalation valve”, but do not give any reasons for this beyond the fact that it may become “more difficult to breathe through … if the exhalation valve is taped shut”.
  • The state of Maine’s Department of Public Safety recommends against tape-covering, but merely because “this would be considered altering the device and violates the manufacturer’s recommendation”.

Overall recommendation

I think it’s about 50/50 which of my two methods is better all things considered. They’re close enough that I think the correct decision depends on how much you care about protecting yourself vs source control. If source control is a minor consideration to you, I’d go with the KN95 valve coverage method, otherwise the tape method.

(As I said in a previous footnote2, if a valveless elastomeric mask is widely available by the time you read this, that is absolutely a superior option to the hacks I have developed.)

(The Narwall Mask is a commercial solution based on a snorkel mask that may be appealing if you don’t mind (i) the lack of NIOSH-approval and (ii) buying from a random startup, and (iii) you don’t mind or even prefer the full-facepiece design.)

  1. Or is it fatal? I had always assumed it was a fatal flaw, until I found some experts arguing otherwise. In this commentary, the authors say: “Data characterizing particle release through exhalation valves are presently lacking; it is our opinion that such release will be limited by the complex path particles must navigate through a valve. We expect that fewer respiratory aerosols escape through the exhalation valve than through and around surgical masks, unrated masks, or cloth face coverings, all of which have much less efficient filters and do not fit closely to the face”.

    I have been able to find some data; this recent NIOSH study finds that valved N95s have 1-40% penetration. “some models … had less than 20% penetration even without any mitigation. Other models … had much greater penetration with a median penetration above 40%.” Note that for these tests, the flow rates of 25-85 L/min are higher than the 6 L/min of a person at rest, and that lower flow rates had lower penetration.

    Penetration rates of tens of percent are not very good, and not acceptable for my standards, but it’s less bad than I expected, perhaps competitive with surgical masks, and better than cloth masks!


  2. In fact, as of November 25 2020, the company MSA Safety announced in a press release that the first elastomeric respirator without an exhalation valve has been approved by NIOSH. It’s called the Advantage 290 Respirator. The product page has some good documentation.

    This journal article from September 2020, although it does not mention MSA, appears to be about the Advantage 290. (This is based on the picture in Fig 1. resembling the picture in the press release, and the fact that the hospitals in the paper are in Pennsylvania and New York states, while MSA is headquartered in Pennsylvania). The article explains how it was rolled out to thousands of healthcare workers (a first wave had 1,840 users). They claim that the cost was “approximately $20 for an elastomeric mask and $10 per cartridge”, which is amazingly low.

    They write: “After more than 1 month of usage, we have found that filters have not needed to be changed more frequently than once a month”.

    Unfortunately, it seems to be difficult to get one’s hands on one of these right now. The website invites you to contact sales, and the lowest option for “your budget” is “less than $9,999”.

    Moreover, even if you were able to get the Advantage 290, it might be too selfish to do so, since this respirator is likely to otherwise be used by healthcare workers. On the other hand, the price signal you create would in expectation lead to greater quantities being produced, partially offsetting the effect. If you are able to get one by paying a large premium over the hospital price, this may even be net positive for others.

    If this respirator became available in large quantities, everything I say here would be obsolete.

    By the way, I am astonished that it took until November 2020 for a PPE company to create a valveless elastomeric respirator; this seems like a very useful product for any infectious disease situation.

  3. It’s unclear to me how much one should downweight this recommendation due to appearing on a CDC blog rather than as more formal CDC guidance. In the post, the recommendations are called “tips”. 

  4. The FDA says: “While a surgical mask may be effective in blocking splashes and large-particle droplets, a face mask, by design, does not filter or block very small particles in the air that may be transmitted by coughs, sneezes, or certain medical procedures.” 

  5. 3M claims that “85 liters per minute (lpm) represents a very high work rate, equivalent to the breathing rate of an individual running at 10 miles an hour”. These lecture notes say that a person has a pulmonary ventilation of 6 L/min at rest, 75 L/min during moderate exercise, and 150L/min during vigorous exercise. 

  6. I tried two other methods before I settled on using tape: gluing a thin silicon wafer over the valve on the inside of the mask, and applying glue to the valve directly. Both these methods are entirely inferior and should not be used. 

  7. The model number is ML00895 for the M/L size, and ML00894 for the S/M size. 

January 2, 2021

Efficient validity checking in monadic predicate logic

Monadic predicate logic (with identity) is decidable. (See Boolos, Burgess, and Jeffrey 2007, Ch. 21. The result goes back to Löwenheim-Skolem 1915).

How can we write a program to check whether a formula is logically valid (and hence also a theorem)?

First, we have to parse the formula, meaning to convert it from a string into a format that represents its syntax in a machine-readable way. That format is an abstract syntax tree like this:

Formula:
∀x(Ax→(Ax∧Bx))

Abstract syntax tree:
∀
├── x
└── →
    ├── A
    │   └── x
    └── ∧
        ├── A
        │   └── x
        └── B
            └── x

Writing the parser was a fun lesson in a fundamental aspect of computer science. But there was nothing novel about this exercise, and not much interesting to say about it.

The focus of this post, instead, is the part of the program that actually checks whether this syntax tree represents a logically valid formula.

To start with, we might try to evaluate the formula under every possible model of a given size. How big does the model need to be?

We can make use of the Löwenheim-Skolem theorem (looking first at the case without identity):

If a sentence of monadic predicate logic (without identity) is satisfiable, then it has a model of size no greater than \(2^k\), where \(k\) is the number of predicates in the sentence. (Lemma 21.8 BBJ).

A sentence’s negation is satisfiable if and only if the sentence is not valid, so the theorem equivalently states: a sentence is valid iff it is true under every model of size no greater than \(2^k\).

For a sentence with \(k\) predicates, every constant \(c\) in the model is assigned a list of \(k\) truth-values, representing for each predicate \(P\) whether \(P(c)\). We can use itertools to find every possible such list, i.e. every possible assignment to a constant.

>>> import itertools
>>> k = 2
>>> possible_predicate_combinations = [i for i in itertools.product([True, False], repeat=k)]
>>> possible_predicate_combinations
[(True, True), (True, False), (False, True), (False, False)]

The list of every possible assignment to a constant has a length of \(2^k\).

We can then ask itertools to give us, for a model of size \(m\), every possible combination of \(m\) such lists of possible constant-assignments. We let \(m\) be at most \(2^k\), because of the theorem.

>>> for m in range(1,2**k+1):
...     possible_models = [i for i in itertools.product(possible_predicate_combinations,repeat=m)]
...     print(len(possible_models),"possible models of size",m)
...     for model in possible_models:
...         print(list(model))

4 possible models of size 1
[(True, True)]
[(True, False)]
[(False, True)]
[(False, False)]

16 possible models of size 2
[(True, True), (True, True)]
[(True, True), (True, False)]
[(True, True), (False, True)]
[(True, True), (False, False)]
[(True, False), (True, True)]
[(True, False), (True, False)]
[(True, False), (False, True)]
[(True, False), (False, False)]
[(False, True), (True, True)]
[(False, True), (True, False)]
[(False, True), (False, True)]
[(False, True), (False, False)]
[(False, False), (True, True)]
[(False, False), (True, False)]
[(False, False), (False, True)]
[(False, False), (False, False)]

64 possible models of size 3
[(True, True), (True, True), (True, True)]
[(True, True), (True, True), (True, False)]
[(True, True), (True, True), (False, True)]
[(True, True), (True, True), (False, False)]
[(True, True), (True, False), (True, True)]
[(True, True), (True, False), (True, False)]
[(True, True), (True, False), (False, True)]
[(True, True), (True, False), (False, False)]
[(True, True), (False, True), (True, True)]
[(True, True), (False, True), (True, False)]
...

256 possible models of size 4
[(True, True), (True, True), (True, True), (True, True)]
[(True, True), (True, True), (True, True), (True, False)]
[(True, True), (True, True), (True, True), (False, True)]
[(True, True), (True, True), (True, True), (False, False)]
[(True, True), (True, True), (True, False), (True, True)]
[(True, True), (True, True), (True, False), (True, False)]
[(True, True), (True, True), (True, False), (False, True)]
[(True, True), (True, True), (True, False), (False, False)]
[(True, True), (True, True), (False, True), (True, True)]
[(True, True), (True, True), (False, True), (True, False)]
...
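
Putting these pieces together, the brute-force check would evaluate the sentence in every one of these models. Here is a sketch; the evaluate function, which computes a formula’s truth-value in a given model, is a hypothetical helper rather than something shown here.

import itertools

def is_valid_brute_force(ast, k, evaluate):
    """Brute-force check: the sentence is valid iff it is true in every
    model of size at most 2**k.

    evaluate(ast, model) is a hypothetical helper that returns True iff the
    formula is true in model, where model is a tuple of constant assignments
    (one k-tuple of truth-values per element of the domain)."""
    combos = list(itertools.product([True, False], repeat=k))
    for m in range(1, 2**k + 1):
        for model in itertools.product(combos, repeat=m):
            if not evaluate(ast, model):
                return False   # found a countermodel
    return True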

What’s unfortunate here is that for our \(k\)-predicate sentence, we will need to check \(\sum_{m=1}^{2^k} (2^k)^m =\frac{2^k ((2^k)^{2^k} - 1)}{2^k - 1}\) models. The sum is very roughly equal to its last term, \((2^k)^{2^k} = 2^{k2^k}\). For \(k=3\), this is about 19 million models; for \(k=4\), it’s roughly \(2 \times 10^{19}\).
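
These counts are easy to verify directly:

>>> sum((2**3)**m for m in range(1, 2**3 + 1))
19173960
>>> sum((2**4)**m for m in range(1, 2**4 + 1))
19676527011956855056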

So checking every model is computationally impossible in practice. Fortunately, we can do better.

Let’s look back at the Löwenheim-Skolem theorem and try to understand why \(2^k\) appears in it:

If a sentence of monadic predicate logic (without identity) is satisfiable, then it has a model of size no greater than \(2^k\) , where \(k\) is the number of predicates in the sentence. (Lemma 21.8 BBJ).

As we’ve seen, \(2^k\) is the number of possible combinations of predicates that can be true of a constant in the domain. Visually, this is the number of subsets in a partition of the possibility space:

[Figure: the possibility space for three predicates \(P\), \(Q\), \(R\), partitioned into \(2^3 = 8\) numbered subsets, one per predicate-combination.]

If a model had a size of, say, \(2^k + 1\), one of the subsets in the partition would need to contain more than one element. But this additional element would be superfluous insofar as the truth-value of the sentence is concerned. The partition subset corresponds to a predicate-combination that would already be true with just one element in the subset, and would continue to be true if more elements were added. Take, for example, the subset labeled ‘8’ in the drawing, which corresponds to \(R \land \neg Q \land \neg P\). The sentence \(\exists x (R(x) \land \neg Q(x) \land \neg P(x))\) is true whether there are one, two, or a million elements in subset 8. Similarly, \(\forall x (R(x) \land \neg Q(x) \land \neg P(x))\) does not depend on the number of elements in subset 8.

This not only illuminates the theorem, but also lets us see that the vast majority of the multitudinous \(\sum_{m=1}^{2^k} (2^k)^m\) models we considered earlier are equivalent. All that matters for our sentence’s truth-value is whether each of the subsets is empty or non-empty. This means there are in fact only \(2^{(2^k)}-1\) model equivalence classes to consider. We need to subtract one because the subsets cannot all be empty, since the domain needs to be non-empty.

>>> k = 2
>>> eq_classes = [i for i in itertools.product(['Empty','Non-empty'],repeat=2**k)]
>>> eq_classes.remove(('Empty',)*2**k)
>>> eq_classes
[('Empty', 'Empty', 'Empty', 'Non-empty'),
 ('Empty', 'Empty', 'Non-empty', 'Empty'),
 ('Empty', 'Empty', 'Non-empty', 'Non-empty'),
 ('Empty', 'Non-empty', 'Empty', 'Empty'),
 ('Empty', 'Non-empty', 'Empty', 'Non-empty'),
 ('Empty', 'Non-empty', 'Non-empty', 'Empty'),
 ('Empty', 'Non-empty', 'Non-empty', 'Non-empty'),
 ('Non-empty', 'Empty', 'Empty', 'Empty'),
 ('Non-empty', 'Empty', 'Empty', 'Non-empty'),
 ('Non-empty', 'Empty', 'Non-empty', 'Empty'),
 ('Non-empty', 'Empty', 'Non-empty', 'Non-empty'),
 ('Non-empty', 'Non-empty', 'Empty', 'Empty'),
 ('Non-empty', 'Non-empty', 'Empty', 'Non-empty'),
 ('Non-empty', 'Non-empty', 'Non-empty', 'Empty'),
 ('Non-empty', 'Non-empty', 'Non-empty', 'Non-empty')]
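
To exploit this, it is enough to check one canonical model per equivalence class: a model containing exactly one element for each non-empty subset. Here is a sketch of the improved check, in the spirit of the brute-force version above, again treating evaluate as a hypothetical helper.

import itertools

def is_valid_no_identity(ast, k, evaluate):
    """Check one canonical model per equivalence class instead of every model.

    An equivalence class records, for each of the 2**k predicate-combinations,
    whether its subset is empty or non-empty; the canonical model has exactly
    one element per non-empty subset."""
    combos = list(itertools.product([True, False], repeat=k))
    for pattern in itertools.product([False, True], repeat=2**k):
        if not any(pattern):
            continue   # skip the all-empty pattern: the domain must be non-empty
        model = tuple(c for c, occupied in zip(combos, pattern) if occupied)
        if not evaluate(ast, model):
            return False
    return True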

We are now ready to consider the extension to monadic predicate logic with identity. With identity, it’s possible to check whether any two members of a model are distinct or identical. This means we can distinguish the case where a partition subset contains one element from the case where it contains several. But we can still only distinguish up to a certain number of elements in a subset. That number is bounded above by the number of variables in the sentence1 (e.g. if you only have two variables \(x\) and \(y\), it’s not possible to construct a sentence that asserts there are three different things in some subset). Indeed we have:

If a sentence of monadic predicate logic with identity is satisfiable, then it has a model of size no greater than \(2^k \times r\), where \(k\) is the number of monadic predicates and \(r\) the number of variables in the sentence. (Lemma 21.9 BBJ)

By analogous reasoning to the case without identity, we need only consider \((r+1)^{(2^k)}-1\) model equivalence classes. All that matters for our sentence’s truth-value is whether each of the subsets has \(0, 1, 2, \ldots\), or \(r\) (or more) elements in it.

>>> k = 2
>>> r = 2
>>> eq_classes = [i for i in itertools.product(range(r+1),repeat=2**k)]
>>> eq_classes.remove((0,)*2**k)
>>> eq_classes
[(0, 0, 0, 1),
 (0, 0, 0, 2),
 (0, 0, 1, 0),
 (0, 0, 1, 1),
 (0, 0, 1, 2),
 (0, 0, 2, 0),
 (0, 0, 2, 1),
 (0, 0, 2, 2),
 (0, 1, 0, 0),
 (0, 1, 0, 1),
 (0, 1, 0, 2),
 (0, 1, 1, 0),
 (0, 1, 1, 1),
...
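
A canonical model for one of these classes simply contains as many elements of each predicate-combination as the class specifies (with \(r\) standing in for ‘\(r\) or more’). For instance, reusing the tuple-of-tuples model representation from earlier:

>>> counts = (0, 1, 0, 2)   # one of the equivalence classes listed above
>>> combos = [i for i in itertools.product([True,False],repeat=k)]
>>> model = tuple(c for c, n in zip(combos, counts) for _ in range(n))
>>> model
((True, False), (False, False), (False, False))
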
  1. I believe it should be possible to find a tighter bound based on the number of times the equals sign actually appears in the sentence. For example, if equality is only used once, e.g. in \(\exists x \exists y \neg(x = y) \land \phi\) where \(\phi\) does not contain equality, it seems clear that the number of variables in \(\phi\) should have no bearing on the model size that is needed. My hunch is that more generally you need \(n(n-1)/2\) uses of ‘\(=\)’ to assert that \(n\) objects are distinct, so, for example, if ‘\(=\)’ appears 5 times you can distinguish 3 objects in a subset, or with 12 ‘\(=\)’s you can distinguish 5 objects. It’s only an intuition and I haven’t checked it carefully. 

November 27, 2020