Across the social sciences, a particular relationship recurs in study after study: as income rises, people are less likely to die, and as income falls, people are more likely to die. Not surprisingly, then, economists and decision scientists have for years tried to estimate how large a reduction in income might be enough to result in the loss of a person's life.
There are two main ways to make this estimate. One involves collecting mortality data for people with different incomes and then running a regression, controlling for other relevant variables like education and health status. This approach has a problem, though: the effect of income on mortality is hard to tease out. It plays out over long periods, and difficult-to-measure traits that also affect mortality, like a tendency toward risky or reckless behavior, can be correlated with income.
To get around these problems, an alternative method relies on economic theory. This theoretical approach was first developed in a classic 1994 paper by economist Kip Viscusi. The main takeaway from Viscusi’s article is that the expenditure level that will lead to one statistical death is equal to the value of a statistical life (VSL) divided by the marginal propensity to spend on health.
Since this relationship will not be intuitive to most people, the following thought experiment helps explain it in simpler terms.
Imagine a society in which a fever is the only way to die. Meanwhile, there is only a single way to reduce the risk of a fever—periodic visits to the doctor to allow for early detection. People are willing to pay up to $100 for their next doctor’s visit, which is also the price doctors charge, since markets are reasonably efficient.
Each visit reduces a person’s risk of early death by one-in-100,000 in their lifetime. This is the only benefit derived from visiting a doctor. Everyone earns an income, with ten cents of each additional dollar earned put aside into a personal fund to pay for more doctor’s visits. When enough money is saved—that is, $100—a person will go to the doctor.
Now imagine that one day the government decides to purchase an additional doctor’s appointment for every citizen. How much should the government be willing to pay?
Consider what happens if the government pays $100 per appointment. It has no money of its own, of course, so it must tax its citizens. To pay for the new program, the government takes $100 from each person.
The program decreases mortality risk, but the tax itself increases it slightly. Each citizen gets one doctor’s appointment. For every $100 the government takes to pay for it, however, $10 would have gone towards an individual’s own fund for doctor’s visits. That means the policy also denies them one-tenth of an appointment—or a one-in-a-million decrease in the odds of an early death—leaving them with essentially nine-tenths of an extra visit.
Now imagine that the cost of the program includes the creation of an expensive new department. What happens if the government pays $1,000 for each appointment instead?
Now every citizen is taxed $1,000, one hundred dollars of which would otherwise have gone toward a doctor's visit. Risk therefore rises by the equivalent of one lost appointment, but each citizen also receives one free appointment, so the policy is a wash on mortality risk.
More generally, however, this is a wasteful policy because citizens are spending ten times more on doctor’s visits than they value their next visits. For each visit, $900 could have gone towards other things or even perhaps extra visits.
Now imagine the government pays $2,000 for each appointment. In this case, mortality risk increases unambiguously. Every citizen sees risk rise by the equivalent of two lost doctor's appointments, while risk falls by only one appointment. On net, the risk of death rises for everyone.
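The arithmetic behind these three cases fits in a short sketch (illustrative Python using the thought experiment's numbers; the variable and function names are my own):

```python
from fractions import Fraction  # exact arithmetic keeps the fractions clean

# Numbers from the thought experiment above.
VISIT_PRICE = 100                      # dollars per doctor's visit
RISK_PER_VISIT = Fraction(1, 100_000)  # lifetime risk reduction per visit
MPS = Fraction(1, 10)                  # ten cents of each extra dollar saved for visits

def net_visits(tax_per_person):
    """Net change in visits per citizen: one free visit from the program,
    minus the visits foregone because the tax shrinks income."""
    foregone = tax_per_person * MPS / VISIT_PRICE
    return 1 - foregone

for tax in (100, 1_000, 2_000):
    print(f"${tax} tax: {net_visits(tax)} net visits per citizen")
```

A $100 tax leaves nine-tenths of an extra visit, $1,000 nets out to zero, and $2,000 costs each citizen one visit on net, matching the three cases above.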
That breakeven point of $1,000 plays a critical role. Every time the government spends more addressing risk than the breakeven amount—in this case, more than $1,000 per appointment—it will increase the overall risk of death.
Furthermore, if the spending produced no offsetting health benefits, then every $100 million the government spent (for example, $1,000 per appointment for 100,000 people) would raise the lifetime risk of death by one in 100,000 for each of 100,000 people. In other words, each $100 million in expenditures results in one expected death.
It turns out that only two pieces of information are needed to calculate this value. The first is how much people would pay to reduce a small additional risk of death. The second is how much of each additional dollar of income people spend addressing death risks. Economists call the first number the VSL and the second the marginal propensity to spend on mortality risk reduction. The expenditure level that induces one statistical death is the ratio of the two.
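Plugging the thought experiment's numbers into this ratio recovers the $100 million figure (a quick check in Python; the names are my own):

```python
from fractions import Fraction  # exact arithmetic

WTP_PER_VISIT = 100                    # willingness to pay for the next visit
RISK_PER_VISIT = Fraction(1, 100_000)  # risk reduction that visit buys
MPS = Fraction(1, 10)                  # marginal propensity to spend on risk reduction

# VSL: dollars paid per unit of mortality risk removed.
vsl = WTP_PER_VISIT / RISK_PER_VISIT  # $100 per 1/100,000 risk = $10 million

# Expenditure level that induces one statistical death.
cost_per_death = vsl / MPS  # $10 million / 0.1 = $100 million

print(f"VSL = ${float(vsl):,.0f}")
print(f"One statistical death per ${float(cost_per_death):,.0f} spent")
```

The $10 million VSL and $100 million per statistical death line up with the tax example above, where each $100 million in offset-free spending produced one expected death.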
This simple model can be applied to the real world with just a few assumptions. For example, in our scenario there was only one risk, a fever, but in the real world there are many. We assume that people are fairly rational in how they allocate funds toward risk reduction, so when income drops, people forgo spending on risks of similar size to the last risk they addressed with that much money. The VSL concept relies on a similar assumption: that people will spend about the same amount addressing a new risk as they spent addressing the last risk of similar size.
This theoretical approach has some clear benefits. First, it avoids confusing correlation with causation, as regression approaches sometimes do. Second, it is closely linked to the VSL concept, which is widely accepted and used by governments around the world.
This analysis also highlights that public policies require tradeoffs involving more than just money, and that income itself offers benefits that extend life. These benefits and tradeoffs deserve closer attention by both researchers and policy decision-makers.