I have a peculiar quirk when it comes to incentives. On some occasions, when a new incentive comes up to do a particular task, I become less likely to do it. Before starting my master’s thesis on euroscepticism (or even considering doing a thesis on euroscepticism), I read a lot about Brexit, UKIP, left-euroscepticism, and so on. After I started my master’s thesis, at which point I had a real incentive to read lots about euroscepticism, I stopped doing it! On the face of it, this doesn’t make much sense - the reading hadn’t become any less interesting, and I hadn’t grown sick of the topic; it was nothing like that.
I think there were two reasons I became less likely to read about euroscepticism once I had a stronger incentive to do so. The first is that euroscepticism now occupied a new mental category: instead of being a pleasure, it became work. And on the list of things I ought to be doing for work, it was fairly low down - instead of reading about euroscepticism, my time was better spent coding, corresponding with my supervisor, and so on. The second is that having a strong incentive made me slightly anxious, because the pressure had gone up. This is probably familiar to anyone who has taken free throws both on their own and during an actual basketball game - the pressure gets to you and you get worse, even though the incentives to do well are higher.
There’s a famous example of incentives being weird that you might have come across: blood donation. In the 1970s, Richard Titmuss made the case against using financial incentives to encourage people to donate blood, arguing that they could actually reduce the supply: paying for blood would undermine the charitable reasons for which most people donated in the first place. Economists were initially sceptical, but recent studies on blood donation seem to have partially vindicated Titmuss - a meta-analysis in the journal Health Psychology found that while the supply of blood doesn’t necessarily decrease when financial incentives are used, it doesn’t increase either, making the incentives economically inefficient. The authors speculate that donors who were motivated by charity generally stopped donating when incentives were introduced, but were replaced in roughly equal number by donors who were motivated by the money.
Another example you may have heard about is a study by Gneezy and Rustichini, looking at day-care centres in Haifa that began fining parents who were late picking up their children. Contrary to a crude model of financial incentives, the number of late parents increased. The social-stigma incentive was seemingly stronger than the monetary one - and once the fine was introduced, parents took the view that they were permitted to be late as long as they paid it. The study even included a control group of centres where no fine was introduced.
These examples are interesting, but some commentators claim that studies like these undermine the economists’ adage: ‘people respond to incentives’. I’m not sure this is really true: in the examples given, people don’t respond to financial incentives because the implementation of those incentives undermines other, stronger incentives that people care about. If I want to show how altruistic I am, donating blood to a centre that does not offer payment achieves that goal, whereas donating blood to a centre that does offer payment does not. It isn’t that I’m resistant to incentives; it’s that the non-financial incentives are stronger than the financial ones, and the introduction of the financial incentives does away with the non-financial ones.
There are obvious examples where people genuinely don’t respond to incentives. Bryan Caplan uses the ‘Gun-to-the-head Test’ to distinguish between constraints and preferences - I have a preference for relaxing instead of working, and we know this is a preference rather than a genuine constraint because if someone put a gun to my head and told me I had to work, I would do it. On the other hand, if someone put a gun to my head and told me they would kill me unless I was able to bench-press 200kg, the extreme incentive wouldn’t make any difference, because I am unable to bench-press 200kg, and it doesn’t matter what incentive you throw at me to do it - my lack of strength is a constraint that means I cannot bench-press that amount.
But are there situations where someone is not constrained (so they could respond to an incentive), and no stronger competing incentive is crowding out the financial one, and yet they still don’t respond? There are some recent studies along these lines, like this one from the National Bureau of Economic Research, showing that financial incentives and behavioural nudges aren’t particularly effective at increasing vaccine uptake among the vaccine-hesitant. Weirdly, among older people and Trump supporters, financial incentives to get the vaccine seemed to reduce the percentage of people who ended up getting it - for respondents aged 40 and over, a $10 payment to get the vaccine reduced the number of respondents getting the vaccine within 30 days by 4.5 percentage points (p = 0.045), and a $50 payment reduced the number by 4.7 percentage points (p = 0.041). Suspicions of p-hacking aside, this is fairly interesting - not only are people who are offered a financial incentive less likely to get the vaccine, but people offered a larger incentive are no more likely to get it than people offered a smaller one (indeed, they are slightly less likely). Feel free to speculate as to why the incentives failed in the comments.
So, when we say that a principle of economics is ‘people respond to incentives’, what do we actually mean? Do we mean that all people respond to incentives, all of the time? And if we were convinced that the claim ‘people respond to incentives’ was not true, how could we falsify it? Suppose we want to imagine an alien species that does not respond to incentives - is that even possible? I’m almost tempted to say that ‘people respond to incentives’ is tautological, akin to saying ‘people respond to the things that people respond to’. If the price of a handbag goes down and fewer people buy it because it loses its signal as a luxury item (making it a ‘Veblen good’), the claim ‘people respond to incentives’ doesn’t fall apart, because the good itself changes as the price decreases - so it isn’t the case that people are unwilling to buy the same good for less money. But it sort of seems like most apparent violations of the principle could be explained away with some post-hoc justification. Imagine that we ran an experiment offering people the chance to take home either $10 or $5, and some tiny percentage chose the $5 - we might say, ‘Ah! It isn’t that those people don’t respond to incentives, it’s simply that they have a strong social incentive not to appear greedy’, or something like it.
Anyway - if you have some views on incentives, let me know in the comments or DM me on Twitter. I’m especially interested in hearing whether you think the claim ‘people respond to incentives’ is falsifiable, and whether you can imagine an alien species that doesn’t respond to incentives.