Behavioralism and the Problem of Conflicting Quirks

I’ve been spending quite a bit of time with the behavioralists lately. I recently read Dan Ariely’s interesting book, Predictably Irrational: The Hidden Forces That Shape Our Decisions. Then I heard Tom Ulen give a nice overview presentation at the recent Silicon Flatirons conference on the New Institutional Economics. I’m currently reading Cass Sunstein and Richard Thaler’s Nudge: Improving Decisions About Health, Wealth, and Happiness, which argues that public policies should be structured (by “choice architects”) to account for the various cognitive quirks behavioralists have discovered.

I must admit, I’ve had plenty of misgivings about behavioralism in the past. Mainly, I’ve suspected that the empirical evidence purporting to demonstrate major, systematic cognitive quirks was not all that strong. For example, I believe (though I’m not sure) that I was a subject in one of those coffee mug experiments that purport to establish the endowment effect. We did one of those exercises in one of my law school classes. If that’s the sort of experimental data underlying this supposed quirk, it’s hardly robust. Indeed, as Josh has explained, Charles Plott and Kathryn Zeiler recently showed that endowment effect studies reach quite different conclusions when the questions are posed differently.

In addition to questioning the quality of the underlying empirical data, I’ve suspected that behavioralists are too quick to draw conclusions — both positive and normative — from their experimental findings. I once wrote a short response piece titled Two Mistakes Behavioralists Make, where I criticized two symposium participants for jettisoning rational accounts too quickly when explaining survey findings and for advocating governmental solutions to various cognitive quirks too readily (with little regard for government’s own institutional maladies).

But I don’t want to be a knee-jerk ideologue. When facts conflict with theory, facts must prevail. To the extent people do depart systematically from the rational choice model of human behavior, we need to tweak the model accordingly. And when we’re crafting public policies, we ought to take account of the behavior of real people (“humans” as opposed to “econs,” to use Sunstein and Thaler’s lingo).

The challenge for the behavioralists is to set forth a predictable (and thus usable) model of human behavior that improves upon the rational choice model by accounting for the well-established, systematic departures from rationality. That’s presumably what books like Dan Ariely’s Predictably Irrational are trying to do. But there’s still lots of work to be done in this area.

A particularly vexing problem, I believe, is that of conflicting quirks. What do we do when one heuristic would lead humans to reach a particular non-rational conclusion and another simultaneously operative heuristic would push in the opposite direction? Which heuristic trumps? We need to know that so that we can predict what conclusions people will reach.
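
To see why this matters for prediction, here is a toy sketch of my own (nothing like it appears in Nudge, and the numbers are invented): treat perceived risk as the true risk scaled by two bias multipliers, one for availability and one for overconfidence. Unless we have some account of the multipliers’ relative magnitudes, the model can rationalize either direction of error and so predicts nothing:

```python
# Toy model (my own illustration, not drawn from the behavioral literature):
# two simultaneously operative biases push a risk estimate in opposite
# directions, and the net effect depends entirely on their magnitudes.

def perceived_risk(true_risk, availability_factor, overconfidence_factor):
    """Perceived risk = true risk scaled by two opposing biases.

    availability_factor > 1:   vivid, readily available examples inflate the estimate
    overconfidence_factor < 1: "it won't happen to me" deflates it
    """
    return true_risk * availability_factor * overconfidence_factor

TRUE_RISK = 0.10  # hypothetical baseline probability

# If availability dominates, the risk is overestimated: 0.18 > 0.10.
print(perceived_risk(TRUE_RISK, availability_factor=3.0, overconfidence_factor=0.6))

# If overconfidence dominates, the risk is underestimated: 0.045 < 0.10.
print(perceived_risk(TRUE_RISK, availability_factor=1.5, overconfidence_factor=0.3))
```

Both runs use the same machinery; without some theory of which factor dominates (and when), the “model” is consistent with almost any observation.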

Consider, for example, chapter one of Sunstein and Thaler’s book. The chapter, titled “Biases and Blunders,” aims to sketch out the mental shortcuts we humans use in judging the magnitude of risks. The authors discuss the well-known “availability heuristic,” pursuant to which people “assess the likelihood of risks by asking how readily examples come to mind.” They explain that “[i]f people can easily think of relevant examples, they are far more likely to be frightened and concerned than if they cannot.” (So, for example, we tend to think that homicide is more common than suicide because we hear about homicides more; in reality, suicide is far more common.) Sunstein and Thaler also discuss the overconfidence bias, which leads us to be overly optimistic about our own ability to avoid bad outcomes (e.g., 90 percent of drivers believe they are above average behind the wheel).

So what would we predict about human risk judgments when both of these heuristics are operative? Take, for example, gay men’s estimates of their risk of contracting HIV. On the one hand, gay men are much more likely to know people infected with HIV and to have observed the highly salient, agonizing death of a friend or acquaintance suffering from AIDS. (The more salient a bad outcome, the greater its perceived risk.) On the other hand, because the behavior leading to HIV infection is generally voluntary, the overconfidence bias is likely to kick in. Which bias would we expect to trump?

Sunstein and Thaler point to gay men’s perceptions of their own HIV risk as exemplifying the overconfidence bias: “Gay men systematically underestimate the chance that they will contract AIDS, even though they know about AIDS risks in general.” But what happened to the availability heuristic and the salience bias?

Perhaps I’m demanding too much here. Even the rational choice model can’t predict human judgments when individual preferences push in different directions (e.g., will a lawyer who values money, leisure, and the life of the mind give up a lucrative law firm job to become a law professor?). But I do think that if we’re going to complicate the rational choice model with a bunch of quirky “exceptions,” we need some account of how the quirks interact when they conflict. Otherwise, we won’t be able to say that humans are predictably irrational.