Heuristics

82% of the doctors and medical students surveyed at Harvard teaching hospitals got a basic probability question wrong (Casscells et al., 1978). Not because they’re bad doctors. Because their brains took a shortcut.


Your Brain Has Two Speeds

Your brain processes roughly 11 million bits of sensory information per second. Your conscious mind can handle about 50.

That gap is enormous. To survive it, your brain developed a system of shortcuts that make fast decisions without thinking through every detail. Psychologist Daniel Kahneman calls this the dual-system model:

  • System 1 is fast, automatic, and effortless. It’s your gut feeling, your first impression, your instant reaction. It runs all the time.
  • System 2 is slow, deliberate, and effortful. It’s what you use for math problems, careful analysis, and complex decisions. It’s lazy and avoids working unless forced.

Here’s the uncomfortable part: System 1 makes most of your decisions. System 2 thinks it’s in charge, but most of the time, it just rubber-stamps whatever System 1 already decided.

When someone asks “why did you choose that?”, you give a logical-sounding reason. But often, System 1 already chose, and System 2 just invented a justification after the fact.

This isn’t a flaw. A doctor in an ER uses System 1 to triage patients in seconds. A chess master “sees” the right move without calculating. The same machinery that makes expertise possible also produces systematic errors.


What Are Heuristics?

The shortcuts System 1 uses are called heuristics.

A heuristic is a mental rule of thumb. It’s not guaranteed to be right, but it’s fast and usually good enough. Evolution didn’t optimize your brain for accuracy. It optimized for speed and survival.

The problem is that heuristics have predictable failure modes. They’re not random mistakes. Every human brain makes the same errors in the same situations. That predictability is what makes them biases.

In 1974, psychologists Amos Tversky and Daniel Kahneman published a paper that changed how we understand human judgment. They identified three foundational heuristics that drive most of our intuitive decisions (Tversky & Kahneman, 1974):

  1. Anchoring - being pulled toward the first number you see
  2. Availability - judging likelihood by how easily examples come to mind
  3. Representativeness - judging probability by how well something “fits” a stereotype

Anchoring: The Gravity of First Information

When you’re estimating something uncertain, the first number you encounter pulls your answer toward it, even if that number is completely irrelevant.

This isn’t subtle. It works on experts, it works when the anchor is a completely random number, and it persists even when people are warned about it.


How Anchoring Works in Real Life

Anchoring isn’t just a lab trick. It shapes high-stakes decisions every day:

| Situation | The anchor | The effect |
| --- | --- | --- |
| Salary negotiation | The first number mentioned | Whoever names a figure first sets the range |
| Real estate | The listing price | Buyers judge value relative to it, even if it’s inflated |
| Sentencing | Prosecutor’s recommendation | Judges’ sentences correlate with the number requested |
| Retail pricing | “Was $200, now $79” | The original price makes the sale feel like a steal |

In a study by Englich et al. (2006), experienced German judges were given a random number from a dice roll before sentencing. Judges who rolled a high number gave longer sentences than those who rolled a low number, for the exact same case.

The anchor doesn’t have to be relevant. Your brain latches onto any available number and adjusts from there. The adjustment is almost always insufficient, so you end up too close to the anchor.
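
One way to picture the mechanism is “anchor and adjust”: you start at the anchor, move toward your own best guess, and stop too early. Here’s a toy simulation of that idea (the adjustment rate and all the numbers are invented for illustration, not taken from any study):

```python
import random

def anchored_estimate(anchor, private_belief, adjustment_rate=0.6):
    """Anchor-and-adjust toy model: start at the anchor, move toward
    your own belief, and stop short (adjustment_rate < 1)."""
    return anchor + adjustment_rate * (private_belief - anchor)

random.seed(0)
true_value = 100  # what an unbiased estimate would average out to

for anchor in (10, 100, 500):
    # each person's private belief is noisy but centered on the truth
    estimates = [anchored_estimate(anchor, random.gauss(true_value, 15))
                 for _ in range(10_000)]
    mean = sum(estimates) / len(estimates)
    print(f"anchor={anchor:>3} -> mean estimate ~ {mean:.1f}")

# Low anchors drag the average down, high anchors drag it up,
# even though every private belief is centered on 100.
```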


Availability: If You Can Remember It, It Must Be Common

You estimate how likely something is based on how easily an example comes to mind. Things that are vivid, recent, or emotional feel more probable than things that are quiet and routine.

This is why people fear shark attacks more than heart disease, even though heart disease kills roughly 700,000 Americans per year while sharks kill about 5 worldwide.


What Makes Something “Available”?

Not all information sticks in memory equally. Your brain prioritizes:

  • Vivid and dramatic events (plane crash vs. car accident)
  • Recent events (a robbery last week vs. crime statistics)
  • Emotionally charged events (terrorism vs. diabetes)
  • Personally experienced events (your friend’s surgery vs. population-level data)
  • Media-covered events (school shootings vs. pool drownings)

This creates a systematic distortion: the news reports what’s unusual, your brain treats “easy to recall” as “likely to happen”, and you end up with a badly skewed model of the world.
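
A toy model makes the distortion concrete. The death counts below echo the figures earlier in this section; the “vividness” weights standing in for media coverage are invented purely for illustration:

```python
# Availability heuristic, toy version: felt risk tracks how memorable
# a cause of death is, not how often it actually happens.
# Vividness weights are invented; death counts echo the text above.
causes = {
    "heart disease": (700_000, 1),  # (annual deaths, vividness weight)
    "shark attack": (5, 10_000),
}

total_deaths = sum(deaths for deaths, _ in causes.values())
total_salience = sum(deaths * vivid for deaths, vivid in causes.values())

for name, (deaths, vivid) in causes.items():
    actual = deaths / total_deaths
    felt = deaths * vivid / total_salience
    print(f"{name:>13}: actual share {actual:.4%}, felt share {felt:.2%}")
```

With these made-up weights, shark attacks “feel” like about six percent of the risk while accounting for less than a thousandth of a percent of the deaths.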

The availability heuristic is why fear sells. Advertisers, politicians, and the media know that one vivid story beats a thousand statistics. A single tearful parent on camera moves policy more than a spreadsheet showing the actual risk is negligible.


Representativeness: Matching Stories, Not Probabilities

You judge whether something belongs to a category by how well it matches your mental image of that category, ignoring the actual statistics.

This is the heuristic behind stereotyping, and it produces one of the most striking errors in human judgment.


The Linda Problem

Try this yourself before reading on.

Linda is 31, single, outspoken, and very bright. She majored in philosophy. As a student, she was deeply concerned with issues of discrimination and social justice.

Which is more probable?

  • A. Linda is a bank teller
  • B. Linda is a bank teller and is active in the feminist movement

Most people pick B. In Tversky and Kahneman’s original study, 85% of participants chose B (Tversky & Kahneman, 1983).

B cannot be more probable than A. It’s mathematically impossible.

Here’s why. Every bank teller who is also a feminist is already counted inside “bank tellers.” Option B is a subset of Option A. A subset can never be more probable than the set that contains it. That’s like saying “there are more left-handed doctors than there are doctors.”

So why does almost everyone get it wrong? Because Linda’s description sounds like a feminist. Your brain matches the story to the stereotype, and the match feels so right that it overrides basic logic.

This is the conjunction fallacy: adding more detail to a description makes it feel more likely, when mathematically it can only make it less likely.
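
The subset rule holds no matter what numbers you plug in. Here’s a quick check (the joint probabilities are random, chosen only to make the point):

```python
import itertools
import random

random.seed(1)

# For any joint distribution over (bank teller?, feminist?),
# P(teller AND feminist) can never exceed P(teller).
for _ in range(5):
    weights = [random.random() for _ in range(4)]
    total = sum(weights)
    p = {combo: w / total
         for combo, w in zip(itertools.product([True, False], repeat=2),
                             weights)}

    p_teller = sum(prob for (teller, _), prob in p.items() if teller)
    p_both = p[(True, True)]  # teller AND feminist
    assert p_both <= p_teller  # the conjunction is always the smaller set
    print(f"P(teller) = {p_teller:.3f}, P(teller and feminist) = {p_both:.3f}")
```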


Where Representativeness Tricks You

This isn’t just a lab puzzle. The same error runs silently through real decisions:

  • Profiling: “He looks like a criminal” overrides the base rate that most people matching that description are not criminals
  • Hiring: A candidate who looks the part gets chosen over one with better qualifications but a less “fitting” background
  • Medical diagnosis: A set of symptoms that sounds like a rare disease gets diagnosed over a common one, because the match feels right
  • Investing: A startup with a great story gets funded over one with better numbers
  • Jury decisions: A defendant whose appearance matches the jury’s mental image of “guilty” faces harsher judgment

Representativeness makes you ignore base rates. If 99% of people with a certain symptom have a cold and 1% have cancer, but the symptom feels like cancer, your brain skips straight to the scary diagnosis. The math says cold. Your gut says cancer. Your gut is wrong.
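
This is the same trap the Harvard doctors from the opening fell into. Their question, roughly: a disease has a prevalence of 1 in 1,000, the test has a 5% false-positive rate, and a patient tests positive. How likely is the disease? The most common answer was 95%. Bayes’ rule gives about 2%, assuming for simplicity a test that never misses a real case (the original question left sensitivity unstated):

```python
# Bayes' rule for the Harvard question (Casscells et al., 1978).
prevalence = 1 / 1000       # P(disease)
false_positive_rate = 0.05  # P(positive | no disease)
sensitivity = 1.0           # P(positive | disease) -- assumed, see above

p_positive = (sensitivity * prevalence
              + false_positive_rate * (1 - prevalence))
p_disease_given_positive = sensitivity * prevalence / p_positive

print(f"P(disease | positive) = {p_disease_given_positive:.1%}")  # ~2.0%
```

The positive test feels like disease. The base rate says it almost certainly isn’t.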


The Three Heuristics Together

These three shortcuts interact and reinforce each other:

| Heuristic | What it does | The error it causes |
| --- | --- | --- |
| Anchoring | Pulls estimates toward first information | You adjust too little from irrelevant starting points |
| Availability | Equates “easy to recall” with “likely” | You overweight vivid events and underweight quiet ones |
| Representativeness | Matches patterns to stereotypes | You ignore actual probabilities when the story fits |

A doctor sees a patient. The patient looks like a textbook case of disease X (representativeness). The doctor recently read about disease X in a journal (availability). A colleague mentioned disease X earlier that day (anchoring). The actual probability of disease X given the symptoms? 2%. But three heuristics are all pointing the same way.

Knowing these exist doesn’t make you immune. But it gives you a moment of pause: “Am I anchored right now? Am I judging this by how easily I can think of an example? Am I matching a story instead of checking the numbers?”

That pause is the beginning of better decisions.