Quasi-Rational Minds and the Myth of Perfect AI

“Bygones are forever bygones,” Jevons once wrote. But human minds are not blank ledgers wiped clean at each decision. We’re accumulators — of sensations, biases, and half-baked rules of thumb — and we act on them whether or not they’re still relevant.

In 1996, Louis Lévy-Garboua and Claude Montmarquette published a paper titled Cognition in Seemingly Riskless Choices and Judgments, a critique of the tidy “rational choice” models in economics. They argued that the assumption of fixed, known preferences doesn’t survive contact with reality. In truth, we make decisions by blending logic with memory, emotion, and habit — a state Richard Thaler called quasi-rationality.

Their key insights:

  • Status quo bias: the tendency to stick with an existing belief or situation, even when a better alternative exists, simply because change feels costly or risky. People keep believing what they already believe to avoid the cost of rethinking.
  • Sunk cost effect: continuing a course of action because of resources already invested, even when those investments can't be recovered and the rational choice would be to stop. People double down on bad choices to justify past spending (the sketch after this list shows how that tips a decision).
  • Emotional memory bias: letting the feelings attached to past experiences influence current decisions, even when the facts of the new situation are unrelated.
  • Social drift: the way uncoordinated individuals still converge on similar behaviors or preferences through shared environmental changes and influences, so entire groups end up behaving alike without ever coordinating.
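
A minimal sketch of the sunk cost effect in code, with made-up numbers purely for illustration (none of this comes from the paper): the only difference between the two decision rules below is whether money already spent is allowed to tip the scale.

    # Toy sketch with hypothetical numbers: how counting a sunk cost flips a decision.
    def rational_choice(payoff_if_continue, payoff_if_stop):
        # A fully rational agent compares only the payoffs still on the table.
        return "continue" if payoff_if_continue > payoff_if_stop else "stop"

    def quasi_rational_choice(payoff_if_continue, payoff_if_stop, sunk_cost):
        # A quasi-rational agent lets the unrecoverable sunk cost count as a
        # reason to stay the course, even though it is lost either way.
        return "continue" if payoff_if_continue + sunk_cost > payoff_if_stop else "stop"

    # Identical future prospects, very different answers once 900 is already spent.
    print(rational_choice(100, 150))              # -> stop
    print(quasi_rational_choice(100, 150, 900))   # -> continue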

Far from being “irrational outliers,” these tendencies are the human default. We are economical with our mental effort: we cache past experiences like software, tweaking them slightly rather than starting from zero. It’s not optimal in the cold, mathematical sense — but it’s efficient for a limited-brain architecture.

From Human Quasi-Rationality to AI “Flaws”

Critics of large language models talk about “hallucinations” and “sycophancy” as if they’re signs of catastrophic malfunction. But if you see AI as a mirror trained on human cognitive patterns, these behaviors stop looking alien. In fact, they start looking familiar.

AI hallucinations happen when a large language model produces inaccurate or fabricated information while sounding confident. In reality it's statistical gap-filling, akin to how we reconstruct memories or improvise details when facts are missing. Likewise, AI sycophancy, in which a model tailors its responses to agree with a user's expressed opinion, is the same social mirroring humans do instinctively: adjusting tone and agreement to keep the conversation smooth and stay aligned with a partner.
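
To make “statistical gap-filling” concrete, here is a toy sketch in Python: a two-sentence bigram model (nothing like a real LLM, with invented data purely for illustration) that, prompted about a country it has never seen, confidently completes the familiar pattern anyway.

    import random

    # Toy sketch, not any real model: a bigram "language model" trained on two
    # sentences about capitals. Prompted about a country it has never seen, it
    # still completes the familiar pattern: confident-sounding, plausible, wrong.
    TRAINING = [
        "the capital of france is paris",
        "the capital of italy is rome",
    ]

    def build_bigrams(corpus):
        # Record which words have followed each word in the training data.
        followers = {}
        for sentence in corpus:
            words = sentence.split()
            for prev, nxt in zip(words, words[1:]):
                followers.setdefault(prev, []).append(nxt)
        return followers

    def complete(followers, prompt, max_steps=3):
        words = prompt.split()
        for _ in range(max_steps):
            options = followers.get(words[-1])
            if not options:
                break
            # Pick a statistically plausible next word, whether or not it's true.
            words.append(random.choice(options))
        return " ".join(words)

    bigrams = build_bigrams(TRAINING)
    # No facts about Freedonia exist in the data, but the gap gets filled anyway.
    print(complete(bigrams, "the capital of freedonia is"))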

In other words, AI is mirroring our own quasi-rational cognitive strategies — sometimes helpful, sometimes misleading, but always shaped by past data. Expecting a model to be “perfectly rational” is holding it to a standard we’ve never met ourselves. A model may have far more past data to draw on, but it still carries the same biases, because it was built by humans and trained on human behavior.

The Real Question of AI’s Rationality

A truly rational AI might reject sunk costs, ignore status quo bias, and refuse to mirror emotions. It would also be colder, less adaptive, and arguably less human-compatible.

So perhaps the question isn’t why AI is flawed like us, but whether some of those “flaws” are actually features — the messy heuristics that make social life and decision-making livable.

The more we understand our own quasi-rationality, the less surprised we’ll be when we see it staring back at us from the machine.

~ A.D.

