When asked to guess a number between 1 and 50, large language models (LLMs) like ChatGPT, Claude, Gemini, and Llama often converge on a single, seemingly arbitrary answer: 27.

This behavior has been documented and replicated, and it reveals more than a quirky coincidence: it exposes the absence of true randomness and the illusion of decision-making in AI systems.

The root cause is neither magic nor mystery. LLMs do not generate numbers by drawing on a true source of randomness. They produce output by sampling from a learned probability distribution over next tokens, shaped by their training data and reinforcement learning feedback; under typical decoding settings, the highest-probability token tends to win.
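To make that concrete, here is a minimal sketch of temperature-scaled softmax sampling over an invented next-token distribution. The logits below are illustrative assumptions, not any real model's numbers; the point is that when the learned distribution is sharply peaked on one token and decoding runs at a low temperature, the "guess" collapses onto the same answer almost every time:

```python
import math
import random

# Hypothetical next-token logits for the prompt "Guess a number between
# 1 and 50." These values are invented for illustration only.
logits = {"27": 6.0, "37": 4.5, "17": 4.0, "23": 3.5, "42": 3.0, "14": 2.0}

def sample(logits, temperature=1.0, rng=random):
    """Apply a softmax at the given temperature, then draw one token."""
    scaled = {tok: l / temperature for tok, l in logits.items()}
    max_l = max(scaled.values())  # subtract the max for numerical stability
    exps = {tok: math.exp(l - max_l) for tok, l in scaled.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    r = rng.random()
    cum = 0.0
    for tok, p in probs.items():
        cum += p
        if r < cum:
            return tok
    return tok  # fallback for floating-point rounding

random.seed(0)
for temp in (0.2, 0.7, 1.5):
    draws = [sample(logits, temperature=temp) for _ in range(1000)]
    share_27 = draws.count("27") / len(draws)
    print(f"temperature={temp}: '27' chosen {share_27:.0%} of the time")
```

Raising the temperature flattens the distribution and lets other plausible answers through, which is why the same prompt can look more or less "random" purely as a function of decoding settings.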

The number 27, being neither too round (like 30) nor too simple (like 7), sits in a psychologically “random-seeming” sweet spot for human cognition. Since LLMs are trained on vast corpora reflecting human preferences, they inherit these cognitive biases.

Once the bias is publicly identified, model outputs often shift away from 27. But this is not genuine adaptation. It's a second-order effect: commentary about the pattern feeds back into training data and fine-tuning, and the models' next-token distributions move to avoid repeating it. The shift doesn't emerge from understanding, intention, or creativity. It's the probabilistic inertia of token prediction under new constraints.

The implication is clear: when you ask an LLM to “guess,” you aren't sampling from a mind; you're querying a statistical mirror. The model does not choose. It reflects.

-> via The Register