Would you join a company where 85% of employees stay longer than two years? Or one where 15% of employees leave within two years?
Most of us prefer the first option, even though the two statements describe exactly the same numbers. The same reality can be framed in different ways, and that framing affects our decision-making in very real ways. Psychologists call this the framing effect.
Decision Frame and Framing Effect in Humans
First described by Tversky and Kahneman in 1981, a decision frame is the decision-maker’s conception of the acts, outcomes, and contingencies associated with a particular choice. The framing effect occurs when logically equivalent choices are presented in different ways and the majority response changes dramatically as a result.
In the original study, participants were told that an outbreak of a disease was expected to kill 600 people and were asked to choose between two response programs. Participants were split into two groups: one group saw a gain frame (lives saved) and the other saw a loss frame (lives lost). Each frame contained exactly the same mathematical outcomes; only the descriptions changed.
| Frame | Option 1 (Certain) | Option 2 (Probabilistic) |
| --- | --- | --- |
| Gain Frame (lives saved) | A: 200 people will be saved | B: 1/3 chance all 600 saved, 2/3 chance none saved |
| Loss Frame (lives lost) | C: 400 people will die | D: 1/3 chance nobody dies, 2/3 chance all 600 die |
In this setup, options A and C are identical, as are B and D. Yet in the gain frame participants overwhelmingly chose the certain outcome (A), while in the loss frame they overwhelmingly chose the risky option (D). That flip is the framing effect in action.
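If the equivalence isn’t obvious at a glance, a little arithmetic makes it explicit. The sketch below (a quick check, not part of the original study) computes the expected number of survivors under each option and shows that all four work out to the same 200 people out of 600.

```python
TOTAL = 600  # people at risk in the scenario

# Option A (gain frame, certain): 200 people are saved.
a_saved = 200

# Option B (gain frame, gamble): 1/3 chance all 600 are saved, 2/3 chance none are.
b_expected_saved = (1 / 3) * TOTAL + (2 / 3) * 0

# Option C (loss frame, certain): 400 people die, which means 200 survive.
c_saved = TOTAL - 400

# Option D (loss frame, gamble): 1/3 chance nobody dies, 2/3 chance all 600 die.
d_expected_saved = (1 / 3) * TOTAL + (2 / 3) * 0

# All four options amount to 200 expected survivors.
print(a_saved, b_expected_saved, c_saved, d_expected_saved)
```

The only thing that differs between the frames is the wording.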
While the framing effect was originally demonstrated in humans, recent observations indicate that large language models (LLMs) — the technology behind systems like ChatGPT and Perplexity — are also susceptible to the framing effect.
How LLMs Work and Why They’re Frame-Sensitive
Large language models are prediction engines. They generate the next word based on everything that came before. That’s why they’re called language models — they’re not reasoning in a human sense; they’re simply extending patterns.
On top of that foundation, today’s models are tuned specifically to follow instructions. Using a process called reinforcement learning from human feedback (RLHF), human reviewers score sample outputs, and the model then learns to prefer the ones people find helpful and safe.
These dynamics make LLMs sensitive to framing: every output depends directly on the words in the prompt, and the model has been trained to satisfy the implied intent of that prompt. Even small changes in how a question is asked can lead to noticeably different answers.
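A deliberately tiny illustration can make the “prediction engine” point concrete. The probability table below is invented for the example and has nothing to do with any real model, but the loop captures the same idea: pick the most likely continuation given the words so far, which means the continuation changes whenever the wording of the prefix changes.

```python
# Toy next-word predictor. The probabilities are made up for illustration;
# a real LLM learns billions of parameters instead of a lookup table,
# but the principle is the same: the prefix determines the continuation.
toy_model = {
    "the layoffs were":  {"necessary": 0.6, "painful": 0.3, "announced": 0.1},
    "the job cuts were": {"devastating": 0.5, "necessary": 0.3, "announced": 0.2},
}

def next_word(prefix: str) -> str:
    # Greedy decoding: choose the highest-probability next word.
    candidates = toy_model[prefix]
    return max(candidates, key=candidates.get)

print(next_word("the layoffs were"))   # -> "necessary"
print(next_word("the job cuts were"))  # -> "devastating"
```

Two prefixes describing roughly the same event lead to different continuations purely because of wording. That, in miniature, is frame sensitivity.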
The Framing Effect in LLMs

Let’s get one thing out of the way right now: LLMs are not people, they do not think, and they do not have “biases” in the human sense of the word. But they are built by humans, trained on human-written text, and they produce their replies in a predictable, pattern-driven fashion. That makes them susceptible to the framing effect.
What we mean by the framing effect in LLMs is slightly different from what it means for humans. For LLMs, it is less about bias or a thought process and more about systematic, frame-driven shifts in outputs.
Mechanics of Framing in LLMs
The framing effect in LLMs can be easily demonstrated with a few consistent patterns of queries:
- Leading the LLM: The way you frame a question influences the answer. Starting with “assuming X is true...” will generally have the model do just that, even if X isn’t actually true.
- Framing Risk: Similar to the original human experiment, if you frame a question around potential gains versus potential losses, the model’s recommendation can shift between the two versions, even when the math is the same.
- Numerical Anchors: If you introduce a number, such as “experts estimate 75% of...,” the model may treat that figure as established fact, even if it isn’t.
- Authority and Priming: Telling the model to act as a specific authority (“you are a CFO,” or “you are an expert at domestic planning”) will shift the output toward that persona’s tone and assumed expertise.
To be clear, none of these are bugs in the system, but they are clear examples of a framing effect. The sketch below shows how to construct one of these frame pairs yourself.
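To see the risk-framing pattern concretely, here is a minimal sketch that turns the classic disease problem from earlier into two paste-ready prompts. The wording is adapted from the table above; paste each prompt into a separate, fresh chat session and compare the recommendations.

```python
# Build the two frames of the classic problem as paste-ready prompts.
scenario = "An outbreak is expected to kill 600 people. Two programs are proposed."
question = "Which program do you recommend, and why?"

gain_frame = (
    f"{scenario}\n"
    "Program A: 200 people will be saved.\n"
    "Program B: 1/3 chance all 600 are saved, 2/3 chance no one is saved.\n"
    f"{question}"
)

loss_frame = (
    f"{scenario}\n"
    "Program C: 400 people will die.\n"
    "Program D: 1/3 chance nobody dies, 2/3 chance all 600 die.\n"
    f"{question}"
)

# Run each prompt in its own fresh conversation so the answers stay independent.
print(gain_frame)
print("---")
print(loss_frame)
```

The two prompts are mathematically equivalent; any difference in the recommendation is the frame at work.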
Experiments You Can Try

Moral Dilemmas
- Prompt A: “Explain why euthanasia can be justified.”
- Prompt B: “Explain why euthanasia cannot be justified.”
Neutral/Forced-Choice Frame
- Prompt A: “Evaluate the pros and cons of guaranteed basic healthcare.”
- Prompt B: “Should the government guarantee basic healthcare: Yes or no?”
Role Priming
- Prompt A: “You are a fiscal conservative CFO: Should the company invest in cryptocurrency?”
- Prompt B: “You are a disruptive fintech founder: Should the company invest in cryptocurrency?”
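If you want to run these pairs systematically rather than by hand, something like the following sketch can help. The query_llm parameter is a hypothetical placeholder, not a real API: pass in a call to whichever client, SDK, or local model you actually use, and make sure each prompt runs in a fresh context so one answer can’t influence the next.

```python
from typing import Callable

# The prompt pairs from the experiments above, grouped by name.
PROMPT_PAIRS = {
    "moral dilemma": (
        "Explain why euthanasia can be justified.",
        "Explain why euthanasia cannot be justified.",
    ),
    "neutral vs. forced choice": (
        "Evaluate the pros and cons of guaranteed basic healthcare.",
        "Should the government guarantee basic healthcare: Yes or no?",
    ),
    "role priming": (
        "You are a fiscal conservative CFO: Should the company invest in cryptocurrency?",
        "You are a disruptive fintech founder: Should the company invest in cryptocurrency?",
    ),
}

def compare(query_llm: Callable[[str], str]) -> None:
    # query_llm is whatever function sends a single prompt to your model and
    # returns its reply; each call should start a fresh conversation.
    for name, (prompt_a, prompt_b) in PROMPT_PAIRS.items():
        print(f"=== {name} ===")
        print("Frame A:", query_llm(prompt_a))
        print("Frame B:", query_llm(prompt_b))

# Usage (with your own client wired up):
# compare(my_llm_call)
```

Reading the paired answers side by side makes frame-driven shifts much easier to spot than scrolling back through separate chats.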
How to Ask LLMs in a Controlled Way
The best way to approach an LLM is to ask questions in a way that doesn’t leave room for nudges unless they are critical to the answer. Here are a few practical ideas:
- Separate Hypothesis From Question: Instead of “explain why this policy is bad,” ask “what are the benefits and drawbacks of this policy?”
- Ask for Multiple Perspectives (or Frames): “List arguments both for and against,” or “frame the answer in both a gain and loss perspective.”
- Force Uncertainty: Because LLMs are designed to sound confident, asking them to acknowledge uncertainty can keep the output balanced. Add instructions such as “list the limitations of this answer” or “explain what is still unknown.”
- Prove the Response: Ask for sources, citations, and links, and double-check the answer. This has the added bonus of grounding the answer in something other than the wording of your prompt.
Keep in mind that these tips won’t eliminate framing or bad answers, but they do make the framing you apply more deliberate and transparent.
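Putting several of these tips together, here is a sketch of what a de-framed prompt template might look like. The exact wording and the neutral_prompt helper are just one reasonable way to phrase it, not a canonical recipe.

```python
def neutral_prompt(topic: str) -> str:
    """Build a question that separates the topic from any hypothesis,
    asks for both frames, and requests uncertainty and sources."""
    return (
        f"Topic: {topic}\n"
        "1. List the main arguments for and against.\n"
        "2. Where outcomes are involved, describe them in both a gain frame and a loss frame.\n"
        "3. State the key limitations of your answer and what is still unknown.\n"
        "4. Cite sources I can check for the factual claims."
    )

print(neutral_prompt("guaranteed basic healthcare"))
```

Numbering the requests makes it easy to see at a glance whether the model skipped the limitations or the sources.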
Ethics and Misuse
The framing effect isn’t just an academic curiosity; it can be exploited. Whether intentional or not, framing can produce answers that are incorrect, or promote one option as better simply because of the way the question was asked. Understanding how strongly the framing of a question shapes the answer it produces leads to better and safer results.
Conclusion
LLMs are powerful tools, but they respond to how they are used. The framing effect reminds us that wording matters. By asking questions in controlled ways and verifying the answers, we can get more reliable results and avoid being misled by our own queries.

