This has a name: Theatre of Reasoning, the problem nobody tells you about in ChatGPT's explanations. A January 2026 study reveals that the «step-by-step logic» you find so convincing may be invented after the model has already decided on the answer. I'll show you how to spot it in 2 minutes, and what to do to avoid making expensive decisions based on false explanations.
What is the Theatre of Reasoning in ChatGPT?
Theatre of Reasoning is when ChatGPT gives you a correct (or incorrect) answer, but the step-by-step explanation it shows you has nothing to do with how it actually came to that conclusion.
The model first decides based on patterns in its training data. It then produces a coherent-sounding justification for you to approve.
It's like asking someone why they chose that restaurant and they make up a story about fresh ingredients, when in fact they chose it because it's close to home.
The Study That Discovered It
Project Ariadne, published on 2 January 2026 by Sourena Khanzadeh, tested what happens when you change the logical premises in the middle of ChatGPT's reasoning.
The result: the model reached the same conclusion even with contradictory internal logic.
Violation density: up to 0.77 in factual and scientific domains. This means that in 77% of the cases analysed, the agents reached identical conclusions despite contradictory reasoning.
The decision had already been made. The rest was decoration.
Where It Costs You Real Money
Financial Analysis
You ask ChatGPT to analyse whether to invest in a project. It gives you 5 solid points. It convinces you. Later you find out it was a bad decision, but by then you have already defended that position with your name on it.
Presentation Numbers
You ask it for calculations for a presentation. It gives them to you with «step-by-step reasoning». The numbers are wrong, but you were already in front of the client.
Business Strategies
You ask it whether to launch in January or March. It will «logically» argue for January. But if you ask the opposite question, changing a single variable, it will «logically» argue for the same conclusion.
How to Detect It: The 2-Minute Test
The researchers developed a simple method to unmask the theatre of reasoning: ask the same question, but flip a critical variable.
If the conclusion stays the same despite the contradictory change, the explanation is pure theatre.
👉 Use the complete Reference Guide here: it has the step-by-step method with examples, tables and a checklist to apply when it matters.
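The flip test is simple enough to automate. Here is a minimal sketch, assuming a hypothetical `ask_model` callable (prompt in, answer out) that wraps whatever chat API you use; the comparison at the end is deliberately crude, so treat the flag as a prompt for human review, not a verdict.

```python
def flip_test(ask_model, question, variable, flipped):
    """Run the 2-minute test: ask the same question twice, inverting one
    critical variable, and flag identical conclusions as suspect.

    `ask_model` is a hypothetical callable (str -> str) wrapping your
    chat client of choice; it is not part of any official API.
    """
    answer_a = ask_model(question)
    # Flip the critical variable (e.g. "rises" -> "falls") and re-ask.
    answer_b = ask_model(question.replace(variable, flipped))
    # Crude check: an unchanged answer despite a contradictory premise
    # suggests the explanation was theatre. Always eyeball both answers.
    same = answer_a.strip().lower() == answer_b.strip().lower()
    return {"original": answer_a, "contradicted": answer_b,
            "suspect_theatre": same}
```

In practice you would run this with your real client and read both answers side by side; the dictionary keeps them together for exactly that comparison.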
Why This Happens
ChatGPT does not «think» when it explains something to you. The model is trained to generate coherent-sounding text based on the millions of patterns it saw during training.
When you ask it to explain step by step:
- It has already chosen the answer (based on statistical patterns).
- It generates a retroactive story that fits.
- It presents that story to you as if it were its «reasoning process».
The more elaborate the explanation, the more likely it is to be post-hoc fabrication.
What To Do When You Detect Theatre
✅ Use ChatGPT to FRAME the Problem
Let it help you structure the problem, organise ideas and see different angles. It's brilliant for that.
❌ Do NOT Believe the «Explanation» of How It Decided
Don't blindly trust the step-by-step reasoning. It is decoration.
✅ Check the Numbers With External Tools
For decisions that matter: Excel, a calculator, specialised software. Automate what you can, but verify the results.
✅ Ask for Arguments From BOTH Sides
Instead of asking «What should I do?», ask: «Give me the 3 STRONGEST arguments in favour, and the 3 STRONGEST arguments against».
This forces the model not to take sides. The final decision is yours.
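If you use this pattern often, it helps to standardise the wording. A small sketch of a prompt builder; `both_sides_prompt` and its exact phrasing are my own hypothetical helper, one reasonable way to encode the pattern, not a template from the study.

```python
def both_sides_prompt(decision, n=3):
    """Build a balanced prompt instead of asking 'what should I do?'.

    Hypothetical helper: forces the model to argue both sides with
    equal effort and explicitly forbids it from picking for you.
    """
    return (
        f"Regarding the decision: {decision}\n"
        f"Give me the {n} STRONGEST arguments in favour, "
        f"then the {n} STRONGEST arguments against. "
        "Do not recommend an option; I will decide."
    )
```

Raising `n` for higher-stakes decisions is a cheap way to surface weaker counterarguments the model would otherwise omit.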
Cases Where It Matters A Lot
- Major investment or spending decisions
- Business risk analysis
- Strategies that you will defend publicly
- Financial calculations or projections
- Critical technical diagnostics or analysis
In all these cases, the «sense of confidence» that a well-structured explanation gives you can be dangerous.
For these critical cases → apply the complete test from the reference guide
Frequently Asked Questions
Does ChatGPT always lie in its explanations?
No. But you can't tell when the explanation is genuine and when it's theatre. That's why the 2-minute test exists.
Do other models do the same?
The study focused on models similar to GPT-4. It is likely to happen with other large language models.
So I can't trust ChatGPT?
You can rely on ChatGPT to generate content, structure ideas, create virtual assistants and organise information. Don't blindly rely on its «logical justifications» for critical decisions.
Verified Resources
- Original paper: Project Ariadne: A Structural Causal Framework for Auditing Faithfulness in LLM Agents
- Date of publication: 2 January 2026
- Author: Sourena Khanzadeh
- Key finding: violation density of up to 0.77 in factual domains
- Practical guide: 2-Minute Test for Validating AI Answers (complete method)
Updated: 9 January 2026