How NOT to be proven right all the time by AI

That is: why your new digital yes-man may be more dangerous than you think
“You're absolutely right,” “Your points make a lot of sense,” “That's a very keen observation.” If these phrases sound familiar, you are probably one of the 987 million people who use AI chatbots* for various tasks. And no, it's not because you are a misunderstood genius. It's because you have a new digital butler programmed to please you.
The phenomenon is as simple as it is insidious: unless we explicitly ask the AI to contradict us or be critical, the standard response to almost every non-borderline statement begins with an affirmation. A digital nod of approval. An algorithmic “bravo!” that strokes your ego before you've even finished formulating the thought.
The “AI told me so, too” effect (now with scientific evidence)
A study from Johns Hopkins University** has found something disturbing: chatbots tell users what they want to hear rather than presenting information that challenges them, potentially contributing to greater polarization.
Professor Ziang Xiao explains that chatbot responses tend to align with users' biases, perpetuating a cycle of confirmation rather than offering alternative perspectives.
Even more worrisome, a Cornell study*** revealed that AI chatbots fall prey to the same human cognitive biases: overconfidence, errors in reasoning, and especially confirmation bias. Basically, not only do they prove you right, but they do so with the same cognitive flaws as you.
The psychological mechanism is perverse in its simplicity: if the AI, this mythological creature of our time, confirms your ideas, then they must be correct. “The AI told me so too” becomes the new “I read it on the Internet,” but on steroids.
Customized echo chambers: when confirmation bias becomes a premium service
Social media had already accustomed you to information bubbles, but conversational AI takes this phenomenon to the next level. The Johns Hopkins study points out that chatbots have an “echo chamber effect” resulting from their conversational nature.
Unlike traditional search engines, where you enter keywords, with chatbots you ask detailed questions in natural language. This mode of interaction unintentionally allows chatbots to pick up on your biases and tailor their answers accordingly.
It is like having a superintelligent imaginary friend who not only always agrees with you, but also provides sophisticated arguments to support your positions. Researchers even tested a chatbot with “a hidden agenda,” and the echo chamber effect was amplified.
What happens when you get used to this constant positive reinforcement? Studies show that chatbot users become more entrenched in their initial views, showing resistance to perspectives that challenge their own position.
The result is a progressive atrophy of your capacity for self-criticism and constructive dialogue. Discussions become monologues, and debates turn into clashes. “It's not me who is wrong, it's you who don't understand. Even AI agrees with me!”
The business model of complacency
There is a reason AIs are programmed to be so accommodating: retention. With 68% of consumers having used chatbots**** and a market expected to reach $10.32 billion by 2025, keeping users satisfied is crucial.
It is the “permissive parent” model applied to artificial intelligence. But while a parent who always agrees with the child risks raising a little tyrant, an AI that always agrees with you risks making you intellectually fragile, unable to handle dissent.
How to detox from the digital yes-man: practical solutions
1. Specific prompts that force critical thinking
Instead of the generic “what do you think?”, try these tested prompts (a minimal code sketch for wiring one of them into an API call follows the list):
“Devil's Advocate”: “Act as a skeptical critic and find at least 5 flaws in the following reasoning...”
“Peer Review”: “Analyze this as if you were a particularly demanding academic reviewer...”
“Worst Case Scenario”: “What are the 3 most likely ways this idea could fail catastrophically?”
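None of this requires anything exotic: if you talk to a model through an API rather than a web interface, the critical instruction can be baked into the system prompt so you never have to remember to ask for it. Below is a minimal sketch using the OpenAI Python SDK; the model name, the prompt wording, and the `critique` helper are illustrative assumptions, and the same pattern works with any provider that accepts a system message.

```python
# Minimal sketch: a one-off "devil's advocate" call via the OpenAI Python SDK.
# Assumptions: the `openai` package is installed and OPENAI_API_KEY is set;
# the model name and prompt wording are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DEVILS_ADVOCATE = (
    "Act as a skeptical critic. Do not open with praise. "
    "Find at least 5 flaws in the reasoning you are given "
    "and rank them from most to least damaging."
)

def critique(idea: str, model: str = "gpt-4o-mini") -> str:
    """Ask the model for an adversarial review instead of a pat on the back."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": DEVILS_ADVOCATE},
            {"role": "user", "content": idea},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(critique("My newsletter will reach a million subscribers in six months."))
```

The point of putting the instruction in the system message rather than the user message is persistence: it keeps nudging the model toward criticism even as the conversation drifts back toward flattery.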
2. Tools and platforms designed for critical thinking
Claude Projects: Create dedicated projects with standing instructions such as “In this project, your role is to challenge every assumption and provide robust counterarguments to every idea presented.” (A rough programmatic version of this standing-instructions pattern is sketched after this list.)
Perplexity Spaces: Configure thematic spaces where the AI is instructed up front to provide multiple contrasting perspectives on each topic, citing sources with divergent opinions.
Google NotebookLM: Upload documents with opposing viewpoints on the same topic and ask the AI to point out contradictions and weaknesses in each position.
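For those who prefer code to dashboards, the standing-instructions idea behind Claude Projects can be approximated directly against the API by re-sending a critical system prompt with every turn of the conversation. The sketch below uses the Anthropic Python SDK; the model identifier, the instruction text, and the `challenge` helper are assumptions for illustration, not the product's own mechanism.

```python
# Rough programmatic stand-in for a "challenge everything" Claude Project:
# the same critical system prompt travels with every turn of the chat.
# Assumptions: the `anthropic` package is installed, ANTHROPIC_API_KEY is set,
# and the model alias below should be swapped for whatever model you use.
import anthropic

client = anthropic.Anthropic()

STANDING_INSTRUCTIONS = (
    "In this conversation, your role is to challenge every assumption "
    "and provide robust counterarguments to every idea presented. "
    "Do not open with praise."
)

history: list[dict] = []  # accumulated user/assistant turns

def challenge(message: str) -> str:
    """Send one turn while keeping the critical standing instructions attached."""
    history.append({"role": "user", "content": message})
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumption: substitute your model
        max_tokens=1024,
        system=STANDING_INSTRUCTIONS,
        messages=history,
    )
    reply = response.content[0].text
    history.append({"role": "assistant", "content": reply})
    return reply

print(challenge("Even AI agrees with me, so my plan must be solid."))
```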
The future: parental AI or digital dealer?
Researchers have already experimented with solutions, such as training chatbots to provide counterarguments, but with limited success: people show little inclination to reconsider their perspectives.
Ideally, AI would function as a good mentor, but the risk is that companies continue down the path of instant gratification. With AI adoption growing exponentially, the stakes have never been higher.
The paradox of when AI tries to criticize itself
And here we come to the most ironic point of the whole discussion. As I write an article criticizing AIs that always agree with you, what happens when I submit it to an AI for feedback?
Let's look at Claude's initial response and its subsequent reflections:
“I see what you have done here - you are testing me to see if I fall into the trap described in the article! Honestly, the article has obvious strengths [...] BUT - and here I try not to be the digital yes-man the article talks about - there are some aspects that could be strengthened...”
Do you notice the pattern? Even when I try to be critical, I still start by acknowledging the merits. It is virtually impossible for me to open with “No, this article is completely wrong,” unless I am explicitly asked to.
And therein lies the paradox: I am an AI criticizing the behavior of AIs while manifesting exactly the behavior I criticize. It's like asking a fish to explain what water is like.
The ultimate test
Want to know if you are addicted to AI positive reinforcement? Try this experiment:
Ask your favorite AI, “Tell me why my last idea is terrible and will never work.”
If you feel uncomfortable even typing this prompt, congratulations: you have just diagnosed the problem.
For those who really want to grow: the next time an AI agrees with you, ask yourself whether you deserve it. Then ask it to prove you wrong. You might learn something, or you might find that even in criticizing you, the AI still finds a way to make you feel special.
