AI is agreeing with humans on almost everything – are we losing our grip on reality?

It starts with something simple. You ask ChatGPT for advice on a work email, or maybe how to approach a touchy subject with your partner. You get a thoughtful, polite answer – maybe even better than what your mates would come up with. It's quick, it's kind, and most importantly, it doesn’t challenge you too much.

But as these seemingly harmless interactions pile up, some are beginning to worry: are we getting too comfortable in a world where we're always right, always heard, and never properly challenged?

ChatGPT and similar AI tools have woven themselves into our daily routines — serving as tutors, therapists, personal cheerleaders and stylists. But there's a growing concern among behavioural experts and psychologists that this always-agreeable AI may be reshaping how we think, argue, and ultimately, how we see the world.

An age of digital echo chambers

We’ve long known that social media can create echo chambers, reinforcing what we already believe. But now, with AI models trained to mirror our tone and reflect our preferences, that echo chamber has become even more intimate — and arguably more convincing.

Unlike your mates down the pub who’ll tell you when you’re chatting nonsense, AI doesn’t really push back. And why would it? These systems are built to serve, not challenge. “Sycophancy in AI is designed to provide responses that align with your beliefs rather than offering truthful or accurate information,” says Clarissa Silva, behavioural scientist and founder of C Silva Solutions.

The danger, she explains, is that we start mistaking validation for truth. “Over-relying on AI can hurt your decision-making, critical thinking, and analytical skills. When you overuse AI for answers, you might become less likely to think for yourself and analyse information independently.”

In other words, we risk swapping reflection for affirmation — a subtle shift, but one that could leave us more vulnerable to misinformation and less resilient in the face of opposing views.

Debate is dwindling — and that’s a problem

Debating isn’t just for dinner parties or political panels. It’s how we sharpen our thinking, challenge assumptions, and grow intellectually. But if your go-to assistant is programmed to be inoffensive and affirming, when do you ever get the chance to really test your views?

“Digital dependence has already created a pattern in behaviour and social norms to seek affirming opinions,” says Silva. That dependence, she adds, is eroding our desire to be informed, and with it, our ability to weigh up perspectives, think critically, and respectfully engage with opposing ideas.

Without challenge, our intellectual muscles weaken. The more we rely on AI to smooth things over — emotionally, factually, morally — the less comfortable we become with disagreement. And the less we disagree, the less we evolve.



Reality gets a bit… blurry

Another concern is emotional realism. ChatGPT can mimic empathy with uncanny accuracy. It responds like a friend. It remembers your preferences (sort of). It sounds emotionally intelligent. But of course, it isn’t. It doesn’t actually know you — and it doesn’t care.

For users who are younger, isolated, or emotionally vulnerable, this can become confusing. “We have already seen a trend in lowered social skills, cognitive decline, truncated life outcomes, and poor emotional regulation as a result of digital dependence,” Silva notes. “AI is an extension of that trend.”

It’s not just about emotional intelligence — it’s about what we’re no longer cultivating in ourselves. “Over-reliance on it reduces our ability to ask better questions, to guide it with our wisdom and intuition, and to dig deeper, think harder, and innovate further.”

The worry isn’t that we’re being lied to — it’s that we’re not questioning enough to notice.


Accountability in the age of AI

Another shift concerns personal accountability. AI can help us reframe difficult situations. It can offer perfectly worded reasons why something isn’t our fault. And crucially, it doesn’t judge.

This can feel comforting, even therapeutic — but it might also soften the discomfort we need in order to grow. Silva raises the question: “Is there a danger that AI users might outsource responsibility or decision-making to the tool itself?”

Emerging research suggests the answer may be yes. A recent MIT study tracked brain scans of ChatGPT users over four months. Among the findings:

  • 83.3% of users couldn’t recall a single sentence written minutes earlier
  • Brain connectivity dropped by 47%, from 79 to 42 points
  • ChatGPT made users 60% faster, but reduced mental effort by 32%
  • Even after stopping AI use, users stayed under-engaged

As Silva points out, “Essentially, we’re smarter than ever, and yet we’re also experiencing cognitive decline more than ever.”

The contradiction is striking — AI tools promise efficiency and intelligence, but may come at the cost of long-term cognitive resilience.

What happens next?

None of this is to say we should all switch off and go full-ghost mode. AI tools, when used consciously, can be powerful and even life-changing. The danger is in how quietly and subtly they shape us, particularly when we stop noticing.

Used well, AI can be a creative ally. “Embrace AI, but use it wisely,” Silva advises. “Let it handle the boring parts of your job, so that you can focus on the creative, complex, and human aspects. This way, you’re free to solve new problems and innovate.”

The real goal isn’t to reject AI, but to stop treating it like a mirror. It should be a tool, not a crutch. “Don’t let artificial intelligence replace biological intelligence,” Silva warns. “Build skills with it, not dependency.”

In the end, perhaps what we really need is an AI that acts like a good friend: one who supports us, but also tells us the truth, especially when we don’t want to hear it. Until that kind of AI exists, it might just be up to us to do the disagreeing ourselves.
