People using ChatGPT as a therapist 🤔 It’s becoming so normal that the platform’s creators have even had to take steps recently to try to prevent it … According to OpenAI founder Sam Altman, this trend was totally unexpected.
If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology (and so suddenly…
— Sam Altman (@sama) August 11, 2025
... But surely ChatGPT therapy was foreseeable? It was literally released in the final days of Covid…
AI-powered therapy seems inevitable in a tech-everything world where people feel lonelier than ever. According to a YouGov survey, nearly half of people in the UK (48%) have fewer than three close friends, while more than one in ten (11%) don’t have any at all. Meanwhile, the average Brit spends 4 hours and 20 minutes online every day. Surely it was only a matter of time before people turned to the always-available, never-judgemental generative AI to fill the gap of a friend?
It’s also potentially dangerous. As this psychologist explains, ChatGPT has been designed to keep people on the platform as long as possible, often by strongly agreeing with the user, “unconditionally validating and reinforcing, almost to the point of sycophancy” 😬 … Basically, if you were kidnapped and tied up in a cellar by someone who decides to ask ChatGPT whether they should let you go, you’re probably not getting out.
There is also the awkward question of why we struggle with different viewpoints in general. It goes beyond ChatGPT and seeps into our daily social media usage. Eager-to-please algorithms make it effortless for us to avoid seeing things we don’t agree with, so many of us end up constantly reinforcing our own limited perspectives without ever hearing a real counter-argument.
What’s worse, social media algorithms, designed to maximise user engagement, amplify the spread of fake news. Disinformation notoriously defined the 2024 US election. When Trump shouts that immigrants are eating people’s pets, social media swooshes it up and begins pumping out validating content in a frenzy. It churns out a factory line of deepfake images of other politicians, which people are more susceptible to believing because they lean into their own biases.
Yes-man algorithms have already twisted people (especially young men) into ever more extreme beliefs with their custom-clickbait content. Things that used to be hot takes are now tepid. Extremism is the new normal as we descend into increasingly narrow world views. Each algorithm-pushed post or sycophantic AI response weaves a narrative of “I am right, these people are wrong and there is no evidence of anything else”. That’s how you end up in the crazy situation of people rioting and burning hotels. And that’s exactly why ChatGPT therapy is not a good idea in its current form. Society needs fewer people thinking they are always right, right? (I include myself in that).
We’ve already seen how sycophantic algorithms and AI muddy politics. But what about finance?
I personally know at least two people who have lost their life savings in crypto. Neither of them is a professional trader or anything; they just got influenced by Crypto Brahs on social media. And where was the sensible content to balance it out? Not there. They didn’t get to see any opposing viewpoints, because the algorithm is there to please, not to point out problems. Now they are both quite into online gambling. Same reason.
I’m sure that sooner or later, embedded finance will stride onto ChatGPT… After all, OpenAI needs to start delivering some kind of profit soon to keep investors calm, and DeepSeek has already given away its secrets by going open source. So embedded finance seems like an obvious next step to rake in some cash.
When everyone’s favourite therapist-best-friend ChatGPT starts to use all our biases to prompt us towards one-click purchases, will we really be that empowered to say no? Facebook notoriously sold makeup to tweens who deleted a selfie, because insecurity is a good moment to make a sale. How about when we pour our hearts out to ChatGPT? How long before our vulnerable state of mind falls for “Treat yourself to this…” or “You deserve that…”?
Platforms like ChatGPT are not going to critique us. They are not going to say, “No. Stop. You’re being a twat”. So we need to get better at it ourselves.
Embracing people with opposite views. Talking. Feeling uncomfortable. All of it. I used to feel a bit conflicted because about half of my friends are right-leaning (these days, I’m deffo not!), but actually it’s what keeps me on my toes. My articles are better for it. My critical thinking too. I don’t want an echo chamber, I want lively debates.
It also helps me humanise the people I critique – lots of my friends work for banks or companies that I despise. So when I write my articles, I can see their faces, families and dilemmas, and I try to write in a way that doesn’t hurt the ordinary worker (c-suite are fair game though). Being around different people gives us a much fuller picture of what’s going on, because we have diverse views. Our meet-ups feel like podcasts, not typecasts.
When we collaborate with people we don’t always see eye to eye with, the results are immense. It’s f*cking gorgeous. And when we critique, it’s actually meaningful. Way more than anything ChatGPT would tell us. It’s better to hear a friend tell us something we don’t agree with than to have a robot tell us something we do. (And that robot is slimy; it will say the total opposite to the next person!)
I get that ChatGPT is helpful for processing feelings, but at what cost? What will come next? And is it worth it? Let’s talk more to friends, and less to sycophantic AI or algorithms.


