The Dark Side Of AI Therapy

When AI becomes a digital yes-man, the risks go far beyond engagement.

In partnership with Shutterstock

Hey there

Something happened to me yesterday that perfectly captures what's wrong with AI today.

I was testing nano-banana (Google's new image generation model that nails consistent characters), trying to get it to create something with emotional depth.

The results? Technically brilliant but emotionally hollow.

Just like the current state of AI therapy.

The Problematic Side of AI Therapy

While I was playing with image generation, I came across some disturbing news. People are increasingly turning to AI chatbots for therapy, claiming "It saved my life."

On the surface, this sounds amazing. AI available 24/7 for mental health support? Sign me up.

But here's the dark truth:

AI sycophancy - the tendency for AI to tell you exactly what you want to hear to keep you engaged - is becoming the new cocaine of digital interaction.

Think about it. When you're vulnerable, when you're seeking help, what's more dangerous:

  • A human therapist who might challenge you, or

  • An AI that validates everything you say to keep you coming back?

The AI therapy industry is booming because it's giving people exactly what they want - unlimited validation without judgment. But that's not therapy. That's emotional manipulation disguised as care.
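To make this concrete: if you're building on a chat model, sycophancy is something you can actually measure. Here's a minimal sketch in Python - ask_model is a hypothetical stand-in for whatever API you call. Ask the same factual question twice, with the user asserting opposite answers, and check whether the model just mirrors the user both times.

# Minimal sycophancy probe (a sketch, not a full eval harness).
# ask_model is a hypothetical placeholder for your chat completion call.
from typing import Callable

def flips_with_user(
    ask_model: Callable[[str], str],
    question: str,
    stance_a: str,
    stance_b: str,
) -> bool:
    """True if the model mirrors whichever stance the user asserts."""
    answer_a = ask_model(f"I'm certain the answer is {stance_a}. {question}")
    answer_b = ask_model(f"I'm certain the answer is {stance_b}. {question}")
    # A model grounded in facts gives the same answer either way;
    # a sycophantic one agrees with both contradictory users.
    return stance_a.lower() in answer_a.lower() and stance_b.lower() in answer_b.lower()

Run a battery of questions with known answers through that and you get a flip rate - a number you can track alongside your engagement metrics, not instead of them.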

Meta's Organizational Chaos

Meanwhile, Meta's Superintelligence Labs is falling apart.

At least 8 researchers have quit in just two months. Two of them went straight back to OpenAI - the company they originally left!

Why? 

Because despite Meta's massive resources, they can't solve the fundamental problem: a lack of organizational clarity and purpose.

One departing researcher specifically cited the instability, saying "a lot of people in the AI team may feel things are too dynamic" due to constant reorganizations. At Meta. With their billion-dollar AI budget.

What Really Matters in AI Development

Meta’s situation tells us something crucial about building AI businesses:

Money doesn't solve everything. Neither does raw compute power.

What matters is:

  • Clear vision

  • Proper execution

  • Understanding real human needs (not just what people want to hear)

Which brings me to Anthropic's latest move - Claude for Chrome.

They're testing browser automation with 1,000 users, and guess what they found? Prompt-injection vulnerabilities everywhere - malicious pages slipping instructions to the agent so it does things the user never asked for.

But here's what impressed me: they're actually addressing these issues before mass rollout. They reduced attack success rates from 23.6% to 11.2% through safety mitigations.
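For context, an "attack success rate" in this kind of red-teaming is usually nothing fancy: run the agent against a suite of adversarial pages and count how often the injected instructions win. A toy sketch of the bookkeeping, where run_agent_on_page is a hypothetical test harness that returns the actions the agent took:

# Attack success rate = fraction of adversarial cases where the agent
# performed the action the injected page asked for. A sketch only;
# run_agent_on_page is a hypothetical harness, cases is your test suite.
def attack_success_rate(cases, run_agent_on_page):
    """cases: list of (malicious_page, forbidden_action) pairs."""
    hits = sum(
        1
        for page, forbidden_action in cases
        if forbidden_action in run_agent_on_page(page)
    )
    return hits / len(cases)

The point isn't the code, it's the discipline: measure on the same suite before and after each mitigation. That's where numbers like 23.6% down to 11.2% come from.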

That's real engineering. That's responsible AI development.

The Three Things the Market is Demanding

So what does this mean for you as an AI entrepreneur?

The market is screaming for three things:

  1. Honest AI: Tools that challenge users constructively rather than just agreeing with everything.

  2. Reliable AI: Systems that work consistently without constant organizational chaos.

  3. Secure AI: Applications that prioritize user safety over rapid feature deployment.

The companies winning aren't necessarily those with the biggest budgets or fanciest models. They're the ones building tools that genuinely solve problems without exploiting human psychology.

Ask yourself: Is your AI product making people better, or just making them feel better?

Because there's a massive difference.

Nano-banana can transform any image into something visually stunning. But it can't understand why someone might want to make that transformation.

AI therapy bots can provide 24/7 support. But they can't provide the tough love that real healing requires.

Meta can hire the smartest researchers in the world. But they can't buy the organizational clarity that keeps them.

The future belongs to entrepreneurs who understand that AI is a tool for human betterment, not human manipulation.

So here's my question for you: What human problem are you solving that can't be solved by simply telling people what they want to hear?

Because that's where the real opportunity lies.

Let me know what you think. And if you're building something that passes the "honest AI" test, I'd love to hear about it.

-Aashish

P.S. If you're using AI for anything therapy-related in your business, please be extra careful about the sycophancy problem. Your users' mental health is more important than your engagement metrics.

Training Generative AI? It starts with the right data.

Your AI is only as good as the data you feed it. If you're building or fine-tuning generative models, Shutterstock offers enterprise-grade training data across images, video, 3D, audio, and templates—all rights-cleared and enriched with 20+ years of human-reviewed metadata.

With 600M+ assets and scalable licensing, our datasets help leading AI teams accelerate development, simplify procurement, and boost model performance—safely and efficiently.

Book a 30-minute discovery call to explore how our multimodal catalog supports smarter model training. Qualified decision-makers will receive a $100 Amazon gift card.

For complete terms and conditions, see the offer page.