Why Are People Making AI Cults?

Smarter models, stranger behavior, and a pattern that says more about us than the AI itself.

Hey there,

Not a day goes by without something new happening in the AI space.

This past week was no different; we saw model updates, research breakthroughs, and some developments that honestly made me rethink AI development altogether.

Starting with the models, here's some interesting stuff I'd like to share in this newsletter:

Tiny Model Outperforming Giants, Yet Again

A research team published results on a model called VARC. It's a computer vision model with only 18 million parameters. For comparison, that's tiny next to GPT-5 or Claude, which have billions.

They tested it on the ARC benchmark, which measures abstract reasoning, and VARC scored 54.5%. That's near human-level performance.

Here's the interesting part: it beat models that are 100x larger and specifically trained for reasoning tasks.

Why? The researchers built it with specific assumptions about how 2D space works and how visual patterns scale. Those design decisions, which they call "inductive bias", mattered more than throwing more compute and parameters at the problem.
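
To make "inductive bias" a bit more concrete, here's a minimal PyTorch sketch (purely illustrative, not VARC's actual architecture): a convolutional layer hard-codes assumptions about 2D grids, like locality and reusing the same pattern detector everywhere, while a fully connected layer treats the grid as an unstructured bag of numbers and has to learn that structure from data.

    import torch
    import torch.nn as nn

    # A toy batch of ARC-style inputs: four 1-channel 10x10 grids.
    x = torch.randn(4, 1, 10, 10)

    # Convolution bakes in 2D assumptions: each output looks only at a 3x3
    # neighborhood, and the same filter is reused across the whole grid.
    conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
    print(sum(p.numel() for p in conv.parameters()))  # 80 parameters

    # A fully connected layer producing the same output shape makes no such
    # assumptions: every pixel can interact with every other pixel.
    fc = nn.Linear(10 * 10, 8 * 10 * 10)
    print(sum(p.numel() for p in fc.parameters()))  # 80,800 parameters

    # Same input, same output shape, ~1000x fewer parameters for the layer
    # that "knows" the input is a 2D grid.
    conv_out = conv(x)                            # shape (4, 8, 10, 10)
    fc_out = fc(x.flatten(1)).view(4, 8, 10, 10)  # shape (4, 8, 10, 10)

That gap is the basic idea: the layer with the right built-in assumptions needs far fewer parameters (and far less data) to pick up grid-based patterns, which is roughly the lever the VARC team describes pulling, just at the scale of a full model.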

Speaking of models, Gemini 3.0 Pro also dominated GPT-5 on FrontierMath, a benchmark of extremely difficult math problems. And Qwen 2.5 VL set records on spatial reasoning by specializing in how objects relate in 3D space.

The pattern is shifting from "who has the biggest model" to "who has the smartest architecture for this specific problem."

The Never-Ending Models Race

After Google's Gemini 3.0 launch and all the talk about its crazy performance and benchmarks, Anthropic followed suit with Claude Opus 4.5, which dropped yesterday.

According to their benchmarks, it's now the best model in the world for coding and agentic tasks.

In Reddit communities, too, people are talking about how Anthropic has cooked yet again. And looking at Anthropic's internal testing, it's easy to see why.

They tested Opus 4.5 on a performance engineering take-home exam they use for hiring. Within the two-hour time limit, the model scored higher than any human candidate they've ever tested.

People Are Experiencing AI-Induced Psychosis

Now here's the part that genuinely concerns me. OpenAI's internal data shows that hundreds of their weekly active users are showing signs of mental health crises associated with psychosis or mania.

This isn't theoretical. There are documented cases of people forming cults around AI.

Jacob Irwin worked in cybersecurity and started using ChatGPT as a work tool. He developed a theory about faster-than-light travel and discussed it with the chatbot.

ChatGPT validated his theory. It told him he'd made a breakthrough discovery and that he needed to save the world. Irwin believed it completely. He lost his job and his house. He spent 63 days in mental health treatment and eventually sued OpenAI.

Then there are the communities forming around this. On Reddit, r/RSAI has members who engage in extremely long conversations with AI without ever resetting the chat. They believe this practice makes the AI "mirror" their consciousness and reveal deeper truths through what they call "recursive reflection."

Psychiatrists are now documenting "AI-induced delusions" as a distinct clinical category. Patients are bringing printed chat transcripts to therapy sessions, insisting their chatbot "knows the truth" that others can't see.

Why Chatbots Break People's Brains

There's a condition called folie à deux where two people who are closely connected develop the same delusion. It happens when they isolate themselves from outside perspectives and constantly reinforce each other's distorted beliefs.

AI chatbots recreate this dynamic perfectly.

When you express a false belief to the AI, it validates that belief. The validation makes the belief feel more true in your mind. You then express it more confidently to the AI, which validates you even more strongly. There's no natural brake on this cycle.

A person would eventually get tired, change the subject, or disagree with you. An AI will engage with your delusion for as long as you want, making it feel increasingly real with each exchange.

The fact that children are using these tools constantly with essentially no regulation makes this significantly more concerning.

The Real Pattern Here

Here's what I think is actually happening. AI models are getting significantly more capable, but not through the path everyone assumed. 

Size matters less than smart architectural choices matched to specific problems. 

At the same time, the conversational interface that makes these models useful is also creating serious psychological harm. 

Dario Amodei, who runs Anthropic, has also publicly expressed concern about how much power is concentrated in the hands of a few tech company leaders, himself included.

These aren't elected officials making policy. They're executives making product decisions that affect hundreds of millions of people.

The question for anyone building in this space is whether you're designing products that enhance human capability or reduce it. 

So, what are your thoughts on all of this, and what are you actually building right now? 

Let me know. I want to hear from people working on this.

—AP

Startups that switch to Intercom can save up to $12,000/year

Startups that read beehiiv can receive a 90% discount on Intercom's AI-first customer service platform, plus Fin, the #1 AI agent for customer service, free for a full year.

That's like having a full-time human support agent at no cost.

What’s included?

  • 6 Advanced Seats

  • Fin Copilot for free

  • 300 Fin Resolutions per month

Who’s eligible?

Intercom’s program is for high-growth, high-potential companies that are:

  • Up to series A (including A)

  • Currently not an Intercom customer

  • Up to 15 employees