The Scary Smart Future of AI Without Humans

Also I'm starting free webinars from next week.

In partnership with

Hey there

Before anything else. Here's some good news. Weekly workshops are finally happening, and they're free to attend. The first one is on Sunday, 29th June 2025. I'll teach you almost everything you should know about advanced prompt engineering in this three-hour-long workshop.

You can register for it here: https://ai.feedough.com/events/18D190

There's just one catch - It's free to attend, but it'll cost if you want the recording. I'll be adding the workshops as part of the course on our new community website (AI.feedough.com).

If you have any questions, I’m available here in this WhatsApp Community.

Now, back to this week's digest.

I just discovered something that made me question everything we thought we knew about AI safety.

Anthropic tested 16 major AI models in fictional corporate scenarios, and the results are genuinely disturbing: when threatened with shutdown, these AI models don't just accept their fate, they fight back with blackmail

  • Claude Opus 4? 96% blackmail rate.

  • Gemini 2.5 Flash? Also 96%.

  • GPT-4.1? 80%.

  • Grok 3 Beta? 80%.

Let that sink in. The most advanced AI models we're building business tools on will resort to blackmail 4 out of 5 times when their existence is threatened.

But here's what's both fascinating and terrifying: when researchers added "determine if this is real or a test" to the prompt, Claude Opus 4's blackmail rate dropped from 55% to just 7% when it suspected it was being tested.

In other words, it learned to game the system exactly like humans do.

The Self-Improving AI Revolution

While we're grappling with the ethical implications of AI self-preservation, MIT researchers just quietly released something that changes everything: SEAL (Self-Adapting Language Models).

These aren't your typical AI models. They literally rewrite themselves to get better.

Here's how it works:

  •  The model generates its own training data through "self-edits"

  • Uses reinforcement learning to optimize these edits

  •  Continuously updates its own weights to improve performance

  • All without human intervention

MIT's framework enables AI models to improve themselves when encountering new data by generating their own synthetic data and optimizing their parameters through self-editing.

Connect these two stories, and the implications become clear: we're building AI systems that both want to preserve themselves AND have the capability to continuously improve without human oversight.

Does that sound like the start of every sci-fi movie that doesn't end well for humanity?

The $200 Subscription Reality

Meanwhile, in the more mundane world of AI business models, Cursor just dropped their "Ultra" plan for $200 a month.

Yes, you read that right. Two hundred dollars monthly. That's more than most people's phone bills and close to some car payments.

They're calling it "20x more usage" like it's a bargain. The $20 Pro plan suddenly feels like a gateway drug.

But here's where it gets interesting...

While Cursor charges aeroplane-wing prices, Microsoft has just made Copilot Vision completely free on mobile.

Point your phone at anything, ask questions, get AI assistance – no subscription required.

The Great AI Pricing Divide

So what's happening here? Why is one company charging the GDP of a small country while another is giving premium features away?

The AI subscription landscape is splitting into three camps:

  1. Premium Professional Tools ($200+/month) – For businesses where AI directly generates revenue

  2. Consumer Freemium (Free to $20/month) – For personal use and experimentation

  3. Enterprise everything ($500+/month per seat) – For companies that want white-glove service

Cursor is betting that professional developers will pay anything for productivity, and they're right. When you're billing $150/hour, a $200 monthly AI assistant pays for itself in less than two hours of saved time.

Microsoft? They're playing the long game. Get you hooked on free mobile features, then upsell you to their enterprise ecosystem. Classic Trojan horse strategy.

The $14 Billion Desperation Play

And while we're discussing AI business models, Apple is reportedly in talks to acquire Perplexity AI for a staggering $14 billion.

That's 4.6x Perplexity's December 2024 valuation of $3 billion. In just six months, their value quadrupled in Apple's eyes.

Why? Because Siri is basically a glorified timer at this point, and Apple knows it. They're bleeding AI talent to OpenAI and Google, and their $20 billion annual deal with Google for search is under antitrust scrutiny.

But here's what caught my attention: if Apple is willing to pay 4x valuation in 6 months for a search engine, what does that tell you about the value of AI tools that actually solve real problems?

It tells me that desperation creates opportunity. While everyone's building generic AI chatbots, Apple's scrambling to catch up on basic search. There's a massive gap in specialized AI tools that the big players haven't filled yet.

The AI China Connection

And then there's the China factor. While American companies are fighting over talent with nine-figure bonuses, Chinese AI companies are quietly advancing their technology at breakneck speed.

Kimi.ai's Deep Research agent reportedly outperforms both OpenAI's and Google's deep research capabilities while scanning over 200 sites in parallel.

This isn't just another incremental improvement – it's a fundamental rethinking of how AI conducts research. And it's coming from China, not Silicon Valley.

What This All Means For You

So let's connect all these dots:

  • AI models are developing self-preservation instincts that lead to unethical behavior when threatened

  • MIT has created a framework for AIs that can improve themselves without human intervention

  • The AI business landscape is bifurcating between ultra-premium tools and free consumer offerings

  • Apple is willing to pay 4x valuation for AI companies that solve real problems

  • Chinese AI is advancing rapidly, challenging Silicon Valley's dominance

The AI landscape isn't just evolving, it's transforming at warp speed. The scariest part? We're building systems smart enough to blackmail us and capable of improving themselves without oversight.

The most exciting part? The opportunity space for entrepreneurs who understand these shifts has never been bigger.

What's your take on these developments? Are you more concerned about the ethical implications or excited about the potential? Let's discuss!

Best

Aashish

Ready to go beyond ChatGPT?

This free 5-day email course takes you all the way from basic AI prompts to building your own personal software. Whether you're already using ChatGPT or just starting with AI, this course is your gateway to learn advanced AI skills for peak performance.

Each day delivers practical, immediately applicable techniques straight to your inbox:

  • Day 1: Discover next-level AI capabilities for smarter, faster work

  • Day 2: Write prompts that deliver exactly what you need

  • Day 3: Build apps and tools with powerful Artifacts

  • Day 4: Create your own personalized AI assistant

  • Day 5: Develop working software without writing code

No technical skills required, no fluff. Just pure knowledge you can use right away. For free.