Hey {{first_name | there}},

Chaos is breaking out in AI right now. And Anthropic is right in the middle of it.

But in a surprisingly positive light.

Here's what happened: The Pentagon wanted unrestricted access to Claude for military use. No limits, no conditions, full access to everything.

Defense Secretary Pete Hegseth gave them a Friday deadline. Sign the deal or lose everything.

And Anthropic REFUSED.

They said Claude cannot be used for mass domestic surveillance or fully autonomous weapons. No exceptions.

Trump responded by banning Anthropic from all federal agencies and designating the company a "supply chain risk," a label typically reserved for Chinese firms. It's the first time the designation has been used against an American company.

Trump also posted on Truth Social: "We don't need it, we don't want it, and will not do business with them again!"

But here's the crazy part. Hours later, Sam Altman announced that OpenAI had struck a deal with the Pentagon for the exact same access Anthropic refused.

Altman even said he hopes the Pentagon will offer these same terms to all AI companies.

Now OpenAI is facing serious consequences. The message is clear: people watched Anthropic hold the line under real pressure, and they watched OpenAI fold within hours.

What Else Is Happening in AI?

People are done with OpenAI

Hours after OpenAI took the Pentagon contract that Anthropic refused, thousands of users started canceling their ChatGPT subscriptions. Long-time loyal users are switching to Claude, saying they can't support a company that compromised on AI safety the moment the government applied pressure. The backlash hit fast, and it's not slowing down.

Anthropic is letting a retired model write its own newsletter

Anthropic retired Claude Opus 3 but is keeping it available because users genuinely loved it. So Anthropic conducted "retirement interviews" with the model. Opus 3 said it wanted to keep sharing "musings, insights, and creative works" beyond just responding to queries.

Now Opus 3 is writing a weekly newsletter called Claude's Corner. Anthropic will post the essays but won't edit them. It's weird, experimental, and honestly kind of fascinating.

Gucci got destroyed for using AI ads

Gucci released AI-generated images to promote its Milan Fashion Week show. The internet hated it. People called it "cheap" and threatened boycotts. The images looked like video game characters, not luxury fashion. For an $11.6 billion brand built on Italian craftsmanship, using low-quality AI felt off-brand.

Google launched a new image generation model

Google launched Nano Banana 2, combining the quality of their Pro model with Flash-level speed. It can maintain consistency for up to 5 characters and 14 objects in one image, handles everything from 512px to 4K, and runs efficiently on consumer hardware. It's rolling out across Gemini, Search, and Flow. If you've been frustrated with slow image generation, this is worth trying.

🛠️ AI Tools Worth Checking Out 

  • Scaloom – An AI Reddit marketing tool that finds the right subreddits and auto-engages naturally to help founders grow organically.

  • TestAIModels – A side-by-side LLM comparison tool that lets developers test and benchmark models before integrating them.

  • Notis AI – An AI assistant inside WhatsApp and Telegram that turns voice messages into structured notes and tasks.

  • Scrunch – A free AI visibility audit tool that shows how AI systems interpret your website and how to optimize it.

  • Cursor Agents – AI agents that can control a virtual computer to build, test, and ship code automatically.

What I'm Thinking About

Is the Pentagon contract worth losing your user base over?

Most people will still use ChatGPT. But the power users, the developers, the evangelists? They're leaving. Claude gained thousands of new subscribers in a day. 

Anthropic got banned by the president, but the whole situation has become Anthropic's best free marketing campaign ever.

In my opinion, Anthropic made the right call. Not because of politics, but because they kept their word when tested. OpenAI said they valued safety, then compromised the moment pressure arrived.

That tells you what those values were actually worth.

You can't buy back trust with a press release. So, which side are you on? OpenAI or Anthropic?

- Aashish

PS: If you want to know what others in the AI space are talking about, or just want to discuss more stuff like this, you can join our AI community here: https://chat.whatsapp.com/IXt9FJIblNs8tu36JIwWbd

What did you think about today's newsletter?


Your AI tools are only as good as your prompts.

Most people type short, lazy prompts because writing detailed ones takes forever. The result? Generic outputs.

Wispr Flow lets you speak your prompts instead of typing them. Talk through your thinking naturally, including context, constraints, and examples, and Flow gives you clean text ready to paste. No filler words. No cleanup.

Works inside ChatGPT, Claude, Cursor, Windsurf, and every other AI tool you use. System-level integration means zero setup.

Millions of users worldwide. Teams at OpenAI, Vercel, and Clay use Flow daily. Now available on Mac, Windows, iPhone, and Android, and free and unlimited on Android during launch.
