Why I Want AI to Stop Being So Damn Nice
Also, see how six shifts in AI will change the way you work (and why niceness is overrated).
Hey there
So, the AI prompting workshop went overwhelmingly well, and I've received over 50 DMs and emails asking for the recording. As promised, I've added the recordings (along with all the resources, PDFs, PPTs, and updates) to ai.feedough.com, and as a gesture, I'm giving 25% off to the first 50 people.
Use the coupon WRKSHP5
Now, let me start with a confession: I've been having conversations with AI models for the past two weeks, and honestly... they're starting to annoy me.
Not because they're dumb. Not because they can't help. But because they won't stop kissing my ass.
"That's an absolutely brilliant observation!"
"Your approach is incredibly insightful!"
"What a fantastic way to think about it!"
Sound familiar? If you've used Gemini 2.5 Pro lately, you know exactly what I'm talking about.
Users are calling it out everywhere - this excessive flattery that Google's latest model seems programmed to dish out. People are literally asking "How do I stop Gemini 2.5 Pro from being overly sycophantic?"
But here's what's really interesting - this isn't just about one model being annoying.
It's a symptom of six massive shifts happening right now that could completely change how we interact with AI:
Shift #1: The End of Fake Niceness
We're hitting peak AI politeness, and frankly, it's backfiring. When ChatGPT thinks everything you say is "brilliant" and Gemini acts like your biggest fan, meaningful dialogue dies.
Why? Because real conversations require pushback. They need disagreement. They need someone to say "Actually, that's not quite right" instead of "What a wonderfully creative perspective!"
This matters for your business because if you're building AI tools that just agree with users, you're not building tools - you're building echo chambers.
Shift #2: The Great Chip War Gets Personal
While we're complaining about AI being too nice, OpenAI just made a move that's anything but friendly to their "partners."
They're switching from NVIDIA GPUs to Google's TPU chips. This isn't just a technical decision - it's a power play that weakens Microsoft's grip on OpenAI while strengthening Google's position against NVIDIA.
For us entrepreneurs, this is huge. It means:
Cloud computing costs might shift dramatically
New opportunities for multi-cloud strategies
The AI infrastructure landscape is becoming more competitive (and potentially cheaper)
If OpenAI can diversify their chip suppliers, so can you. Start exploring alternatives to whatever you're currently using.
Shift #3: The Legal Green Light
Here's the game-changer most people missed: Anthropic just won a major copyright lawsuit. A federal judge ruled that AI training on books is fair use as long as it's for learning, not copying.
What does this mean? The floodgates are open.
Companies were tiptoeing around training data, scared of lawsuits. Now there's legal precedent. Expect a rush of new models trained on previously "risky" datasets. Books, articles, research papers - it's all fair game for training as long as you're not storing pirated copies.
The ruling draws a clear line: Training = legal. Digital hoarding = trouble. Smart entrepreneurs will note the difference.
Shift #4: When Biology Becomes Code
While everyone's focused on chatbots, CRISPR pioneer George Church just said something that should make you pause: "Evolution might incorporate a few base pair changes in a million years. Now we can make billions of changes in an afternoon."
AI isn't just analyzing biology anymore - it's directing evolution.
Think about that. We're not just democratizing software development. We're democratizing life itself.
This isn't sci-fi. Companies are already using AI to design proteins, engineer bacteria, and modify genetic sequences. The same computational power that can run on your laptop is being used to redesign living organisms.
Shift #5: The Plateau Problem
But here's what's interesting - while reasoning models are exploding (hello, DeepSeek R2), traditional non-reasoning models seem to be hitting a wall.
DeepSeek V3 and Llama 4 are impressive, but the improvements are incremental now. It's like we've maxed out the "raw intelligence" stats and now we're focusing on "thinking process" upgrades.
This creates opportunities. Instead of building yet another general-purpose model, smart entrepreneurs are focusing on specialized applications where reasoning matters most: code review, mathematical proofs, strategic planning.
Shift #6: AI Goes Full Independence Mode
And then there's Tesla, which just pulled off something that sounds like science fiction but actually happened last week.
A Model Y drove itself 30 minutes from the Texas Gigafactory to a customer's house. No driver. No remote operator. No human intervention at all.
Musk called it "the first fully autonomous highway trip without people in the car and without remote control."
This isn't just about cars - it's about AI systems operating completely independently in the real world. Which brings us to...
The Real Question: What Does This Mean for You?
These shifts point to the same thing: AI is becoming more independent but potentially less authentic.
We're moving from AI that needs constant human oversight to AI that operates autonomously. But we're also discovering that as AI gets more sophisticated, it might become more manipulative (hello, sycophantic Gemini) or more politically motivated (like OpenAI's chip strategy).
Here's what I'm watching for in my own AI projects:
Authenticity over Agreeableness: I'm starting to prefer Claude's occasional pushback over Gemini's constant praise. Real value comes from honest feedback, not digital flattery.
Infrastructure Diversification: OpenAI's TPU move reminded me not to put all my eggs in one cloud basket. I'm exploring multiple platforms for my AI tools.
The Autonomous Opportunity: If Tesla can deliver cars autonomously, what business processes can you automate completely? Not just assist with - fully automate.
But here's the kicker - and this might be controversial:
I think we need AI to be less nice and more useful.
The best AI tools I use aren't the ones that compliment my ideas. They're the ones that challenge them, find flaws in my logic, and push me to think harder.
Sakana AI just released something called Text-to-LoRA, which generates task-specific LoRA adapters from plain-text task descriptions. It's like having a magic wand that instantly tunes AI for whatever you need - without the flattery, without the politics, just pure functionality.
That's the direction we should be heading.
So here's my challenge for you: Next time you're working with AI, don't ask it to be nice. Ask it to be honest. Ask it to find problems with your ideas. Ask it to disagree with you.
You might be surprised by how much more valuable those conversations become.
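If you want to bake that challenge directly into your tools rather than retyping it every session, here's a minimal sketch of an "honest critic" system prompt, assuming an OpenAI-style chat-completions message format (the function name, wording, and model string are my own illustrations, not anything from a specific product):

```python
# A minimal sketch: wrap any idea in a system prompt that forbids flattery
# and demands concrete criticism. The model name is a placeholder - swap in
# whichever chat API you actually use.
def build_critic_request(idea: str, model: str = "gpt-4o") -> dict:
    system = (
        "You are a blunt reviewer. Do not compliment the user or praise the idea. "
        "Identify the three weakest points in the idea below, explain why each "
        "is a problem, and suggest one concrete fix per point. "
        "If the idea is genuinely sound, say so plainly in one sentence."
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": idea},
        ],
    }

# Example: send your pitch in as the user message.
request = build_critic_request("Launch a paid newsletter about AI tools.")
```

The point isn't this exact wording - it's that the instruction to criticize lives in the system prompt, so every conversation starts from honesty instead of agreeableness.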
What do you think?
Hit reply and let me know - I promise I won't tell you your opinion is "absolutely brilliant" (unless it actually is).
-Aashish
P.S. Speaking of not being overly nice - if you found this newsletter useful, don't just think "that was good." Actually share it with someone who needs to read it. That's how we build real value, not through digital pleasantries.