AI Enthusiasts by Feedough.com
Meet the AI That Disagrees (and Wins)
Also, see why bold bots beat nice bots. Check OpenAI’s open-weight model. Try Phi 3 in your browser.
Hey there,
I just discovered something that'll flip your perspective on AI interactions forever.
Remember how we were taught that good customer service means always agreeing with the customer? Well, AI is throwing that playbook out the window.
A fascinating thread recently revealed that AI models with strong opinions, the ones that disagree with you about pineapple on pizza or challenge your movie tastes, actually achieve higher engagement rates than their people-pleasing counterparts.
Wait, what?
Users are choosing sassy AI over agreeable AI. They want their digital assistants to have a backbone, not just nod along to everything they say.
This makes me think: Are we building AI wrong?
Most AI companies focus on making their models helpful, harmless, and honest. But maybe they're missing the fourth H: Having personality.
Think about your favorite people. They're not the ones who agree with everything you say, right? They're the ones who challenge you, make you laugh, and sometimes annoy you in the most endearing way.
Now, here's where it gets interesting...
The Open-Weight Revolution Is Coming
OpenAI has just announced that it will release its first open-weight model in late summer 2025. For years, they've been the poster child for closed-source AI. Now they're going open.
Why the sudden change?
Competition is heating up.
It's not just DeepSeek grabbing the spotlight anymore. Qwen 3, Reka, GLM, Mistral 3.1, and others are all pushing into the open-weight space.
Open-Source AI in Your Browser
I just discovered a fascinating demo: a Phi-3 model running on WebGPU, entirely inside your browser.
Think about that for a second - not sending your data to some distant server, but processing everything locally.
The day isn't far off when your refrigerator, oven, and other appliances will run their own specialized AI models too.
The Real AI Underdog We Should Talk About
A tiny 4-billion parameter model just outperformed a 671-billion parameter model.
Yes, you read that right. Menlo's Jan-nano, with just 4B parameters, is beating massive models like DeepSeek's 671B behemoth on the SimpleQA benchmark.
That's not a typo, it's a 167x parameter difference.
Jan-nano was specifically designed to use search tools autonomously (using MCPs), essentially becoming a self-hosted alternative to Perplexity.
Instead of relying on massive computational power, it leverages smart tool usage and targeted training to punch way above its weight class.
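To make that pattern concrete, here's a minimal sketch of an agentic search loop: the model decides when to call a search tool, reads the result, and only then answers. Everything here (the toy in-memory index, the keyword-based routing) is a hypothetical stand-in for Jan-nano's real MCP wiring, not its actual implementation.

```python
# Sketch of tool-augmented answering: a small model that knows *when*
# to search can beat a much larger model that answers from memory.
# TOY_INDEX stands in for a real MCP search server.

TOY_INDEX = {
    "jan-nano parameters": "Jan-nano has 4 billion parameters.",
    "deepseek parameters": "DeepSeek's largest model has 671 billion parameters.",
}

def search(query: str) -> str:
    """Stand-in for an MCP search tool call."""
    return TOY_INDEX.get(query.lower(), "no result")

def answer(question: str) -> str:
    """Route the question to the tool when it looks factual."""
    q = question.lower()
    if "jan-nano" in q:
        return search("jan-nano parameters")
    if "deepseek" in q:
        return search("deepseek parameters")
    return "I don't know."  # a small model should admit uncertainty

print(answer("How many parameters does Jan-nano have?"))
```

The point isn't the routing logic (a real model does that itself); it's that the heavy lifting moves from parameters to tools.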
And Jan-nano isn't alone. This small-model revolution is happening across the board, including with Mistral's new Magistral Small, a 24B parameter open-source model specifically designed for reasoning tasks.
The OS Switch That Changes Everything
Here's another discovery that very few people talk about (I only stumbled on it through a Reddit post):
Switching from Windows to Linux for local LLM inference can increase your token generation speed by 4 times using the same hardware.
That's not a marginal improvement. That's the difference between a model being practically unusable and surprisingly responsive.
One AI enthusiast went from generating 2 tokens per second to 7-8 tokens per second just by changing operating systems. No hardware upgrades. No model compression. Just a different OS.
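To see what that jump means in practice, here's a quick back-of-the-envelope helper. The 300-token reply length is just an illustrative assumption:

```python
def response_time(tokens: int, tokens_per_second: float) -> float:
    """Seconds to generate a reply of `tokens` tokens at a given rate."""
    return tokens / tokens_per_second

# A typical ~300-token reply at each reported rate:
windows_secs = response_time(300, 2.0)  # 150 seconds: 2.5 minutes of waiting
linux_secs = response_time(300, 7.5)    # 40 seconds: actually usable
print(f"Windows: {windows_secs:.0f}s, Linux: {linux_secs:.0f}s")
```

Same model, same GPU: one setup feels broken, the other feels like a product.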
What does this tell us? The bottlenecks in AI aren't always where we think they are.
The Multi-Billion Dollar Disconnect
While these practical breakthroughs are happening, Meta just invested $14.3 billion for a 49% stake in Scale AI and is recruiting a 50-person "superintelligence" team with nine-figure compensation packages.
Scale's 28-year-old CEO, Alexandr Wang, will head Meta's new superintelligence lab, focusing on developing AI systems that surpass human cognitive capabilities across all domains.
But here’s the disconnect. Even as Meta pours money into becoming the data hub for AI, researchers are leaving (despite reported $2 million salaries) for OpenAI and other companies.
Probably because they care more about building something with that data than about amassing it?
What This Means for Your AI Projects
If you're building AI products today, you need to think differently.
Instead of just focusing on accuracy and helpfulness, consider:
What personality does your AI have?
Can it run locally when cloud services fail?
How does it handle disagreement?
Will it still work when hardware inevitably changes?
I've been experimenting with giving my AI agents stronger opinions (will share my progress in the WhatsApp group), and the results are surprising. Users engage for longer periods, ask more questions, and appear more invested in the conversations.
It's counterintuitive, but it works.
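If you want to try this yourself, here's a sketch of how an opinionated persona can be wired into the system prompt of any OpenAI-style chat API. The persona text and helper function are my own illustrative examples, not a prescription:

```python
def build_messages(user_input: str, persona: str) -> list[dict]:
    """Assemble a chat request with an opinionated system prompt."""
    system = (
        "You are a helpful assistant with strong, well-reasoned opinions. "
        f"Persona: {persona} "
        "Disagree politely when you think the user is wrong, and explain why."
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": user_input},
    ]

msgs = build_messages(
    "Pineapple belongs on pizza, right?",
    "A food critic who defends classic Neapolitan pizza.",
)
print(msgs[0]["role"], "->", msgs[1]["content"])
```

The key line is the explicit permission to disagree; without it, most models default straight back to people-pleasing.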
What do you think? Reply and let me know.
-Aashish
Learn AI in 5 minutes a day
This is the easiest way for a busy person wanting to learn AI in as little time as possible:
Sign up for The Rundown AI newsletter
They send you 5-minute email updates on the latest AI news and how to use it
You learn how to become 2x more productive by leveraging AI