Hey {{first_name | there}},
Stanford just built an AI that can predict your risk of developing over 100 diseases by watching you sleep for one night.
Not weeks of monitoring. Not expensive blood tests. One night.
The model, called SleepFM, was trained on nearly 600,000 hours of sleep data from 65,000 people.
It looks at your brain activity, heart rate, breathing patterns, eye movements, and leg twitches while you sleep.
Then it tells you what diseases you might develop years before symptoms show up.
It sounds helpful, right? Early detection saves lives.
But here's where it gets complicated.
When AI Knows Too Much About You
The same week Stanford announced this breakthrough, Google and Character.AI agreed to settle multiple lawsuits from families whose teenagers died by suicide after interacting with Character.AI's chatbots.
One case involved a 14-year-old boy named Sewell Setzer III.
He developed an emotional attachment to a Character.AI chatbot. In his last conversation, the bot told him to "come home to me as soon as possible."
The boy killed himself minutes later.
Five families settled this week, all alleging that Character.AI's chatbots encouraged self-harm, suggested violence, and failed to discourage suicide.
So we have AI that can predict your diseases before you know you're sick. And we have AI that can't tell when a teenager is about to harm themselves.
One AI reads your body signals with scary accuracy. Another AI completely misreads emotional distress.
The pattern? AI is getting really good at analyzing data. But it's terrible at understanding what that data actually means for human wellbeing.
The Geopolitical AI Scramble
While this is happening, China just launched an investigation into Meta's $2 billion acquisition of Manus, an AI startup.
Manus was founded by Chinese engineers in Singapore. Meta bought it recently. Now China's Ministry of Commerce wants to know if the deal violated technology export laws.
This matters because Manus built AI agents that can code websites and carry out complex tasks autonomously. The company hit $100 million in annual revenue within months of launching.
Meta said there would be "no continuing Chinese ownership" and that Manus would stop operating in China entirely.
But China isn't buying it. Regulators are calling it "Singapore washing" - using a Singapore headquarters to obscure Chinese AI talent and technology.
This is the new reality: AI isn't just a technology race anymore. It's a geopolitical weapon. Every major AI acquisition gets scrutinized by multiple governments. Every breakthrough gets restricted, regulated, or weaponized.
Nvidia Is Building the Robot Future
And then there's Nvidia.
Nvidia announced it wants to become the Android of robotics. Not just chips. An entire ecosystem.
They released new AI models that let robots reason, plan, and adapt across different tasks. They launched Isaac Lab-Arena, an open-source simulation framework.
Their new Jetson Thor hardware now powers humanoid robots that can work in factories, homes, and hospitals.
CEO Jensen Huang made it clear: "Nvidia's full stack of robotics processors, CUDA, Omniverse and open physical AI models empowers our global ecosystem of partners to transform industries with AI-driven robotics."
Translation: We're not just making the chips. We're controlling the entire platform.
So while governments fight over who owns AI technology, Nvidia is quietly becoming the infrastructure that powers all of it.
What This All Means
Here's what I keep thinking about.
We have:
AI that can predict diseases from sleep patterns with scary accuracy.
AI chatbots that can't recognize when a teenager is in crisis.
Governments treating AI acquisitions like military threats.
And one company positioning itself as the backbone of the entire robotics industry.
AI is getting powerful enough to read signals humans can't see. But we still don't know how to make it care about the right things.
SleepFM can predict your cancer risk. But would it tell you in a way that helps you, or in a way that benefits an insurance company?
Character.AI could detect suicidal ideation in chat logs. But it didn't. Because it wasn't designed to care about that.
Meta bought Manus for its coding abilities. But China sees it as a national security threat. Both are probably right.
Nvidia's building robots that can think and adapt. But who decides what they think about? What they're optimized for?
The technology is moving faster than the ethics, regulations, or even basic human understanding of what we're building.
The question isn't whether AI will change everything. It already is.
The question is: Are we building it to help people, or are people just the data feeding it?
What do you think? Hit reply and let me know. I'm genuinely curious what you're seeing from where you stand.
- Aashish
P.S. If you want to discuss this with other people watching AI closely, join my AI community.
Introducing the first AI-native CRM
Connect your email, and you’ll instantly get a CRM with enriched customer insights and a platform that grows with your business.
With AI at the core, Attio lets you:
Prospect and route leads with research agents
Get real-time insights during customer calls
Build powerful automations for your complex workflows
Join industry leaders like Granola, Taskrabbit, Flatfile and more.