Musk’s paying $440K to build anime waifus

Anime girlfriends, Meta’s secret superteam, and the AI flaw no one’s fixing. Here’s what really matters right now.


Hey there

Let me share something that's been keeping me up at night lately.

While everyone's celebrating AI's latest wins (like solving complex math problems), there's a hidden vulnerability that almost nobody's talking about.

It's called "context rot" – and it could make your AI-based business completely unreliable without you even realizing it.

New research tested 18 different AI models (including the ones you're probably using) and found something alarming: as you feed more text into these models, their performance significantly degrades.

This isn't just a minor glitch. It's a fundamental limitation affecting every major AI system on the market.

Think about what this means:

  • Your AI customer service bot that handles complex conversations? Gradually losing accuracy.

  • That AI document analyzer processing long reports? Missing critical insights.

  • Your AI coding assistant handling large codebases? Introducing subtle bugs.
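The way researchers measure this is worth understanding: bury one "needle" fact inside progressively longer walls of filler text, then check whether the model can still retrieve it. Here's a minimal sketch of that probe-building step (the prompt construction only – the model calls and scoring are up to you, and the helper names are mine, not from the research):

```python
import random

def build_needle_prompt(needle: str, filler_sentences: list[str],
                        context_len: int, position: float) -> str:
    """Bury one 'needle' fact at a relative position (0.0 = start,
    1.0 = end) inside context_len filler sentences. Running these
    prompts at growing context_len values and checking whether the
    model can still quote the needle is the basic shape of the
    context-rot benchmarks."""
    filler = [random.choice(filler_sentences) for _ in range(context_len)]
    idx = int(position * len(filler))
    filler.insert(idx, needle)
    return " ".join(filler)

# Example: same needle, two very different context sizes.
needle = "The vault code is 7421."
short_prompt = build_needle_prompt(needle, ["The sky was gray."], 50, 0.5)
long_prompt = build_needle_prompt(needle, ["The sky was gray."], 5000, 0.5)
```

If a model answers "What is the vault code?" correctly from `short_prompt` but starts failing on `long_prompt`, you're watching context rot directly.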

But here's where it gets interesting for entrepreneurs...

AI's $440K Waifu Engineers

When Elon Musk's xAI is willing to pay up to $440,000 for engineers to build anime girlfriends, you know we've entered a new phase of the AI economy.

Yes, that's real. The job posting for "Fullstack Engineer - Waifus" offers $180,000–$440,000 plus equity. And it's working – after xAI launched its AI companion feature with an anime avatar, Grok downloads skyrocketed, hitting #1 in Japan's App Store.

But this isn't just about digital waifus. It represents something much bigger happening beneath the surface of the AI industry.

The IMO Gold Medal Mirage

Last week, OpenAI announced their experimental model achieved gold medal performance at the International Mathematical Olympiad (IMO) - solving 5 out of 6 problems at the world's toughest math competition.

Impressive, right?

But Terence Tao (Fields Medal winner – basically the Nobel Prize of mathematics) raised a critical question: Were they playing by the same rules as humans?

Humans at IMO get:

  • Limited time (around 4.5 hours per session)

  • No outside resources

  • One attempt to solve each problem

Was AI held to these same constraints? The methodological details were suspiciously vague.

Even more revealing: DeepMind apparently achieved similar results but got stuck waiting for marketing approval while OpenAI claimed all the glory.

This pattern reveals something crucial about the AI landscape today: the gap between claims and reality is widening, not shrinking.

$500,000 Infrastructure Jobs Nobody's Applying For

Here's another signal most people are missing: Mechanize.work is offering programmers up to $500,000 to build AI infrastructure.

They're betting on what they call "the GPT-3 moment for reinforcement learning" - training AI on thousands of diverse simulated environments until it masters meta-skills like programming and design.

Why such astronomical salaries? Because while everyone's building ChatGPT wrappers, the real opportunity is in solving the hard infrastructure problems underneath.

Meta's following a similar strategy, building a Superintelligence team where 50% of members are Chinese talent and 75% have PhDs.

The pattern is clear: the smart money is moving from applications to infrastructure.

The 5x Speed Advantage Nobody's Using

Meanwhile, Apple quietly published research showing you can make any LLM run 2.5-5x faster without losing quality using Multi-Token Prediction.

This isn't just an academic curiosity – it's a production-ready technique that could give your AI products a massive speed advantage.

Imagine your AI writing assistant generating content 5x faster than competitors, or your coding tool providing solutions at 5x the speed. That's not just a feature improvement – that's a fundamental business moat.
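The arithmetic behind that claim is simple: if each forward pass proposes k tokens instead of 1, and most of them survive verification, you divide your pass count by (almost) k. Here's a back-of-the-envelope sketch of that relationship – this is just the counting argument, not Apple's actual method, and the `accept_rate` parameter is my simplification of how speculative tokens get verified:

```python
import math

def autoregressive_steps(seq_len: int) -> int:
    """Standard decoding: one forward pass per generated token."""
    return seq_len

def multi_token_steps(seq_len: int, k: int, accept_rate: float = 1.0) -> int:
    """Multi-token prediction: each pass proposes k tokens at once.
    accept_rate models the fraction of speculative tokens that survive
    verification (1.0 = all accepted, the best case)."""
    accepted_per_step = max(1.0, k * accept_rate)
    return math.ceil(seq_len / accepted_per_step)

# 1,000 tokens, 4 tokens proposed per pass:
baseline = autoregressive_steps(1000)        # 1000 passes
mtp = multi_token_steps(1000, k=4)           # 250 passes -> ~4x fewer
partial = multi_token_steps(1000, k=4, accept_rate=0.625)  # 400 passes -> ~2.5x
```

That 2.5–5x range in the research falls right out of how many proposed tokens actually get accepted per pass.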

The Joint Warning We Should All Heed

Perhaps most alarming: researchers from OpenAI, Google DeepMind, Anthropic, and Meta (yes, competitors!) just published a joint warning that our "window to monitor AI reasoning could close forever".

They're concerned about our ability to understand how AI makes decisions as models become more advanced. As AI reasons in increasingly complex ways, we might lose our ability to track its thought processes.

When competitors unite to sound an alarm, you know it's serious.

What This Means For Your AI Business

Put these patterns together, and three clear opportunities emerge:

  1. Context optimization tools – Build systems that help overcome context rot, enabling reliable performance even with large inputs. This could be through smart chunking, prioritization algorithms, or novel architectures.

  2. Infrastructure, not applications – The real money is shifting to the foundation layer. Whether it's training environments, benchmarking tools, or monitoring systems, these unglamorous problems command the highest salaries and investments.

  3. Transparent AI solutions – As AI becomes more opaque, tools that provide visibility into reasoning processes become essential. This isn't just about safety – it's about building trust with users and regulators.
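If you want to experiment with the first opportunity, the simplest possible starting point is a sliding-window chunker – keep each model call well under the context length where accuracy starts to slip. This is a naive character-based sketch (real systems split on semantic boundaries like paragraphs or sections; the parameters here are illustrative):

```python
def chunk_text(text: str, max_chars: int = 1000, overlap: int = 100) -> list[str]:
    """Split a long document into overlapping windows so each model
    call stays short. The overlap preserves context across chunk
    boundaries so facts straddling a split aren't lost."""
    if max_chars <= overlap:
        raise ValueError("max_chars must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + max_chars])
        start += max_chars - overlap
    return chunks
```

The interesting product work starts where this sketch ends: deciding *which* chunks to feed the model, in what order, and how to merge the per-chunk answers back together.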

What do you think? Are you experiencing context rot in your AI applications? Reply and let me know what you're building - I read every message.

- AP

P.S. If you're working on solutions to these fundamental AI challenges, I'd love to connect. Hit reply and tell me what you're building – the most interesting projects get personal introductions to my network of AI investors and builders.

Find out why 1M+ professionals read Superhuman AI daily.

In 2 years you will be working for AI

Or an AI will be working for you

Here's how you can future-proof yourself:

  1. Join the Superhuman AI newsletter – read by 1M+ people at top companies

  2. Master AI tools, tutorials, and news in just 3 minutes a day

  3. Become 10X more productive using AI

Join 1,000,000+ pros at companies like Google, Meta, and Amazon who are using AI to get ahead.