Hey {{first_name | there}},

Right now, millions of people are getting their daily news from AI chatbots. 

Not as a supplement. But as the primary news source.

And these chatbots are changing what we believe without anyone noticing.

Researchers published findings showing that AI models exhibit communication bias that gradually influences how people form opinions. This isn't about obvious misinformation or fact-checking failures.

It's about something more subtle and more dangerous.

When AI summarizes news, writes headlines, or answers questions about current events, it doesn't just report facts. It FRAMES them. It emphasizes certain details while downplaying others. 

The researchers call this communication bias. I call it a problem nobody's taking seriously enough.

Because unlike traditional media, where you can choose among outlets with different perspectives, most people using AI for news rely on one of maybe three chatbots: ChatGPT, Gemini, or Claude. That's it.

Three companies shaping how millions of people understand the world.

And those companies are becoming less transparent, not more.

AI Companies Are Hiding More Than Ever

Stanford researchers recently released their annual Foundation Model Transparency Index, and the findings are concerning.

Only 4 out of 13 major AI companies properly evaluate risks before releasing their models and actually report the results. 

The rest? They're keeping critical information locked away. What training data did they use? What's the environmental impact? Do their safety measures actually work? Most companies won't say.

And this is happening at exactly the moment when AI is becoming the primary information source for millions of people.

Think about what that means. The systems shaping public opinion are built by companies that won't tell you how they work, what data they learned from, or whether their safety measures are effective.

You're supposed to trust them. But they won't show you why you should.

The Stanford Prediction Nobody's Talking About

Meanwhile, Stanford AI experts just made predictions for 2026 that should change how you think about building in this space.

They're saying the hype ends next year. Not the technology. The hype.

Investment will keep pouring into AI data centers through 2026, and then the excitement will dry up.

Julian Nyarko, an associate director at Stanford HAI, says 2026 will be characterized by rigor and ROI. Translation: companies will stop funding things that don't make money.

According to Deloitte's 2026 Tech Trends report, the urgency is shifting from endless pilots to real business value. The demo phase is ending.

The Skills That Actually Matter Now

Here's what's interesting about the job market right now.

According to economists studying AI's impact, the most valuable skill isn't coding or prompt engineering. 

It's the ability to explain how AI works in simple terms that non-technical people can understand.

They're calling it the "AI translator" role. Someone who can bridge the gap between what AI can do and what business teams need.

Another emerging role: AI auditor. Someone who checks AI systems for bias and factual inaccuracies before they cause problems.

Notice what these roles have in common? They're not about building AI. They're about making AI safe, understandable, and actually useful.

That's where the opportunity is. Not in building the next ChatGPT competitor. In helping organizations use AI responsibly and effectively.

What This Means For You

Let me connect the dots here.

  • People are getting their news from AI systems that exhibit communication bias. 

  • The companies building those systems are becoming less transparent about how they work. 

  • Stanford experts predict the hype cycle ends in 2026, forcing everyone to show real ROI.

  • And the most valuable skills are about understanding and auditing AI, not just building it.

If you're consuming AI-generated news, start cross-referencing it with original sources. The convenience of AI summaries isn't worth a distorted understanding of events.

If you're looking for AI opportunities, focus on the translator and auditor roles. Every company adopting AI needs people who can explain it simply and verify it's working correctly. These roles will survive the hype cycle because they solve real problems.

And if you're planning to launch something in this space, you have roughly 12 months before the market shifts from hype to ROI. Build something that demonstrably makes money or saves money. Everything else is noise.

So here's my question: Are you building for the hype cycle that's ending? Or for the ROI-focused market that comes next?

Hit reply and let me know what you're seeing. I'm curious whether others are noticing the same patterns.

- Aashish

Easy setup, easy money

Making money from your content shouldn’t be complicated. With Google AdSense, it isn’t.

Automatic ad placement and optimization ensure the highest-paying, most relevant ads appear on your site. And it literally takes just seconds to set up.

That’s why WikiHow, the world’s most popular how-to site, keeps it simple with Google AdSense: “All you do is drop a little code on your website and Google AdSense immediately starts working.”

The TL;DR? You focus on creating. Google AdSense handles the rest.

Start earning the easy way with AdSense.
