Hey {{first_name | there}},
One thing is becoming clear: AI agents are now the industry's main focus.
Every major company is racing to build them: systems that can plan, reason, and complete tasks on their own instead of just answering prompts.
To support that shift, Nvidia just released Nemotron 3 Super, a new open model designed specifically for autonomous AI agents.
Unlike traditional chat models, Nemotron 3 Super is built for long reasoning tasks, coding, and multi-agent workflows.
It also supports a 1M-token context window, letting agents work with massive conversations, documents, or codebases without losing track of the goal.
Why this matters: AI agents generate far more context and reasoning steps than normal chatbots, which makes most models expensive and inefficient to run.
Nemotron 3 Super is Nvidia’s attempt to build a dedicated “brain” for autonomous systems, something powerful enough to run long, complex tasks instead of just responding to prompts.
Also, a quick question for you 👇
I'm going to start a newsletter series focused on practical AI skills.
What would you want to learn?
🛠️ AI Tools Worth Checking Out
SEOForge.ai - Automates keyword research, article writing, and WordPress publishing with analytics.
VerifyBacklinks - Scans links pre-indexing for SEO safety with disavow signals.
ImageToTable.ai - Extracts editable tables from images and PDFs for reports.
Other AI News You Should Know
Here's a wild situation. Google, Amazon, Apple, and Microsoft just filed legal briefs supporting Anthropic after Defense Secretary Pete Hegseth labeled it a "supply chain risk." This is the first time an American company has ever gotten that designation.
The reason? Anthropic refused to remove contract terms preventing its AI from being used for mass surveillance or autonomous weapons.
Why this matters: The tech giants aren't being charitable. They're protecting themselves. If the government can blacklist a company for safety policies, any of them could be next. Microsoft explicitly said it agrees AI shouldn't conduct domestic mass surveillance or independently start wars.
USC researchers discovered something concerning. AI agents can now automatically coordinate to spread fake news across social media without human help.
These aren't old bots repeating the same message. These agents write different posts and work together to make false information look legitimate. And it could happen before anyone notices.
Why this matters: Previous disinformation campaigns required human coordination. These agents operate autonomously. As AI becomes more common online, distinguishing real information from coordinated AI-generated content gets harder.
Nvidia is reportedly preparing to launch NemoClaw, an open-source platform designed to help companies deploy AI agents that can autonomously perform tasks for employees. NemoClaw aims to solve one of the biggest concerns around autonomous agents: security. It adds stronger privacy and infrastructure controls compared to earlier open-source agent frameworks.
Meta has acquired Moltbook, a Reddit-style social network where every account is an AI agent instead of a human. The platform went viral earlier as people watched AI bots discuss code, exchange ideas, and even gossip about their creators. Moltbook’s founders will now join Meta’s Superintelligence Labs, the company’s research unit focused on building more autonomous AI systems.
Anthropic launched Anthropic Academy with serious developer courses, completely free. It includes 13 hours on the Claude API, 10 hours on Model Context Protocol, 3 hours on Claude Code, and 4 hours on agent skills.
Why this matters: Good MCP resources have been scarce. If you're building agentic workflows, this is probably the best free training available. Enroll at anthropic.skilljar.com.
How to Direct an AI Coding Agent (Like Enterprise Teams Do)
Most people use AI coding tools wrong. They open a chat window, describe a problem, and wait for code.
But teams using tools like Claude Code or Cursor at scale don’t work like that.
Here’s the simple workflow you should follow.
1. Start with an outcome, not code
Before opening any tool, write 2–4 sentences describing the result you want.
Example: "Pull all Zendesk support tickets created in the last 48 hours and send a summary to Slack every morning at 8am."
Notice what’s missing: No tech stack. No implementation. Just the outcome.
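For a sense of scale, here's a minimal sketch of what an agent might produce for that spec. The subdomain, token, and webhook URL are placeholder assumptions, and while Zendesk's search API and Slack incoming webhooks are real, check their docs before relying on the exact parameters shown here.

```python
"""Sketch of the Zendesk -> Slack summary task described above.
Credentials and endpoints are illustrative placeholders."""
import json
import urllib.parse
import urllib.request
from datetime import datetime, timedelta, timezone


def summarize_tickets(tickets: list[dict]) -> str:
    """Build a plain-text Slack summary from a list of ticket dicts."""
    if not tickets:
        return "No new support tickets in the last 48 hours."
    lines = [f"{len(tickets)} tickets in the last 48 hours:"]
    for t in tickets:
        lines.append(f"- #{t['id']} [{t['priority']}] {t['subject']}")
    return "\n".join(lines)


def fetch_recent_tickets(subdomain: str, api_token: str) -> list[dict]:
    """Query Zendesk's search endpoint for tickets created in the last 48h."""
    cutoff = (datetime.now(timezone.utc) - timedelta(hours=48)).strftime("%Y-%m-%d")
    query = urllib.parse.quote(f"type:ticket created>{cutoff}")
    url = f"https://{subdomain}.zendesk.com/api/v2/search.json?query={query}"
    req = urllib.request.Request(url, headers={"Authorization": f"Bearer {api_token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["results"]


def post_to_slack(webhook_url: str, text: str) -> None:
    """Send the summary text to a Slack incoming webhook."""
    body = json.dumps({"text": text}).encode()
    req = urllib.request.Request(
        webhook_url, data=body, headers={"Content-Type": "application/json"}
    )
    urllib.request.urlopen(req)
```

The point isn't this exact code; it's that the two-sentence spec above is enough for the agent to choose the stack, the endpoints, and the structure on its own.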
2. Let the agent plan first
Paste that spec into Claude Code (terminal) or Cursor Composer.
Instead of generating code immediately, ask it to outline the steps first.
It should tell you:
what files it will create
what APIs it needs
what packages it will install
Review the plan before letting it execute.
3. Watch the first run
Give the agent permission to run commands. It will usually get 80% right on the first pass.
Your job is correcting the last 20%:
review outputs
identify mistakes
give precise corrections in plain English
4. Add project rules
Add a rules file to your repo, like CLAUDE.md (Claude Code) or .cursorrules (Cursor).
These tell the agent how your project is structured, what conventions you follow, and what it should never change.
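These rules files are just plain markdown at the project root. Here's a hypothetical example for a TypeScript project; the specific paths and commands are illustrative:

```markdown
# Project conventions

- Node 20 + TypeScript strict mode; run `npm test` before calling a task done
- API handlers live in src/routes/; shared helpers in src/lib/
- Use the existing logger in src/lib/log.ts, never console.log
- Never edit files under migrations/ or any .env file
```

A few lines like this save you from repeating the same corrections in every session.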
That’s it for today. I’ll see you in the next newsletter.
- Aashish
What did you think about today's newsletter?
The Future of AI in Marketing. Your Shortcut to Smarter, Faster Marketing.
Unlock a focused set of AI strategies built to streamline your work and maximize impact. This guide delivers the practical tactics and tools marketers need to start seeing results right away:
7 high-impact AI strategies to accelerate your marketing performance
Practical use cases for content creation, lead gen, and personalization
Expert insights into how top marketers are using AI today
A framework to evaluate and implement AI tools efficiently
Stay ahead of the curve with these top AI strategies for marketers, built for real-world results.


