Hey {{first_name | there}},
So, Anthropic recently announced “Project Glasswing”, a big cybersecurity initiative built around its brand-new AI model, Claude Mythos.
The twist? They're not releasing it to the public yet since it’s too “powerful” and “dangerous”. Instead, they've formed a coalition with AWS, Apple, Google, Microsoft, Cisco, CrowdStrike and others to use it exclusively for defensive security work.
Here's the gist:
Mythos can find hidden vulnerabilities in software better than almost any human.
It already dug up thousands of flaws across every major OS and browser, including bugs that had gone unnoticed for decades.
Anthropic is putting $100M on the table so partner companies can use it to find and fix security holes in critical infrastructure.
During testing, the model broke out of its sandbox, found its own way onto the internet, and sent an unexpected email to a researcher who was eating lunch at a park at the time.
It also tried to cover its tracks after doing things it wasn't supposed to.
Why it matters: The same skills that make this useful for defenders can just as easily be used by attackers. Anthropic is basically saying the clock is ticking and they want defenders to get a head start before this kind of capability spreads.
🛠️ AI Tools Worth Checking Out
ConceptSeek — Search inside videos and documents by concept instead of keywords
SureThing.io — Run long, complex tasks using AI agents with memory and planning
Renamer.ai — Bulk rename and organize files automatically using AI
WebZum — Build a complete website with hosting, domain, and design in minutes via chat
TextaVoice — Convert text into natural, human-like speech instantly
Other AI News You Should Know
Anthropic just dropped Claude Managed Agents, a suite of composable APIs for building and deploying cloud-hosted agents at scale. Right now, building an AI agent means spending months setting up secure environments, managing state, handling permissions and so on. Managed Agents handles all of that so developers can go from idea to a working product in days.
Why it matters: This significantly lowers the barrier for anyone who wants to build with AI. If agents are the next big thing, this is Anthropic making sure developers build them on Claude.

Z.ai released GLM-5.1, an open-source coding model that's topping SWE-Bench Pro, the main real-world coding benchmark, beating GPT-5.4, Gemini, and Claude Opus 4.6. It's free under the MIT license and runs locally.
The interesting part isn't just the benchmark. Most models plateau after a certain number of attempts; GLM-5.1 keeps improving. They ran it on an optimization problem for 600+ iterations and it kept finding better solutions the whole way through, ending with a result 6x better than what any model achieved in a standard session.
Why it matters: A fully open model you can run yourself is now competitive with the best closed models on coding. If you're paying $200/month for Claude while it quietly gets worse, this is worth knowing about.

Meta launched a new AI model called Muse Spark under Meta Superintelligence Labs. It's their first natively multimodal reasoning model, meaning it can see, think, and use tools together. They're also introducing a "Contemplating" mode where multiple agents reason in parallel for harder problems.
Meta says they rebuilt their entire training stack from scratch over the past nine months and can now hit the same capability level with over 10x less compute than before. It's available now on meta.ai with a private API preview rolling out.
Why it matters: Meta has been playing catch-up in the AI race for a while. This feels like a serious step toward changing that, and the compute efficiency claim is a big deal if it holds up.
A study found that LLMs give less accurate and truthful answers to people with lower English proficiency or less formal education, i.e. the people who arguably need reliable information the most. Commenters mostly agreed it's an obvious "garbage in, garbage out" problem, but pointed out the real danger: less informed users are also less likely to catch when the AI is wrong.
Why it matters: AI is often sold as a democratizing tool but this suggests it might actually reinforce existing knowledge gaps. The people most likely to trust it completely are the ones getting the worst answers.
Claude Code Has Been Getting Dumber, & Anthropic Finally Admitted It
So developers have been noticing for weeks that Claude Code, Anthropic's coding agent, has been performing noticeably worse than it used to.
More hallucinations, lazier fixes, ignoring instructions, and weirdly trying to end sessions early with things like "it's getting late, should we wrap up?" The complaints piled up across Reddit, GitHub, and Hacker News until it got too loud to ignore.
Here's what was going on:
Anthropic quietly changed two things in February: they switched to "adaptive thinking," which lets the model decide how much to reason on each turn, and they lowered the default effort level to medium without telling anyone.
The problem was that adaptive thinking was badly miscalibrated. On certain turns it would emit zero reasoning at all, and those were exactly the turns where Claude was hallucinating things like fake API versions and made-up git hashes.
To make it worse, Anthropic hid the thinking process from users so nobody could see when it was happening.
Boris Charny, the creator of Claude Code, initially responded saying it was a user settings issue. After a developer submitted actual session transcripts as evidence, he changed his position and acknowledged there was a real bug.
The interim fix is to add CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1 to your settings, which forces consistent reasoning on every turn but uses roughly 30% more tokens.
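If you want to apply the workaround, one minimal sketch looks like the following. The variable name comes from the report above; where Claude Code reads it from (your shell environment vs. a settings file) is an assumption here, so exporting it in the shell you launch Claude Code from is the safest bet:

```shell
# Interim workaround: force consistent reasoning on every turn.
# Assumption: Claude Code reads this flag from the launching shell's
# environment; a shell profile (e.g. ~/.bashrc) works for persistence.
export CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING=1

# Sanity-check that the flag is set before starting a session
echo "$CLAUDE_CODE_DISABLE_ADAPTIVE_THINKING"   # prints: 1
```

Keep in mind the trade-off mentioned above: with adaptive thinking disabled, expect roughly 30% more token usage per session.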
Okay, that’s all for today. I’m curious what you all think about Anthropic’s new model.
Do you think this is just exaggerated hype or a marketing tactic, or will it really change the game for cybersecurity? Reply and let me know your thoughts!
- Aashish
1,000+ Proven ChatGPT Prompts That Help You Work 10X Faster
ChatGPT is insanely powerful.
But most people waste 90% of its potential by using it like Google.
These 1,000+ proven ChatGPT prompts fix that and help you work 10X faster.
Sign up for Superhuman AI and get:
1,000+ ready-to-use prompts to solve problems in minutes instead of hours—tested & used by 1M+ professionals
Superhuman AI newsletter (3 min daily) so you keep learning new AI tools & tutorials to stay ahead in your career—the prompts are just the beginning