Your Hand-Written Code Has Had Bugs for 27 Years. But Sure, Vibe Coding Is the Problem.
Anthropic just unveiled Claude Mythos, and it's already making some people very uncomfortable. The model isn't publicly available — Anthropic considers it too powerful to release broadly right now — but what it's already doing behind closed doors is enough to rewrite the entire conversation about AI and code security.
Not because it writes code. Because it reads yours. And it's finding things that 30 years of human developers couldn't.
In the last few weeks, Mythos Preview has discovered thousands of zero-day vulnerabilities across every major operating system and every major web browser. Not theoretical weaknesses. Actual exploitable bugs, many classified as critical. The oldest one? A 27-year-old denial-of-service flaw in OpenBSD's TCP stack. OpenBSD — the operating system whose entire reputation is built on being the most secure in the world. Twenty-seven years.
It also found a 17-year-old remote code execution flaw in FreeBSD that gives unauthenticated root access. And a 16-year-old vulnerability in FFmpeg's video codec that survived every fuzzer and human code review since 2010.
These are not hobbyist projects. These are codebases maintained by some of the most talented, experienced, security-obsessed developers on the planet. And they missed these bugs for decades.
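To see how a bug can hide in plain sight for decades, consider the kind of flaw that often does: a bounds check that reads as obviously correct but wraps around in unsigned arithmetic. The sketch below is a deliberately simplified, hypothetical illustration of that pattern — it is not the actual OpenBSD, FreeBSD, or FFmpeg flaw, and the function names are invented for this example.

```c
#include <stdint.h>

/* Hypothetical example -- not one of the real bugs described above.
 * A reviewer reads "offset + len <= buf_size" as a correct bounds
 * check. But uint32_t addition wraps: a huge offset plus a small len
 * can wrap past zero to a tiny value and sail through the check. */
int check_bounds_buggy(uint32_t offset, uint32_t len, uint32_t buf_size) {
    /* BUG: offset + len can overflow and wrap to a small number */
    return offset + len <= buf_size;
}

/* The fix rearranges the comparison so no overflow is possible. */
int check_bounds_fixed(uint32_t offset, uint32_t len, uint32_t buf_size) {
    return offset <= buf_size && len <= buf_size - offset;
}
```

With `offset = 0xFFFFFFF0` and `len = 0x20`, the sum wraps to `0x10`, so the buggy check accepts an access far outside a 100-byte buffer while the fixed version rejects it. Both versions look plausible in review, which is exactly why this class of bug survives human auditing for so long.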
So I have a question for the people who love to say that AI-generated code is a security disaster.
Where was your outrage for the last 27 years?
I keep seeing this argument online: "Vibe coding is dangerous, these AI tools are pumping out insecure code, real developers would never ship vulnerabilities like that." And on the surface, I get the concern. Studies do show that AI-generated code can contain more vulnerabilities than human-written code. That's a real thing, and I'm not dismissing it.
But here's what nobody wants to talk about. Human-written code has always been full of vulnerabilities. Always. The difference is that when a senior engineer ships a bug, we call it an "oversight" or a "zero-day." When an AI ships a bug, we call it proof that AI can't be trusted.
The double standard is wild.
Mythos is so good at finding vulnerabilities that Anthropic won't even release it to the public. They created something called Project Glasswing, where only 12 partner organizations — Amazon, Apple, Microsoft, CrowdStrike, Cisco, the Linux Foundation, and others — get access. Because if this model got into the wrong hands, it could be used offensively just as easily as defensively.
Think about what that means for the "vibe coding is insecure" argument. Once tools like Mythos are widely available — and they will be eventually — it won't matter whether your code was written by a junior dev, a senior architect with 30 years of experience, or an AI. Mythos will find the holes in all of it.
The playing field is about to be completely leveled. And honestly? It already has been.
If the best human-maintained codebases in the world have had critical vulnerabilities hiding in them for two decades, then the argument that human-written code is inherently more secure than AI-written code doesn't hold up. It's not about who writes the code. It's about who checks it afterward.
And increasingly, the thing doing the checking is going to be AI too.
So maybe instead of dunking on people who use AI to build software, we should be thanking the AI for finally finding the bugs that have eluded elite developers since the Clinton administration.
Just a thought.
Sources: Anthropic Project Glasswing, SecurityWeek, The Hacker News, Tom's Hardware, Help Net Security