WeSearch

Attack of the killer script kiddies

Yael Grauer · 12 min read

Last August, some of the best cybersecurity teams in the business gathered in Las Vegas to demonstrate the strength of their AI bug-finding systems at DARPA's Artificial Intelligence Cyber Challenge (AIxCC). The tools had scanned 54 million lines of actual software code that DARPA had injected with artificial flaws. The teams were capable enough to […]

Original article
The Verge · Yael Grauer
Read full at The Verge →
Full article excerpt

AI · Tech · Security

In the aftermath of Mythos, AI-assisted amateur hackers are waiting to strike.
By Yael Grauer · Apr 28, 2026, 11:00 AM UTC
[Image: Joseph Rogers / The Verge]

Last August, some of the best cybersecurity teams in the business gathered in Las Vegas to demonstrate the strength of their AI bug-finding systems at DARPA's Artificial Intelligence Cyber Challenge (AIxCC). The tools had scanned 54 million lines of actual software code that DARPA had injected with artificial flaws. The teams were capable enough to identify most of the artificial bugs, but their automated tools went beyond that — they found more than a dozen bugs that DARPA hadn't inserted at all.

Even before the security earthquake that Anthropic delivered this month with Claude Mythos — the new AI model that seems to find vulnerabilities in every piece of software it's pointed at — automated systems were growing increasingly capable of finding coding flaws. And fears are growing that AI can not only detect these flaws but also be used to exploit them, putting hacking skills into the hands of everyone across the planet.

"Mythos or not, this is coming."

This isn't an empty threat. For decades, this type of no-skill hacker, known as a script kiddie, has wreaked havoc, running scripts ripped from the internet or copied from exploit tool kits. They didn't fully understand these scripts or have the technical know-how to write them themselves. And yet they were still able to deface websites and propagate viruses.

Related:
Anthropic's most dangerous AI model just fell into the wrong hands
Anthropic's Mythos breach was humiliating
Anthropic's Mythos rollout has missed America's cybersecurity agency

What's happening now represents a major escalation: people without technical backgrounds are able to use AI to enhance their capabilities in a way that wasn't possible with simple scripts. It is likely to have far more wide-reaching repercussions.

"There's a tidal wave coming. You can see it. We can all see it," said Dan Guido, CEO and cofounder of cybersecurity firm Trail of Bits, which was a runner-up in the challenge. "Are you going to lay down and die, or are you going to do something about it?"

[Image: Joseph Rogers / The Verge]

Even beyond Project Glasswing, Anthropic is trying to prevent the misuse of its software by criminals. A week after announcing Mythos, the company released Claude Opus 4.7, which for the first time built in safeguards meant to block malicious cybersecurity requests. (Security professionals who want to use the model defensively can apply to the company's Cyber Verification Program.)

Anthropic's announcement of Mythos sent shockwaves throughout the industry, but there were warning signs of AI's cybersecurity prowess prior to it. In June 2025, the autonomous offensive security platform XBOW beat out human hackers to top the leaderboard of HackerOne, a bug bounty platform, indicating big leaps in the ability of AI models…

This excerpt is published under fair use for community discussion. Read the full article at The Verge.

