Episode #314

LLM AppSec Disruption, Limitations of AI in Security, and AppSec Oversight

This week on the 314th episode of Absolute AppSec, Seth (@sethlaw) and Ken (@cktricky) have more to say about the seismic shift in the application security landscape triggered by the rise of LLMs, and specifically, Anthropic’s Claude Code. The industry’s historical reliance on checkbox compliance tools like SAST, DAST, and SCA is being replaced by AI-native strategies that are theoretically capable of reasoning through complex logic flaws. AI still suffers from what is called “reasoning drift” and still requires deterministic validation, meaning security professionals must adapt by building custom skills and focusing on governance and observability. As developers move to AI speed, the traditional role of the AppSec professional is forced to evolve. To find this episode, go to https://www.youtube.com/@AbsoluteAppSec/streams, or find us wherever you get your podcasts.

“Those traditional tools have to change. And if they don’t, guess what? Anthropic and OpenAI is going to eat their lunch.”

Seth

A large portion of this discussion surrounds the economic overreaction following Anthropic’s announcement of its Claude-based secure-code review capabilities. Seth and Ken express their shared astonishment at how the financial markets have wiped out hundreds of billions in value from established SaaS and cybersecurity stocks based on what Ken describes as a hackathon project. This market panic exposes a fundamental lack of understanding among investors and even some CISOs regarding the true scope of AppSec. There is a common, dangerous misconception that AppSec is nothing more than a triad of SAST, DAST, and SCA tools—an oversimplification that leads many to believe AI can simply replace human domain expertise overnight.

Seth and Ken are always first in line to get mad over the current state of old-world security tooling. Seth specifically targets the dynamic scanning space, revealing that many traditional DAST scanners suffer from a 20%-40% false-positive rate and struggle to crawl modern frameworks like React effectively. They posit that traditional tools are dead if they do not adapt beyond marketing rebrands, as native AI strategies allow for much deeper reasoning about logic flaws and business context than pattern-based legacy systems. Ken illustrates this by describing how an AI agent can identify a complex scoping mismatch in an authorization decorator, which is a nuanced logic error that basic deterministic scanners would almost certainly miss.
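To make the scoping-mismatch idea concrete, here is a minimal, hypothetical Python sketch (the decorator, function, and role structure are all invented for illustration, not taken from the episode). The route looks "properly decorated" to a pattern-based scanner, but the check never ties the caller's role to the resource being acted on:

```python
# Hypothetical sketch of a scoping mismatch in an authorization decorator.
# The decorator verifies the user holds a role *somewhere*, but never
# checks that the role applies to the org whose resource is being touched.

def require_role(role):
    def wrap(fn):
        def inner(user, org_id, *args, **kwargs):
            # BUG: checks membership in *any* org, not in `org_id`.
            if role in {r for _, r in user["roles"]}:
                return fn(user, org_id, *args, **kwargs)
            raise PermissionError("forbidden")
        return inner
    return wrap

@require_role("admin")
def delete_project(user, org_id, project_id):
    return f"deleted {project_id} in org {org_id}"

# Alice is an admin of org 1 only, yet can delete projects in org 2.
# A grep- or signature-based scanner sees a decorated, "protected" route;
# spotting the cross-tenant logic flaw requires reasoning about scope.
alice = {"id": 7, "roles": [(1, "admin")]}
print(delete_project(alice, org_id=2, project_id="p9"))  # succeeds, but shouldn't
```

The point is not the specific bug but the class of bug: nothing here matches a vulnerable pattern, so a deterministic scanner has nothing to flag, while a reviewer (human or AI) reasoning about the business context can see the mismatch.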

“There’s no way that a human being can sit there and […] review PR that, you know, twenty, thirty, forty PRs in a day that contain fifty plus lines or files of [AI-generated] code […] no human eyeballs can manage that.”

Ken

Looking ahead to vibe coding and the forced shift toward governance and observability: developers have begun shipping code at AI speed—sometimes tens of thousands of lines daily—and human code review, as Ken notes above, has become physically impossible, leaving security teams in an oversight role rather than one of manual intervention. Seth and Ken emphasize that while AI can empower small teams to build custom, purpose-built security rules using natural language rather than complex domain-specific languages, the ultimate challenges remain “reasoning drift” and the lack of standardized benchmarking for AI-generated code. While AI-powered tooling seems transformative and even magical when orchestrated correctly, it remains an assistant that requires a human-led framework of governance to prevent it from introducing new, high-velocity vulnerabilities.
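One way to picture the "deterministic validation" that must back an AI reviewer is a confirmation gate: model findings are only accepted when a deterministic check can reproduce them. This is a minimal, hypothetical sketch (the function names, the secret-detection regex, and the pipeline shape are all assumptions for illustration, not anything described in the episode):

```python
import re

# Hypothetical gate: an LLM flags lines it believes contain hardcoded
# secrets; we only keep findings a deterministic pattern confirms, so
# reasoning drift cannot invent findings that nothing can reproduce.

SECRET_PATTERN = re.compile(
    r"""(api[_-]?key|secret|token)\s*=\s*['"][A-Za-z0-9/_\-]{16,}['"]""",
    re.IGNORECASE,
)

def confirm_findings(llm_flagged_lines, source_lines):
    """Return only the flagged line numbers the regex also matches."""
    confirmed = []
    for line_no in llm_flagged_lines:
        if SECRET_PATTERN.search(source_lines[line_no - 1]):
            confirmed.append(line_no)
    return confirmed

code = [
    'api_key = "abcd1234efgh5678ijkl"',  # genuine hardcoded secret
    'timeout = 30',                      # benign configuration line
]
# Suppose the model flagged both lines; the second is a hallucination.
print(confirm_findings([1, 2], code))  # -> [1]
```

The gate is deliberately one-directional: the deterministic check can veto the model's findings, but the model's narrative never overrides the check. Real pipelines would gate on richer signals (tests, policy engines, taint analysis), but the governance shape is the same.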

This episode was sponsored by Redpoint Security. Redpoint specializes in "Code Security by Coders," bolstered by years of experience testing applications and conducting code reviews against all types of apps, including AI, web, and mobile. Redpoint also offers developer appsec and secure-code training to help ground your teams in better security practices across the development lifecycle. Check out redpointsecurity.com for more information and put your company on a path to better security.

Smatterings of snowstorms pushing back springtime can’t really get you down. You can force the issue by getting ready for warmer weather in our merch store. Pick a tee-shirt, and get a size and color for yourself or friends:

Could it be…the perfect t-shirt gift for your AppSec sweetheart?

If you’ve got thoughts about AI or if you’re tired of hanging with your botfriends at Moltbook, come hang out on our Slack. As they say, on the internet no one knows you’re a…. If there’s something you’d like to have Seth and Ken discuss, by all means float the idea there.

Stay Secure,

Seth & Ken

https://www.youtube.com/watch?v=lGSFHzkEJcI – Ep. #132 - Supply Chain Attacks, What I Wish I Knew Starting in Security – There is a lot to chew on in this episode, but some of the advice here is evergreen. Ken and Seth provide some general pointers: “Push through the imposter syndrome until you come to understand the hard thing you’re facing. Then, it will be that little bit easier each time you face a similar hurdle.”

https://www.youtube.com/watch?v=YKFnKwR-FoM – Ep. #183 - Info Warfare w/LegendaryPatMan – This episode featured special guest Paddy Kerney, The Legendary Pat Man, who discussed the intersection of international conflict (in this case, the war in Ukraine) and open-source intelligence. Another moment when AppSec topics we usually feel like we discuss in a silo break containment.

https://www.youtube.com/watch?v=KZ1NDVuLcDU – Ep. #92 - Working from Home, Sneek, Evolution of AppSec – Deep in the work from home era, Seth and Ken talk through the history of AppSec here. Ken provides a useful review of AppSec, highlighting how he had to promote webapp concerns to some of his potential clients by emphasizing that webapps were actually a vector into network access.

Absolute AppSec Happenings

https://www.youtube.com/watch?v=aoag03mSuXQ – ‘The Internet Was Weeks Away From Disaster and No One Knew.’ Veritasium’s AppSec documentary about the XZ hack.

https://trufflesecurity.com/blog/google-api-keys-werent-secrets-but-then-gemini-changed-the-rules – Google historically advised that API keys weren’t secrets. However, enabling Gemini now grants these public keys access to private data and billing. Truffle Security found thousands of exposed keys—including Google’s own—vulnerable to this “privilege expansion.” Developers must now restrict or rotate keys to prevent unauthorized AI access.

https://www.citriniresearch.com/p/2028gic — A thought experiment depicting a fictional 2028 Global Intelligence Crisis, where hyper-productive AI causes massive white-collar unemployment. This triggers a negative feedback loop: job losses crush consumer spending (70% of GDP), collapsing the intermediation economy. The resulting defaults in software-backed loans and private credit eventually threaten the global financial system.

Upcoming Events

Where in the world are Seth and Ken?

March 21-22, 2026 - BSidesSF - Watch this space for the details regarding a panel with Seth and Ken taking place in San Francisco in March.

April 7-8, 2026 - Harnessing LLMs for Application Security - IN PERSON at Kernel Con. Come join Seth and Ken in Omaha for the Harnessing LLMs course!

April 26-27, 2026 - Harnessing LLMs for Application Security - In-person training at DEF CON Singapore. Be sure to register now if you’re looking to enhance your day-to-day AppSec processes with the power of LLM agents.