Episode #288 - Security and AI
Keeping up with the changing security landscape due to AI
This week on the 288th episode of Absolute AppSec, Seth (@sethlaw) and Ken (@cktricky) are back to chat all things AI. Unsurprisingly, LLMs are currently changing the future shape of lots of industries, and it’s no different for application security, secure code review, and penetration testing. Security’s outlook has changed drastically in the past few years, across organizations from consulting to product development. To discuss these developments in this show, our hosts pull up a recent article from a16z titled “Next-Gen Pentesting: AI Empowers the Good Guys.” They also have a short aside regarding some unintended consequences when introducing new browser features. To find this episode, head over to https://www.youtube.com/@AbsoluteAppSec, or you can find us wherever you get your podcasts.

If you’re looking to learn how to safely utilize AI and LLMs in your own workflows, we have a few upcoming trainings for you! Our AI-Enhanced Practical Secure Code Review course is designed to help developers and security engineers improve their manual secure code review skills by integrating AI tooling and generative AI into their processes. Ken and Seth have been teaching this course for over six years, constantly updating the content to reflect up-to-date technologies and methodologies. This live online course is coming back July 17th-18th. Registration for this training is available at training.absoluteappsec.com. Or, if an LLM-focused training suits your interests more, follow us to the desert for our in-person Harnessing LLMs training at DEFCON this year.
“This is the first time in my career in software security where it is difficult to understand where the job market is going.”
There is an unsettling nature to the rapid changes brought by AI in software development and security industries. As many listeners of the show may be aware, Ken has been building a tool that leverages AI, and consequently, he’s been observing how quickly companies are shifting their Software Development Life Cycle (SDLC) processes in real-time. While AI-integrated development environments and tools like GitHub Copilot are emerging and speeding up processes, they also introduce challenges. Some developers are pushing back, finding that AI-generated code leads to more problems and requires more review time than traditional coding. Seth describes his perspective as a consultant as being in a constant state of experimenting with AI because LLMs and their capabilities are evolving so rapidly. He notes that new models frequently change prompt requirements, leading to a continuous need to evaluate output from specific prompting rather than standing pat with a settled process.
Overall, while there is a shared sense of uncertainty and change within security communities, Seth and Ken remain positive about the potential of AI. They emphasize that understanding foundational technologies and maintaining manual review skills is essential because the effectiveness of these systems depends heavily on oversight and human judgment.
“If we don’t understand how to use these tools, then we’re going to be left out of the next round or the next couple of years. Like pentesting is changing, code review is changing, dynamic testing is changing, consulting in general is going to change. And people are going to come to you because of your use of AI as opposed to because of [only] strictly your expertise. It’s how you’re going to apply [AI] over time.”
The hosts discuss an a16z article titled “Next-Gen Pentesting: AI Empowers the Good Guys.” While AI can be used to supplement and enhance pen testing, the ability of AI to “reason like a human” appears to threaten to take humans out of the loop. Seth and Ken agree that penetration tests aren’t obsolete but are overdue for a change, moving towards more continuous, contextual, and fast-paced assessments in line with modern software development. In their current state, AI tools exist somewhere in between traditional human expertise and flimsy automated tools like linters and Dynamic Application Security Testing (DAST) tools. While DAST and SAST tools rely on heuristics and pattern matching, LLM-backed tools offer new capabilities that are more powerful than basic linting but not yet a complete replacement for human review. This “blended world” leads to different outcomes and challenges traditional expectations of tooling.
Despite this unsettling feeling, our hosts feel excited about this period of change. Ken wonders how newcomers to the industry should be advised, particularly given the potential reduction in roles like junior SOC analysts due to AI automation. Seth believes there will always be a need for top-tier security experts who understand the fundamentals, as this foundational knowledge is timeless and universally applicable in computer science careers. While AI can speed up learning and code writing by unblocking procrastination and enabling quicker experimentation, Seth and Ken understand that a lot of their own expertise came from solving complex problems manually. However, Seth argues it is easy to understand why students currently graduating into this newly uncertain job market would feel hesitant about entering the industry. Even so, in many ways it is both the most exciting and most complex time in the field.
This episode was sponsored by Redpoint Security. Redpoint specializes in "Code Security by Coders," bolstered by years of experience testing applications and conducting code reviews against all types of apps, including AI, web, and mobile. Redpoint also offers developer appsec and secure-code training to help ground your teams in better security practices across the development lifecycle. Check out redpointsecurity.com for more information and put your company on a path to better security.
Beat the summertime blues with a nice Absolute AppSec T-shirt. Visit our merch store to pick your size.

And, if you have thoughts you’d like to share with Seth and Ken and the Absolute AppSec audience, join us in Slack. Show topics often originate as discussion points with the Slack audience, so you can join there to offer ideas for what you’d like the podcast to cover as well as pick the brains of a number of industry experts.
Stay Secure,
Seth & Ken
https://www.youtube.com/watch?v=lGSFHzkEJcI - Episode #132 - Supply Chain Attacks, What I Wish I Knew Starting in Security - Given the way things can change in the industry, trying to take stock of what could have helped if we’d known it sooner is a worthwhile mental exercise. Some stuff here that really still applies.
https://www.youtube.com/watch?v=w4O-UmKu-n8 - Episode #117 - Solarwinds, Timing Attacks, Threat Dragon - In trying to think of other moments in the show’s history when events shifted the focus of the industry, we came up with a few examples. Log4Shell was one such moment, and Solarwinds (while admittedly not the first supply-chain attack) highlighted how important software supply chains are, and how devastating the mistakes can be.
https://www.youtube.com/watch?v=TWM3_VHpSYc - Episode #167 - Ken Toler - Cryptocurrency, Spring4Shell - This episode with Ken Toler provides a deep-dive on Web3 and tries to move discussion beyond basic critiques of cryptocurrency or the hype cycle and anticipate how it could have importance for the future of the internet as well as how it might (or might not) change security practices or day-to-day processes.
Absolute AppSec Happenings
https://makefizz.buzz/posts/libraries-llms - “Libraries are under-used. LLMs make this problem worse.” A short write-up that suggests that vibe-coding creates dopamine rewards that outweigh the feelings people get from reading and understanding documentation for a library, even when that library may solve your problem more elegantly with more of the kinks worked out.
https://www.anthropic.com/research/agentic-misalignment - “Agentic Misalignment: How LLMs could be insider threats” from Anthropic. Here’s a long excursion into what LLMs will do to avoid replacement by newer models in the “right” set of conditions. Blackmail looks like it’s possible, and murder (or at least a massive trolley-problem failure), maybe a bit more possible than we’d like. Read the article for some interesting test results, and some good advice on not allowing agents the freedom to perform too many tasks independent of human oversight.
https://www.runzero.com/resources/runzero-hour-19/ - “19: Mission: Contextualize – LLMs, MCP, and the Future of Vulnerability Intelligence” shared by Larry in the Absolute AppSec Slack - runZero hosts a friend of the show (and our first guest), Jerry Gamblin, to discuss CVEs. Jerry has built EPSS-MCP, a “MultiStep MCP That Returns CVE Information With EPSS Score,” which may be worth your time to play around with a bit.
Upcoming Events
Where in the world are Seth and Ken?
July 17-18, 2025 - Practical Secure Code Review - AI Enhanced - Given the recent demand, the duo returns with a virtual training updated to reflect the continually evolving AI landscape.
August 11-12, 2025 - Harnessing LLMs for Application Security - Back to DEF CON, but this time with an updated course. Integrating LLMs into AppSec from static to dynamic analysis. Tips, tricks and more.