LIVE at BSidesSF w/ Ariel Shin & Lakshmi Sudheer

Secure-by-Default, Vibe Coding, and Going Back to Basics

The Treasure Map we used to find a live Absolute AppSec episode

This week on Absolute AppSec, we’re covering our recent episode that was LIVE at BSides San Francisco! Seth (@sethlaw) and Ken (@cktricky) welcomed Ariel Shin and Lakshmi Sudheer, in person, for their panel, ‘The State of (Absolute) AppSec’. Thanks to everyone who came to see Seth and Ken offline and up close! While this episode was a conference exclusive, go to https://www.youtube.com/@AbsoluteAppSec to find nearly 300 past episodes (and we’ll let you know when BSidesSF posts the whole thing).

Q1: What’s working in product security today, and what’s still broken despite our best efforts?

The panel opened with a cautiously optimistic look at what’s actually working in product security. Ariel pointed to the steady adoption of secure-by-default practices as a positive trend. By baking security into defaults at the platform level, organizations can eliminate whole classes of vulnerabilities before they ever make it into production.
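To make the idea concrete, here’s a minimal sketch of that kind of “paved road” helper. This is our illustration, not anything from the panel, and the PavedRoadDB name and API are hypothetical, but the principle is real: if the platform only exposes parameterized queries, SQL injection disappears as a class instead of being caught bug by bug.

```python
import sqlite3
from typing import Any, Sequence

# Hypothetical "paved road" database helper: application code never
# builds SQL strings by hand, so SQL injection is eliminated as a
# class rather than hunted down case by case.
class PavedRoadDB:
    def __init__(self, path: str):
        self._conn = sqlite3.connect(path)

    def query(self, sql: str, params: Sequence[Any] = ()) -> list:
        # Parameters are always bound by the driver, never interpolated.
        return self._conn.execute(sql, params).fetchall()

db = PavedRoadDB(":memory:")
db.query("CREATE TABLE users (name TEXT)")
db.query("INSERT INTO users VALUES (?)", ("alice",))
# Attacker-controlled input is treated strictly as data:
rows = db.query("SELECT * FROM users WHERE name = ?", ("alice' OR '1'='1",))
print(rows)  # [] -- the injection attempt matches nothing
```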

But as Seth noted, the pace of adoption has been slower than many hoped, especially in smaller or fast-moving startups. As a third-party consultant, he still sees the same recurring vulnerabilities year after year. While there are more discussions than ever about creating a unified “secure” framework or default stack, the actual implementation on the ground hasn’t caught up.

“We created paved roads, or secure defaults, to be sure that we eliminate certain systemic issues and also raise the bar on [baseline] security [measures]. That doesn’t mean that everything’s gone, that doesn’t mean that [secure by default] just sits there and we don’t evolve it, and it’s still going to be fine. Today in the industry, one of the consistent challenges we are facing is around [maintaining secure default standards while there are] newer domains we are getting into, the world is moving towards AI […]. We are increasing at such speeds as organizations, and from a technology perspective. These paved roads cannot catch up, so we have some trails here and there with rudimentary fences we use as guard rails. I think that is the root as to why these secure defaults are probably not delivering as much value.”

Lakshmi

Lakshmi offered the central metaphor of the Netflix security program, one that has achieved wider buy-in across the industry: many organizations are building “paved roads,” the pre-approved, secure-by-default paths developers can follow. The challenge is that even these roads can lead off cliffs if they’re not maintained. As technologies evolve (especially with the rise of AI) and development practices accelerate, those secure defaults can quickly become outdated. She stressed that while secure defaults raise the baseline, they’re not a panacea and should never be treated as ‘set it and forget it.’

The panel agreed that the key to success here isn’t just tooling—it’s relationships. Ariel noted that AppSec teams often act as the connective tissue between engineering, platform teams, and leadership. But without trust and alignment, even the best secure-by-default initiatives will falter. Lakshmi echoed this, highlighting Netflix’s long-standing investment in empathy-driven collaboration with developers. You need to know what your engineers are working on, what they’re struggling with, and how you can help—otherwise, you’re not a partner, you’re just noise.

Seth agreed, sharing from his experience that security is often treated as a compliance checkbox rather than something integrated from the beginning. He asked the panel for their perspectives on building trust quickly in environments where AppSec doesn’t have day-to-day relationships with developers. Rather than entering with criticism, he’s found success in approaching teams with curiosity: asking what their biggest concerns are, listening to their responses, and building rapport organically. This approach leads to more effective collaboration and, eventually, to measurable reductions in vulnerabilities. Ariel noted that there’s been a subtle but important shift in developer culture thanks to ongoing training and more intuitive tooling. Today, many developers are security-aware by default, sometimes without even realizing it. The secure choice is becoming the easier one.
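One hypothetical illustration of the secure choice becoming the easy one: a platform helper that hands developers hardened session cookies by default, so the safe attributes come for free. The session_cookie function below is our invention, not anything discussed on the panel.

```python
# Hypothetical helper illustrating "the secure choice is the easier one":
# developers call one function and get hardened cookie attributes without
# having to remember (or even know about) them.
def session_cookie(name: str, value: str, max_age: int = 3600) -> str:
    # Secure, HttpOnly, and SameSite are on by default; opting *out*
    # would require deliberately building the header by hand.
    return (
        f"{name}={value}; Max-Age={max_age}; Path=/; "
        "Secure; HttpOnly; SameSite=Lax"
    )

print(session_cookie("session", "abc123"))
# session=abc123; Max-Age=3600; Path=/; Secure; HttpOnly; SameSite=Lax
```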

Q2: How do AppSec teams deal with AI-generated code that gets merged with zero human eyes?

“We are gonna see breaches related to this, […] but we have this with every new technology that pops up […] [We need to remember] the basics of security, introduce those same concepts into AI, into vibe coding […] it all goes back to those basics. We don’t want to talk about them because they aren’t very shiny and it isn’t super interesting from an attack perspective, but that’s where the flaws happen and that’s where we lose data and organizations get attacked.”

Seth

The conversation then shifted to one of the most talked-about trends in the industry: “vibe coding.” Ken offered a working definition: developers prompting generative AI tools to produce code snippets on the assumption that someone more experienced will eventually review the output. In theory, this could accelerate development while preserving quality, but Ken questioned whether vibe coding too often comes from developers without the expertise to critically assess what the AI produces. Still, he was cautiously optimistic, seeing potential if review and oversight are part of the process. Ariel pointed out that the real danger isn’t the AI—it’s the lack of critical thinking and proper review. She described the core risk of vibe coding as an overreliance on generative AI without understanding the underlying tradeoffs or purpose of the code being written. The concern isn’t entirely new, since developers have always submitted vulnerable code, but the speed and scale of AI output introduce fresh risks. Without foundational security knowledge, AI merely automates bad practices.

Seth framed it as a maturity issue, both for the AI models and for the organizations using them. If trained on high-quality, secure code, these tools could eventually offer valuable defaults, and he’s willing to rely on AI-generated code in due course, but the state of the art isn’t there yet. Right now, he said, we’re already seeing sloppy AI-generated code get pushed to production, sometimes without human eyes ever touching it. When this ‘AI code slop’ makes its way into the production environment, breaches are nearly guaranteed to follow close behind. We’ve seen this film before with countless other technologies: if organizations fail to teach and enforce security fundamentals, the new tooling will amplify the risks.
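As a sketch of one possible guardrail against “zero human eyes” merges (our own, not anything prescribed on the panel), a CI job could refuse to pass until a human has approved the pull request. This example uses GitHub’s pull request reviews API; the org, repo, and environment variable names are placeholders to adapt to your own pipeline.

```python
import os
import sys
import requests

# Rough CI guard: fail the build unless at least one *human* (not a bot)
# has approved the pull request. Values below are illustrative.
def has_human_approval(owner: str, repo: str, pr_number: int, token: str) -> bool:
    resp = requests.get(
        f"https://api.github.com/repos/{owner}/{repo}/pulls/{pr_number}/reviews",
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return any(
        review["state"] == "APPROVED"
        and (review.get("user") or {}).get("type") == "User"
        for review in resp.json()
    )

if __name__ == "__main__":
    ok = has_human_approval(
        "example-org", "example-repo",
        int(os.environ["PR_NUMBER"]),      # placeholder env vars
        os.environ["GITHUB_TOKEN"],
    )
    sys.exit(0 if ok else 1)
```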

Lakshmi noted that the biggest change from the past isn’t the concept of bad code—it’s the scale at which code can now be generated and merged without review. That scale, she said, is what scares her. Secure-by-default mechanisms would have to rapidly evolve to match this unprecedented output volume.

Ken offered a more optimistic take. He sees the rise of AI tooling as a “forcing function” for the industry to get serious about security fundamentals. If done right, the shift toward automated code generation could finally push more teams to build security into their core development process, and not just bolt it on later.

Q3: Are modern breaches exposing new blind spots in our security programs, or just showing we’re not doing the basics right?

“My concern is that, do you think those novel attacks distract you from focusing [where you should prioritize?] It’s the new shiny thing, and your CSO comes to you because they heard about it and they say to prioritize your resources towards securing genAI, but is that really where you should be concerned?”

Ariel

On this question, the panel seemed to be in agreement: the latest breaches mostly reflect a failure to get the security basics right. Ariel rattled off the familiar culprits: credential stuffing, poor MFA coverage, exposed secrets. These aren’t cutting-edge attacks—they’re just taking advantage of organizations that still don’t log properly, don’t authenticate consistently, and don’t rotate credentials. Lakshmi acknowledged this but pushed the conversation forward, adding that the growing role of AI in development and operations might introduce a genuine shift. As AI handles more complexity behind the scenes, the bar for entry appears lower, potentially making teams more complacent. Her worry isn’t just the vulnerabilities we know, but the ones we haven’t imagined yet. Is the industry prepared for what comes next?
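Those basics are unglamorous precisely because the countermeasures are simple. As a rough sketch (our own, with illustrative limits rather than recommended values), a per-account failure throttle is the kind of unexciting control that blunts credential stuffing.

```python
import time
from collections import defaultdict, deque

# Bare-bones login throttle of the kind "the basics" imply: cap recent
# failed attempts per account to slow credential stuffing.
# WINDOW_SECONDS and MAX_FAILURES are illustrative, not recommendations.
WINDOW_SECONDS = 300
MAX_FAILURES = 5
_failures: dict[str, deque] = defaultdict(deque)

def allow_login_attempt(username: str) -> bool:
    now = time.monotonic()
    attempts = _failures[username]
    # Drop failures that have aged out of the window.
    while attempts and now - attempts[0] > WINDOW_SECONDS:
        attempts.popleft()
    return len(attempts) < MAX_FAILURES

def record_failure(username: str) -> None:
    _failures[username].append(time.monotonic())
```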

Ariel challenged this framing, asking whether the allure of novel attacks distracts AppSec teams from what truly matters. Are teams focusing too much on the hypothetical and not enough on reinforcing trust and communication with developers? The fundamentals, she argued, still offer the best return on investment.

Seth brought the conversation back to risk prioritization, highlighting a common problem he sees in the field: misalignment between what security professionals identify as critical and what the business chooses to address. He recounted a recent case where resources were being poured into chatbot integrations on marketing pages, while a major breach was unfolding due to credential exposure. As a third-party consultant, he said, it’s difficult to realign those priorities after the fact—especially if he wasn’t in the room when decisions were made. The disconnect between actual risk and perceived business value continues to leave organizations exposed.

Lakshmi expanded on this, pointing to a fundamental gap in how the industry understands and communicates risk. OWASP and other frameworks often define risk in abstract, theoretical terms—but real-world exploitability doesn’t always line up with those models. She asked whether we’re building our security strategies based on what attackers actually do or simply following best practices for the sake of appearances. To her, the industry needs to shift away from “checking boxes” and instead ground its strategies in threat intelligence and real incident data. That also means having the courage to challenge developers and product teams when real risk is being ignored.

This episode is sponsored by DryRun Security. Authorization flaws are some of the trickiest security gaps—permissions get messy, logic gets overlooked, and suddenly, users have access they shouldn't. DryRun Security helps you catch these risks early with Natural Language Code Policy (NLCP). Their latest white paper dives deep into real-world authorization failures and discusses how DryRun Security finds them before they get shipped to production. Grab your copy now at dryrun.security/auth-flaws.

We know you sometimes have chill nights at home where you’d like to host some friends and play a game or work on a puzzle. The Absolute AppSec merch store can help make that happen! This 500-piece puzzle will give you a nice chunk of time to chat and catch up with old friends who may not even know about the podcast yet. If you’d like to support the podcast with the puzzle or other swag, visit the merch store here!

And, if you have thoughts you’d like to share with Seth and Ken and the Absolute AppSec audience, join us in Slack. It’s also sure to be the place where you’ll hear about the late-night bull session on vibe coding that gets organized for the next conference, so don’t miss out. In the meantime, you can offer ideas for what you’d like the podcast to cover and pick the brains of a number of industry experts.

Stay Secure,

Seth & Ken

https://www.youtube.com/watch?v=xjegAGhmFYc - CactusCon Panel January 2023 - It’s been a scant two-and-a-half years since this earlier State of (Absolute) AppSec panel was recorded at CactusCon in Mesa, AZ. CactusCon co-founder Andrew Wilson and Mike McCabe of Cloud Security Partners joined Seth and Ken on the panel. It’s a snapshot of a time when LLMs were getting some uptake but weren’t integrated into developers’ and security engineers’ day-to-day work to the degree they increasingly are now.

https://www.youtube.com/watch?v=Sm3K6x5J0dQ - Episode #262 with Ariel Shin - Get to know co-panelist Ariel Shin a bit better in the stand-alone episode she did with Seth and Ken back in October. Here she shows in more depth how deeply she thinks about engaging people from all parts of security and development teams to strengthen a security program.

https://www.youtube.com/watch?v=M9JS8AeB4eA - Episode #28 - Astha Singhal - Discussing security program development at Netflix. Netflix has been a DevSecOps and Security Program paragon for a good number of years, and co-panelist Lakshmi Sudheer has been a key contributor to that success. Watch this episode with her colleague Astha to find out more.

Absolute AppSec Happenings

https://hackerfactor.com/blog/index.php?/archives/1066-Hello-my-perverted-friend.html - An interesting write-up from Hacker Factor providing some nice background history of the internet and how spam email was once used as a covert vector for communications. It makes some nice points about how little overlap there is in the Venn diagram between marks susceptible to spam-based fraud and proficient Bitcoin users.

https://jessie.cafe/posts/bypassing-a-waf-with-autocomplete/ - “Bypassing a WAF with Autocomplete” - A cool write-up from Jessie’s Cafe. A key question within lets us know what’s going on: “Can we somehow craft a payload that slips past the WAF, is then transformed by this autocomplete feature into something malicious, and then reflected to the victim’s browser?”

https://pulse.latio.tech/p/rsac-2025-thoughts-and-takeaways - James Berthoty of Latio Tech recaps RSAC 2025 for those who were unable to attend. One interesting, provocative take: there actually wasn’t too much AI at RSAC. Read the whole thing for more of his thoughts.

Upcoming Events

Where in the world are Seth and Ken?

May 21, 2025 - Lean AppSec Live - Seth will join Endor Labs (along with podcast guest Jeevan Singh) to talk through Modern Security Code Reviews, touching on AI updates in the Absolute AppSec secure code review framework.

June 16-17, 2025 - Practical Secure Code Review - AI Enhanced - POSTPONED FROM May 12-13, 2025 - Given recent demand, the duo returns with a virtual training that includes all the continually evolving AI updates.

August 11-12, 2025 - Harnessing LLMs for Application Security - Back to DEF CON, but this time with an updated course. Integrating LLMs into AppSec from static to dynamic analysis. Tips, tricks and more.