Episode #319 - Vercel Breach, Security vs. Compliance, Pull Request Flows w/ AI Agents

Vercel and the risks of AI third-party dependencies, what security is and is not, and planning the potential funeral of pull request flows in the SDLC.

This week on Absolute AppSec, Ken (@cktricky) and Seth (@sethlaw) are far away from home at DEF CON SP. So until they return, let’s dive into last week’s episode, in which they focused on the recent Vercel security incident and what it tells us about the evolving landscape of AI-driven compliance. This leads to a discussion about the SOC2 provider Delve, with Seth and Ken addressing allegations of “fake” compliance automations and the general limitations of auditing frameworks, which do not inherently equate to true security. Then, Seth and Ken debate the future of the pull request flow and whether traditional human-led code reviews are dead given the massive volume of code generated by AI agents. To catch us live next week, head over to https://www.youtube.com/@AbsoluteAppSec/streams at 10 AM PT/1 PM ET. You can also find our 318 other episodes on Apple Podcasts, Spotify, or wherever you get your podcasts.

“Meeting compliance regulations does not mean that you’re secure[…]Every breach that’s happened against an organization that has been audited in some manner [shows that] compliance does not equal security.”

Seth

The episode centers on the Vercel breach and the cascading security failures of its third-party dependencies, specifically Context AI. A closer look at the mechanics of the incident shows that it began when a Vercel employee authorized a consumer-level Context AI application within their Google Workspace. That authorization granted Context AI broad permissions, which became a critical vulnerability when Context AI’s own environment was breached: the attackers leveraged compromised OAuth tokens to pivot from the Context AI consumer suite into Vercel’s internal environments. Ken notes that while Vercel allows users to mark environment variables as sensitive to prevent them from being read back, any variables not marked as such were exposed during the leak. The incident is a stark reminder of the price of pushing the “easy button”: engineers granting broad permissions to tools for the sake of convenience, despite knowing better.
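The over-permissioning at the root of the incident is easy to detect in principle: compare the scopes a third-party app was actually granted against a narrow allowlist. Here is a minimal sketch of that check; the scope strings are real Google API scopes, but the allowlist and the example grant are illustrative assumptions, not Vercel’s actual policy.

```python
# Hypothetical sketch: flag OAuth grants whose scopes exceed an allowlist.

# Narrow scopes we might consider acceptable for a third-party app.
ALLOWED_SCOPES = {
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/userinfo.profile",
}

def flag_broad_scopes(granted_scopes):
    """Return the subset of granted scopes outside the allowlist."""
    return sorted(set(granted_scopes) - ALLOWED_SCOPES)

# Example: a grant like the one described in the episode, where a
# consumer-level app ends up with broad Workspace access.
grant = [
    "https://www.googleapis.com/auth/userinfo.email",
    "https://www.googleapis.com/auth/drive",           # full Drive access
    "https://www.googleapis.com/auth/gmail.readonly",  # read all mail
]

for scope in flag_broad_scopes(grant):
    print(f"over-broad scope: {scope}")
```

In a real Workspace environment, the same comparison can be run against the admin console’s list of authorized third-party apps, so grants like this get flagged before a vendor breach turns them into a pivot point.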

They then pivot to an examination of security attitudes surrounding compliance, sparked by the discovery that Context AI used a SOC2 automation startup called Delve. There are allegations that Delve may have fabricated evidence or provided fake compliance checks through AI-generated documentation that lacked human oversight. Seth is adamant that compliance, whether SOC2, PCI, or ISO, does not equate to true security. There is an inherent conflict in low-cost compliance automation: customers naturally gravitate toward the cheapest checkmark they can get, often leading to a process where intermediaries generate control documentation that doesn’t reflect operational reality. The hosts view this as a cautionary case study in how AI-driven automation can inadvertently weaken security by removing the necessary human element from critical audits.

“The pull request flow takes too long in the traditional SDLC, but I would argue that the overall idea is not dead[…]because I still see errors being made by the agents themselves. And so we’ve got to do some sort of monitoring on what those agents are doing.”

Seth

Finally, Seth and Ken predict the future of the Software Development Life Cycle, tackling Ken’s provocative question about the potential death of the pull request flow. Ken discusses how some organizations, such as Notion, are moving toward more hands-off, agentic workflows where AI agents write and commit code directly. While a survey of their community showed a split between those who believe the PR flow is permanent and those who see it vanishing within a few years, Ken and Seth agree that the traditional human-heavy review process is struggling to keep pace with the sheer volume of AI-generated code. Seth suggests that the PR flow is already dead in its current form for agile startups, evolving instead into a system of agentic guardrails and automated attestations. Both agree that while the need for historical record-keeping and change management remains, the mechanism of trust will likely shift toward signed commits and perhaps even blockchain-style attestations to verify that security and functional reviews were performed by authorized agents.
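The attestation idea Seth describes can be sketched very simply: an authorized review agent signs a claim binding a commit SHA to the review it performed, and the merge gate verifies that claim. This is a toy illustration only; real systems would use asymmetric signatures (GPG-signed commits, Sigstore, or similar) rather than the HMAC shared key assumed here.

```python
import hashlib
import hmac

# Hypothetical shared key held by the review agent; a stand-in for a
# real signing identity.
AGENT_KEY = b"review-agent-secret"

def attest(commit_sha: str, review_type: str) -> str:
    """Sign a claim that `review_type` was performed on `commit_sha`."""
    claim = f"{commit_sha}:{review_type}".encode()
    return hmac.new(AGENT_KEY, claim, hashlib.sha256).hexdigest()

def verify(commit_sha: str, review_type: str, signature: str) -> bool:
    """Check an attestation before allowing the commit to merge."""
    expected = attest(commit_sha, review_type)
    return hmac.compare_digest(expected, signature)

sig = attest("a1b2c3d", "security-review")
print(verify("a1b2c3d", "security-review", sig))    # True
print(verify("a1b2c3d", "functional-review", sig))  # False: wrong claim
```

The point is that the record of "who reviewed what" becomes a verifiable artifact attached to the commit, rather than an approval button in a PR UI.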

This episode was sponsored by Redpoint Security. Redpoint specializes in "Code Security by Coders," bolstered by years of experience testing applications and conducting code reviews against all types of apps, including AI, web, and mobile. Redpoint also offers developer appsec and secure-code training to help ground your teams in better security practices across the development lifecycle. Check out redpointsecurity.com for more information and put your company on a path to better security.

Sprouts are coming up, so now’s the time to display your solidarity along with your support for the podcast. Check out the selection of green tees in our merch store (there are other colors as well). T-shirts are great gifts for yourself or your friends:

Spring green tees are available now!

If you want to debate the future of pull requests, the safety of AI third-party dependencies, or anything else AppSec-related, come do so in our Slack, and maybe stay a while.

Stay Secure,

Seth & Ken

https://www.youtube.com/watch?v=VreqmGPAK7I – Episode #189 - Security Bypasses, AppMap, Dastardly - Seth and Ken discuss the ways users work around security constraints when those constraints get in the way of usability, as in the health care industry, where screen lockouts can undermine speed of care in serious situations.

https://www.youtube.com/live/jALpBoAKiB8 – Episode #236 - Memory Safe Languages, LLM Supply Chain Security - An early instance of LLM supply-chain risks is discussed in this episode with Hugging Face models repository being used for watering-hole attacks.

https://www.youtube.com/watch?v=hjnSfZLyKM0 – Episode #257 - In-Person vs. Virtual Training, Compliance Violations - The podcast duo discusses the experience of delivering trainings in person versus virtually (shoutout to everyone who came to DEFCON Singapore for Harnessing LLMs). Also, the compliance violations that led the government to bring a lawsuit against Georgia Tech raise similar discussion points about what compliance is versus what security is.

Absolute AppSec Happenings

AI Full-Stack Development: The Anti-Patterns Rise Against Us - Part 1 – Using the recent Vercel and Lovable sensitive data exposures as a jumping-off point, this article warns that AI-driven development is fostering dangerous "anti-patterns," where developers over-rely on LLM-generated code without fully comprehending it. This shift leads to "hallucinated dependencies," bloated codebases, and critical security vulnerabilities.

LLM-tier personal computer security – As LLMs lower the cost of finding exploits and executing phishing, individual security must evolve. The author recommends moving toward a "zero-trust" personal setup: sandboxing risky software using tools like Flatpak or VMs, implementing strict firewalls to block unauthorized egress, and using hardware security keys. The core takeaway is to treat amateur software as potentially compromised while prioritizing "consciousness hygiene" and robust authentication to weather the increasing AI-driven threat landscape.
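The article’s egress recommendation boils down to default-deny: permit only known destinations and block everything else. A minimal sketch of that policy logic follows; the hostnames are illustrative assumptions, and in practice the rule set would live in an actual firewall (nftables, an application firewall, etc.), not in application code.

```python
# Sketch of a default-deny egress allowlist. Destinations are
# hypothetical; real enforcement belongs in a firewall.
EGRESS_ALLOWLIST = {
    ("updates.example.com", 443),  # hypothetical update server
    ("api.example.com", 443),
}

def egress_allowed(host: str, port: int) -> bool:
    """Default-deny: permit only (host, port) pairs on the allowlist."""
    return (host, port) in EGRESS_ALLOWLIST

print(egress_allowed("api.example.com", 443))      # True
print(egress_allowed("telemetry.evil.test", 443))  # False: blocked
```

Under this model, sandboxed or "amateur" software that tries to phone home to an unlisted host simply fails, which is the behavior the zero-trust setup is after.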

Defending Our Consciousness Against the Algorithm - Michael Pollan argues that algorithms and AI erode our consciousness by colonizing what he calls “generative boredom,” the first step in genuine creativity. By replacing spontaneous thought and real human connection with digital distraction, a neurological highway away from unpleasant thoughts, our mental muscle weakens. To defend our interiority, we must embrace idle moments and practice "consciousness hygiene" to reclaim our minds.

Upcoming Events

Where in the world are Seth and Ken?

August 1-4, 2026 - AI-Enhanced Secure Code Review: Black Hat Edition - Seth and Ken are bringing a four-day exclusive course to Black Hat, an updated version of the exclusive course offered at Black Hat Europe. Early-bird pricing is ongoing, so it’s a great opportunity to get a truly in-depth understanding of secure code review and how it can be empowered through LLM tooling. Seth and Ken have pioneered industry-leading trainings on both of these topics, so this four-day course promises a lot of valuable insight.