Episode #281

Signing Models, Vibe Coding, GitHub Action Abuse

This week on the 281st episode of Absolute AppSec, Ken Johnson (@cktricky) and Seth Law (@sethlaw) are back to discuss securing machine learning models using Sigstore, based on a recent blog post from Google Security. They follow with some hot takes on vibe coding and its effects on application and product security. To find this episode, go to https://www.youtube.com/@AbsoluteAppSec, or find us wherever you get your podcasts.

“I have a hard time with these claims that we’re securing the underlying architecture of [machine learning] systems and infrastructure, and then claiming that it’s taking care of problems that are running on higher layers of interactions with that infrastructure […] Because on the one hand, …, that’s great, because now as a developer I can make sure that my provider is giving me the model that I expected of them and that I developed against. But if I flip around and then tell all my users, hey, I’m secure now because I’m checking this model signature, I’m doing a disservice for the actual security of my prompts and everything else that goes into it.”

Seth

Recently, a blog post from Google titled “Taming the Wild West of ML: Practical Model Signing with Sigstore” sparked a discussion of recent developments in machine learning security. The post addressed the need for verifiable integrity in machine learning models, especially those shared via public platforms like Hugging Face. Ken welcomed the development as a necessary and anticipated step, recalling recent incidents in which malicious uploads and leaked credentials left compromised ML models publicly available. These incidents echo traditional software supply chain attacks, underscoring the importance of securing the AI ecosystem in much the same way as conventional package repositories like npm or PyPI. Seth added that while signing models can protect against some types of supply chain attacks, it does little to mitigate many of the threats unique to LLMs, such as prompt injection, prompt evasion, data poisoning, or prompt chaining; these are interaction-level threats that signing cannot address. He likened the situation to early web security claims where websites boasted military-grade encryption simply because they enabled HTTPS, misleading users about the actual level of protection. The concern, then, is that security vendors may overstate the benefits of model signing, leading developers or users to believe they are fully protected when they are not.
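
For a concrete picture of what model signing actually protects, here is a minimal Python sketch of the underlying integrity idea: hash every file in a model directory into a manifest and check it against a trusted copy. This illustrates the concept only; the real Sigstore-based flow described in Google's post additionally binds that manifest to a signer identity and a transparency log, and the function names below are illustrative rather than the model_signing library's actual API.

    import hashlib
    from pathlib import Path

    def model_manifest(model_dir: str) -> dict[str, str]:
        """Map each file under the model directory to its SHA-256 digest."""
        root = Path(model_dir)
        return {
            str(path.relative_to(root)): hashlib.sha256(path.read_bytes()).hexdigest()
            for path in sorted(root.rglob("*"))
            if path.is_file()
        }

    def verify_model(model_dir: str, trusted_manifest: dict[str, str]) -> bool:
        """Accept the model only if every file digest matches the trusted manifest."""
        return model_manifest(model_dir) == trusted_manifest

As Seth's caveat above makes clear, a passing check here only tells you the bytes are the ones the publisher signed; it says nothing about prompt injection or the other interaction-level threats.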

“I feel like [vibe coding] is the future, right? We learn how to architect an application, how it’s well structured, how security requirements should actually be fed, […] make that the background for the code that’s coming out of it. Especially if you have things like specific code standards that you can supply, the code might be fairly good and I can see using that, especially where I have used GitHub’s Copilot in the past, and it has increased my own productivity, […] But I would still like someone in charge of that process that has a background in both security and software engineering in order to generate that code, to make sure that it is up to snuff.”

Seth

From there, the hosts shifted toward a more speculative and critical examination of “vibe coding,” a term for using LLMs to generate entire software applications from prompts rather than through traditional manual coding. They explored the potential and the risks of this new development, with Ken raising concerns about security, maintainability, and the overall trustworthiness of AI-generated code. The problem, both agreed, lies in the details, especially when less experienced developers begin to depend on these tools without sufficient oversight. Ken recalled a time when an LLM corrected a typo in a function name and the change caused a cascade of issues across a codebase, caught only thanks to robust automated testing. He emphasized that human attention, particularly from experienced developers, remains essential when evaluating LLM-generated code.

Seth and Ken also discussed the announcement of SecGemini, Google’s new security-focused LLM. While public information is still limited, the model is said to outperform other LLMs on the Cybersecurity Threat Intelligence Root Cause Mapping benchmark, which evaluates an LLM’s ability to interpret vulnerability descriptions, identify their root causes, and classify them according to the Common Weakness Enumeration (CWE) taxonomy. This is a meaningful step forward in LLM capabilities, especially in areas where understanding nuance and context is crucial. If SecGemini can aid or even automate SCA workflows, it could reduce burdens and improve accuracy in vulnerability triage. There is also potential for threat detection and monitoring, particularly in analyzing application traffic or malware patterns. This growing trend of niche, purpose-built commercial LLMs, if developed and supported effectively, could mean that more refined and accurate experiences with models are on the horizon.
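
To make that benchmark concrete, the sketch below shows the shape of a root-cause mapping task: hand a model a vulnerability description and ask for a CWE identifier. SecGemini itself has no public API at the time of writing, so an OpenAI-compatible chat client and model name stand in here purely as an assumed example; the prompt wording is illustrative, not taken from the benchmark.

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    def map_to_cwe(vuln_description: str) -> str:
        """Ask a chat model to classify a vulnerability's root cause as a CWE ID."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in model; not SecGemini
            messages=[
                {"role": "system",
                 "content": "Classify the root cause of the vulnerability below. "
                            "Reply with a single CWE identifier, e.g. CWE-79."},
                {"role": "user", "content": vuln_description},
            ],
        )
        return response.choices[0].message.content.strip()

    print(map_to_cwe("The login page reflects the 'next' parameter into the "
                     "response without encoding, allowing script injection."))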

This episode was sponsored by Redpoint Security. Redpoint specializes in "Code Security by Coders," bolstered by years of experience testing applications and conducting code reviews against all types of apps, including AI, web, and mobile. Redpoint also offers secure code training to help ground your teams in better security practices across the development lifecycle. Check out redpointsecurity.com for more information and put your company on a path to better security.

Springtime sunshine can sneakily scorch noses that have grown too accustomed to a winter spent indoors. In which case, maybe it’s time to consider a hat to keep you safe from the rays. Visit the merch store to support the show and be prepared for the increasing sun exposure the rest of the year will bring.

And, as always, if you have thoughts you’d like to share with Seth and Ken and the Absolute AppSec audience, join us in Slack.

Stay Secure,

Seth & Ken

https://youtu.be/8YtO-0kuM8M - Episode #128 - Seth hosts Stefan Edwards (@lojikil) and David Coursey (@dacoursey) to discuss PHP's recent backdoor, probable fixes including code commit signing and the move to GitHub. The discussion covers ease of security, developer tendencies when securing code, and application security nihilism.

https://youtube.com/live/OxUYDV1XRh4 - Episode #221 – Seth and Ken talk about the use of coding requirements and projects during interviews for application security. Both have had experience on both ends and have opinions. This is followed by reactions to the recent breach and data dumps from 23andMe. Finally, new AI tools are starting to emerge that will help security find and fix vulnerabilities.

https://youtu.be/klByndbwB0s - Episode #114 - Seth and Ken discuss account enumeration vulnerabilities and open source tools that take advantage of them, followed by discussion of the recent GitHub Actions vulnerability.

Absolute AppSec Happenings

https://ostro.ws/post-timing-attacks - Timing attacks are fascinating, at least to Seth when he’s wearing his Crocs and Socks. This article does a deep dive into the causes of, and techniques for exploiting, timing issues when comparing user-provided data (ahem, passwords) to sensitive values. So many things can go wrong, and so many possibilities for exploitation and discovery.
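
To see the pitfall the article describes in miniature, here is a short Python sketch, assuming token comparison as the example: a naive == short-circuits at the first mismatched character, so response time leaks how much of the secret an attacker has guessed, while hmac.compare_digest takes time independent of the contents.

    import hmac

    def naive_check(user_token: str, real_token: str) -> bool:
        # Short-circuits on the first differing character: a timing side channel.
        return user_token == real_token

    def constant_time_check(user_token: str, real_token: str) -> bool:
        # Examines the full input regardless of where a mismatch occurs.
        return hmac.compare_digest(user_token.encode(), real_token.encode())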

https://www.landh.tech/blog/20250211-hack-supply-chain-for-50k/ - Real-world bug bounty research hacking a supply chain. These sorts of bounties (and attacks) are not going away. Hence the talk on Sigstore and model signing, but across the board any watering-hole attack can have devastating consequences.

https://blog.trailofbits.com/2025/02/28/threat-modeling-the-trail-of-bits-way/ - This came out a few weeks ago, but threat modeling is always relevant. A lightweight threat modeling process from Trail of Bits. Given how important effective and repeatable threat modeling is becoming, any new approach that helps an organization identify gaps and increase security is welcome. Check it out if you are looking for fresh ideas in threat modeling.

Upcoming Events

Where in the world are Seth and Ken?

April 26-27, 2025 - BSides San Francisco 2025 - State of (Absolute) AppSec - Ken and Seth will be hosting a panel on the current state of the application security industry with help from a few friends. Expect spicy takes, opinions, and wild predictions. If you have questions you would like to cover, submit them via this form.

April 28-May 1, 2025 - RSA - Seth and Ken will be in the environs as well for RSA, with a special event that will be taking place on April 28th (“Aegis of Tomorrow: An AI & Security Summit”). Reach out to us if the private event (with Seth, Ken, Jason Haddix, and Daniel Miessler among others) looks like it could be of interest to you, and generally let us know if you’ll be around during the conference and you’d like to catch up.

May 12-13, 2025 - Practical Secure Code Review - AI Enhanced - Given the recent demand, the duo returns with a virtual training updated to cover the continually evolving AI tooling.

May 21, 2025 - Lean AppSec Live - Seth will join Endor Labs (along with Jeevan Singh) to talk through Modern Security Code Reviews, touching on AI updates in the Absolute AppSec secure code review framework.

August 11-12, 2025 - Harnessing LLMs for Application Security - Back to Vegas, but this time with an updated course. Integrating LLMs into AppSec from static to dynamic analysis. Tips, tricks and more.