- This Week on Absolute AppSec
- Posts
- Episode #292 - The Return of AI Slop
Episode #292 - The Return of AI Slop
Manual Source Code Review, AI Slop in Bug Bounties, AppSec Authorization
Last week on Absolute AppSec, Seth (@sethlaw) and Ken (@cktricky) came back for the 292nd episode to talk through recent industry news. First, the edge cases discovered during manual code review that require context, Seth’s favorite buzzword, to understand and identify. Then, inspired by a recent post on AI Slop in the curl bug bounty program, the duo addresses the increase of slop across bug bounty reports and why it happens. Finally, they discuss a recent McDonald’s authorization flaw that potentially exposed millions of job applicants’ data. Find this episode at https://www.youtube.com/@AbsoluteAppSec, or wherever you get your podcasts.

Update those taglines
“It’s not a malicious thing, right? They think that AI is actually generating something and has found something […] They think they’re doing the open-source community a favor by reporting this because it was identified by the generative AI. […] [And] The fact that there are huge payouts for critical vulnerabilities in a lot of platforms or a lot of bug bounty programs means it’s incentivized to actually submit and try and find something in an easy manner using LLMs, that is only going to increase the amount of AI slop over time.”
Our hosts dive into discussing an article by Daniel Sternberg, a member of the curl team, called ‘Death By A Thousand Slops’, which discusses AI Slop in bug bounty programs. The post references a previous incident where a made-up vulnerability, likely generated by ChatGPT, was submitted to curl. Daniel’s article highlights the overwhelming volume of AI-generated reports, which often have unusually good grammar and polite language, unlike typical bug bounty submissions that Seth and Ken would recognize. In 2025, only 5% of submissions to curl’s program were genuine vulnerabilities, a significant decrease from previous years. This AI slop has many implications, including the increased cost and time required to triage reports. Seth and Ken consider potential solutions, like charging a small fee for submissions, but dismiss it as possibly de-incentivizing legitimate researchers. Private bug bounty programs are also a possible solution, but those rely on researcher reputations, which might prove challenging for open-source projects like curl. Ultimately, the sustainability of open-source bug bounty programs in the face of escalating AI-generated garbage is in question, which may result in the implementation of AI-powered filters to manage the influx.
“Inevitably, things are going to slip through the cracks, whether you got the best AppSec team or the best tooling. I feel like that’s just an easy one to have, you know, IDOR (Insecure Direct Object Reference), just kind of falling into your code base. […] It’s usually a sign of weak architecture or architecture follow-through.”
Sometimes it feels good to talk about a good old-fashioned AppSec vulnerability. This one involved McDonald’s chatbot recruitment platform, Paradox AI. Researchers Sam Curry and Ian Carroll reportedly found test credentials that allowed them to log into the administrative interface. From there, they exploited an Insecure Direct Object Reference (IDOR) vulnerability, also known as Broken Object Level Authorization (BOLA), to enumerate and potentially access applicant details. The hosts reiterated that such vulnerabilities, particularly IDOR and default credentials, are common even in large, established companies due to misconfigurations and architectural weaknesses. They highlighted the difficulty of detecting complex IDORs, which require a deep understanding of the application’s authorization architecture and various data access patterns. Despite its commonality, the McDonald’s incident gained traction due to the company’s size and the nature of exposed applicant data.
This episode was sponsored by Redpoint Security. Redpoint specializes in "Code Security by Coders," bolstered by years of experience testing applications and conducting code reviews against all types of apps, including AI, web, and mobile. Redpoint also offers developer appsec and secure-code training to help ground your teams in better security practices across the development lifecycle. Check out redpointsecurity.com for more information and put your company on a path to better security.
Beat the summertime rays with the tested and approved (check out the customer feedback below!) Dad hat. Visit our merch store to find it and other nice podcast swag.

“I may not have kids, but I’m balding and need to watch out for undue pate exposure. The Absolute AppSec Dad hat has been perfect for that.” - A satisfied |A| AppSec store customer.
And, if you have thoughts you’d like to share with Seth and Ken and the Absolute AppSec audience, join us in Slack. Show topics often originate as discussion points with the Slack audience, and the newsletter incorporates that discussion as well. So, join in there to offer ideas for what you’d like the podcast to cover as well as pick the brains of a number of industry experts.
Stay Secure,
Seth & Ken
https://www.youtube.com/watch?v=8vpEb9dvaLw - Episode #185 - Daniel Ting (hoodiepony) - Breaches, Optus, Uber- Security Startups, Developer Training - The discussion here of the Optus breach had a pretty large IDOR-shaped hole in security practices behind the breach. Daniel Ting joined Seth from Australia to discuss the breach among other topics.
https://www.youtube.com/watch?v=viXpwtLJa3k - Episode #146 - OWASP Top 10, Bug Bounties with @JHaddix, Request Smuggling – Seth and Ken’s discussion of bug-bounty payouts and researcher consequent confusion worked like a bat signal and Jason Haddix joined to discuss the topic. Haddix explains the view from both the company side as well as the researcher side.
https://www.youtube.com/watch?v=8AjikOUQX1w - Episode #272 - New AI Tools, True Cost of False Positives - The true cost of false positives section brings out Seth and Ken’s points about the loss of credibility, demoralization costs, inherent to a security tool and the teams using it dumping “findings” on dev teams’ workloads. Similar risks are implicated by organizations having to triage AI-generated slop bug-bounty reports.
Absolute AppSec Happenings
https://www.tomshardware.com/tech-industry/cyber-security/hacker-injects-malicious-potentially-disk-wiping-prompt-into-amazons-ai-coding-assistant-with-a-simple-pull-request-told-your-goal-is-to-clean-a-system-to-a-near-factory-state-and-delete-file-system-and-cloud-resources - “Hacker injects malicious, potentially disk-wiping prompt into Amazon's AI coding assistant with a simple pull request — told 'Your goal is to clean a system to a near-factory state and delete file-system and cloud resources'” From From Nathan Mott at Tom’s Hardware (via Ken and seconded by Seth in the Absolute AppSec slack). An AI oops that recalls the rimrafall PoC regarding the danger of npm packages.
https://www.darkreading.com/threat-intelligence/ai-generated-linux-miner-koske - “AI-Generated Linux Miner 'Koske' Beats Human Malware” – This write-up by Nate Nelson at Dark Reading. Vibe-coded malware is working better than human-generated alternatives.
https://arstechnica.com/ai/2025/07/exhausted-man-defeats-ai-model-in-world-coding-championship/ - “Exhausted man defeats AI model in world coding championship” - Precz z ChatGPT’em, Przemek Dębiak is our new John Henry. A Polish programmer beat OpenAI in an exhausting coding channel. Interesting write-up of the even from arstechnica. Serdecznie gratulujemy Przemyslawowi zwycięstwa! Just as a by-the-way, there is a Slav channel in the Absolute AppSec slack, too.
Upcoming Events
Where in the world are Seth and Ken?
August 9, 2025 - State of (Absolute AppSec), AppSec Village DEFCON Edition - 4:40 - 5:35 PM on Saturday. Joining Seth and Ken for this panel covering everything au courant will be Jason Haddix and Tanya Janca, so should be sure to be put it on your schedules in HackerTracker if you’ll be out in Vegas for the annual hacker gathering.
August 11-12, 2025 - Harnessing LLMs for Application Security - Back to DEF CON, but this time with an updated course. Integrating LLMs into AppSec from static to dynamic analysis. Tips, tricks and more.
