Episode #294 w/ Anshuman Bhartiya - AppSec in the Age of AI

AppSec has changed over the past year, and Anshuman has noticed it too.

This week on Absolute AppSec, Seth (@sethlaw) and Ken (@cktricky) are back from Vegas and joined by Anshuman Bhartiya (@anshuman_bh). Anshuman has over a decade of experience in the security industry at several major enterprises (EMC, Intuit, Atlassian, Lyft, etc.). As the current Tech Lead for Application Security at Lyft and co-host of The Boring AppSec Podcast, Anshuman is a deep well of knowledge on AppSec topics. Together, they have a wide-ranging conversation about making it in information security. To watch this episode, go to https://www.youtube.com/@AbsoluteAppSec, or find us wherever you get your podcasts.

Anshuman grew up in India with a predisposed “hacker mentality,” always wanting to understand how to build things in order to break them. When he moved to the United States to pursue a master’s degree in computer science at Arizona State University, he didn’t know how to write a single line of code, as his undergraduate degree in India was very theoretical. After graduating during a recession, he struggled to find a security job and started his career as a cloud technical consultant. He eventually transitioned into the security field as an associate security consultant at a boutique firm that was later acquired by Synopsys. His career has focused on application and product security, working for a mix of large enterprises like Atlassian and Intuit, as well as smaller startups.

Anshuman’s current role is as an AppSec Tech Lead at Lyft, where he works with a small proactive team of three people. His work involves collaborating with other security teams, such as cloud security and incident response, to develop projects that have a meaningful impact. For instance, he is responsible for anticipating risks related to new technologies like the Model Context Protocol (MCP) and building secure guardrails for engineering teams. Anshuman also mentions that his team manages the company’s private bug bounty program, and they are now able to find IDOR (Insecure Direct Object Reference) and authorization issues using LLMs internally, even on code that the models were not specifically trained on. He feels that using LLMs provides an “intelligence layer” that helps to speed up analysis and make the team more efficient.
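For readers less familiar with the bug class, here is a minimal sketch of the IDOR/authorization pattern Anshuman's team hunts for. The names and data are illustrative, not Lyft's actual code: the flaw is an endpoint that trusts a caller-supplied ID without checking ownership.

```python
# Toy record store keyed by object ID (illustrative data only).
RIDES = {
    101: {"owner": "alice", "destination": "airport"},
    102: {"owner": "bob", "destination": "downtown"},
}

def get_ride_vulnerable(caller: str, ride_id: int) -> dict:
    """IDOR: any authenticated caller can read any ride by guessing its ID,
    because the handler never checks who owns the record."""
    return RIDES[ride_id]

def get_ride_fixed(caller: str, ride_id: int) -> dict:
    """Authorization enforced: the record must belong to the caller."""
    ride = RIDES.get(ride_id)
    if ride is None or ride["owner"] != caller:
        raise PermissionError("not authorized for this ride")
    return ride
```

Pattern-matching a missing ownership check against surrounding context is exactly the kind of judgment call that rule-based scanners struggle with and that an LLM "intelligence layer" can help flag for human review.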

“I’ve obviously heard skepticism [on AI]. I’ve heard from people where, you know, the first question I get asked is, can [AI] do this reliably, consistently? Can you get the same outcomes on every run? [But I think] if we are able to get from zero to eighty (%), I still see that as a win. The last 20% is always going to be a challenge, where we continue to struggle for accuracy, determinism, consistency, all of those things. But if [we’re] able to get from zero to eighty in terms of coverage and visibility, I see that as a huge win.”

Anshuman

Anshuman was also in Las Vegas earlier this month and has lots of lingering thoughts based on the conversations he had there. The security community’s view on AI has shifted over the past year. He observes that the skepticism prevalent last year has started to diminish, especially after announcements like Google’s Project Big Sleep, which reportedly used LLMs to find zero-day vulnerabilities. He acknowledges that LLMs are not a perfect solution and that there are still vulnerabilities in AI systems, but he feels that people are now recognizing the real value of the technology. He believes that to truly understand the technology’s strengths and weaknesses, one must actively experiment with it. Highlights of his trip included the AI Security Forum, a full-day event with AI-specific presentations from Anthropic, Dreadnode, and other companies. Another notable moment was the DARPA AI Cyber Challenge, where teams of AI systems competed to find and fix vulnerabilities in open-source software. He finds this fascinating because all the code from the challenge is open-sourced, and it demonstrates that AI can be used to find and fix real-world vulnerabilities, moving beyond theoretical discussions.

“Security is all about defense in depth, right? It’s not [just] about SAST, or DAST, or just SCA. It has to be a combination.”

Anshuman

Before the rise of LLMs, AppSec was a very manual and time-consuming process. Anshuman believes that the industry has been focused on solving the wrong problem, namely vulnerability management, which is like playing whack-a-mole. Instead of just patching individual vulnerabilities, he advocates for finding the root cause of vulnerability classes and eliminating them entirely. He expresses frustration that many security vendors focus on marketing tools that show billions of blocked attacks, which are often just trivial scans, rather than focusing on impactful issues like authentication, authorization, and business logic flaws. Anshuman also discusses the distinction between AppSec and ProdSec. He notes that the definitions have evolved, with product security now often encompassing cloud security and the entire product development lifecycle. He believes that titles are less important than collaboration across teams, as security is about a defense-in-depth approach.
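The "secure defaults" idea behind root-cause elimination can be shown in a few lines. This is a hedged sketch with hypothetical names: rather than filing a ticket for every XSS finding, the team ships one rendering helper that escapes untrusted input by default, retiring the whole vulnerability class wherever the helper is adopted.

```python
import html

def render_comment(template: str, **untrusted: str) -> str:
    """Interpolate user-supplied values into an HTML template,
    HTML-escaping every value by default so callers cannot forget."""
    safe = {key: html.escape(value) for key, value in untrusted.items()}
    return template.format(**safe)

# A hostile payload is neutralized without any per-call-site fix:
page = render_comment("<p>{body}</p>", body="<script>alert(1)</script>")
```

Pair a default like this with lightweight enforcement (a linter or code-review rule banning raw string interpolation into HTML) and vulnerability management shifts from whack-a-mole to prevention.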

This episode was sponsored by Redpoint Security™. Redpoint specializes in "Code Security by Coders," bolstered by years of experience testing applications and conducting code reviews against all types of apps, including AI, web, and mobile. Redpoint also offers developer AppSec and secure-code training to help ground your teams in better security practices across the development lifecycle. Check out redpointsecurity.com for more information and put your company on a path to better security.

Do you know someone going back to school who needs a place to take notes? Well, the Absolute AppSec merch store has you covered. Pick out a size for the student who needs some swag in their lives.

The Absolute AppSec notebook – for fans of the crocs and SOCs.

And, if you have thoughts you’d like to share with Seth and Ken and the Absolute AppSec audience, join us in Slack. Show topics often originate as discussion points with the Slack audience, and the newsletter incorporates that discussion as well. So, join in there to offer ideas for what you’d like the podcast to cover as well as pick the brains of a number of industry experts.

Stay Secure,

Seth & Ken

https://www.youtube.com/watch?v=aFuvZw250_g - Absolute AppSec & Lojikil’s MidWinter Night’s Con 2020 - Clint Gibler / How to Eradicate Vulnerability Classes with Secure Defaults + Lightweight Enforcement - Clint’s talk at the Midwinter Con presages some of Anshuman’s recommendations for using tools to eliminate entire classes of bugs rather than addressing onesie-twosie problems cropping up during organizations’ vulnerability management programs.

https://www.youtube.com/watch?v=X79M7sqfEAg - Episode #242 - LLMs Exploiting Vulns, State of DevSecOps – Seth and Ken’s discussion highlights the LLM ecosystem and tooling and ways they can be used. They dive into an article whose title hyped LLMs’ abilities to exploit vulnerabilities on their own, though the actual capabilities hinted at real value in the tools going forward. A harbinger of this week’s brief discussion on Google’s Big Sleep project.

https://www.youtube.com/live/nzNNuerM3pM - Episode #238 - AppSec vs. Enterprise Sec, Supply Chain Tool Analysis - Resourcing constraints mean that organizations’ AppSec and ProdSec success requires smart prioritization. This week’s discussion with Seth, Ken, and Anshuman suggested that LLM integration can help a great deal with these internal prioritizations.

Absolute AppSec Happenings

https://techcrunch.com/2025/08/04/google-says-its-ai-based-bug-hunter-found-20-security-vulnerabilities/ - “Google says its AI-based bug hunter found 20 security vulnerabilities” from Lorenzo Franceschi-Bicchierai at TechCrunch. The news about Google’s Project Big Sleep that animated some of this week’s discussion.

https://guard.io/labs/scamlexity-we-put-agentic-ai-browsers-to-the-test-they-clicked-they-paid-they-failed - “‘Scamlexity’ - We Put Agentic AI Browsers to the Test - They Clicked, They Paid, They Failed” - (Shared by Seth in the Absolute AppSec Slack) by Nati Tal and Shaked Chen. The helpful agents we instrumentalize to do certain tasks are, it turns out, susceptible to human failings like clicking on phishing links. Key nugget takeaway: “If AI Agents are going to handle our emails, shop for us, manage our accounts, and act as our digital front-line, they need to inherit the proven guardrails we already use in human-centric browsing: robust phishing detection, URL reputation checks, domain spoofing alerts, malicious file scanning, and behavioral anomaly detection - all adapted to work inside the AI decision loop. Security must be woven into the very architecture of AI Browsers, not bolted on afterward.” Or as ReyRey (Ely) says in the Slack, “Agents will have to take yearly trainings now.”

https://phrack.org/issues/72/1 - For our fans of ASCII art and the long-lived institutions of hacking, there is a new issue of Phrack.

Upcoming Events

Where in the world are Seth and Ken?

December 8-11, 2025 - Next-Gen Secure Code Review: Black Hat Edition - Seth and Ken are bringing a four-day exclusive course to Black Hat Europe in London, UK. This is a great opportunity to get a truly in-depth understanding of secure code review and how it can be empowered through LLM tooling. Seth and Ken have developed industry-leading trainings in both of these topics, so this four-day course promises to provide a lot of valuable insight.