Episode #282

Model Context Protocol, A2A, NHI Authentication

This week, on episode #282 of Absolute AppSec, Ken Johnson (@cktricky) and Seth Law (@sethlaw) dive into the implications of tool integration protocols like MCP (Model Context Protocol) and the security challenges of non-human identities. In this time of growing reliance on agentic systems and LLM-based tooling, the hosts discuss practical use cases and the inherent security risks of MCP advancements from an informed, hands-on perspective. Throughout the episode, they return to a recurring theme: the security community is racing to catch up with developers in securing a fast-moving innovation, because the excitement surrounding agentic tooling and AI orchestration is outpacing development teams’ internal secure-by-default discussions and security-focused principles. To find this episode, go to https://www.youtube.com/@AbsoluteAppSec, or wherever you find your podcasts.

“While [MCP] is enabling queries to enrich and standardize the way that we have meaningful outcomes within the LLM ecosystem, I feel like this is a very traditional set of vulnerabilities, just with a different technology in the mix.”

Ken

A significant portion of the episode centers around the application and limitations of MCP, an emerging protocol designed to standardize the interaction between tools, agents, and LLMs. Ken’s gut reaction is to see MCP as a natural and useful evolution for integrating LLMs into infrastructure, much like REST APIs or GraphQL standardized web service interactions, but he and Seth quickly pivot to concerns about the protocol’s lack of built-in security. They reference a blog post by Elena Cross, which critiques MCP for not being secure by default, potentially opening avenues for command injection, tool poisoning, or unverified tool access. Seth explains how MCP can unwittingly allow LLMs to interact with command-line tools or database interfaces without proper authentication layers, creating new threat surfaces that resemble historical AppSec issues but are amplified by the capabilities of AI agents. While acknowledging the importance of standardization, they warn that, as it stands, MCP mirrors early-stage protocols in other tech domains: functional but immature in its treatment of trust boundaries.
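To make that vulnerability class concrete, here is a minimal, hypothetical sketch (not taken from the episode or from the MCP specification) of the pattern Seth describes: a tool handler that passes model-supplied input straight to a shell, next to one that validates the argument first. The function and variable names (`run_ping_unsafe`, `run_ping_safe`, `HOSTNAME_RE`) are all illustrative.

```python
import re
import subprocess

# Hypothetical MCP-style tool handler (illustrative names only).
# UNSAFE: interpolating model-controlled input into a shell string
# means an argument like "8.8.8.8; rm -rf /" injects extra commands.
def run_ping_unsafe(host: str) -> str:
    proc = subprocess.run(f"ping -c 1 {host}", shell=True,
                          capture_output=True, text=True)
    return proc.stdout

# Safer: validate the argument against a strict allow-list pattern
# and pass it as a discrete argv element, never through a shell.
HOSTNAME_RE = re.compile(r"^[A-Za-z0-9.-]{1,253}$")

def run_ping_safe(host: str) -> str:
    if not HOSTNAME_RE.fullmatch(host):
        raise ValueError(f"rejected suspicious host argument: {host!r}")
    proc = subprocess.run(["ping", "-c", "1", host],
                          capture_output=True, text=True)
    return proc.stdout
```

The fix mirrors decades-old command-injection guidance; the new twist is that the “user” supplying the argument may be an LLM acting on poisoned context rather than a human at a form field.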

This leads to a broader concern about the role of agentic tools and the emerging ecosystem of LLM-driven automation. Current trends in LLM integration parallel previous cycles in the tech industry, where security is rarely prioritized in the early stages of innovation. Ken emphasizes that while tools like LangChain have made LLM usage more accessible, they haven’t adequately addressed traceability, authentication, or session-level controls by default. He draws on examples from his own company, which has had to build custom infrastructure to achieve end-to-end traceability and fine-grained tool access control. Such controls should not be an afterthought in an LLM-driven architecture, especially as AI begins to act with increasing autonomy in systems that handle sensitive data or production workflows.
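As a rough sketch of the traceability idea (hypothetical code, not Ken’s actual infrastructure), an agent runtime can wrap every tool before exposing it, recording the session id, tool name, and arguments so incident responders can later reconstruct an agent’s actions:

```python
import json
import time
import uuid

# Hypothetical wrapper: every tool invocation an agent makes is
# recorded with a session id, tool name, and arguments, giving
# incident responders a trail of what the agent did and when.
def traced(session_id: str, tool_name: str, fn, audit_log: list):
    def wrapper(*args, **kwargs):
        audit_log.append(json.dumps({
            "session": session_id,
            "tool": tool_name,
            "args": repr(args),
            "ts": time.time(),
        }))
        return fn(*args, **kwargs)
    return wrapper

# Usage: wrap each tool before handing it to the agent runtime.
audit: list = []
lookup_user = traced(str(uuid.uuid4()), "lookup_user",
                     lambda uid: {"id": uid}, audit)
lookup_user("u-123")  # the call now leaves an audit entry
```

In production this would feed structured logs to a SIEM rather than an in-memory list, but the design point is the same: traceability is cheap to add at the tool boundary and nearly impossible to bolt on after an incident.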

“I don’t think anyone’s really, to this point, been super concerned about how to secure every step along the way and also make it possible for incident responders to go and triage incidents. I think it’s been more of a focus on, well, hell, how do we even get these to work? […] So when funding is there, you’re incentivized in a different way [away from] security.”

Seth

The episode also delves into the idea of non-human identities (NHIs), drawing on a post by Ben Kim. The conversation ties NHIs—such as API keys, OAuth tokens, service accounts, and machine IDs—directly into the risks posed by protocols like MCP. The hosts note that many traditional approaches to securing machine-to-machine interactions, like static tokens or shared secrets, are insufficient in the LLM context. Because these machine identities often lack features like multi-factor authentication or behavioral verification, they become a weak point when agents begin to orchestrate actions on their behalf. Seth and Ken both voice concern that exposing models to sensitive infrastructure through these identities will likely lead to breaches if industry players do not adopt layered authentication and authorization mechanisms. While companies like Google are attempting to publish standards such as A2A to improve authentication between LLM tools, these are still early drafts and have yet to be widely implemented.
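To illustrate the layered-authorization point (again a hypothetical sketch, not drawn from A2A or any published standard), compare a static shared-secret check with one that also enforces expiry and per-tool scopes for a machine identity:

```python
import time

# UNSAFE pattern: a long-lived static secret gives an agent
# unrestricted access for as long as the string stays valid.
STATIC_SECRET = "s3rv1ce-acc0unt-key"  # illustrative value only

def authorize_static(token: str) -> bool:
    return token == STATIC_SECRET

# Layered pattern: a short-lived credential carries an expiry and an
# explicit scope list, so a leaked token is both time-boxed and
# limited to the specific tools it may invoke.
def authorize_scoped(token: dict, tool: str, now=None) -> bool:
    now = time.time() if now is None else now
    return token.get("expires_at", 0) > now and tool in token.get("scopes", ())
```

A real deployment would use signed, verifiable credentials (for example, short-lived JWTs) rather than plain dicts; the point is only that expiry and scope checks belong in the path of every agent-initiated action, not just at initial connection time.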

This episode was sponsored by Redpoint Security. Redpoint specializes in "Code Security by Coders," bolstered by years of experience testing applications and conducting code reviews against all types of apps, including AI, web, and mobile. Redpoint also offers secure code training to help ground your teams in better security practices across the development lifecycle. Check out redpointsecurity.com for more information and put your company on a path to better security.

Let’s say you are listening to today’s show and you’re inspired by discussion about how security basics remain important during eras of rapid innovation. Perhaps, you think to yourself, “I wish I had a handy mnemonic to remember that principle.” Well, if you visit the Absolute AppSec merch store, you may find just the memory jog you’re looking for. The Crocs ’n’ Socks tee reminds everyone that it’s cool to take on the Dad criteria in your day-to-day security role.

And, as always, if you have thoughts you’d like to share with Seth and Ken and the Absolute AppSec audience, join us in Slack.

Stay Secure,

Seth & Ken

https://www.youtube.com/watch?v=THvjSVgaehE - Episode #216 - Security SDLC, Time Management, How AI can help in security day-to-day. A wide-ranging discussion throughout the episode. Stick around for the segment on how Seth and Ken try to get into flow; their thoughts on how AI tools can help or hinder the devsec pipeline relate to several of the concerns that bubbled up in this week’s episode.

https://www.youtube.com/watch?v=OontCwV_Gi0 - Episode #233 – Scammers, Deep Fakes, Data Exposure - Seth and Ken talk through a range of security concerns that even people who work day-to-day in information security still find themselves susceptible to. It’s a good primer on the basics.

https://www.youtube.com/watch?v=Oq6ijTZdtdM - Episode #251 - Passive Scanning, Chrome Extensions, CocoaPods, NVD - Seth and Ken discuss a range of topics, but the National Vulnerability Database (NVD) discussion should be of particular interest given recent developments around the likelihood of its continued support.

Absolute AppSec Happenings

https://www.bleepingcomputer.com/news/security/cisa-extends-funding-to-ensure-no-lapse-in-critical-cve-services/ - CISA extends funding to ensure ‘no lapse in critical CVE services’ - At the eleventh hour, worries about the end of CVE support were staved off. Because so many tools rely on CVE data in their operations, concerns had been mounting around the globe about what a lapse could mean for the security industry.

https://tierzerosecurity.co.nz/2025/04/14/edv.html - EDV - Endpoint Detection & Vibes - Claudio Contin from Tier Zero Security shares a write-up on how he interacted with Microsoft’s built-in Copilot application to vibe his way to a security tool, with interesting results. Watch as he compares his tool’s output to Windows Defender.

https://knightcolumbia.org/content/ai-as-normal-technology - AI as Normal Technology: An alternative to the vision of AI as a potential superintelligence - From the authors of a timely, much-discussed book (AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference), a call for a measured response. Arvind Narayanan and Sayash Kapoor argue for an alternative to the hyping/catastrophizing extremes that come from viewing AI as a world-changing superintelligence. It’s a long read, which hopefully makes for a nice break from the hotter takes that currently proliferate on the topic.

Upcoming Events

Where in the world are Seth and Ken?

April 26-27, 2025 - BSides San Francisco 2025 - State of (Absolute) AppSec - Ken and Seth will be hosting a panel on the current state of the application security industry with help from a few friends. Expect spicy takes, opinions, and wild predictions. If you have questions you would like to cover, submit them via this form.

April 28-May 1, 2025 - RSA - Seth and Ken will also be in town for RSA, with a special event taking place on April 28th (“Aegis of Tomorrow: An AI & Security Summit”). Reach out to us if the private event (with Seth, Ken, Jason Haddix, and Daniel Miessler among others) looks like it could be of interest to you, and generally let us know if you’ll be around during the conference and would like to catch up.

May 12-13, 2025 - Practical Secure Code Review - AI Enhanced - Given recent demand, the duo returns with a virtual training updated to reflect the continually evolving AI landscape.

May 21, 2025 - Lean AppSec Live - Seth will join Endor Labs (along with podcast guest Jeevan Singh) to talk through Modern Security Code Reviews, touching on AI updates in the Absolute AppSec secure code review framework.

August 11-12, 2025 - Harnessing LLMs for Application Security - Seth and Ken return to DEF CON, this time with an updated course on integrating LLMs into AppSec, from static to dynamic analysis. Tips, tricks, and more.