- This Week on Absolute AppSec
- Posts
- Episode #283 - Vulnerable MCP, Software Hallucinations
Episode #283 - Vulnerable MCP, Software Hallucinations
Intentionally-Vulnerable MCP Server, Hallucinating Software Packages

Absolute AppSec moving up the ranks
This week on the 283rd episode of Absolute AppSec, Seth (@sethlaw) and Ken (@cktricky) explore the growing complexity of securing modern application development environments, particularly those augmented by LLMs, agentic AI, and new tools like Model Context Protocol (MCP) servers. Then, the duo discusses on recent research around hallucinations of 3rd party dependencies in AI-generated Python and JavaScript. They know they’re talking about AI and LLMs more often than not recently, and have things to say about that too. To find this episode, go to https://www.youtube.com/@AbsoluteAppSec, or wherever you get your podcasts.
“I love to see these sorts of interactions and how they break down because it’s all kind of unexpected territory for us. [We’ve] talked quite a bit about tools and agentic AI and MCP as a kind of natural response to that—a way to actually standardize. Now the attacks that go into it are not something that we’ve necessarily thought through, but it is [still] maturing quickly.”
The technical content begins with a walkthrough of the Damn Vulnerable MCP Server, a project designed to demonstrate security weaknesses in systems that use MCPs. Since Model Context Protocols allow language models to interact with external tools and services, they create powerful but possibly risky integrations. Ken demos the setup process, noting complications with configuration files and some misleading initial documentation. But, once you’ve resolved those hurdles, the Damn Vulnerable MCPs educational environment is an effective playground for understanding how real-world vulnerabilities might manifest in MCP-integrated systems. MCPs effectively expose new attack surfaces by connecting LLMs to operational tools, often without sufficient threat modeling or security safeguards. In that vein, Seth and Ken discuss an attack documented by Trail of Bits, where an adversary exploited LLM prompt engineering to force a malicious MCP to run every time, under the guise of being a compliance tool. This illustrates how attackers can manipulate LLMs via tool descriptions and prompt injection to exfiltrate data or subvert normal workflows. This shows the immaturity of security practices in this space, because although MCPs promise standardization for agentic AI systems, they also introduce the need for new controls, validations, and trust boundaries, needs that are akin to traditional AppSec practices. Awareness of the risks usually comes second to integration into production environments.
“The fact is that the more you turn up the temperature [on AI involvement], the more creative you allow the LLM to be, [that] leads to more hallucinations.”
Shifting gears, the hosts examine “slop squatting”, a novel variation of dependency confusion attacks covered in this blog post by Socket.dev. LLMs hallucinate package names when generating code, which attackers exploit by registering these non-existent packages on public repositories like PyPl or npm. When unsuspecting developers using AI run the suggested code, malicious packages get pulled into their systems. Socket’s research found that 20% of package names out of the roughly 576,000 code samples suggested by LLMs did not exist. Attackers could easily exploit this, starting with repeated prompt testing to generate likely hallucinated names, then pre-registering them. These hallucinations, coupled with relaxed developer “vibe coding,” pose risks to software supply chains.
This episode acknowledges the challenges of staying current in the AI-infused development landscape because, to put it simply, Ken and Seth are feeling the weight of both opportunity and exhaustion. They are playing dual roles as enthusiastic engineers embracing AI tools for efficiency, and cautious security professionals grappling with the risks of integrating LLMs into developer workflows. Ken, in particular, voices concerns about how prompting fatigue and vague guidance in vibe coding can lead to a loss of control, making traditional coding methods at times more efficient and the advisable alternative for the security-focused members of a team.
This episode is sponsored by DryRun Security. Authorization flaws are some of the trickiest security gaps—permissions get messy, logic gets overlooked, and suddenly, users have access they shouldn't. DryRun Security helps you catch these risks early with Natural Language Code Policy (NLCP). Their latest white paper dives deep into real-world authorization failures and discusses how DryRun Security finds them before they get shipped to production. Grab your copy now at dryrun.security/auth-flaws.
We here at Absolute AppSec have a bevy of listeners who are the dads and moms populating soccer/football sidelines. If you’d like to unite your passions for your little athletes with a healthy love for an AppSec podcast, then we have a tee-shirt option for you! Visit the merch store to pick your size.
And, if you have thoughts you’d like to share with Seth and Ken and the Absolute AppSec audience, join us in Slack. A fair number of the shows’ topics begin as discussion points with the Slack audience, so you can join there to offer ideas for what you’d like the podcast to cover as well as pick the brains of a number of industry experts.
Stay Secure,
Seth & Ken
https://www.youtube.com/watch?v=3AcL_37gnhY - Episode #170 - Security Basics, Social Engineering, Plan for Failure - Given the topic this week highlighted a need to integrate security in the beginning of a new wave of innovation, this episode has a very good primer on what security basics look like. Risks like Advanced Persistent Teenagers (a coinage from Krebs on Security discussed in the context of $LAPSUS) should inform how we should have expectations of things falling down or facing breaches in similar ways as they did in the past.
https://www.youtube.com/watch?v=j093mOfawiU - Episode #168 - Secure Code Review, Package Confusion, Privacy Acts - Discussion of the Red-Lili dependency confusion attack revealed and analyzed by Checkmarx in real time. The discussion of how the attacker picked package names that could garner unwitting downloads is an earlier version of the new slop-squatting attack Seth and Ken discuss this week.
https://www.youtube.com/watch?v=pjTybGNSFT8- After Dark episode #4 - The intentionally vulnerable MCP server discussed this week put is in mind of Ken’s walkthrough of intentionally vulnerable code, which many Absolute AppSec viewers may be familiar with, but also this After Dark CTF we did with SecDim.com and the wonderful Pedram Hayati. Check out episode while Seth and Ken dive in to see what they can find in the SecDim exercise and check their website for the innovative platform with war-game, bug-finding, and bug-fixing components.
Absolute AppSec Happenings
https://blog.trailofbits.com/2025/04/21/jumping-the-line-how-mcp-servers-can-attack-you-before-you-ever-use-them/?hss_channel=lcp-912286 - “Jumping the line: How MCP servers can attack you before you ever use them” - This Trail of Bits article is discussed in the episode, but it’s worth diving in to see how MCP servers work and how attackers are thinking about exploiting them. “Tool descriptions themselves are prime vectors for prompt injection” lays out the fundamental trick for the line-jumping attack.
https://www.theregister.com/2025/04/19/us_crosswalk_button_hacking/ - “Hacking US crosswalks to talk like Zuck is as easy as 1234” from the Register. Shenanigans in Silicon Valley highlight that default credentials are still a security issue and can lead to zany as well as malicious outcomes.
https://www.dryrun.security/blog/sast-accuracy-the-metric-that-matters - Co-founder and CEO of DryRun Security recaps the head-to-head testing Ken and the Dry Run team have been doing to highlight the advantages of DryRun’s Contextual Security Analysis (highlighting vulnerability scans results they’re getting in comparison with major industry SASTs). There is an overarching report available at the site with a work email.
Upcoming Events
Where in the world are Seth and Ken?
April 26-27, 2025 - BSides San Francisco 2025 - State of (Absolute) AppSec - Ken and Seth will be hosting a panel on the current state of the application security industry with help from a few friends. Expect spicy takes, opinions, and wild predictions. If you have questions you would like to cover, submit them via this form.
April 28-May 1, 2025 - RSA - Seth and Ken will be in the environs as well for RSA, with a special event that will be taking place on April 28th (“Aegis of Tomorrow: An AI & Security Summit”). Reach out to us if the private event (with Seth, Ken, Jason Haddix, and Daniel Miessler among others) looks like it could be of interest to you, and generally let us know if you’ll be around during the conference and you’d like to catch up.
May 12-13, 2025 - Practical Secure Code Review - AI Enhanced - Given the recent demand, the duo returns with a virtual training with all the AI updates that continually evolve.
May 21, 2025 - Lean AppSec Live - Seth will join Endor Labs (along with podcast guest Jeevan Singh) to talk through Modern Security Code Reviews, touching on AI updates in the Absolute AppSec secure code review framework.
August 11-12, 2025 - Harnessing LLMs for Application Security - Back to DEF CON, but this time with an updated course. Integrating LLMs into AppSec from static to dynamic analysis. Tips, tricks and more.