Episode #318 - Slack Impersonation, Mythos, Vulnerability Research Future

On Seth's recent Slack impersonator and fallout, Anthropic's Mythos in the news everywhere, and the future of vulnerability research

In the 318th episode of Absolute AppSec, Ken (@cktricky) and Seth (@sethlaw) talk high-stakes security incidents, beginning with a firsthand account of a sophisticated phishing campaign that targeted the show's open-source Slack community. They then turn to the leak of Anthropic’s highly capable Mythos model and its potential to disrupt the market. They analyze how such frontier model announcements contribute to massive stock market volatility for traditional security firms while simultaneously creating an intense echo chamber regarding AI’s ability to replace human practitioners. Referencing Thomas Ptacek’s writing, Seth and Ken debate whether AI agents will soon supplant human vulnerability research for common bug classes, shifting the human role toward high-level governance and context infusion. Ultimately, they both advocate for autonomous defense and rigorous evaluation frameworks to manage reasoning drift and the exploding velocity of AI-generated code. To watch this episode or one of our hundreds of others, head over to https://www.youtube.com/@AbsoluteAppSec/streams, or find us wherever you get your podcasts.

Our Slack workspace was recently the victim of an impersonation attack in which a malicious actor registered a week in advance, copying Seth’s username and profile image to distribute malware via a deceptive Google Sites redirect. This was not an isolated incident; it was part of a broader offensive against open-source Slack communities, including the Linux Foundation and the OpenSSF. Seth expressed significant frustration with Slack’s security architecture, arguing that the platform’s lack of controls against username collisions and easy spoofing is unacceptable in a modern threat landscape. Ken drew a parallel to Git protocol impersonation but noted that while GitHub offers GPG signing to mitigate this, Slack offers no such native verification for free or open workspaces, forcing hosts to move to a manual, email-based invitation process to protect their members.
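For readers unfamiliar with the GPG-signing mitigation Ken references, a minimal sketch of configuring signed commits in Git looks like the following. The key ID shown is a placeholder; substitute your own key from `gpg --list-secret-keys --keyid-format=long`.

```shell
# Tell Git which GPG key to sign with (placeholder key ID — use your own)
git config --global user.signingkey 3AA5C34371567BD2

# Sign every commit by default, so authorship can be cryptographically verified
git config --global commit.gpgsign true

# Or sign a single commit explicitly
git commit -S -m "signed commit"

# Inspect the signature on the most recent commit
git log --show-signature -1
```

Once the public key is uploaded to GitHub, signed commits display a "Verified" badge, which is exactly the kind of native identity check Slack lacks for free and open workspaces.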

“Now you’re saying that you’re the best pen testing agent that’s out there…which you may very well be, but you’re not pulling back the covers enough for us to trust you.”

Seth

This episode is primarily dedicated to a critical analysis of Mythos, a rumored, leaked high-capability model that has generated intense discussion regarding its offensive cybersecurity potential. Seth takes a pessimistic view of the marketing surrounding such models, suggesting that companies like Anthropic and OpenAI are financially incentivized to create an overinflated sense of capability to drive investor interest and user subscriptions. These companies often fail to pull back the covers enough for true experts to verify their claims, leaving the industry to rely on curated benchmarks that may not reflect real-world performance. The models are undeniably powerful, but Ken adds that the engineering effort and token cost required to make them perform complex, long-running agentic tasks are often understated, making them less of a panacea than they appear.

“I know people are concerned about their jobs. They’re worried Mythos is going to take their jobs[…] I don’t think you need to worry about that at all. I think what you will need to worry about is figuring out how to get ahead of reduction in the workforce, budget slashing, and start to tell a story to your C-suite[…] just have a plan, just have some answers in case the scenario comes up, because it will.”

Ken

Finally, Seth and Ken address the economic and professional impact of the AI arms race on the security workforce. Ken is alarmed by reports that a significant portion of CISOs believe LLMs will eventually replace human security practitioners, leading to premature budget slashing and workforce reductions. Both Seth and Ken see promise in Small Language Models and open-source alternatives, noting that these are becoming just as performant as frontier models for specific security tasks while being significantly more cost-effective and easier to fine-tune. They conclude that while AI will certainly accelerate the speed of exploitation, the future of defense lies in autonomous, specialized systems that augment rather than replace the creative reasoning of human practitioners. For now, they advise security professionals to stay grounded in fundamentals and have a clear plan for articulating their value to leadership as the hype cycle eventually cools.

This episode was sponsored by Redpoint Security. Redpoint specializes in "Code Security by Coders," bolstered by years of experience testing applications and conducting code reviews against all types of apps, including AI, web, and mobile. Redpoint also offers developer appsec and secure-code training to help ground your teams in better security practices across the development lifecycle. Check out redpointsecurity.com for more information and put your company on a path to better security.

Sprouts are coming up, so now’s the time to display your solidarity along with your support for the podcast. Check out the selection of green tees in our merch store (there are other colors as well). T-shirts make great gifts for yourself or your friends:

Spring green tees are available now!

If you’ve fallen for any Seth impersonators recently, come expose them in our Slack, and maybe stay a while.

Stay Secure,

Seth & Ken

https://youtu.be/7JnNd7QkfjU – Episode #142 - AI Code Generation, Puma Scan, HTTP Request Smuggling - The first-ever “titled” episode of the podcast, discussing the use of AI in coding. Given the discussion of Mythos, it’s interesting to review what the hosts thought about AI in 2021, before the ChatGPT hype.

https://youtu.be/3AcL_37gnhY – Episode #170 - Security Basics, Social Engineering, Plan for Failure - A discussion of security basics and the failures that result from poor security hygiene. Whether you’re a developer, security engineer, or CISO, it’s important to recognize that breaches will happen, so security planners should “plan for failure.” “It’s not a matter of if but when.”

https://youtube.com/live/91bQZJ1m1Rc – Episode #285 - easyjson, Software Dependencies, Breaches - The podcast duo discusses the implications of certain sub-dependency threats and how to protect apps from them. With model releases now shaping overall industry trends, this episode covers whether breaches (or other events) have a lasting effect on an industry, company, or individual. Regulations and certifications can be lost, but that loss does not always have the effect we would expect.

Absolute AppSec Happenings

I’ve Completely Changed How I Work – Scott Behrens argues that AI agents like Claude and Cursor have fundamentally shifted the Staff+ engineering role toward a "solve by default" mindset. Instead of filing tickets or writing memos, engineers can now execute end-to-end tasks—from coding and PRs to data analysis—with minimal friction. By offloading routine execution to AI, leaders can focus on high-level strategy, creative problem-solving, and mentoring others.

Security brief: tax scams aim to steal funds from taxpayers – This Proofpoint article warns of a significant rise in tax-themed phishing campaigns during the 2026 tax season. Threat actors are increasingly using remote monitoring and management (RMM) tools and social engineering lures, such as IRS impersonation and W-2 fraud, to steal funds and credentials. These global attacks exploit taxpayer stress and urgency, highlighting the critical need for heightened email vigilance and effective organizational security during filing periods.

Governing the Ungovernable: Why AI Governance is the Blueprint for Everything Else - This article advocates for a "Tier Zero" AI governance strategy, shifting from static, reactive checklists to real-time orchestration. It emphasizes that traditional governance fails because it ignores how people actually work. By prioritizing visibility, strategic alignment, and active orchestration, organizations can manage the "Principal-Agent" risks of AI.

Vercel Breach Tied to Context AI Hack Exposes Limited Customer Credentials - Vercel disclosed a breach after an attacker compromised a third-party AI tool, Context.ai, used by an employee. By abusing OAuth permissions, the actor hijacked the employee’s Google Workspace, gaining access to internal environments and non-sensitive variables.

Upcoming Events

Where in the world are Seth and Ken?

April 26-27, 2026 - Harnessing LLMs for Application Security - In-person training at DEF CON Singapore. Be sure to register now if you’re looking to enhance your day-to-day AppSec processes with the power of LLM agents.

August 1-4, 2026 - AI-Enhanced Secure Code Review: Black Hat Edition - Seth and Ken are bringing a four-day exclusive course to Black Hat, an updated version of the exclusive course offered at Black Hat Europe. Early-bird pricing is ongoing, so it’s a great opportunity to get a truly in-depth understanding of secure code review and how it can be empowered with LLM tooling. Seth and Ken have developed industry-leading trainings on both topics, so this four-day course promises a lot of valuable insight.