Episode #301 - AI Browsers, New AI Agent Attacks, Framework Checklist

OpenAI's plans to further data collection and surveillance by embedding AI into web browsing. Useful for surfing, or a major privacy concern?

This week on Absolute AppSec, Seth and Ken are back after a few weeks’ hiatus for their 301st episode. They discuss the various conferences that have kept them busy before introducing their topics for this episode: first, a discussion of the security implications of embedding AI directly into web browsing, and second, a recent Co-Pilot Studio phishing flaw discovered by researcher Katie Knowles. Our hosts also zoom out to observe wider industry trends, noting big companies moving toward agentic AI capabilities. To find this episode, head to https://www.youtube.com/@AbsoluteAppSec, or find us wherever you get your podcasts.

“The natural progression for ChatGPT and OpenAI is to combine [web searching and AI] and capture the experience in order to […] control the user over time and [everything the user is seeing]. […] They want to control your eyeballs and they don’t want you going outside of their garden in order for them to capture the ads and to capture the revenue that’s associated with that and keep you in their experience.”

Seth

OpenAI has announced its new browser, Atlas, and there are a plethora of concerns about embedding AI directly into web browsing. Ken immediately voiced concern because achieving privacy and security in browsers is already difficult, even for dedicated browser companies, and this is not OpenAI’s specialty. Adding AI introduces autonomy and significantly exacerbates security and privacy concerns. Historically, when new browsers launch, they quickly encounter critical flaws as researchers and the community test them. An attack vector has already been demonstrated, where a prompt injection, obfuscated within an image and not visible to the human eye, is interpreted by the LLM. This successfully executes a prompt injection bypass that could potentially exfiltrate sensitive data.

When it comes to privacy, the new AI browser combines invasive tracking of browsing habits (Google, Chrome, Gmail) with potentially intimate personal data people share with AI systems like ChatGPT and Gemini. Given that AI companies are aggressively fighting for data, this new browser creates a massive-scale collection of highly sensitive data given it’s a combination of personal and browsing data. Some security experts have referred to this as “capitalism’s final form.” Seth, while not initially predicting AI companies would enter the browser space, is not surprised because they must eventually turn a profit. The AI has enough context about a user’s searches, browsing, and current concerns that it can become an advanced ad stream, insidiously targeting consumers with advertisements based on highly personal needs and interests. This move is a natural progression for OpenAI to capture the user experience, maintain a consistent look, and control the user’s attention and interaction. By owning the browser, AI companies keep users in their platform’s garden to collect data, display ads, and capture revenue, following the historical pattern of tech companies.

“There’s a lot of foot guns [in security expert guides] because of the fuzzy lines between public and private and what’s running client-side versus server-side. It’s well-defined within the framework, but it’s easy to cross those boundaries.”

Seth

The hosts also discussed a recent attack found by Datadog researcher Katie Knowles against Microsoft Co-Pilot Studio. Microsoft allows users to quickly generate a demo URL for agents built in Co-Pilot Studio. An attacker can set up a fraudulent agent that, after a user signs in, silently redirects to an external server. This exploits the OAuth handshake to exfiltrate the user’s access token. The social engineering attack is highly likely to succeed because the sign-in and subsequent activity all appear to happen through legitimate Microsoft URLs, making it extremely confusing to detect. As a mitigation tactic, Seth and Ken discuss a Next.js security testing and bug-bounty guide from deepstrike.io and Daoud Youssef. Checklists and tribal knowledge from specialists, like bug bounty researchers, are essential for dealing with new frameworks.

This episode is sponsored by DryRun Security. Authorization flaws are some of the trickiest security gaps—permissions get messy, logic gets overlooked, and suddenly, users have access they shouldn't. DryRun Security helps you catch these risks early with Natural Language Code Policy (NLCP). Their latest blog post is written by this fellow Ken Johnson, titled, “How We Turned Natural Language Into a Scalable Agentic AppSec Engine.

Are you looking to bulk up your clothing collection for fall weather? Well, the Absolute AppSec merch store might just have what you’re looking for. Pick out a hoodie or beanie to keep your core temperatures nice and toasty.

An Absolute AppSec hoodie for fall-worthy and winter-layering AppSec fashions

And, if you have thoughts you’d like to share with Seth and Ken and the Absolute AppSec audience, join us in Slack. Show topics often originate as discussion points with the Slack audience, and the newsletter incorporates that discussion as well. So, join in there to offer ideas for what you’d like the podcast to cover as well as pick the brains of a number of industry experts.

Stay Secure,

Seth & Ken

https://www.youtube.com/watch?v=zsN2FNoUZIc - Episode #134 - Legal Protections, Browser Sanitization APIs, Burnout - Thinking about the security problems Browsers have faced as they’ve evolved over the years, let’s revisit a positive development. In this discussion, Seth and Ken highlight how Browser’s implemented Sanitization APIs to potentially help eliminate XSS-style attacks.

https://www.youtube.com/watch?v=FA6C6Kr1Ty8 - Episode #207 - Watering Hole Attacks, Adversarial AI, Cookie Security - In this discussion, Ken and Seth talk about an earlier watering-hole attack focusing on IDEs as the vector for attacks. What was malicious VSCode extensions yesterday is now Co-Pilot studio. Seth predicted more of this type of attack in this episode. Ahhh, the prescience borne of the rinse and repeat of security.

https://www.youtube.com/watch?v=ugXNt994TOM - Episode #159 - Neil Matatall - CSP, Infosec Hiring, Languages & Framework Security - Return guest and friend of the show Neil Matatall joins Seth and Ken in this episode. The discussion is wide-ranging including CSP, Neil’s spinning up a TikTok to review and explain Content-Security Policies, Locomoco Sec, and Languages and Framework Security.

Absolute AppSec Happenings

https://www.scworld.com/resource/heres-why-you-need-to-embed-security-into-ai-development - The article emphasizes the importance of integrating security into AI development from the outset, as rapid innovation often overlooks it as an afterthought. AI's unpredictable, non-deterministic nature creates unique risks, such as prompt injection and rogue agents.

https://www.darkreading.com/cyber-risk/zombie-projects-rise-again-undermine-security - We all have abandoned projects in the back of our closets. Here, Robert Lemos argues that these skeletons threaten to rise up and undermine the security posture of their companies, like zombies. This is especially dangerous, Seth thinks, given that we train all the AIs on the zombie code.

Upcoming Events

Where in the world are Seth and Ken?

November 20-21, 2025 - Harnessing LLMs for Application Security - VIRTUAL TRAINING - Our classic training for dev and sec alike. Participants will gain a deep understanding of LLM functionality, strengths, and weaknesses, and learn to craft effective prompts for diverse use cases.

December 8-11, 2025 - Next-Gen Secure Code Review: Black Hat Edition - Seth and Ken are bringing a four-day exclusive course to Black Hat Europe in London, UK. This is a great opportunity to get a truly in-depth understanding of Secure-Code Review and how it can be empowered through LLM-tooling. Seth and Ken have innovated industry-leading trainings in both of these topics, so this four-day course promises to provide a lot of valuable insight.

April 7-8, 2026 - Harnessing LLMs for Application Security - IN PERSON at Kernel Con. Come join Seth and Ken in Omaha for the Harnessing LLMs course!