Episode #270

2025 AppSec Predictions

This week on Absolute AppSec, Seth (@sethlaw) and Ken (@cktricky) return from their holiday hiatus to reflect on the past year’s industry trends and personal insights. Together, they revisit their predictions for the past year, and spin them into new ones for 2025. You can watch this episode at https://www.youtube.com/watch?v=DJ73bMi5-tQ, or wherever you get your podcasts.

Reflecting on their 2024 predictions, Ken and Seth revisited their first forecast of an increase in supply chain attacks and security efforts. There were many notable attacks this year, especially targeting AI ecosystems. While supply chain vulnerabilities did not increase as dramatically as Ken had predicted, the attack surface has expanded to include newer technologies. To address these vulnerabilities, organizations have made progress in reachability analysis and dependency tools like Dependabot and Snyk. However, these tools are still often mismanaged. While they allow teams to promptly address dependency issues, there is a high volume of unaddressed vulnerabilities hiding underneath the surface. However, companies that are consistently reviewing their security practices over time often see the most measurable decrease in vulnerabilities. They concluded that while supply chain security tools have improved, there remains significant room for adoption and effective utilization. Building on the challenges observed in 2024, Seth and Ken expect supply chain security to remain a key focus area, foreseeing increased sophistication in both attacks and defensive tools. Organizations will likely prioritize better adoption of supply chain security measures, such as dependency management and traceability tools, as attackers continue to exploit vulnerabilities in interconnected systems.

Their second prediction was about 2024 seeing heightened research and preventative measures from companies concerning prompt injection— an emerging concern in AI security. While the hosts expected greater industry focus on preventing prompts injection, they observed that it didn’t become as widespread a concern as anticipated. Seth mentioned some preventative measures like AI firewalls but noted a lack of comprehensive research and proactive development in the space. While tools for defending prompt injection were developed in 2024, the focus on sophisticated evasion techniques and advanced LLM firewalls didn’t materialize to the extent they had expected.“The reason why they have these events is because it is a business [itself]. Someone is paying to put on this event to gather more business. […] It’s going to always be a factor in running those conferences.” 

“From a coding perspective, from a security perspective, if I have to struggle with the AI as opposed to actually digging in to look for vulnerabilities, that’s where I start to drop it out of my process, because it isn’t well integrated in.”

Seth

At the beginning of 2024, Ken predicted that AI-powered co-pilot would be integrated into developer workflows, but would have mixed results. This ended up being an accurate forecast. While these tools are often technically helpful, they can also interrupt workflows by suggesting unwanted to irrelevant code completions. Developers, he argued, sometimes find these interruptions frustrating, leading to less enthusiasm for adopting such tools. Seth echoed these sentiments, describing how interruptions in flow can hinder productivity, especially in security workflows where efficiency is critical. Ken observed a shift in AI tools’ design, noting the emergence of AI-native IDEs like Windsurf and Cursor, which embed AI functionality more seamlessly. These platforms aim to reduce disruptions and increase productivity by integrating AI into the development environment itself, rather than as an external add-on. The hosts agreed that this evolution represents a more promising direction for AI in application security, allowing it to complement rather than disrupt workflows. The host anticipate, in 2025, the proliferation of AI-nature that are specifically designed to integrate seamlessly. This represents a departure from traditional add-on AI solutions, which often disrupt processes.

Another prediction the hosts made was about the splintering of LLMs into specialized versions tailored to specific industries or use cases. Contrary to their expectations, Ken and Seth observed that companies are largely continuing to rely on generalized LLMs provided by major players like OpenAI, Anthropic, and Amazon. Ken pointed out that organizations are using techniques like Retrieval-Augmented Generation (RAG) and fine-tuning rather than investing in bespoke LLMs. He attributed this to the high cost and technical complexity of training custom models, which remains inaccessible for most companies. Seth remarked that while the democratization of AI training might eventually enable smaller organizations to create custom LLMs, this scenario is still far from reality.

“I’ve seen everybody trying to figure out a good way to use these machines and to use these systems to bolster software security. […] I think you’ll see the AppSec community embrace AI.”

Ken

Looking ahead to 2025, Ken and Seth made a few more predictions.

First, they forecasted a significant shift in how AI-based tools, particularly those utilizing LLMs, are billed. Traditional subscription-based models may evolve into transaction billing systems based on token usage. Ken explained that the increasing demand for LLMs is already putting a strain on providers like AWS Bedrock, which has led to throttling issues. This reflects the growing challenge of balancing affordability and scalability as organizations integrate AI into their workflows. Seth compared this potential shift to the early days of mobile phone plans, where usage caps and transactional billing were introduced to manage costs and demand. The hosts believe that businesses will increasingly seek transparency and control over their AI usage costs, with transaction models offering more predictability for high-demand scenarios.

They also anticipated ongoing tension between efficiency and data privacy. Organizations will need to find a balance between leveraging AI models’ vast capabilities and ensuring secure handling of sensitive information. They predict that advances in privacy-preserving AI techniques— such as federated learning and encrypted processing— may play a larger role in addressing these challenges in 2025. 

This episode is sponsored by Redpoint Security. Redpoint specializes in “Code Security. By Coders,” which is bolstered by years of experience testing applications and conducting code reviews against all types of applications. Redpoint also offers secure code training to help ground your teams in better security practices across the development lifecycle. So check out redpointsecurity.com for more information and put your company on a path to better security.

So let’s be serious for a moment, we know you still aren’t rocking some Absolute AppSec swag. It’s cold out and at the very least you need a new hat, or maybe a warm hoodie. Visit the merch store and pick up those needed items. Or just join Seth and Ken in Slack.

Stay Secure,

Seth & Ken

https://youtube.com/live/gHexk2F-ycE - Episode #229 - Software Supply Chain Security, 2024 Predictions - Seth and Ken kick off a 2024 with security predictions, including LLMs, dynamic scanning, process, and other possibilities in the near future.

https://youtu.be/fzhTbRVEvpo - Episode #162 - Mike McCabe (@mccabe615) - Cloud Security - The Absolute AppSec-ers return to talk about all things Cloud Security. Discussions on cloud security tools, various differences between AWS and Azure, infrastructure as code (IaC), and predictions on CloudSec merging with AppSec in the future.

https://youtu.be/SfuEfQsSCFY - Episode #157 - 2022 Predictions, Schema Libraries, NPM and Open Source Packages - The duo gives their application security predictions for 2022. A discussion on 3rd party library differences, in particular how URL/URI Schema libraries and parsing can lead to security flaws.

Absolute AppSec Happenings

https://portswigger.net/research/top-10-web-hacking-techniques-of-2024-nominations-open - We have highlighted the top 10 web hacking techniques for a few years on the podcast, running through the interesting ones. That being said, as mentioned in Slack #general, most of these techniques have not been useful to Seth during his day to day. It is good marketing, but how do you find the actual content?

https://www.veracode.com/blog/secure-development/innovating-secure-software-supply-chains-veracode-acquires-phylum - Kicking off the new year with more acquisitions in the supply chain space. There is a lot of innovation but also merging and acquisitions going on. Will be interesting to see how Veracode integrates Phylum technology into their platform.

https://www.whitehouse.gov/briefing-room/statements-releases/2025/01/07/white-house-launches-u-s-cyber-trust-mark-providing-american-consumers-an-easy-label-to-see-if-connected-devices-are-cybersecure/ - Talos shared this article in the #random Slack channel, but of general interest. The government wants to put labels on iOT devices showing they are secure. More hurdles for manufacturers, but does it really change the security of these devices (and services)?

Upcoming Events

Seth and Ken can be found across the globe and virtually.

January 23-24, 2025 - Harnessing LLMs for Application Security - Virtual Training - Next opportunity for the new course that focuses on strategies for using more than just the chat interface of an LLM to secure software.

February 14-15, 2025 - CactusCon, Mesa, AZ - Ken is speaking, Seth may or may not make an appearance for drinks, discussions, and some crocs&socks.

February ??, 2025 - Practical Secure Code Review - Virtual Training - The course that started it all! We will reprise this online with a training in Q1, if you have specific dates you would like to see, please reach out (via email or Slack).

June 28-29, 2025 - DEF CON Trainings, Seattle, WA - Seth and Ken will present one of the courses. Watch this space for additional details once it is fully posted.