AI Browsers: Cyber Risk via Shadow AI
10–27–2025 (Monday)
Hello, and welcome to The Intentional Brief - your weekly video update on the one big thing in cybersecurity for middle market companies, their investors, and executive teams.
I’m your host, Shay Colson, Managing Partner at Intentional Cybersecurity, and you can find us online at intentionalcyber.com.
Today is Monday, October 27, 2025, and we’re going to talk about a pattern that we all knew would be there but we’re only just now starting to see come to fruition, which centers on cyber risk exposure via new AI tools.
AI Browsers: Cyber Risk via Shadow AI
I think we’re all, by now, familiar with the notion of ‘Shadow IT’ - enabled largely by the rise of SaaS platforms. Similarly, we’re now seeing lots of Shadow AI enter the space, where either existing tools add AI in a way that circumvents existing third-party risk management or procurement controls, or other controls like removing local administrator privileges are enabling users to open the door to AI tools.
Regardless of the mechanism, the source of the risk getting the most attention this week is AI Browser agents, as both ChatGPT and Perplexity released these capabilities in an effort to unseat Google Chrome (and the ubiquitous search integration into the address bar).
ChatGPT’s Atlas and Perplexity’s Comet were both immediately poked and prodded by security researchers, as they should be. Unfortunately, they both seem to face the same challenge around what’s being called “prompt injection attacks.”
Essentially, these attacks use the natural mechanics of the way LLMs interact to get them to perform insecurely - such as giving up data, making unintended purchases or social media posts, etc.
Browser company Brave, who clearly has a dog in this fight, is leading the charge on this research. I would note that the researchers themselves even admit the inherent difficulties around addressing these issues, writing:
“until we have categorical safety improvements […], agentic browsing will be inherently dangerous and should be treated as such”
The attacks are not just limited to OpenAI and Perplexity, either. Microsoft’s CoPilot saw an attack dubbed “CoPhish” debut this week that allowed attackers to steal OAuth tokens (like noted above) via Copilot Studio.
We also saw a “worm” (old-school!) making its way around the developer community leveraging some very clever techniques that attack LLM-based code tools (which all major platforms offer). Things like using Google Calendar as a Command and Control server, using invisible Unicode characters that aren’t seen in a console even if a human does line-by-line code review, and more.
While these are all early “proof of concept” mechanics, the signal here is clear: we’re moving very quickly to introduce tools and capabilities that aren’t as secure as we’d want them to be. On top of that, many of our traditional security mechanics are not going to be well-suited to addressing these risks, because of where and how these browsers operate. Essentially, I think we should treat these the same way we would treat other insider threats, because that’s exactly how they operate.
That means we need to continue to invest in securing the identity, the endpoints, and the network connections in a way that gives us a fighting chance at protecting ourselves from this new form of insider threat.
Attackers are going to continue to be more clever than we can be, and the rate of innovation from both AI companies themselves and companies backporting AI into their products and platforms is going to continue at a breakneck pace. Our challenge, as defenders, remains to do our best to manage the risks we can with the tools we’ve got - again, identities, endpoints, and the connections between them.
Suddenly, zero trust is starting to sound more and more appealing, but it all still hinges on the basics on using technical controls to reduce the areas of reach for these tools in a thoughtful way, ensuring that requests are validated by a human indicating intent, and limiting resources like data and systems to the reasonable set your humans need to complete their work.
Beyond that, frankly, you’re just allowing the attack surface to increase.
Fundraising
From a fundraising perspective, back down to more realistic numbers, but still solid with more than $15.4B in newly committed capital across more than a dozen funds.
At this rate, we’re just one big week from surpassing the Q3 fundraising totals, and one more big week after that to take the top spot from Q1 and Q2 at just over $200B.
A reminder that you can find links to all the articles we covered below, find back issues of these videos and the written transcripts at intentionalcyber.com, and now sign up for our monthly newsletter, the Intentional Dispatch.
We’ll see you next week for another edition of the Intentional Brief.
Links
https://techcrunch.com/2025/10/25/the-glaring-security-risks-with-ai-browser-agents/
https://www.perplexity.ai/grow/comet