Is AI Embedding Risk Into Your Organization?

9–1–2025 (Monday)

Hello, and welcome to The Intentional Brief - your weekly video update on the one big thing in cybersecurity for middle market companies, their investors, and executive teams.

I’m your host, Shay Colson, Managing Partner at Intentional Cybersecurity, and you can find us online at intentionalcyber.com.

Today is Monday, September 1, 2025, and it’s Labor Day here in the United States, or Labour Day in Canada - the ‘u’ being very important.

This week, we’re going to talk a little bit about how AI in your environment can create risk. There are two areas worth exploring a little more deeply here.

The first area is pretty obvious: so-called “shadow AI” - or instances where employees are using AI without following policy, or using appropriate guardrails, creating the potential for exposure.

The second area is less obvious: AI in your defensive tool stack. We’ll talk in a minute about how that might come to life. But first - shadow AI.

How AI Can Embed Risk In Your Organization

Shadow-AI

It’s hard to go for even a single meeting without hearing about AI, and I can tell you that it’s become a common refrain amongst C-suite executives looking to find ways to add value to their companies (tech companies and non-tech companies, alike).

As a result, employees are experimenting with these tools as well. Sometimes, in a fairly straightforward way by using sites like ChatGPT or Perplexity.

In these instances, it can be difficult to understand what, exactly, users are doing with your data. That’s part of the appeal of using an enterprise approach like Microsoft’s Copilot or Google Gemini through your Google Workspace subscription.

Price, of course, can be off-putting here. Currently, Microsoft is charging $30 per user per month for Copilot in their M365 offering, which adds up quickly for a company of any size.

We see clients who have AI policies, and have approval processes for those on the business side to use these tools, but when you look a little deeper, the truth is that there aren’t many requests, but there’s already a lot of usage.

If you take a look at your corporate firewall logs for the last 30 days, I suspect you’ll see plenty of traffic to ChatGPT and other AI-based chat sites, not to mention the areas where AI is being baked in by vendors and deployed into your existing on-prem and web-based tools without much concern.

As with many things, the basics will really help us manage this risk here, namely training our people about the AI risks, building a strong third-party risk management capability, and partnering closely with the business to enable AI-driven transformations in a way that’s risk aware.

Speaking of risk-aware, I’m seeing some interesting moves from attackers targeting AI that might be in your defensive security stack, so maybe we can start by “eating our own dogfood” with regards to risk-informed AI usage.

Exploiting AI In The Defensive Tool Stack

Over on the Malware Analysis blog, there was an interesting post last week showing some phishing campaigns that are now explicitly targeting AI embedded models.

The phishing email in question uses a fake expiring password message in an attempt to get users to click and provide credentials, but buried in the header is some instructions that look a lot like a prompt you might provide to an LLM.

This particular example appears mostly designed to generate activity within the model that might slow, distract, or otherwise confound the intended purpose (looking for spam, one imagines). But don’t focus too much on the prompt - look at the technique.

If we’re feeding data - including things like email headers - to AI models, they may be vulnerable to attacks like this. We know that Security Operations Centers (SOCs), suffering from alert fatigue, are using AI tools to triage alerts. Attacks like this can throw those tools off the scent, perhaps for this message, or perhaps for all messages received after this one if it can crate a resource-intensive analysis loop.

Don’t miss the pivot here: attackers are now attempting to exploit both your people and your defensive tools with the exact same message. They are building on all of the existing evasion techniques:

  • Using SendGrid to increase chances of delivery;

  • Adding an abused Microsoft Dynamics link as the first hop to appear more trustworthy;

  • Blocking the automated crawling / scanning of their domain with Captcha;

  • Obfuscating javascript on the malicious login page;

  • Collecting user data to profile the victim when the land;

  • Encrypting and wrapping the malicious payload;

  • And even adding purposeful errors in the phishing message to make I harder to detect.

This attack is novel today, but make no mistake it will be commodity soon.

Use this as a chance to think about how your tools - now featuring new AI features - might actually be performing less well or adding unintended risk to your operation.

Model out a risk management exercise that you can use to bring to your stakeholders on the business side to show not only an example, but build some camaraderie around the idea that this is a hard problem and it’s happening to the IT and Security teams too.

Start now, because this is all going to happen quickly. But maybe not today - it is Labor Day, after all.

Fundraising

From a fundraising perspective, we saw a very light week with only about $1B in newly committed capital. Maybe not surprising, given the end of vacation season, and - of course - the holiday.

A reminder that you can find links to all the articles we covered below, find back issues of these videos and the written transcripts at intentionalcyber.com, and we’ll see you next week for another edition of the Intentional Brief.

Links

https://malwr-analysis.com/2025/08/24/phishing-emails-are-now-aimed-at-users-and-ai-defenses/

Previous
Previous

Defaults in Documentation = Vulnerability?

Next
Next

AI, ZeroTrust, and The Value of Your Data