AI, ZeroTrust, and The Value of Your Data

8-25-2025 (Monday)

Hello, and welcome to The Intentional Brief - your weekly video update on the one big thing in cybersecurity for middle market companies, their investors, and executive teams.

I’m your host, Shay Colson, Managing Partner at Intentional Cybersecurity, and you can find us online at intentionalcyber.com.

Today is Monday, August 25, 2025, and we’re going to shift gears a bit on the show this week and NOT talk about ransomware crews, although - perhaps ironically and perhaps not - we are going to be talking about your data being taken and used to deliver value to others.

AI, ZeroTrust, and The Value of Your Data

In particular, I’m talking about an interesting discussion that surfaced this week about zero-trust network access (or ZTNA) provider Zscaler.

According to both a panel appearance last week with the Cloud Security Alliance and an earnings call transcript from 2024, Zscaler has apparently been taking the logs their customers generate via deep packet inspection and using this data to train their AI systems.

And not just a little, but a lot - to the tune of “three trillion logs from customers’ IT estates every week”.

As a primer for those who may not live in this world, deep packet inspection (in this context, TLS inspection) is a deployment architecture that breaks the encryption between client and server, essentially by sitting as a proxy between the two: terminating the connection, inspecting the traffic, and then re-encrypting it on the other side. While this can offer some real benefit, it also slows and complicates things, and now can pose some additional challenges.

Specifically, because this technology has full visibility into the web traffic, these logs often contain highly sensitive details. The earnings call included this quote:

“These are complete logs that have structured and unstructured data, including the full URL. We leverage this proprietary data to train AI models that power innovations throughout our platform.”
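To make the “full URL” point concrete, here’s a minimal sketch (using a hypothetical URL and field names, not anything from Zscaler’s actual logs) of the difference between what a passive network observer sees on a normal TLS connection and what a TLS-inspecting proxy can log:

```python
from urllib.parse import urlsplit, parse_qs

# Hypothetical URL of the kind that ends up in full-URL logs.
url = "https://app.example.com/reset-password?user=jane.doe%40corp.com&token=8f3a9c"

parts = urlsplit(url)

# A passive observer of ordinary TLS traffic sees roughly the hostname
# (via DNS and the SNI field) -- not the path or the query string.
visible_without_dpi = parts.hostname

# A TLS-inspecting proxy sees the decrypted request and can log all of it,
# including query parameters that often carry identifiers and secrets.
visible_with_dpi = {
    "host": parts.hostname,
    "path": parts.path,
    "query": parse_qs(parts.query),
}

print(visible_without_dpi)            # app.example.com
print(visible_with_dpi["path"])       # /reset-password
print(visible_with_dpi["query"])      # includes the user's email and a reset token
```

The gap between those two views is exactly why “complete logs … including the full URL” is such a consequential phrase.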

So it’s not as if Zscaler is unaware of how sensitive this data is. Additionally, these connections are typically encrypted with TLS, so most teams, even technical security teams, reasonably assume that this data is protected end-to-end.

Short of not using the deep packet inspection capability, or not using Zscaler or a ZTNA provider at all, there’s really no way to know which of your data is being used this way, or what the downstream implications of this “exposure” (air quotes) might be.

We’re going to see more and more of these instances where AI creates unintentional exposure (at least from a data leakage perspective). It won’t always look this obvious, so as product companies shim AI into their products left and right, be mindful, too, of these data misappropriation opportunities.

Early this morning, Zscaler posted a response to this conversation, attempting to clarify how they have trained their AI. In particular, they state:

“We only use data or metadata that does not contain customer or personal data for AI model training.”

“Within that contained environment, customers can harness the power of their own data. Logs, transactions, and telemetry generated by their use of our platform are used to improve outcomes for their organization alone. This means customers benefit directly from their own signals, whether it’s for risk modeling, AI copilots, or policy enforcement, without having to trade away autonomy or privacy or security.”

“To re-emphasize: customers’ proprietary information or personal data in the Zscaler logs is never shared outside of the customer boundary.”

To me, this reads less like they’re truly not using your data and more like they’re using it with the thought that since you benefit from it, you’ll be fine with it. And maybe you will, or maybe not.

A solid AI policy, paired with tight coordination with your business stakeholders and third parties (in addition to network logs and maybe a skeptical eye towards those who want to be close to your data), can really help at least raise awareness, if not drive down this risk.

Fundraising

From a fundraising perspective, we saw only $5.3B in newly committed capital, which is one of the lower numbers we’ve seen up to this point for both the quarter and the year.

That said, there are a few cyber IPOs in the pipeline including SASE provider Netskope, who is reporting “a 30.7% jump in revenue in the first half of fiscal 2026”. They’re looking to raise $500M at a valuation of $5B, but also “posted a net loss of $169.5 million on revenue of $328.5 million in the six months ended July 31, narrowing from a net loss of $206.7 million on revenue of $251.3 million a year earlier.”

Will be interesting to see how Q4 comes together when we get everyone back in the office in September.

A reminder that you can find links to all the articles we covered below, find back issues of these videos and the written transcripts at intentionalcyber.com, and we’ll see you next week for another edition of the Intentional Brief.

Links

https://www.thestack.technology/zscaler-earnings-logs-ai/

https://x.com/vxunderground/status/1958230321532371161

https://www.aol.com/news/cybersecurity-firm-netskope-files-us-162250053.html

https://www.zscaler.com/blogs/company-news/zscalers-commitment-to-responsible-ai
