AI Malware: Nothing New or Next Big Thing?
11-10-2025 (Monday)
Hello, and welcome to The Intentional Brief - your weekly video update on the one big thing in cybersecurity for middle market companies, their investors, and executive teams.
I’m your host, Shay Colson, Managing Partner at Intentional Cybersecurity, and you can find us online at intentionalcyber.com.
Today is Monday, November 10, 2025, and it’s time to have a somewhat frank discussion about AI as a threat vector for your company.
AI Malware: Nothing New or Next Big Thing?
There has been a bit of a blitz of media coverage of “AI-developed malware” over the past week, based in large part on a report from Google Threat Intelligence. Dan Goodin at Ars Technica has a great writeup laying out why we might not need to be as concerned as the marketing teams would have us believe.
But - before we get to his analysis - let’s do a quick tour of the headlines.
Security researchers at ESET reviewed one sample, called it “the first AI-powered ransomware,” and promptly gave it a clever name: PromptLock. Their press release claimed that “tools like PromptLock highlights a significant shift in the cyber threat landscape.”
But Goodin’s reporting notes this is nothing new. The folks at ConnectWise have been working from this playbook since October 2024, leaning heavily on a 2024 report from OpenAI about the threats AI could pose.
Pen testing and bug bounty firm Bugcrowd also weighed in during this window - with a report looking at what they called “12 months of AI innovation” for hackers, featuring phrases like “Whoa! In just a year, AI dramatically proved its value to hackers” and something they’re calling “the three T’s of AI hacking.”
But to come back to Goodin’s reporting, he notes that the five malware families Google identified were:
“easy to detect, even by less-sophisticated endpoint protections that rely on static signatures. All samples also employed previously seen methods in malware samples, making them easy to counteract. They also had no operational impact, meaning they didn’t require defenders to adopt new defenses.”
“AI isn’t making any scarier-than-normal malware,” [a] researcher said. “It’s just helping malware authors do their job. Nothing novel. AI will surely get better. But when, and by how much is anybody’s guess.”
Indeed, it sounds like there’s both nothing new here and nothing we need to do to bolster our defenses - yet.
If anything, it’s affirmation that the bigger risk around AI is that your own people will use it without thinking through the privacy and security implications - that you’ll build a tremendous amount of shadow AI “security debt” in a very short amount of time, with limited visibility and controls, and be left with a huge amount of both unknown and unknowable exposure.
In fact, this was the bit from the Bugcrowd report that stood out the most to me: they note that 93% of their survey respondents report that “companies using AI tools [have] introduced a new attack vector for threat actors to exploit.”
So if you’ve been considering tools that would help you better manage the potential security risks of AI within your enterprise, now’s the time to get those projects moving. That might be something as simple as standardizing on Copilot over another provider, but it could also mean deploying a CASB or moving toward a zero trust architecture that gives you visibility and granular control over how your users and data interact with these various platforms.
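If you want a quick feel for what that visibility looks like before a full CASB or zero trust rollout, here’s a minimal sketch in Python that scans a proxy or secure web gateway log export for traffic to well-known generative AI services. The CSV column names (user, dest_host) and the domain watchlist are illustrative assumptions, not a definitive list - a real CASB or SSE platform does this natively and far more thoroughly.

```python
# Minimal shadow AI visibility check: count traffic to well-known
# generative AI services from a proxy / secure web gateway log export.
# Assumptions: the export is a CSV with "user" and "dest_host" columns
# (adjust to your tooling), and the watchlist below is illustrative only.
import csv
from collections import Counter

AI_DOMAINS = {
    "chatgpt.com",
    "api.openai.com",
    "claude.ai",
    "gemini.google.com",
    "copilot.microsoft.com",
    "perplexity.ai",
}

def shadow_ai_hits(log_path: str) -> Counter:
    """Count requests per (user, AI domain) pair found in the log."""
    hits = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = (row.get("dest_host") or "").lower()
            for domain in AI_DOMAINS:
                # Match the domain itself or any subdomain of it.
                if host == domain or host.endswith("." + domain):
                    hits[(row.get("user") or "unknown", domain)] += 1
    return hits

if __name__ == "__main__":
    for (user, domain), count in shadow_ai_hits("proxy_log.csv").most_common(20):
        print(f"{user:<30} {domain:<25} {count}")
```

Even a rough report like this is usually enough to show which teams are already deep into unsanctioned AI tools - and where the governance conversation needs to start.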
Regardless, just as those on the business side are working to make plans to leverage AI in their workflows, your security teams need to be working to make similar plans to help them leverage it securely - in whatever form that takes for the risk tolerance and larger business context of your own organization.
Meanwhile, the hype cycles are turned up to 11 on both the FUD side for security teams and the ROI side for the execs. The truth is somewhere in the middle, and our job remains to keep it between the lines as we go down the road.
Fundraising
From a fundraising perspective, we find ourselves with a relatively light week, turning in just over $6B in newly committed capital and no large fund announcements.
I wouldn’t read this as an indicator of anything in particular, and the macro uncertainty continues here in the US with the ongoing government shutdown and a potential political swing if Tuesday’s election results are extrapolated out. In fact, the major indices all saw weekly losses, with the NASDAQ taking its hardest weekly hit since April.
A reminder that you can find links to all the articles we covered below, find back issues of these videos and the written transcripts at intentionalcyber.com, and now sign up for our monthly newsletter, the Intentional Dispatch.
We’ll see you next week for another edition of the Intentional Brief.
Links
https://cloud.google.com/blog/topics/threat-intelligence/threat-actor-usage-of-ai-tools
https://www.connectwise.com/blog/the-dark-side-how-threat-actors-are-using-ai
https://www.bugcrowd.com/wp-content/uploads/2024/10/Inside-the-Mind-of-a-Hacker-2024.pdf