Generative AI has become a defining force in cybersecurity. Its potential is double-edged: while it offers defenders new tools to detect and respond to threats, it also enables attackers to automate, scale, and personalize attacks with alarming speed and realism.
CBTW’s cybersecurity teams are already witnessing this shift in red team operations, threat intelligence, and client engagements. Our experience shows that organizations can no longer rely on static, rules-based security models. Today’s threat actors are using AI tools to write persuasive phishing emails, clone voices, generate malicious code, and even simulate internal communications. It’s a new battleground, and both sides are using AI to compete.
Attackers Are Already Using AI
One of the clearest signs of this transformation is the rise in voice phishing, or vishing. In late 2024, such attacks surged by 442%, driven by AI-generated voice cloning and deepfake audio. Criminals can now impersonate executives, vendors, and even family members with convincing precision. At the same time, tools like WormGPT and other jailbroken LLMs are becoming more widely available in underground forums. These tools let cybercriminals automate email generation, write malware, and craft advanced social engineering content. They can also be used to simulate chats and impersonate help desks or finance teams.
Our greatest concern lies not in the existence of these tools, but in the unprecedented ease with which they can now be accessed. In simulated phishing attacks conducted as part of red teaming services, we’ve seen AI-generated messages significantly increase click-through rates. These messages bypass basic spam and phishing filters, fool users, and even deceive seasoned professionals.
Autonomous Agents Are Taking Shape
Some experts are now warning of agentic AI: autonomous AI systems capable of completing multi-step cyber operations with little or no human input. While these technologies are still emerging, early frameworks suggest how they might be used to carry out reconnaissance, scanning, exploitation, and lateral movement without active human control.
At CBTW, our internal research teams have been testing chained LLM agents in sandbox environments to understand how these could be used or misused. While we believe fully autonomous attacks are not yet widespread, the pace of advancement suggests they may become viable sooner than expected. This means organizations must begin preparing today.
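To make the idea of a chained agent concrete, here is a minimal sketch of the pattern: each stage consumes the previous stage's output, and a human-in-the-loop gate can halt the chain at any step. The stage names, the `ChainedAgent` class, and the lambda stand-ins for model calls are illustrative assumptions, not our actual research tooling; a real chained agent would query an LLM at each stage.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class AgentStage:
    """One step in a chained pipeline; takes the prior stage's output as context."""
    name: str
    run: Callable[[str], str]

@dataclass
class ChainedAgent:
    stages: List[AgentStage]
    log: List[str] = field(default_factory=list)

    def execute(self, objective: str, approve: Callable[[str], bool]) -> str:
        """Run stages in order; a human reviewer can block the chain at any step."""
        context = objective
        for stage in self.stages:
            if not approve(stage.name):  # human-in-the-loop gate
                self.log.append(f"{stage.name}: blocked by reviewer")
                break
            context = stage.run(context)
            self.log.append(f"{stage.name}: {context}")
        return context

# Stand-ins for LLM calls; a real agent would prompt a model here.
recon = AgentStage("recon", lambda ctx: f"targets identified for '{ctx}'")
scan = AgentStage("scan", lambda ctx: f"open services enumerated ({ctx})")

agent = ChainedAgent(stages=[recon, scan])
# Reviewer approves only the first stage, so the chain stops there.
result = agent.execute("lab-network-01", approve=lambda name: name == "recon")
```

The approval gate is the important design choice: it is exactly the control that disappears once an attacker runs the same chain fully autonomously.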
Defensive AI Is No Longer Optional
Against this backdrop, organizations must integrate AI into their defensive posture. Traditional security systems, based on static rules, manual reviews, and retrospective analysis, cannot keep up with AI-enhanced threats. Darktrace offers one of the most mature defensive AI systems available. Its Enterprise Immune System uses self-learning AI to model normal behavior across users, devices, and systems. It can detect and respond to anomalies in real time, without relying on predefined threat signatures.
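The core idea of behavioral baselining can be illustrated in a few lines. The sketch below learns a per-entity baseline and flags large deviations using a simple z-score; it is a toy illustration of the concept, not Darktrace's actual algorithm, and real products model far richer features than a single activity count.

```python
import statistics

class BehaviorBaseline:
    """Learns a per-entity baseline of activity and flags large deviations."""

    def __init__(self, threshold: float = 3.0):
        self.history: dict[str, list[float]] = {}
        self.threshold = threshold  # deviations beyond this many stdevs are flagged

    def observe(self, entity: str, value: float) -> None:
        self.history.setdefault(entity, []).append(value)

    def is_anomalous(self, entity: str, value: float) -> bool:
        past = self.history.get(entity, [])
        if len(past) < 5:  # not enough data to judge yet
            return False
        mean = statistics.mean(past)
        stdev = statistics.stdev(past) or 1e-9  # avoid divide-by-zero
        return abs(value - mean) / stdev > self.threshold

baseline = BehaviorBaseline()
for count in [10, 12, 11, 9, 10, 11]:  # e.g. a user's normal daily login counts
    baseline.observe("alice", count)

baseline.is_anomalous("alice", 11)  # within the learned range: not flagged
baseline.is_anomalous("alice", 90)  # sudden spike: flagged
```

Note that nothing here depends on a predefined threat signature: the detector only knows what "normal" looks like for each entity, which is what lets this class of system catch novel, AI-generated attack behavior.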
We often pair Darktrace capabilities with human-led threat analysis and red team intelligence. In a recent penetration test for a luxury goods manufacturer, our team identified security gaps that had gone undetected for years. We then worked with the client to build a more dynamic threat model incorporating AI-enhanced visibility. In another engagement, we helped a trade union combine red team testing with a security awareness campaign. This project improved employee detection rates and enhanced overall security posture.
Rethinking Cyber Resilience
Building resilience in this new arms race means organizations must:
- Continuously assess how AI might be used against them
- Update incident response playbooks to include AI-generated content and impersonation
- Test security awareness with AI-generated phishing simulations
- Use adaptive monitoring tools, such as Darktrace, to identify unknown threats
We work with clients to operationalize these steps. Our team helps define AI threat models, integrate detection and response workflows, and assess the security maturity of both in-house and third-party systems.
What We Recommend
To prepare for AI-driven threats, organizations should:
- Conduct an AI threat surface assessment
- Evaluate vendor and third-party exposure to generative AI attacks
- Deploy AI-powered monitoring tools across network, cloud, and endpoints
- Incorporate AI-based threat scenarios into tabletop exercises
- Maintain human oversight to contextualize and validate AI-generated alerts
These are not one-off tasks. AI threats evolve rapidly, and defenses must be updated just as often. By making AI part of your cybersecurity foundation, you reduce the risk of being outpaced by increasingly sophisticated attacks.
Defend with Intelligence
The AI arms race in cybersecurity is no longer hypothetical. Attackers are already leveraging these tools, and the pace of adoption is only accelerating. Organizations that lag behind will find themselves outmaneuvered by adversaries that never sleep and never stop learning.
We believe that the best defense is one that adapts as quickly as the threat landscape. Through a combination of AI-enhanced detection, red team intelligence, and strategic partnerships with technology leaders like Darktrace, we help clients stay ahead of the curve.
The cyber battlefield has changed. It is time to fight AI with AI.
AI is rapidly reshaping the threat landscape, and this evolution is far from over. If you’re considering what’s next for your cyber strategy, we’d be happy to share what we’re seeing in the field.
Let’s talk: https://cbtw.tech/contact/