AI cuts cyberattack breakout time to 29 minutes, reveals CrowdStrike report

CrowdStrike's 2026 Global Threat Report shows AI speeding up cyberattacks, with breakout time dropping to 29 minutes and a fastest case of 27 seconds.


Attackers are moving faster, and artificial intelligence is helping them do it. That is the core message from the 2026 Global Threat Report by CrowdStrike. The firm's latest findings show that AI is not only giving criminals new tools, but also creating new weak points inside companies.

In 2025, the average "breakout time" — the span between an attacker's first access and when they start moving deeper into a system — dropped to 29 minutes, 65% faster than the year before. The fastest case on record took just 27 seconds. In one incident, attackers began pulling data out within four minutes of getting in.
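Breakout time is simply the elapsed interval between two logged events. A minimal Python sketch with hypothetical timestamps (the event names and times below are illustrative, not from CrowdStrike's data) shows how a defender might compute it from intrusion logs:

```python
from datetime import datetime

# Hypothetical log entries: time of initial access and time of the
# first observed lateral movement. Values are invented for illustration.
initial_access = datetime.fromisoformat("2025-03-14T10:02:11")
lateral_movement = datetime.fromisoformat("2025-03-14T10:31:11")

# Breakout time = first lateral movement minus initial access.
breakout = lateral_movement - initial_access
minutes = breakout.total_seconds() / 60
print(f"Breakout time: {minutes:.0f} minutes")  # prints "Breakout time: 29 minutes"
```

The metric matters because it bounds the defender's reaction window: containment actions taken after breakout must chase the attacker across multiple systems rather than one.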

The report draws on threat tracking tied to more than 280 named adversaries. It shows that AI is now part of both the attack and the target.

AI tools become entry points

In more than 90 organizations, attackers fed harmful prompts into legitimate generative AI tools. Those prompts pushed the systems to create commands that stole login details and cryptocurrency. In other cases, threat actors found flaws in AI development platforms, planted ransomware, and set up fake AI servers that looked like trusted services to capture sensitive data.

The message is clear: AI systems are no longer just tools employees use. They are part of the attack surface.

AI-driven activity rose 89% year over year. Criminal groups and state-backed actors used AI for tasks such as scanning networks, dumping credentials, and hiding traces of their work. These intrusions often moved through trusted user accounts, SaaS apps, and cloud systems, blending into normal traffic and shrinking the time defenders have to react.

State actors scale up

Several known groups increased their use of AI. Russia-linked FANCY BEAR deployed LLM-enabled malware known as LAMEHUG to automate reconnaissance and collect documents. The eCrime group PUNK SPIDER used AI-generated scripts to speed up credential dumping and wipe forensic evidence. DPRK-linked FAMOUS CHOLLIMA created AI-generated personas to expand insider operations.

Activity tied to China rose 38% in 2025, with logistics firms seeing an 85% increase in targeting. Two-thirds of the vulnerabilities exploited by China-linked actors gave them immediate system access, and 40% focused on internet-facing edge devices.

North Korea-linked activity surged even more sharply. Incidents connected to FAMOUS CHOLLIMA more than doubled. Another group, PRESSURE CHOLLIMA, carried out a $1.46 billion cryptocurrency theft, described in the report as the largest single financial heist on record.

Zero days, cloud, and fake CAPTCHAs

The report also points to a rise in zero-day exploitation. About 42% of vulnerabilities were abused before they were publicly disclosed. Attackers used these flaws for initial access, remote code execution, and privilege escalation.

Cloud-focused intrusions increased 37% overall. Among state-linked actors, attacks targeting cloud environments jumped 266%, often for intelligence gathering.

Another shift: fake CAPTCHA pages. The use of "I'm not a robot" lures climbed 563%. Instead of verifying users, these pages trick victims into downloading malware. Criminal groups appear to be moving away from fake browser update prompts and toward these CAPTCHA traps.

Trusted relationships under strain

The report describes 2025 as the year of the evasive adversary. Attackers leaned on trusted relationships — supply chain partners, legitimate software, internal systems, and even employees — to get inside and stay hidden.

"This is an AI arms race," said Adam Meyers, head of counter adversary operations at CrowdStrike. "Breakout time is the clearest signal of how intrusion has changed. Adversaries are moving from initial access to lateral movement in minutes. AI is compressing the time between intent and execution while turning enterprise AI systems into targets. Security teams must operate faster than the adversary to win."

The numbers suggest that speed now defines modern attacks. As AI use spreads across enterprises, so does the pressure on security teams to keep up.