What happened?
AI has made it far easier for cybercriminals to scale attacks, letting ransomware groups automate code generation, create polymorphic malware, and craft realistic deepfake social‑engineering lures. TRM Labs identified nine new groups using these tools over the past year, including APTLock, which has been linked to state actors, and AiLock, which markets itself as AI‑assisted and uses polymorphic defenses. These groups are also shifting tactics, moving from full encryption toward reputational and regulatory extortion, and laundering proceeds through mixers and non‑custodial exchanges.
Who does this affect?
Businesses across industries, from cable providers and healthcare to manufacturing and construction, are being targeted with phishing, credential theft, and tailored ransomware. Individual crypto users and influencers are also at risk from AI‑driven deepfake scams and wallet‑draining malware, while exchanges and mixers are being abused to launder the proceeds. Regulators, compliance teams, and security vendors face growing pressure as state‑linked actors and affiliate networks blur the line between crime and geopolitics.
Why does this matter?
This surge in AI‑enabled attacks comes alongside rising crypto scam losses (roughly $4.6 billion in 2024) and high‑profile takedowns, both of which chip away at user trust and slow adoption. The abuse of mixers and non‑custodial services is likely to drive tougher regulation and stricter KYC requirements, raising compliance costs for firms and potentially reducing liquidity. The probable market impact is lower investor confidence, greater volatility, and downward pressure on riskier crypto assets, along with higher costs for exchanges, custodians, and insurers that will likely be passed on to users.
