By Tyler Rasmussen, VP of Cybersecurity
AI-powered CEO impersonation scams are a rapidly growing threat in 2025. Cybercriminals are leveraging advanced deepfake audio and video tools to convincingly mimic company executives and trick employees into transferring funds or revealing sensitive information. These scams draw on publicly available data (interviews, earnings calls, podcasts, social media videos) to train AI models that replicate a CEO’s speech, tone, and even facial expressions in real time, often leaving employees unable to distinguish real communications from fake ones.
Notable Recent Example
One of the most high-profile incidents involved a finance worker in Hong Kong who, during a sophisticated AI-driven video conference, was convinced to transfer approximately $25.6 million after seeing and hearing what seemed to be the company’s chief financial officer and other senior colleagues. The deepfake impersonations used in the scam were so convincing that, even after initial suspicion, the employee ultimately complied—demonstrating just how realistic these attacks have become.
Other recent cases include voice-cloned calls and video messages targeting employees at tech firms like Wiz and LastPass, as well as failed attempts foiled by vigilant staff who noticed something was off, such as inconsistencies in the executive’s speech patterns or unusual requests made outside business hours.
Methods and Risks
- Voice Cloning: With just a few seconds of recorded audio, AI tools can copy a CEO’s voice for phone calls or voicemails.
- Deepfake Video Calls: Real-time video impersonation is used in virtual meetings, often providing “face-to-face” urgency to financial requests.
- Text and Social Media: Personalized phishing via email, text, or messaging apps, sometimes combined with cloned audio or video.
- Scale and Damage: Reports suggest losses from such scams surpassed $200 million in Q1 2025 alone, not counting significant reputational and operational harm.
Protection Strategies
- Verify Independently: Never act solely on urgent instructions received through voice or video. Independently confirm requests via a different communication channel, such as an outbound phone call to a known-good number.
- Multi-Factor Authentication: Use robust authentication for sensitive financial actions or approvals, even for internal communications.
- Staff Training: Regularly educate employees on AI-driven social engineering and how to recognize red flags (e.g., tone, urgency, unusual requests, off-hours contact).
- Internal Security Protocols: Implement strict protocols for wire transfers and confidential information sharing, including independent verification—no exceptions for authority.
- Limit Public Data: Reduce executive public exposure by limiting the amount of high-quality audio and video released online.
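The independent-verification and no-exceptions rules above can be enforced in software as well as in policy. As a minimal sketch (all names, the threshold, and the approval logic are hypothetical illustrations, not a prescribed implementation), a transfer-release check might require a second, independent approver for large amounts and refuse self-approval regardless of the requester’s seniority:

```python
from dataclasses import dataclass, field

# Hypothetical threshold: transfers above this require two independent approvers.
APPROVAL_THRESHOLD = 10_000

@dataclass
class TransferRequest:
    amount: float
    requester: str
    approvals: set = field(default_factory=set)

    def approve(self, approver: str) -> None:
        # In practice, the out-of-band step happens here: the approver
        # calls the requester back on a known-good number before signing off.
        if approver == self.requester:
            raise ValueError("requester cannot approve their own transfer")
        self.approvals.add(approver)

    def can_release(self) -> bool:
        # Small transfers need one approval; large ones need two
        # independent approvers, with no exceptions for authority.
        required = 2 if self.amount > APPROVAL_THRESHOLD else 1
        return len(self.approvals) >= required
```

Even if a deepfaked “CFO” pressures an employee on a live video call, a control like this blocks the payment until a second person, reached through a separate channel, signs off.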
As AI-powered impersonation scams grow more sophisticated, organizations must stay one step ahead. Proactive education, regular security audits, and cross-functional collaboration are no longer optional; they’re critical. Don’t wait for a breach to expose vulnerabilities. Start strengthening your defenses today by reviewing your cybersecurity protocols, training your teams, and staying informed. The future of your organization’s security depends on the actions you take now.
Have a question about AI or Cybersecurity for your business? Contact us today!