FBI Issues Urgent Advisory on AI-Powered Social Engineering
The FBI and CISA have issued a joint cybersecurity advisory warning organizations about a dramatic increase in AI-generated deepfake attacks targeting corporate executives. The advisory, released February 7, 2026, details how threat actors are using generative AI to create convincing voice clones and real-time video deepfakes to authorize fraudulent financial transactions.
Since October 2025, the FBI's Internet Crime Complaint Center (IC3) has received over 400 reports of deepfake-enabled business email compromise (BEC) attacks, with combined losses exceeding $145 million.
Attack Methodology
How the Attacks Work
Phase 1: Reconnaissance
├── Scrape executive voice samples from earnings calls, interviews, podcasts
├── Collect video footage from social media, conferences, webinars
└── Map organizational hierarchy and financial approval chains
Phase 2: AI Model Training
├── Train voice cloning models on as little as 15-30 seconds of audio
├── Generate real-time video deepfakes for video calls
└── Create synthetic email writing styles matching target
Phase 3: Execution
├── Initiate urgent video/voice call impersonating CEO/CFO
├── Request emergency wire transfer or vendor payment change
├── Use deepfake video in Teams/Zoom call to "confirm" identity
└── Funds transferred to attacker-controlled accounts
Real-World Incidents
| Date | Target | Method | Loss |
|---|---|---|---|
| Jan 2026 | US Financial Services Firm | CEO voice clone phone call | $25.6M |
| Dec 2025 | European Manufacturer | CFO deepfake video on Teams | $18.3M |
| Nov 2025 | Healthcare Provider | Board member voice impersonation | $11.2M |
| Oct 2025 | Energy Company | CEO deepfake approving vendor change | $8.7M |
In the largest reported case, attackers used a real-time deepfake video call impersonating a company's CEO during a scheduled Microsoft Teams meeting. The deepfake was convincing enough that the CFO authorized a $25.6 million wire transfer to what appeared to be a legitimate acquisition escrow account.
Threat Actor Capabilities
Current State of Deepfake Technology
| Capability | 2024 | 2026 |
|---|---|---|
| Voice cloning quality | Detectable artifacts | Near-indistinguishable |
| Required training audio | 5-10 minutes | 15-30 seconds |
| Real-time video deepfake | Laggy, obvious | Smooth, convincing |
| Cost of tools | $10,000+ | Under $500 |
| Languages supported | English only | 15+ languages |
| Detection evasion | Low | High |
The FBI notes that commoditized AI tools have dramatically lowered the barrier to entry, with some deepfake-as-a-service platforms available on dark web marketplaces for as little as $200 per month.
Detection Indicators
Signs of a Deepfake Attack
Audio/Voice Calls:
- Slight latency or unnatural pauses in conversation
- Inability to handle unexpected questions or topic changes
- Background audio that doesn't match claimed location
- Caller avoids being called back on known number
Video Calls:
- Inconsistent lighting on face vs. background
- Unnatural eye blinking patterns or gaze direction
- Lip sync slightly off from audio
- Hair edges or earrings that flicker or distort
- Caller keeps face centered and avoids turning head
Behavioral Red Flags:
- Unusual urgency bypassing normal approval processes
- Request to keep transaction confidential
- New payment details or unfamiliar bank accounts
- Executive calling outside normal business hours from unusual location
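The behavioral indicators above lend themselves to a simple triage rule: escalate any payment request that trips a high-weight red flag before funds move. The sketch below is illustrative only; the indicator names, weights, and threshold are assumptions, not part of the FBI advisory.

```python
# Hypothetical triage helper: scores the behavioral red flags listed above
# and decides whether a payment request needs out-of-band verification.
# Weights and threshold are illustrative, not from the advisory.
RED_FLAGS = {
    "urgency_bypasses_approvals": 3,
    "confidentiality_requested": 2,
    "new_payment_details": 3,
    "off_hours_unusual_location": 1,
}

ESCALATION_THRESHOLD = 3  # any single high-weight flag triggers escalation


def should_escalate(observed_flags):
    """Return (score, escalate) for a set of observed indicator names."""
    score = sum(RED_FLAGS.get(flag, 0) for flag in observed_flags)
    return score, score >= ESCALATION_THRESHOLD
```

For example, a request with new payment details alone scores 3 and escalates, while an off-hours call by itself scores 1 and merely gets logged.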
Recommended Defenses
Organizational Controls
- Implement multi-person authorization for all wire transfers above a defined threshold
- Establish verbal verification protocols using pre-shared code words or callback to known numbers
- Mandate in-person or secondary channel confirmation for transactions over $100,000
- Create a "no exceptions" policy — legitimate executives will understand verification delays
- Limit public executive media exposure where possible (earnings calls, social media videos)
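The pre-shared code word control above can be hardened into a challenge-response so the secret is never spoken aloud on a call that may itself be deepfaked. A minimal sketch, assuming a secret enrolled out-of-band on the executive's own device; all function names and parameters here are illustrative:

```python
import hashlib
import hmac
import secrets

# Challenge-response variant of the pre-shared code word control.
# The shared secret is enrolled out-of-band and never spoken on the call.


def issue_challenge():
    """Finance staff generate a one-time challenge and read it to the caller."""
    return secrets.token_hex(8)


def compute_response(shared_secret: bytes, challenge: str) -> str:
    """The real executive computes this on an enrolled device, not on the call."""
    digest = hmac.new(shared_secret, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]


def verify_response(shared_secret: bytes, challenge: str, response: str) -> bool:
    """Constant-time comparison prevents timing side channels."""
    expected = compute_response(shared_secret, challenge)
    return hmac.compare_digest(expected, response)
```

Because each challenge is single-use, a recorded answer from an earlier call cannot be replayed, which a static code word cannot guarantee.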
Technical Controls
- Deploy AI-powered deepfake detection on video conferencing platforms
- Enable advanced anti-spoofing on phone systems (STIR/SHAKEN)
- Implement email authentication (DMARC, DKIM, SPF) to prevent domain impersonation
- Use hardware security keys for executive account authentication
- Monitor for executive voice/video scraping on public platforms
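The email authentication bullet above is only effective if the published DMARC policy actually enforces rejection. A minimal sketch of checking that posture; the record string is an illustrative example, and production code would query DNS for the `_dmarc.<domain>` TXT record rather than use a literal:

```python
# Sketch: parse a DMARC TXT record and confirm it enforces a reject policy.
# The example record is illustrative; real code would resolve it via DNS.


def parse_dmarc(record: str) -> dict:
    """Split a DMARC TXT record into a tag -> value mapping."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if "=" in part:
            key, _, value = part.partition("=")
            tags[key.strip()] = value.strip()
    return tags


def enforces_reject(record: str) -> bool:
    """True only for a valid DMARC1 record with policy p=reject."""
    tags = parse_dmarc(record)
    return tags.get("v") == "DMARC1" and tags.get("p") == "reject"


example = "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```

A `p=none` or `p=quarantine` policy still allows spoofed mail from the executive's domain to reach inboxes, so monitoring-only records should be flagged during the rollout.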
Employee Training
- Conduct deepfake awareness training for all finance and executive staff
- Run tabletop exercises simulating deepfake-based BEC scenarios
- Establish clear escalation paths for suspicious authorization requests
- Test with simulated deepfake calls to measure organizational resilience
Industry Response
Major technology companies have announced countermeasures:
- Microsoft is rolling out deepfake detection for Teams Enterprise
- Zoom has added AI watermarking to detect synthetic participants
- Google announced real-time deepfake detection in Google Meet
- Pindrop released enterprise voice authentication specifically for detecting AI-cloned voices
Report Suspected Attacks
Organizations that experience deepfake-enabled fraud should report to:
- FBI IC3: ic3.gov
- CISA: report@cisa.gov
- Local FBI Field Office: fbi.gov/contact-us/field-offices
- Secret Service (for wire fraud): Contact local field office