100+ Experts Sound the Alarm
The second International AI Safety Report, authored by more than 100 AI experts led by Turing Award winner Yoshua Bengio and backed by more than 30 countries, has been published. Its findings paint an increasingly concerning picture of AI's role in cybersecurity threats.
Key Findings
AI and Cybersecurity
| Finding | Significance |
|---|---|
| Malicious actors actively using AI in cyber operations | State-associated groups confirmed using AI tools offensively |
| An AI agent found 77% of vulnerabilities in real software during a competition | Autonomous vulnerability discovery approaching human capability |
| AI currently aids preparation stages of cyberattacks | Reconnaissance, phishing generation, and tool customization automated |
| 36% of AI-related vulnerabilities overlap with API security flaws | AI systems introduce new attack surfaces |
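The 36% overlap between AI-related and API vulnerabilities largely reflects familiar flaws such as unauthenticated or over-exposed endpoints. As a minimal sketch of what an audit pass for this might look like (the endpoint list and field names here are hypothetical, not from the report):

```python
# Minimal sketch of an API audit pass: flag endpoints that accept
# requests without any authentication check. Endpoint data is illustrative.
ENDPOINTS = [
    {"path": "/v1/models", "auth_required": True},
    {"path": "/v1/completions", "auth_required": True},
    {"path": "/internal/debug", "auth_required": False},   # common misconfiguration
    {"path": "/v1/embeddings/batch", "auth_required": False},
]

def audit_endpoints(endpoints):
    """Return the paths that are exposed without authentication."""
    return [e["path"] for e in endpoints if not e["auth_required"]]

if __name__ == "__main__":
    for path in audit_endpoints(ENDPOINTS):
        print(f"UNAUTHENTICATED: {path}")
```

A real audit would also cover rate limiting, authorization scopes, and input validation; this sketch only illustrates the simplest class of finding.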
AI and Deepfakes
- 96% of deepfake videos online are pornographic, disproportionately targeting women and girls
- Human observers can no longer reliably distinguish AI-generated deepfakes from real content
- Criminal groups use voice clones and deepfakes to impersonate executives and family members for fraud
- Deepfake detection tools are falling behind generation capabilities
AI and Biological Risks
- AI models can provide step-by-step guidance for dangerous biological agents
- Frontier models with biology knowledge present dual-use risks
- Current safety guardrails are inconsistently applied across model providers
AI's Role in Cyberattacks Today
The report identifies AI's current cybersecurity impact as primarily in the preparatory stages of attacks:
Current AI-Enhanced Attack Phases:
├── Reconnaissance — Automated OSINT gathering and target profiling
├── Social Engineering — AI-generated phishing at scale
├── Malware Generation — AI-assisted code writing and obfuscation
├── Vulnerability Discovery — Automated scanning and analysis
└── Voice/Video Fraud — Deepfake impersonation for BEC attacks
Not Yet Autonomous:
├── Full attack chain execution
├── Real-time adaptive exploitation
└── Autonomous lateral movement
The Deepfake Epidemic
The report's deepfake findings are particularly alarming:
| Metric | Finding |
|---|---|
| Deepfake videos online | 96% are pornographic |
| Primary victims | Women and girls |
| Detection accuracy | Declining as generation improves |
| Voice clone fraud | Executive impersonation for wire transfers |
| Political impact | Election manipulation concerns in 30+ countries |
Recommendations
For Governments
- Mandatory safety evaluations for frontier AI models before deployment
- International coordination on AI safety standards and enforcement
- Deepfake criminalization — Make creation and distribution of non-consensual deepfakes a criminal offense
- AI incident reporting requirements for major AI system failures
For Organizations
- AI-aware security training — Prepare employees for AI-enhanced social engineering
- Deepfake detection tools — Deploy AI-based verification for voice and video communications
- API security audits — Address the 36% overlap between AI and API vulnerabilities
- Threat modeling — Update threat models to include AI-enhanced attack scenarios
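One concrete control implied by the recommendations above is out-of-band verification: a high-value request received over a channel that deepfakes can spoof (voice or video) is held until it is confirmed through an independent, pre-registered channel. A minimal sketch of such a policy check, with illustrative field names and an assumed dollar threshold (neither is from the report):

```python
# Sketch of an out-of-band verification policy for high-risk requests,
# mitigating voice-clone/deepfake impersonation in BEC-style fraud.
HIGH_RISK_CHANNELS = {"voice", "video"}  # channels deepfakes can spoof
CALLBACK_REQUIRED_ABOVE = 10_000         # assumed policy threshold, in dollars

def requires_callback(request):
    """A transfer requested over voice/video at or above the threshold
    must be confirmed through a known, pre-registered callback channel."""
    return (request["channel"] in HIGH_RISK_CHANNELS
            and request["amount"] >= CALLBACK_REQUIRED_ABOVE)

def approve(request):
    """Return a decision string; hold risky requests pending callback."""
    if requires_callback(request) and not request.get("callback_confirmed"):
        return "HOLD: confirm via known callback number before release"
    return "APPROVED"
```

For example, a $50,000 wire request made over a voice call would be held until someone calls the requester back on a number already on file; the same request with `callback_confirmed` set would go through. The point of the design is that the verification channel is chosen by the defender, not the (possibly cloned) caller.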
The message from 100+ experts across 30+ countries is clear: AI is already being weaponized, detection capabilities are falling behind, and coordinated international action is needed before autonomous AI attacks become reality.