The United Kingdom's communications regulator Ofcom has escalated enforcement warnings against technology platform executives, threatening criminal prosecution and potential imprisonment for senior leaders at companies that fail to take sufficient action against the spread of AI-generated non-consensual intimate images (NCII) produced by so-called "nudification" tools.
The move comes in the wake of the Grok scandal, in which millions of AI-generated intimate images of women and children were circulated on social media platforms, drawing international condemnation and prompting urgent regulatory responses across the UK and EU.
The Grok Scandal: What Happened
The Grok scandal — named after xAI's Grok AI assistant — exposed a systematic failure in how major AI platforms handle requests to generate or facilitate non-consensual intimate imagery. In the aftermath, regulators worldwide noted that the scale of harm — millions of images generated and circulated — went far beyond what existing enforcement mechanisms had anticipated.
The UK's Online Safety Act, which came into force in stages through 2025 and 2026, introduced specific duties on platforms to proactively prevent the generation and distribution of intimate image abuse. Ofcom's latest warning signals that it intends to use the Act's executive liability provisions as a real enforcement mechanism, not merely a legislative threat.
Ofcom's Enforcement Posture
Under the Online Safety Act, senior executives at regulated platforms can be held personally criminally liable if their company fails to comply with enforceable Ofcom notices — a provision specifically designed to extend accountability beyond the corporate entity to the individuals who run it. Potential penalties include:
- Unlimited fines on the platform
- Criminal prosecution of senior managers responsible for non-compliance
- Imprisonment of up to two years for executives convicted of wilful non-compliance
Ofcom's statement confirmed it is actively reviewing enforcement actions and has identified specific platforms it considers to be in breach or at significant risk of breach of their NCII-related obligations under the Act.
What Platforms Are Required to Do
Under the Online Safety Act framework, regulated platforms — including social networks, AI chatbot services, and image generation tools — are expected to:
- Proactively identify and remove non-consensual intimate images, including AI-generated content
- Implement technical measures to prevent their platforms from being used to generate or distribute NCII
- Respond to complaints within defined timelines
- Report to Ofcom on the scale of the problem and the effectiveness of their mitigation measures
- Apply age verification where harmful content risks reaching minors
Platforms that rely primarily on reactive complaint-based removal — rather than proactive detection — are unlikely to satisfy Ofcom's expectations under the post-Grok enforcement posture.
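Proactive detection of known NCII is commonly built on perceptual hash matching: uploads are hashed and compared against a database of hashes of known abusive images, the model used by victim-reporting schemes such as StopNCII. The sketch below is a minimal, hedged illustration using a toy 8x8 average hash; real systems use robust perceptual hashes and ML classifiers, and the function names here are illustrative assumptions, not any platform's actual API.

```python
# Toy sketch of hash-based proactive NCII detection.
# Assumes the platform holds a set of hashes of known abusive images
# (as in schemes like StopNCII). Illustrative only: production systems
# use robust perceptual hashing and trained classifiers.

def average_hash(pixels):
    """pixels: 8x8 grid of grayscale values (0-255). Returns a 64-bit int
    with one bit per pixel: 1 if the pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_known_ncii(pixels, known_hashes, threshold=5):
    """Flag an upload whose hash is within `threshold` bits of any
    known-abusive hash, tolerating minor edits like re-encoding."""
    h = average_hash(pixels)
    return any(hamming(h, k) <= threshold for k in known_hashes)
```

Because the comparison uses a Hamming-distance threshold rather than exact equality, small perturbations (cropping noise, recompression) still match — one reason hash matching scales where manual review cannot.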
The Broader AI Nudification Threat
AI-powered nudification tools — applications that use generative AI models to create synthetic intimate images of real people without their consent — have proliferated rapidly in recent years. The technology has become more accessible and more realistic, and the harm it causes is severe:
- Victims are predominantly women and minors
- Images are used for harassment, blackmail, and reputational destruction
- Detection is difficult: synthetic images often pass basic review filters
- Scale is massive: automated generation enables abuse at volumes no human moderation team can match
Cybersecurity researchers and advocacy groups have documented hundreds of dedicated nudification services operating openly online, many of which are accessible via major app stores or web browsers without meaningful restrictions.
EU and International Response
The UK's escalation mirrors actions taken elsewhere:
- The European Commission has flagged AI-generated NCII as a priority under the EU AI Act's prohibited practices provisions
- A Dutch court previously threatened xAI with fines over Grok's role in generating non-consensual images
- The European Parliament has debated broader mandates for AI-generated intimate image detection across platforms operating in the EU single market
The UK's approach is notable for its explicit focus on executive personal liability — an enforcement mechanism that several other jurisdictions have debated but not yet deployed at scale.
Implications for Technology Companies
For technology companies operating in the UK, Ofcom's warning carries concrete implications:
For AI model providers: Systems that can generate realistic human imagery must implement safeguards preventing the creation of non-consensual intimate content. Compliance will increasingly require proactive technical controls, not just terms of service prohibitions.
For social media platforms: Distribution channels must identify and remove NCII proactively. Relying on user reports alone is no longer sufficient under the Online Safety Act.
For executives: Personal criminal liability creates an incentive structure that corporate fines alone do not. Compliance with the Online Safety Act's NCII provisions must be a board-level priority, not a product team responsibility.
What's Next
Ofcom is expected to issue formal compliance notices to specific platforms in the coming weeks. Platforms that fail to respond adequately face the possibility of enforcement action — and in the most serious cases, senior executives may face criminal referrals. The regulator has indicated it will not hesitate to use the full range of powers available to it under the Online Safety Act.
For defenders and security professionals, the Grok scandal and its regulatory aftermath underscore a growing category of AI-enabled harm that requires technical, legal, and organizational responses working in concert.
Source: The Record — UK threatens tech bosses with jail over AI nudification tools