The UK government on Tuesday called on Elon Musk’s social media platform, X, to take urgent action over the use of its artificial intelligence tool, Grok, to create fake sexually explicit images of children.
“What we have been seeing online in recent days has been absolutely appalling, and unacceptable in decent society,” Technology Secretary Liz Kendall said in a statement, adding that X needed to address the issue urgently.
Grok has faced growing international backlash for allowing users to generate sexualised deepfakes of women and minors through its so-called “spicy mode” setting.
On Monday, the European Commission announced that it was “very seriously” reviewing complaints about the AI tool. The UK media watchdog, Ofcom, also said it was investigating X and xAI, the company behind Grok.
Kendall said Ofcom had her full backing to take any enforcement action it deemed necessary.
Under Britain’s Online Safety Act, whose age-verification provisions came into force in July, websites, social media platforms, and video-sharing services hosting potentially harmful content are required to implement strict age checks, such as facial recognition or credit card verification.
The law also makes it illegal to create or share non-consensual intimate images or child sexual abuse material, including sexual deepfakes produced using artificial intelligence.
Companies that fail to comply face fines of up to 10 percent of their global revenue or £18 million, whichever is higher.
The UK government has also announced plans to ban so-called “nudification” tools that allow users to digitally remove clothing from images.
On Friday, Grok acknowledged flaws in its AI system, describing them as lapses in safeguards, and said it was working urgently to fix them.
