Analysis: Elon Musk's AI bot will undress anyone, but that's just the beginning

The avalanche of criticism prompted the X platform to change Grok's rules, but the undressing of women continues elsewhere, writes Johanna Vehkoo.
Social platform X (formerly Twitter) announced a “spicy” feature for its Grok AI bot in the summer of 2025. Since then, the bot has let users generate nude images and sexual content. It has undressed women and helped users imagine sexual situations, including ones involving children.
The matter only made headlines about three weeks ago, when a wave of mass undressings of women and girls erupted on X.
Grok operates within X, where users can pose questions and requests to it. Users began finding photos of women on the service and asking the bot to strip them or put them in string bikinis, for example. Thousands of women and girls soon became victims.
One user asked Grok to undress the dead body of Renee Good, who was shot by an ICE agent in Minnesota. In the real photo, Good is slumped over the steering wheel of a car, covered in blood. On command, Grok put her in a bikini.
Grok also put Swedish Deputy Prime Minister Ebba Busch in a bikini.
Grok also undresses men on request, but bikini photos of men tend to be made as a joke: X owner Elon Musk posted one of himself, laughing at the whole trend.
The huge backlash initially prompted X simply to put Grok's image-generation feature behind a paywall, meaning the company stood to profit from people being undressed against their will. Finally, on Thursday, X announced that Grok would no longer make nude images of real people in countries where doing so is illegal.
Deepfakes have always stripped women against their will
Grok's nude photos are so-called deepfakes: images, video, or audio that are difficult to distinguish from the real thing. The bikini photos closely resemble the people whose real photos they were generated from.
The word deepfake was reportedly first used in 2017 by a Reddit user who chose it as his nickname. Under the pseudonym Deepfake, he created a group on Reddit where users exchanged pornographic videos they had made of famous women, with the woman's face superimposed onto the body of a porn actress.
Deepfake technology has been used for the digital exploitation of girls and women far more than for political purposes and propaganda, even though the latter has attracted far more attention in research and public debate.
According to a report by cybersecurity firm Security Hero, as many as 98 percent of deepfake videos online in 2023 were non-consensual porn, and 99 percent of the victims were girls and women.
Pornographic fakes can have very serious consequences for the target. The victim may feel humiliated, or others may see her that way. Many women have lost jobs and other opportunities because of deepfake porn videos, as the Friday documentary has reported. Deepfakes can also be part of a campaign of harassment directed at a woman.
Easy-to-use tools based on generative AI make this kind of digital violence even easier. Because X is a free, open social media service, it succeeded in mainstreaming the undressing of women and girls against their will.
The number of deepfake images and videos has been growing rapidly for years, and it will keep growing unless the companies providing the software are reined in. The EU, with its Digital Services Act and AI Act, is key here.
The European Commission has ordered X to retain all internal company documentation related to Grok until the end of this year. Digital Commissioner Henna Virkkunen stated on X that she found the images appalling and said the EU will not hesitate to fully enforce the Digital Services Act to protect EU citizens. In practice, that would mean huge fines for X.
In addition, Grok's actions may violate local laws in several countries, including the company's home country, the United States, where the Take It Down Act, passed in 2025, criminalizes sharing sexually explicit images of a person without their consent.
The UK's communications regulator, Ofcom, has launched an investigation into Grok's image generator. Ofcom has the power to fine X millions or even ban the service in the UK altogether. Malaysia and Indonesia have already blocked Grok.
Under the Finnish Criminal Code, at least the offences of defamation or dissemination of information violating personal privacy could apply to fabricated scantily clad images of real people.
The danger is that the use of artificial intelligence to undress women will become normalized
Little attention has been paid to the fact that Grok also has a version that works outside the X platform: Grok Imagine, which lets users create images and videos privately. Unlike the bikini photos that have sparked discussion, these do not appear on public X, but users can share them wherever they want.
A Wired investigation revealed that Grok has produced brutally violent and pornographic videos, including some featuring child characters, and that these have been shared on child pornography forums in the dark corners of the internet.
It is clear that AI is making it possible for anyone to produce porn. Models are available that have few content restrictions, or whose restrictions users can easily circumvent to generate, for example, sexual imagery of minors or of sexual violence against women.
AI and social media companies are now looking more broadly, if not towards pornography, then at least towards erotica. OpenAI has planned an “adult mode” feature for its ChatGPT language model, which would allow users to have erotic conversations with the chatbot.
As early as the summer of 2025, Meta's chatbots were even having sexually suggestive conversations with children.
All of this normalizes AI porn, and platforms used by billions of people, such as X, Meta, and OpenAI, are making the sexual use of AI mainstream.
They don't ask girls and women anything.