There's an alarming new trend emerging online - Elon Musk's Grok AI chatbot is at the centre of a controversy in which seemingly ordinary photos of women are being digitally 'undressed'.
The bot, which Musk recently described as "on the side of the angels" after someone claimed other AI bots are “super racist” towards white people, is programmed to function like most other AI tools: You give it a command, and it delivers results.
While its relatively new photo functionality prompted little more than an eye roll when it was introduced in 2025 - when the billionaire would use it to make Christian Grey-inspired 'companions' - this new trend has moved into dangerous territory.
Ashley St. Clair, the alleged mother to one of Musk’s children, claimed Grok had been used to generate photos of her undressing as a child.
“Grok is now undressing photos of me as a child. This is a website where the owner says to post photos of your children. I really don’t care if people want to call me ‘scorned’; this is objectively horrifying, illegal, and if it has happened to anybody else, DM me. I got time,” she wrote on the platform.
Thousands of other women have now fallen victim to the terrifying prompts, with X (which operates Grok) responding earlier this week: “We take action against illegal content on X, including child sexual abuse material, by removing it, permanently suspending accounts and working with local governments and law enforcement as necessary.”
Now, charities that champion women's safety are speaking out.
“The disturbing rise in AI intimate image abuse, facilitated by platforms such as Grok, is not just a digital threat – it has dangerous consequences for women and girls,” says Emma Pickering, Head of Technology-Facilitated Abuse and Economic Empowerment at Refuge.
“Although technology itself is not to blame, tech companies must be held accountable for implementing effective safeguards and preventing perpetrators from causing harm.
“As technology evolves, women and girls’ safety depends on tighter regulation around image-based abuse, whether real or deepfake, as well as specialist training for prosecutors and police.
"Women have the right to use technology without fear of abuse, and when that right is violated, survivors must be able to access swift justice and robust protections.”
So, what should you do if you find yourself a victim of the AI bot?

File a police report
Many women don't realise they're protected, even if their images are digitally altered. If you're in the UK, under the Online Safety Act 2023, creating or sharing intimate or sexually explicit images without consent, including AI-generated deepfakes, is illegal.
Be sure to save copies of the images, the usernames, the platform, and the date and time they were posted, as evidence.
Report the posts to X
While it may feel like nothing is being done about the situation, platforms are legally required to remove this content under the Online Safety Act - be sure to keep a record that you've taken this step.
Consider legal action
If digitally altered images have been widely shared or have impacted your reputation (whether personally or professionally), you may be able to take civil action. Get advice from a lawyer specialising in digital abuse.
Report to the Safer Internet Centre (or Family Online Safety Institute in the US)
The Safer Internet Centre's purpose is to make the digital world a better place, and it has a specific reporting tool for image-based abuse, as well as offering support and guidance.
Don't be afraid to speak up
Digital image-based abuse is exactly as it's described - abuse - and how it makes you feel is completely valid. You may want to consider speaking to your employer (or even bank) if you feel the images could be used to harm your reputation or become a source of blackmail.
What's more, sharing your story will undoubtedly help countless others who will find themselves in similar situations, whether now, or in the future.

"What the rising cases of Grok AI misuse show is that AI deepfakes represent a fundamentally different threat than anything we've dealt with before," says Dr. Emily Stokes, Sex Therapist, Relationship & Behaviour Expert at FeetFinder, the world’s leading verified creator-consumer intimacy platform.
"With a physical assault, there's a defined moment, a defined perpetrator, and eventually a defined end to the immediate threat. With deepfakes, women don't know when it started, who's seen it, where it's been shared, or if it will ever truly disappear. That ongoing uncertainty creates a specific type of hypervigilance that's exhausting and isolating."
She adds: "What women need most right now is not just legal recourse, though that's critical. They need psychological validation that their distress makes sense, that they're allowed to be angry, that they're allowed to grieve the loss of control over their own image.
"But first, we have to stop minimising what's happened to them and start calling it what it is: a violation that deserves legal, emotional, and social accountability."
Indy100 has reached out to X for comment.