Grok Image Generation Feature Sparks Deepfake Porn Concerns
On August 4, xAI unveiled a new “Imagine” feature for its Large Language Model (LLM) chatbot Grok, offering text-to-image and image-to-video generation. The feature comes with four modes – Fast, Normal, Fun and Spicy – the last of which allows for the creation of explicit content.
According to a report in The Verge, Grok’s Spicy mode produced a sexually explicit video of the singer Taylor Swift even without a specific prompt requesting it. This is despite the fact that xAI’s Acceptable Use Policy prohibits “depicting likenesses of persons in a pornographic manner”.
Background to the Deepfake Porn Concerns Around the New Grok Feature
Non-consensual synthetic pornography of celebrities has long been a problem online, and Taylor Swift has been targeted before. In January last year, sexually explicit deepfake images of the singer began circulating on X, where “Taylor Swift AI” trended for two days. According to The Verge, the posts received “more than 45 million views, 24,000 reposts, and hundreds of thousands of likes and bookmarks before the verified user who shared the images had their account suspended for violating platform policy”.
The incident created a furore in the US, with the White House calling for legislation to protect people from AI porn. This eventually materialised, one administration later, with the “Take It Down” Act in May this year. The law requires platforms to remove non-consensual intimate imagery within 48 hours of a user reporting it. However, it prohibits only the publication of such deepfake porn, not its creation.
Some platforms have taken cognisance of the problem themselves. In June this year, Meta filed a lawsuit against Joy Timeline HK Limited, the parent company of the “Nudify” app CrushAI, for flooding Meta’s platforms with ads for fake, non-consensual nude or sexually explicit images.
Similar Problems With Deepfake Porn in India:
India has seen similar incidents: celebrities like Rashmika Mandanna became victims of deepfake porn, bringing the issue to the attention of the Ministry of Electronics and Information Technology (MeitY). The ministry subsequently issued advisories to social media platforms Facebook, Instagram, and YouTube to take down deepfake content within 24 hours.
This wasn’t the first time the Indian government had urged platforms to take down deepfakes. In February 2023, the Economic Times reported that the ministry had sent an advisory to platforms like LinkedIn, ShareChat, and Snapchat, urging them to take “all reasonable and practicable measures to remove or disable access to deepfake imagery”.
The government had also reportedly questioned WhatsApp over deepfakes of politicians circulating on the platform, asking it to hand over details of the user who first shared the content.
Why This Matters:
As the image generation capabilities of AI chatbots grow, it is becoming easier for individuals to create non-consensual pornographic images of not just celebrities but also ordinary people.
Previous legislation and government directives have focused mostly on platforms where AI porn is distributed or circulated. However, Siddarth Pillai, Director of the Rati Foundation, an NGO that works with victims of online harm, warned that the mere existence of deepfake or “Nudify” apps gives offenders a tool to threaten and coerce victims.
“It is used as a coercive technique, not just by stalkers but also by shady loan apps,” he said. Pillai explained that such offenders gather images of victims from social media and use them to create pornographic deepfakes. The offender then sends the content to the victim and blackmails them into complying with demands, threatening to create more deepfakes or to distribute the existing ones.
Is It Easier to Create Deepfakes Now?
Pillai pointed out that non-consensual pornographic images have always been around, created either with popular “Nudify” apps or image editing tools like Adobe Photoshop. However, as AI image generation becomes more accessible through chatbots like Grok, such incidents are becoming more common and the content more sophisticated.
“Earlier, you had to use Photoshop, which created crude images with no realism. So, the number of synthetically generated non-consensual content that we saw was quite low. Once these Nudify apps became easily available, this number sharply increased,” he said. Pillai also expressed concern that, given that xAI is a large platform, there may soon be a proliferation of non-consensually created material.
The Rati Foundation operates a telephonic helpline called “Meri Trustline” that aids victims of online harm. In 2023-24, the helpline identified 111 accounts across multiple social media platforms that had created or distributed deepfake content.
In addition, Pillai criticised X’s content moderation standards.
“X in particular, their guardrails are remarkably low, even compared to other apps. When someone reports content on X to us, it is just impossible for us to escalate,” he said, adding that victims often have to go as far as the MeitY-constituted Grievance Appellate Committee to resolve the matter.
If you or someone you know is being harassed online, call or WhatsApp:
Meri Trustline
Number: 6363176363
Timings: Monday to Friday, 9 AM to 5 PM
Cyber Crime Helpline: 1930