Delhi HC Orders Meta, X To Remove AI Porn Of Social Influencer

The Delhi High Court (HC) recently directed X (formerly Twitter) and Meta to remove URLs containing AI-generated porn content targeting a social media influencer. These platforms were also ordered to disclose the Basic Subscriber Information and details of various defendants sharing the material within 15 days of the order’s passage. The judgment was first reported by Bar and Bench.

The case concerns an influencer (whose details have been masked) seeking relief against pornographic or nude images depicting her, which she described as ‘obscene and malicious’. The plaintiff’s counsel contended that the allegedly defamatory content violates her fundamental rights to privacy, dignity, and reputation, besides constituting a civil tort.

Similar judgments delivered in the past

During the hearing, the counsel also referenced previous orders passed by the Court in similar circumstances of deepfake imagery.

  • November 2024: The Delhi HC ordered websites to take down URLs targeting the plaintiff, and to scrutinise and act upon other similar URLs as they were discovered.
  • February 2024: While dealing with a case of doxxing, i.e. publishing an individual’s private information on the internet with malicious intent, the Court directed X to remove certain URLs and disclose basic subscriber information within a stipulated time. Justice Prathiba Singh also ordered Google to furnish the Gmail addresses of the defendants.

Additionally, in July 2025, the Madras HC ordered the Ministry of Electronics and Information Technology (MeitY) to remove the non-consensual intimate imagery (NCII) of a lawyer within 48 hours.

Furthermore, in May 2023, a Delhi HC judgment held search engines responsible for disabling access to NCII content disseminated online. Notably, if the intermediaries failed to expeditiously resolve concerns of privacy violations, they would lose their safe harbour protection under India’s IT laws.

The Court’s verdict

To begin with, the HC acknowledged that the content was “completely appalling, deplorable, defamatory, and a patent breach” of the plaintiff’s fundamental rights. Accordingly, it passed an injunction restraining third-party porn websites from uploading or sharing any offending non-consensual explicit images of the plaintiff, whether through their own handles or via third parties. The Court also ordered the Union Government and internet service providers (ISPs) to block access to the listed pornographic websites and URLs. Finally, the plaintiff can notify intermediary platforms and websites of any URLs containing the offensive material that she subsequently discovers.

How is the government regulating deepfakes?

The judgment comes on the heels of increasing government efforts to tackle the deepfake menace in India. In March 2025, MeitY furnished a comprehensive status report on the technology and its misuse in misinformation, defamation, cybercrime, and the violation of personal rights. Notably, this representation followed multiple petitions, including one by India TV Chairman and Editor-in-Chief Rajat Sharma, urging the government to restrict access to software enabling deepfake generation.

Before preparing the status report, MeitY constituted a Committee which discussed global precedents on deepfake regulation and evaluated available forensic tools. Later, this Committee sought insights from stakeholders like tech giants about detection challenges, industry policies, and public-private collaboration. In response, Google and Meta referenced their AI content detection and labelling tools, while X argued that despite such tools at its disposal, not all AI-generated content is harmful.

Internationally, the US recently passed the Take It Down Act, which directs platforms to remove NCII content within 48 hours of a user informing them of its existence. Likewise, the UK government is criminalising sexually explicit deepfakes under the Online Safety Act, while the European Parliament and Council agreed on a proposal to criminalise cyber violence against women, covering the sharing of deepfakes, stalking, and hate speech-related crimes, among others.

MediaNama’s take

The Delhi HC’s decision highlights gaps in India’s digital rights framework, which often responds to harms post-occurrence rather than focusing on preventive measures. While the constitution of a Committee is a step in the right direction, there is still no comprehensive statutory definition of deepfakes, NCII, or consent-driven media manipulation. Without clear legal boundaries, enforcement risks becoming arbitrary and inconsistent.

A national framework could mandate standardised reporting protocols, deepfake detection research, and industry-government collaboration on early-warning systems. Additionally, provisions for victim compensation and psychological support remain underdeveloped. As MediaNama has previously analysed, funding for the Cyber Crime Prevention Against Women and Children (CCPWC) scheme declined from Rs 93.13 crore in 2017-18 to Rs 10.69 crore in 2023-24, even as incidents of cyber violence against women rose. Overall, while the Court’s directive provides immediate relief, India must move swiftly to settle its long-term approach amid the growing deepfake menace.