Friday, 12 January 2024

Deepfake: Are Indian laws prepared to handle the challenge?


With the advent of Artificial Intelligence, Machine Learning, Deep Learning and other such technologies, the world has, over the last couple of years, witnessed rapid growth in products and services that make human lives simpler by the day. At the same time, the rampant growth of these technologies has created a sense of fear among several intellectuals, who believe that misuse of these technologies can be truly harmful, even dangerous, for mankind, and that its effects can be devastating and, in some cases, irreversible.


Over the last month or so, the issue of deepfakes in India has taken a front seat in most news-hour debates. We have seen news items about prominent Hindi film celebrities expressing their displeasure and fear over how deepfakes were used to show them in a bad light on social media platforms such as Instagram and Facebook. We even heard the Hon’ble Prime Minister of India giving an example (in a lighter vein) of how a deepfake was used to create a video of him playing garba during the Navratri festivities, and then expressing his concern over how misuse of this technology can create havoc in society. The Hon’ble Prime Minister also expressed the need to create an appropriate framework around deepfakes, to ensure their orderly use for purposes relevant and important to humanity and not for illegal objectives.


There have further been news reports about the Central Government and the Ministry of Electronics and Information Technology notifying social media platforms such as Instagram and Facebook to ensure they duly comply with the extant provisions of the Information Technology Act, 2000 (IT Act), the Rules framed thereunder and the Intermediary Guidelines applicable to such platforms.


There have also been reports, in addition to public statements by Hon’ble ministers and officials, about work underway to bring in the Digital India Act, which will presumably be an all-encompassing statute dealing with everything digital in India.


While all this is in the works, the use and misuse of the technology has already gained momentum, and until we have a comprehensive piece of legislation we will have to look at what we already have and how we can make good use of existing legislation to ensure that misuse of the technology does not go undetected and unabated.


In this article, we will try to understand what the term deepfake means, what its potential dangers are, and how capable the existing Indian legal and regulatory system [the focus being the Information Technology Act, 2000 and the Intermediary Guidelines issued under it] is of dealing with the downsides that misuse of the technology may bring along.


Let us first try to understand what the term deepfake really means. As is evident, it is made up of two words: “deep” and “fake”. The word “deep” is derived from “deep learning”, which in layman’s terms refers to a method in Artificial Intelligence (AI) that teaches computers to process data, interestingly, in ways inspired by how a human brain would ordinarily process it. As per an article by Betül Çolak for the Institute for Internet and the Just Society:


there are three essential techniques to create Deepfake contents: face swap, expression swap, Generative Adversarial Networks (“GAN”). Regardless of which technique is used, the process  has generally the same steps that are extraction, training and creation. Also, there is no need for massive data sets anymore. Today, even one single photo of a source is enough to create deepfake contents.
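To make the “extraction, training and creation” pipeline a little more concrete, the sketch below shows a toy Generative Adversarial Network in Python. PyTorch and the random dummy data are my assumptions for the sake of illustration; the article’s sources do not prescribe any particular toolkit. A generator learns to produce samples that a discriminator cannot tell apart from “real” ones; deepfake tools that use GANs apply the same adversarial idea to face images at a far larger scale.

```python
# Minimal GAN sketch (illustrative only): a generator learns to produce samples
# that a discriminator cannot distinguish from "real" data. Real deepfake tools
# apply the same adversarial training to face images, not random vectors.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64

# Generator: maps random noise to a synthetic sample (a stand-in for a fake face).
generator = nn.Sequential(
    nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim)
)
# Discriminator: outputs a probability that a sample is real.
discriminator = nn.Sequential(
    nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1), nn.Sigmoid()
)

g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCELoss()

for step in range(200):
    real = torch.randn(32, data_dim)      # dummy stand-in for real training data
    fake = generator(torch.randn(32, latent_dim))

    # Train the discriminator to label real samples 1 and fake samples 0.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    d_opt.step()

    # Train the generator to fool the discriminator into outputting 1 for fakes.
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    g_opt.step()
```

In a full deepfake pipeline, “extraction” gathers and aligns face images, “training” fits a model to them (adversarially, in the GAN technique sketched above), and “creation” uses the trained model to render the swapped face into the target image or video.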


It is believed that the term deepfake was coined in late 2017 by a Reddit user who apparently created a dedicated forum on the platform for using deep learning and machine learning techniques to swap the faces of female celebrities onto pornographic content.


Since then, the technology has only evolved. Several off-the-shelf apps are available that can take images from social media or other public platforms and swap the faces in them with faces of the user’s choosing, producing deepfakes with little effort.
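To give a sense of how low the technical barrier already is, the rough sketch below performs a crude face swap with the open-source OpenCV library. The file names source.jpg and target.jpg are hypothetical, and the opencv-python package is assumed to be installed; the result is nowhere near what dedicated deepfake apps produce, but the basic detect-and-blend operation is freely available to anyone.

```python
# Crude face-swap sketch with OpenCV (illustrative only; not a deepfake tool).
# Assumes opencv-python is installed and that source.jpg / target.jpg exist.
import cv2
import numpy as np

# Off-the-shelf frontal face detector shipped with OpenCV.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def first_face(image):
    """Return (x, y, w, h) of the first face detected in a BGR image."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        raise ValueError("no face detected")
    return faces[0]

source = cv2.imread("source.jpg")   # face to paste in (hypothetical file)
target = cv2.imread("target.jpg")   # image whose face gets replaced

sx, sy, sw, sh = first_face(source)
tx, ty, tw, th = first_face(target)

# Resize the source face to the target face's size and blend it in place.
face_patch = cv2.resize(source[sy:sy + sh, sx:sx + sw], (tw, th))
mask = 255 * np.ones(face_patch.shape, face_patch.dtype)
center = (int(tx + tw // 2), int(ty + th // 2))
swapped = cv2.seamlessClone(face_patch, target, mask, center, cv2.NORMAL_CLONE)

cv2.imwrite("swapped.jpg", swapped)
```

Dedicated deepfake apps replace this naive resize-and-blend step with learned models that match pose, lighting and expression, which is what makes their output far more convincing and far harder to detect.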


In April 2023, Kyland Young, a star from the popular reality TV show Big Brother, brought a right of publicity claim against NeoCortext, Inc., the developer of a deepfake software called Reface. See Young v. NeoCortext, Inc., 2:23-cv-02486 (C.D. Cal. filed Apr. 3, 2023). Young claimed that NeoCortext’s Reface, “which uses an artificial intelligence algorithm to allow users to swap faces with actors, musicians, athletes, celebrities, and/or other well-known individuals in images and videos,” violates California’s right of publicity law. Young’s case, which is still pending in the U.S. District Court for the Central District of California, raises important questions about deepfakes and their intersection with the law as it pertains to famous figures.


While there are several intellectual property issues involved, such as personality rights, the right of publicity and copyright, there are other, even more critical, issues surrounding the subject, such as human rights, privacy, political and data protection concerns. WIPO, in its Draft Issues Paper on Intellectual Property Policy and Artificial Intelligence, has recognised that deepfakes are more of a privacy, human rights and data issue than a copyright issue, while at the same time suggesting that, where copyright is concerned, it should belong to the inventor of the deepfake technology.


Recently, India too has seen a couple of controversies involving deepfake technology. One of them involved a popular actor’s face being superimposed onto a video of an Indian-origin British social media influencer. Within a few days of this incident, the Hon’ble Prime Minister of India spoke at a public forum about deepfakes being used to create fake videos of him. Subsequently, the Ministry of Electronics and Information Technology ordered social media platforms, especially those falling within the category of “Significant Social Media Intermediary”, to take down deepfake content within 36 hours of it being reported as such. It also directed the platforms to ensure compliance with all the provisions of the IT Act and the Intermediary Guidelines framed thereunder.


With the above background, in my opinion the following five critical issues emerge from the rampant use of deepfake technology:


  1. Impersonation, cheating, forgery, etc., using fake images/videos: There have been news reports about how the technology has been used to place deepfake voice or video calls to unsuspecting people, impersonating one of their close family members and luring or coercing them into paying money to fraudsters.


  2. Personality rights violation [moral rights as well as copyright]:

Creating videos, voice clips or images using two different people’s images, videos, etc., without approval either from the person whose image/video is used or from the person who owns the rights in the images/videos used.


  3. Obscenity and pornography connected with impersonation:

The matter relating to Hindi film industry actresses mentioned hereinabove is an example of this category, where the faces/personas of popular artists are superimposed to create obscene and/or pornographic content that is then circulated on the internet.


  4. Hate speech, misinformation, fake news and interference in national matters:

We are already seeing a flood of fake social media clips and videos showing political figures in situations they were never in, or making statements they never made. This is probably one of the most dangerous misuses of the technology, as it may create internal unrest and interfere with political situations, electoral processes and the like.


  5. Criminal defamation:

One offshoot of all of the above could be defaming someone with criminal intent and damaging that person’s goodwill in public. There is a high likelihood that people will form an opinion about a person based purely on fake content they have heard or seen on social media or other platforms, and that may damage the person’s reputation or goodwill irreparably.


The above being the risks, the next question is whether the IT Act, 2000 and the Rules made thereunder are adequate to handle these matters and, if so, which provisions may come in handy for the Law Enforcement Agencies (LEAs).


If we carefully look at Sections 66A to 67B of the IT Act (inserted or amended by the Information Technology (Amendment) Act, 2008), in a nutshell they deal with the following issues:


| S. No. | Section | Brief Content | Remarks |
| --- | --- | --- | --- |
| 1 | 66A | Electronic transmission of information which is offensive, false, or causes annoyance or inconvenience | Could come in handy for issues relating to fake news, defamation and deepfakes creating false narratives (note, however, that Section 66A was struck down as unconstitutional in Shreya Singhal v. Union of India, 2015) |
| 2 | 66B | Dishonestly receiving or retaining a stolen computer resource or communication device | Since “computer resource” includes data and software, fake images or images/videos obtained without consent may attract this provision |
| 3 | 66C | Fraudulent use of the electronic signature or unique identification feature of any person | While this primarily deals with misuse of e-signatures, the rationale can be extended to personality/persona, because the section also covers any unique identification feature |
| 4 | 66D | Cheating by personation using a computer resource or communication device | Deepfakes are all about impersonation, or creating images and videos using two or more personas |
| 5 | 66E | Publishing an image of a private area of a person without consent (“private area” refers to human body parts) | Pornographic or obscene deepfake content may attract this section |
| 6 | 66F | Threatening the unity and integrity of India (cyber terrorism) | Fake news and political deepfake content created with an intent to disrupt peace and harmony or disturb the electoral process may attract this section |
| 7 | 67 | Electronic publishing of lascivious content | Sections 67, 67A and 67B together deal with the publication of obscene, pornographic and child sexual abuse content |
| 8 | 67A | Publication of sexually explicit content | |
| 9 | 67B | Publication of child pornography | |


Broadly, the above provisions prescribe imprisonment of up to seven years and/or fines of up to ten lakh rupees, the exact punishment varying from section to section, and should be able to deal effectively with the deepfake-related offences discussed above.


Section 79 of the Information Technology Act, 2000 carves out an exception for intermediaries and exempts them (subject to prescribed conditions) from liability under these sections with respect to any third-party material uploaded on their platforms.


More recently, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 have prescribed stricter norms for intermediaries: due diligence on users and content, IT security, data protection, data deletion requests, grievance redressal measures, communication of policies, procedures and rules to users, and other measures intended to bring a higher level of responsibility and accountability to platforms/intermediaries.


With all these measures in place, it would appear that LEAs should have no difficulty dealing with instances of deepfakes. However, in today’s world of borderless commerce and platforms, LEAs often struggle to gather the information, material and evidence needed to bring home charges under the relevant provisions of the IT Act.


Recently, in the matter relating to the deepfake of an actor, news reports suggested that the platforms had not been very forthcoming in providing relevant information to the LEAs, which hindered the pace of the investigation. It was then that the Central Government, through the Ministry of Electronics and Information Technology, issued an advisory and subsequently a warning to the social media platforms to ensure due compliance with the relevant provisions of the Intermediary Guidelines.


Considering all the above, it seems that our laws are adequately equipped to combat the ill effects of deepfakes and similar misuse of technology effectively. However, the following measures will have to be taken at different levels by different stakeholders to ensure that the rule of law is adhered to and implemented:


  1. User education and awareness is the first step towards a safer internet; platforms, the government, consumer groups, etc., therefore need to ramp up programmes that educate common users, making them aware of these issues and of ways to tackle them better.

  2. LEA training and education is also a must. The agencies that have to enforce these provisions must keep up to date with developments in technology, so that they are adequately equipped to handle these modern challenges.

  3. Effective grievance handling at the platform level is the need of the hour. Platforms need to be proactive in categorising user grievances from moderate to severe to grave, and then addressing them on a priority determined by the category each grievance belongs to.

  4. While the Intermediary Guidelines, 2021 did prescribe stricter norms for platforms, there is still scope for stricter implementation.

  5. Since the lines between publishers and intermediaries are blurring by the day, carve-outs for platforms need to be based on a clear, objective and evidence-based demonstration of facts by the platforms that wish to take shelter under them.

  6. With the government announcing the Digital India Act as the single piece of legislation dealing with all things digital, we can expect stronger, more effective tools to deal with the modern challenges at hand.




References

Dcruze, D. C. (2023, November 17). 'I saw a video in which I was doing garba': PM Modi addresses threat of deepfakes in India. Business Today. https://www.businesstoday.in/technology/news/story/i-saw-a-video-in-which-i-was-doing-garba-pm-modi-addresses-threat-of-deepfakes-in-india-406089-2023-11-17

Govt issues advisory to platforms on countering deepfakes. (2023, November 7). Mint. https://www.livemint.com/news/deepfakes-major-violation-of-it-law-harm-women-in-particular-rajeev-chandrasekhar-11699358904728.html

Çolak, B. (2021, January 19). Legal issues of deepfakes. Institute for Internet and the Just Society. Retrieved December 16, 2023, from https://www.internetjustsociety.org/legal-issues-of-deepfakes

Penning, N. (2023, July 25). The legal issues surrounding deepfakes. Honigman LLP. Retrieved December 16, 2023, from https://www.honigman.com/the-matrix/the-legal-issues-surrounding-deepfakes

Sinha, S. (2023, November 24). Meta not cooperating in Rashmika Mandhana deepfake probe: Delhi Police sources. India Today. https://www.indiatoday.in/india/story/rashmika-mandhana-deepfake-video-delhi-police-investigation-facebook-meta-2467137-2023-11-24

The state of deepfakes. (2019, October 8). Retrieved December 16, 2023, from https://regmedia.co.uk/2019/10/08/deepfake_report.pdf

WIPO Secretariat. (2023, October 30). Draft issues paper on intellectual property policy and artificial intelligence. WIPO. Retrieved December 16, 2023, from https://www.wipo.int/export/sites/www/about-ip/en/artificial_intelligence/call_for_comments/pdf/ind_lacasa.pdf