Explained: Generative AI and Personal Data – Digital Transformation News



By Krishna Sarma

What is the law regarding deepfake audio?

AI-powered deepfake audio has emerged as a powerful tool in political campaigns, corporate espionage and cyber fraud. Krishna Sarma explains how India's existing laws offer the best available remedies to victims.

Why are deepfake audios dangerous?

Recently, OpenAI unveiled a new voice assistant for GPT-4o called “Sky,” which is reminiscent of the AI character voiced by Scarlett Johansson in the 2013 film “Her.” Johansson immediately sent a legal letter to OpenAI, asking for Sky to be withdrawn. In response, OpenAI paused the use of Sky, but denied that it imitated Johansson's voice, claiming it had used the natural voice of another actress with her permission. In India, AI-created deepfakes of celebrities have become a national topic of conversation. AI-generated voices of actor Amitabh Bachchan were misused in political campaigns during the Madhya Pradesh assembly elections. A video of actor Rashmika Mandanna's face superimposed onto someone else's body went viral.

The transformative potential of AI is widely acknowledged. But there are significant risks, ranging from more alarmist theories that AI poses an existential threat to humanity, to real societal dangers such as bias (both statistical bias in training data and systematic human bias), data privacy violations, intellectual property infringement, discrimination, deepfakes, disinformation, political interference, and threats to national security. Generative AI, and more specifically deepfake technology, is not inherently bad; it also has beneficial and harmless uses. It can be used to protect personal information where anonymity is needed, recreate crime scenes, power augmented-reality experiences in fashion retail, and more.
Currently, user harms related to generative AI are addressed under traditional legal frameworks, which were enacted for situations in which such technology was neither considered nor imagined.

What are the constitutional provisions to prevent voice cloning?

In the landmark Puttaswamy judgment of 2017, the Supreme Court ruled that an individual has a fundamental right to privacy under Article 21 of the Indian Constitution. Further, in the Ritesh Sinha case in 2019, the Supreme Court held that voice samples are protected under the right to privacy, but that an individual can be legally compelled to provide them for the purposes of a criminal investigation in the public interest.

Will an individual's voice be protected under the Digital Personal Data Protection Act?

Under the yet-to-be-implemented Digital Personal Data Protection Act, 2023 (DPDP Act), it is unclear whether a “voice” (a recording or copy of it) will be considered digital personal data whose use is subject to an explicit consent requirement. Personal data is defined as any data about an individual who is identifiable by or in relation to such data; that is, the information must be capable of directly or indirectly identifying that individual.

Does copyright law offer any protection?

The “voice” itself cannot be copyrighted. However, an artist's recorded performance can be protected as a performer's right.
In some cases, the Delhi High Court has granted interim injunctions preventing unnamed parties from commercially exploiting the images and likenesses of famous actors, including through AI-generated voice clones.
The legal bases for protection are moral rights, including the right of publicity; copyright in speech, image and manner of speaking; and common law rights (torts), including the right to protection against passing off, dilution and unfair competition.
The origins of moral rights and rights of publicity in India stem primarily from common law principles and judicial interpretations of copyright and trademark laws, rather than from any express statutory provisions.

Are new laws regarding AI-enabled deepfakes necessary?

India does not have a legal or regulatory framework that specifically regulates AI or addresses deepfakes. The government is considering the need for separate regulation of AI and appears to be weighing whether to address some risks, such as deepfakes, under the proposed Digital India Act. However, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 and various advisories issued by the government require intermediaries and platforms to ensure that their AI tools are not misused to host or spread unlawful content. They are also required to label synthetic content and remove reported deepfake content within 24–36 hours of receiving a report from a user or government authority. If platforms fail to comply, they risk losing safe-harbour protection and can face action under the Information Technology Act, 2000 and the Indian Penal Code.

The author is Managing Partner of the Corporate Law Group and Chairperson of the IT and ITES Sub-Committee of CII.

