On December 26, the Indian government issued a directive to social media companies, including Facebook and Instagram, instructing them to curb deepfake content on their platforms. The Ministry of Electronics and Information Technology (MeitY) called on these platforms to remove AI-generated deceptive videos and comply with the Information Technology (IT) Rules. Deepfake technology enables the creation of impersonation videos using individuals’ images.
Under the Indian IT Rules, social media platforms are required to ensure that activities causing 11 listed harms are not carried out. These include threats to national security; child pornography; obscenity; disinformation; insults or harassment based on gender, religion, or race; sharing of personal information without consent; impersonation; commercial fraud and deception; cheating in online games; and other unlawful activities.
The directive also emphasized the need for platforms to increase awareness among users about prohibited content. In recent months, the rise of deepfake technology has led to the creation of deceptive videos featuring Indian actresses and prominent figures, such as Rashmika Mandanna, Kajol, Alia Bhatt, Priyanka Chopra, Katrina Kaif, and Ratan Tata.
A notable deepfake controversy emerged when a fake video featuring Ripple CEO Brad Garlinghouse was posted on YouTube, in which he appeared to endorse a fraudulent crypto scheme. Despite being made aware of the deepfake scam, Google initially failed to remove the video promptly.
Indian Prime Minister Narendra Modi, who has himself been a target of AI deepfake content, highlighted the importance of regulating AI and exercising caution with new technologies. He emphasized the dual nature of technology, which can be useful when employed carefully but can cause significant harm when misused. Modi specifically flagged deepfake videos created with generative AI, urging vigilance in dealing with such content.