The threat of deepfakes
NEWS
The protesters who stormed Capitol Hill in the U.S. believed that the 2020 U.S. presidential election had been stolen by the Democrats. That belief was fuelled largely by misinformation and disinformation, of which deepfakes are a part, and it has even threatened the electoral outcome of the world’s oldest democracy.
WHAT ARE DEEPFAKES?
Deepfakes are synthetic media, i.e. media (including images, audio and video) that are either manipulated or wholly generated by Artificial Intelligence (AI).
CHALLENGES OF DEEPFAKES
- AI is being used to fabricate audio, video and text that show real people saying and doing things they never did, or to create entirely new images and videos. This is done so convincingly that it is hard to tell what is fake and what is real.
- The mere existence of deepfakes breeds enough distrust that even genuine evidence can be dismissed as fake.
- Detection can often be done only with AI-based tools (a minimal sketch of what such a detector involves follows this list).
- Deepfakes can target anyone, anywhere.
- They are used to tarnish reputations, create mistrust, question facts, and spread propaganda.
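As a rough illustration of what AI-based detection involves, the Python sketch below (using the PyTorch library) shows a toy binary classifier that scores a single face crop as real or synthetic. The model, its architecture and the random input are hypothetical stand-ins for illustration; real detectors are trained on large labelled datasets of genuine and synthetic media.

```python
# Illustrative sketch only (not a production detector): a tiny binary
# classifier that labels a face crop as "real" or "fake".
import torch
import torch.nn as nn

class TinyDeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, 1)  # one logit: fake vs. real

    def forward(self, x):
        x = self.features(x).flatten(1)
        return self.classifier(x)

# Example: score one 128x128 RGB face crop (a random tensor as a stand-in).
model = TinyDeepfakeDetector()
frame = torch.rand(1, 3, 128, 128)
prob_fake = torch.sigmoid(model(frame)).item()
print(f"Estimated probability the frame is synthetic: {prob_fake:.2f}")
```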
MEASURES AGAINST DEEPFAKES IN INDIA
- So far, India has not enacted any specific legislation to deal with deepfakes, though there are some provisions in the Indian Penal Code that criminalise certain forms of online/social media content manipulation.
- The Information Technology Act, 2000 covers certain cybercrimes, but this law and the Information Technology Intermediary Guidelines (Amendment) Rules, 2018 are inadequate to deal with content manipulation on digital platforms. (The guidelines require intermediary companies to observe due diligence in removing illegal content.)
- In 2018, the government proposed rules to curtail the misuse of social networks, and social media companies voluntarily agreed to act against violations during the 2019 general election. The Election Commission also issued instructions on the use of social media during election campaigns.
- But reports show that social media platforms like WhatsApp were used as “vehicles for misinformation and propaganda” by major political parties during the election.
WHAT NEEDS TO BE DONE?
- Existing laws are clearly inadequate to safeguard individuals and entities against deepfakes, and detection is effective largely through AI-based tools. Hence, as deepfake techniques improve, AI-based automated detection tools must keep pace.
- Blockchains are robust against many security threats and can be used to digitally sign and affirm the validity of a video or document (a minimal sketch of the underlying hashing idea follows this list).
- Educating media users about the capabilities of AI algorithms could help.
- Deepfakes must be contextualised within the broader framework of malicious manipulated media, computational propaganda and disinformation campaigns.
- Journalists need tools, training and resources to scrutinise images, video and audio recordings.
- Policymakers must understand how deepfakes can threaten polity, society, economy, culture, individuals and communities, and bring adequate laws to curb this menace.
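As a minimal illustration of the signing idea mentioned above, the Python sketch below records a cryptographic digest of a media file at publication time and recomputes it later to check for tampering. The file name and contents are hypothetical placeholders; a real system would anchor the digest on a tamper-evident ledger such as a blockchain and use proper digital signatures rather than a bare hash comparison.

```python
# Minimal sketch of content authentication by hashing: the publisher records
# the SHA-256 digest of the original file; anyone can later recompute the
# digest and compare. The file and its contents are hypothetical stand-ins.
import hashlib
import tempfile
from pathlib import Path

def sha256_of_file(path: Path, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Stand-in for an original video released by its source.
video = Path(tempfile.mkdtemp()) / "official_statement.mp4"
video.write_bytes(b"original footage bytes")

recorded_digest = sha256_of_file(video)  # published alongside the video

# Later: verify a copy. Any edit, however small, changes the digest,
# so a doctored version fails the check.
video.write_bytes(b"original footage bytes, subtly altered")
print(sha256_of_file(video) == recorded_digest)  # False -> content was modified
```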
Deepfakes are a major challenge that raises multidimensional issues, and addressing them calls for collaborative, multi-stakeholder responses drawing on experts from every sector. It should also be noted that disinformation today comes in varied forms and no single technology can resolve the problem. As deepfakes evolve, AI-backed technological tools to detect and prevent them must evolve as well.