New guidelines to regulate digital content
What’s in the news?
- Citing instructions from the Supreme Court and the concerns raised in Parliament about social media abuse, the government has recently released guidelines that aim to regulate social media, digital news media, and over-the-top (OTT) content providers.
- For social media platforms, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 envisage a category of significant social media intermediaries, the threshold for which will be announced later.
- In addition, the government said that it wanted to create a level playing field in terms of rules to be followed by online news and media platforms vis-à-vis traditional media outlets.
What are the key proposals that the guidelines make for social media?
- Section 79 of the Information Technology Act, 2000 provides a “safe harbour” to intermediaries that host user-generated content, exempting them from liability for the actions of users if they adhere to government-prescribed guidelines.
- The newly notified guidelines prescribe an element of due diligence to be followed by the intermediary, failing which the safe harbour provisions would cease to apply to platforms such as Twitter, Facebook, YouTube, and WhatsApp.
- They also prescribe grievance redressal: intermediaries, including social media platforms, must establish a mechanism for receiving and resolving complaints from users.
- These platforms will need to appoint a grievance officer to deal with such complaints, who must acknowledge the complaint within 24 hours, and resolve it within 15 days of receipt.
Do the guidelines lay the rules for removal of content from social media?
- In essence, the rules lay down 10 categories of content that the social media platform should not host.
- These include content that “threatens the unity, integrity, defence, security or sovereignty of India, friendly relations with foreign States, or public order, or causes incitement to the commission of any cognizable offence or prevents investigation of any offence or is insulting any foreign States”; “is defamatory, obscene, pornographic, paedophilic, invasive of another’s privacy, including bodily privacy; insulting or harassing on the basis of gender; libellous, racially or ethnically objectionable; relating or encouraging money laundering or gambling, or otherwise inconsistent with or contrary to the laws of India”, etc.
- The rules stipulate that upon receiving information from a court or the appropriate government agency that it is hosting prohibited content, the platform must remove the said content within 36 hours.
What does the due diligence entail for social media companies?
- In addition to appointing a grievance officer, social media platforms will now be required to appoint a chief compliance officer resident in India, who will be responsible for ensuring compliance with the rules.
- They will also be required to appoint a nodal contact person for 24×7 coordination with law enforcement agencies.
- Further, the platforms will need to publish a monthly compliance report mentioning the details of complaints received and action taken on the complaints, as well as details of contents removed proactively by the significant social media intermediary.
- While the rules have been notified and will take effect immediately, the due diligence requirements will come into effect after three months.
What are the penalties for companies violating these guidelines?
- In case an intermediary fails to observe the rules, it would lose the safe harbour, and will be liable for punishment “under any law for the time being in force including the provisions of the IT Act and the Indian Penal Code”.
- Offences under the IT Act range from tampering with documents and hacking into computer systems to online misrepresentation, breach of confidentiality and privacy, and publication of content for fraudulent purposes, among others. The penal provisions vary from imprisonment for three years up to a maximum of seven years, with fines starting from Rs 2 lakh.
What do the rules for OTT services mean for consumers?
- For OTT service providers such as YouTube, Netflix, etc., the government has prescribed self-classification of content into five categories based on age suitability.
- Online curated content that is suitable for children and for people of all ages shall be classified “U”, and content that is suitable for persons aged 7 years and older, and which can be viewed by a person under the age of 7 years with parental guidance, shall be given a “U/A 7+” rating.
- Content that is suitable for persons aged 13 years and above, and can be viewed by a person under the age of 13 years with parental guidance, shall be given a “U/A 13+” rating; content that is suitable for persons aged 16 years and above, and can be viewed by a person under the age of 16 years with parental guidance, shall be given a “U/A 16+” rating.
- Online curated content that is restricted to adults shall be given an “A” rating. Platforms would be required to implement parental locks for content classified as U/A 13+ or higher, and reliable age verification mechanisms for content classified as “A”.
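The five-tier scheme above is, in effect, a simple mapping from a title’s minimum suitable viewing age to a rating label. As an illustration only, it could be encoded as follows; the function name, parameters, and age thresholds are assumptions drawn from the tiers described, not any official implementation:

```python
# Illustrative sketch of the OTT self-classification tiers described above.
# The rating labels come from the rules; the function and its thresholds
# are an assumed encoding, not part of the official guidelines.

def classify_rating(min_suitable_age: int, adults_only: bool = False) -> str:
    """Map a title's minimum suitable viewing age to its rating label."""
    if adults_only:
        return "A"        # restricted to adults; requires age verification
    if min_suitable_age >= 16:
        return "U/A 16+"  # parental guidance for viewers under 16
    if min_suitable_age >= 13:
        return "U/A 13+"  # parental guidance for viewers under 13
    if min_suitable_age >= 7:
        return "U/A 7+"   # parental guidance for viewers under 7
    return "U"            # suitable for all ages

print(classify_rating(0))                      # U
print(classify_rating(14))                     # U/A 13+
print(classify_rating(18, adults_only=True))   # A
```

Under this sketch, parental locks would apply to anything rated “U/A 13+” or above, and age verification only to titles rated “A”.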