Facebook Enabled Delhi Riots, Internal Research Reveals

Thick smoke billowed in the air and mobs roamed unchecked through the streets, pelting stones, vandalising shops and threatening locals, as fresh violence broke out in northeast Delhi on February 26. — AP

Researchers have determined that the RSS and Bajrang Dal post inflammatory anti-Muslim content on the platform

Team Clarion 

NEW DELHI — Several news outlets have reported that internal research at Facebook suggests the platform and its other products allowed the spread of inflammatory content linked to the Delhi riots of February 2020.

According to the Wall Street Journal, content targeting Muslims spiked 300% above previous levels after December 2019, when nationwide protests broke out in India against the Citizenship Amendment Act (CAA), a law that critics say discriminates against Muslims.

The report said that rumours and calls to anti-Muslim violence spread particularly on WhatsApp, a messaging app owned by Facebook, in late February, translating into large-scale violence on the ground that killed more than 50 people, a majority of them Muslim.

Users in India said that they were exposed to “a large amount of content that encourages conflict, hatred and violence on Facebook and WhatsApp,” including posts blaming Muslims for the spread of the coronavirus and assertions that Muslim men are targeting Hindu women for marriage as a “form of Muslim takeover” of the country.

A Hindu man in Delhi told the researchers that he frequently received messages on Facebook and WhatsApp such as “Hindus are in danger, Muslims are about to kill us.” Similarly, in a July 2020 Facebook report, a Muslim man in Mumbai told the researchers that “if social media survives 10 more years like this, there will be only hatred.” Unless Facebook does a better job of policing content, India will be a “very difficult place to survive for everyone,” he said.

Critics alleged that Facebook had been aware of the problems for years but did not take enough steps to address them. Many users told the researchers that it was the responsibility of Facebook “to reduce this content” in the feed and on WhatsApp.

The Associated Press reported that the social media giant did not have enough local-language moderators or content-flagging systems in place in India to stop misinformation that at times led to real-world violence. Facebook tags India among “at risk countries” and identifies Hindi and Bengali as priority languages for “automation on violating hostile speech.”

The researchers also determined that two Hindu nationalist groups, the RSS and Bajrang Dal, both with ties to the ruling BJP, post inflammatory anti-Muslim content on the platform.

Though the internal report recommended one of the organizations be kicked off for violating the company’s hate speech rules, the group remains active.

The other group also remains active on Facebook and was not designated as dangerous due to “political sensitivities”. It promotes incitements to anti-Muslim violence, including “dehumanizing posts comparing Muslims to ‘pigs’ and ‘dogs’ and misinformation claiming the Quran calls for men to rape their female family members.”

“Facebook had balked at removing the Bajrang Dal from the platform following warnings in a report from its security team that cracking down on the group might endanger both the company’s business prospects and its staff in India and risk infuriating Mr. Modi’s political party,” the Wall Street Journal report said.

The documents are part of an extensive array of internal Facebook communications that offer an unparalleled look at how its rules favour elites, its algorithms breed discord, and its services are used to incite violence and target vulnerable people, according to The Wall Street Journal.
