
The internal documents also show that employees questioned Facebook’s failure to invest in tackling hate speech in India
Team Clarion
NEW DELHI – Facebook brushed aside red flags raised by its own employees about a “constant barrage of polarizing nationalistic content”, “fake or inauthentic” messaging, “misinformation” and content “denigrating” minority communities in India between 2018 and 2020.
According to internal documents reported by the Indian Express, these alerts were raised by staff mandated to oversee such functions, but a 2019 internal review meeting with Chris Cox, then Vice President of Facebook, described a “comparatively low prevalence of problem content (hate speech, etc.)” on the social media site.
Three reports flagged the issue of hate speech and problem content. Two were presented before the 2019 Lok Sabha elections. The third, presented in 2020, concerned the failure of the platform’s AI (Artificial Intelligence) tools to identify vernacular languages and, as a result, to detect hate speech and problem content.
However, the internal meeting with Cox dismissed the concerns, saying the “survey tells us that people generally feel safe. Experts tell us that the country is relatively stable.”
These issues form part of disclosures made to the United States Securities and Exchange Commission (SEC) by the legal counsel of former Facebook employee and whistleblower Frances Haugen, and provided to the US Congress in redacted form. The disclosures received by the US Congress have been reviewed by a consortium of global news organizations.
One of the reports, titled “Adversarial Harmful Networks: India Case Study”, found that around 40 per cent of sampled top VPV (view port views) postings in West Bengal were either fake or inauthentic. VPV is a Facebook metric for measuring how often content is viewed on the social media site.
Another report was based on the findings of a test account. A Facebook employee opened an account with no friends to understand how the platform’s various features shaped what a new user would see. The employee found that within just three weeks, the account’s news feed had “become a near constant barrage of polarizing nationalistic content, misinformation, and violence and gore”.
The account, which added no friends and relied only on Facebook’s algorithmic recommendations, was opened on 4 February 2019. After the Pulwama terror attack, the Facebook algorithm began surfacing groups and pages dominated by political and military content.
“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the employee was quoted by the New York Times as saying.
Talking to the New York Times, Andy Stone, a Facebook spokesman, acknowledged the rise of hate speech against Muslims.
“Hate speech against marginalized groups, including Muslims, is on the rise in India and globally,” Stone said. “So, we are improving enforcement and are committed to updating our policies as hate speech evolves online.”
Responding to the concerns about hate speech and misinformation that the internal meetings had played down, a spokesperson for Meta Platforms Inc said: “Our teams have developed an industry-leading process of reviewing and prioritizing which countries have the highest risk of offline harm and violence every six months. We make these determinations in line with the UN Guiding Principles on Business and Human Rights and following a review of societal harms, how much Facebook’s products impact these harms and critical events on the ground.”
Facebook’s various apps and technologies have been brought together under Meta Platforms Inc.
The internal documents also show that employees questioned Facebook’s failure to invest in tackling hate speech in India.
“From the call earlier today, it seems AI (artificial intelligence) is not able to identify vernacular languages and hence I wonder how and when we plan to tackle the same in our country? It is amply clear that we have at present, is not enough,” another internal memo said.
The documents also revealed that employees asked Facebook to earn back the trust of its Muslim staff after a senior Facebook official shared a post on her account that denigrated Muslims.