Facebook did not report hateful content from India because it lacked tools: whistleblower
Despite being aware that "users, groups and RSS pages promote fear-mongering and anti-Muslim narratives," social media giant Facebook has been unable to act on or flag this content, given its "lack of Hindi and Bengali classifiers," according to a whistleblower complaint filed with the US securities regulator.
The complaint that Facebook's language capabilities are "inadequate" and lead to "global disinformation and ethnic violence" is one of several complaints filed with the Securities and Exchange Commission (SEC) by whistleblower Frances Haugen, a former Facebook employee, against Facebook's practices.
Citing an undated internal Facebook document titled "Adversarial Harmful Networks-India Case Study", the complaint sent to the US SEC by the nonprofit legal organization Whistleblower Aid on behalf of Haugen notes: "There were a number of dehumanizing posts (about) Muslims… Our lack of Hindi and Bengali classifiers means that much of this content is never flagged or actioned, and we have yet to put forward this group (RSS) for designation, given political sensitivities."
Classifiers refer to Facebook's hate speech detection algorithms. According to Facebook, it introduced hate speech classifiers for Hindi in early 2020 and for Bengali later that year. Classifiers for violence and incitement in Hindi and Bengali first went live in early 2021.
Eight documents containing dozens of Haugen's complaints were uploaded by US news network CBS News. Haugen first revealed her identity on Monday in an interview with the network.
In response to a detailed questionnaire sent by The Indian Express, a Facebook spokesperson said: "We ban hate speech and content that incites violence. Over the years, we have invested significantly in technology that proactively detects hate speech, even before people report it to us. We now use this technology to proactively detect violating content in Hindi and Bengali, alongside more than 40 languages around the world."
The company claimed that from May 15, 2021 to August 31, 2021, it "proactively removed" 8.77 lakh pieces of hate speech content in India, and that it has tripled the number of people working on safety and security issues to over 40,000, including more than 15,000 dedicated content reviewers. "As a result, we have reduced the prevalence of hate speech globally (that is, the amount of such content people actually see) on Facebook by almost 50% over the past three quarters, and it now stands at 0.05% of all content viewed. In addition, we have a team of content reviewers covering 20 Indian languages. Hate speech against marginalized groups, including Muslims, is on the rise globally, so we are improving enforcement and are committed to updating our policies as hate speech evolves online," the spokesperson added.
Not only was Facebook aware of the nature of the content posted on its platform, it had also studied, through separate internal research, the impact of posts shared by politicians. An internal document titled "Effects of Disinformation Shared by Politicians" noted that examples of "high risk disinformation" shared by politicians included one from India: an "out-of-context video stoking anti-Pakistani and anti-Muslim sentiment", which had a "societal impact".
An India-specific example of how Facebook's algorithms recommend content and "groups" to individuals comes from a study the company conducted in West Bengal, where 40% of the top users sampled, based on impressions generated on their civic posts, were found to be "false/inauthentic". The user with the highest view port views (VPVs), or impressions, to be rated as inauthentic had accrued over 30 million in the L28. L28 is referred to by Facebook as a set of users active in a given month.
Another complaint highlights Facebook's lack of regulation of "single user multiple accounts", or SUMAs, or duplicate users, and cites internal documents to describe the use of "SUMAs in international political discourse". The complaint said: "An internal presentation noted that a party official for the Indian BJP used SUMAs to promote pro-Hindi messages."
Requests sent to RSS and BJP went unanswered.
The complaints also specifically point to how "deep reshares" lead to misinformation and violence. Reshare depth is defined as the number of hops a post has travelled from the original Facebook post in a reshare chain.
India ranks among the countries Facebook accords the highest political priority. For January to March 2020, India, along with Brazil and the United States, was among the "tier 0" countries, the complaint states; "tier 1" includes Germany, Indonesia, Iran, Israel and Italy.
An internal document titled "Civic Summit Q1 2020" noted that the disinformation summary, with a "goal" to "remove, reduce, inform/measure disinformation on FB applications", had an overall budget allocation skewed toward the United States. It said 87 percent of the budget for these goals was allocated to the United States, while the rest of the world (India, France and Italy) was allotted the remaining 13 percent. "This despite the fact that the US and Canada comprise only about 10 percent of 'daily active users'…", the complaint adds.
On Tuesday, Haugen appeared before a US Senate committee, where she testified about the lack of oversight at a company that has "a frightening influence on so many".
In a Facebook post following the Senate hearing, CEO Mark Zuckerberg said: "The argument that we deliberately push content that makes people angry for profit is deeply illogical. We make money from ads, and advertisers consistently tell us they don't want their ads next to harmful or angry content. And I don't know of any tech company that sets out to build products that make people angry or depressed. The moral, business and product incentives all point in the opposite direction."