In India, Facebook is struggling to combat misinformation and hate speech


On Feb. 4, 2019, a Facebook researcher created a new user account to see what it was like to experience the social media site as a person living in Kerala, India.

For the next three weeks, the account operated by a simple rule: follow all the recommendations generated by Facebook’s algorithms to join groups, watch videos and explore new pages on the site.

The result was a flood of hate speech, misinformation and celebrations of violence, which were documented in an internal Facebook report published later that month.

“Following this test user’s News Feed, I’ve seen more images of dead people in the past three weeks than I’ve seen in my entire life total,” the Facebook researcher wrote.

That report was one of dozens of studies and memos written by Facebook employees grappling with the platform’s effects in India. They provide stark evidence of one of the most serious criticisms leveled at the company by human rights activists and politicians around the world: that it moves into a country without fully understanding its potential impact on local culture and politics, and fails to deploy the resources to act on problems once they occur.

With more than 340 million people using Facebook’s various social media platforms, India is the company’s largest market. And Facebook’s problems on the subcontinent present an amplified version of the issues it has faced throughout the world, made worse by a lack of resources and a lack of expertise in India’s 22 officially recognized languages.

The internal documents, obtained by a consortium of news organizations that included The New York Times, are part of a larger cache of material called The Facebook Papers. They were collected by Frances Haugen, a former Facebook product manager who became a whistle-blower and recently testified before a Senate subcommittee about the company and its social media platforms. References to India are scattered among the documents Ms. Haugen filed with the Securities and Exchange Commission in a complaint earlier this month.

The documents include reports on how bots and fake accounts tied to the country’s ruling party and opposition figures were wreaking havoc on national elections. They also detail how a plan championed by Mark Zuckerberg, Facebook’s chief executive, to focus on “meaningful social interactions,” or exchanges between friends and family, was leading to more misinformation in India, particularly during the pandemic.

Facebook did not have enough resources in India and was unable to grapple with the problems it had introduced there, including anti-Muslim posts, according to its documents. Eighty-seven percent of the company’s global budget for classifying misinformation is earmarked for the United States, even though North American users make up only 10 percent of the social network’s daily active users, according to one document describing Facebook’s allocation of resources.

Andy Stone, a Facebook spokesman, said the figures were incomplete and did not include the company’s third-party fact-checking partners, most of whom are outside the United States.

That lopsided focus on the United States has had consequences in a number of countries besides India. Company documents showed that Facebook installed measures to demote misinformation during the November election in Myanmar, including disinformation shared by the Myanmar military junta.

The company rolled back those measures after the election, despite research showing that they lowered the number of views of inflammatory posts by 25.1 percent and of photo posts containing misinformation by 48.5 percent. Three months later, the military carried out a violent coup in the country. After the coup, Facebook said it had implemented a special policy to remove praise and support of violence in the country, and later banned the Myanmar military from Facebook and Instagram.

In Sri Lanka, people were able to automatically add hundreds of thousands of users to Facebook groups, exposing them to violence-inducing and hateful content. In Ethiopia, a nationalist youth militia group successfully coordinated calls for violence on Facebook and posted other inflammatory content.

Mr. Stone said Facebook had invested significantly in technology to find hate speech in various languages, including Hindi and Bengali, two of the most widely used. He added that Facebook had cut the amount of hate speech that people see globally in half this year.

“Hate speech against marginalized groups, including Muslims, is on the rise in India and globally,” Mr. Stone said. “So we are improving enforcement and are committed to updating our policies as hate speech evolves online.”

In India, “there is definitely a question about resourcing” for Facebook, but the answer is not “just throwing more money at the problem,” said Katie Harbath, who spent 10 years at Facebook as a director of public policy and worked directly on securing India’s national elections. Facebook, she said, needs to find a solution that can be applied to countries around the world.

Facebook employees have run various tests and field studies in India for several years. That work increased ahead of India’s 2019 national elections; in late January of that year, a handful of Facebook employees traveled to the country to meet with colleagues and speak with dozens of local Facebook users.

According to a memo written after the trip, one of the key requests from users in India was that Facebook “take action on misinformation connected to real-world harm, specifically politics and religious group tension.”

Ten days after the researcher opened the fake account to study misinformation, a suicide bombing in the disputed border region of Kashmir set off a round of violence and a spike in accusations, misinformation and conspiracy theories between Indian and Pakistani nationals.

After the attack, anti-Pakistan content began to circulate in the Facebook-recommended groups that the researcher had joined. Many of the groups, the researcher noted, had tens of thousands of users. A different report by Facebook, published in December 2019, found that Indian Facebook users tended to join large groups, with the country’s median group size at 140,000 members.

Graphic posts, including a meme showing the beheading of a Pakistani national and images of dead bodies wrapped in white sheets on the ground, circulated in the groups the researcher had joined.

After the researcher shared the case study with co-workers, colleagues commented on the posted report that they were concerned about misinformation ahead of the upcoming elections in India.

Two months later, after India’s national elections had begun, Facebook put in place a series of steps to stem the flow of misinformation and hate speech in the country, according to an internal document called the Indian Election Case Study.

The case study painted an optimistic picture of Facebook’s efforts, including adding more fact-checking partners — the third-party network of outlets with which Facebook works to outsource fact-checking — and increasing the amount of misinformation it removed. It also noted how Facebook had created a “political whitelist to limit PR risk,” essentially a list of politicians who were exempted from fact-checking.

The study did not mention the immense problem the company faced with bots in India, nor issues like voter suppression. During the election, Facebook saw a spike in bots, or fake accounts, linked to various political groups, as well as efforts to spread misinformation that could have affected people’s understanding of the voting process.

In a separate report produced after the elections, Facebook found that more than 40 percent of top views, or impressions, in the Indian state of West Bengal were “fake/inauthentic.” One inauthentic account alone had amassed more than 30 million impressions.

A report published in March 2021 showed that many of the problems cited during the 2019 elections persisted.

In an internal document called Adversarial Harmful Networks: India Case Study, Facebook researchers wrote that there were groups and pages on Facebook “full of inflammatory and misleading anti-Muslim content.”

The report said there were a number of dehumanizing posts comparing Muslims to “pigs” and “dogs,” as well as misinformation falsely claiming that the Quran, the holy book of Islam, calls for men to rape their female family members.

Much of the material circulated in Facebook groups promoting the Rashtriya Swayamsevak Sangh, an Indian right-wing nationalist paramilitary group. The groups took issue with an expanding Muslim minority population in West Bengal and near the Pakistani border, and published posts on Facebook calling for the ouster of Muslims from India and promoting a Muslim population control law.

Facebook knew that such harmful posts proliferated on its platform, the report indicated, and it needed to improve its “classifiers,” the automated systems that can detect and remove posts containing violent and inciting language. Facebook also hesitated to designate the R.S.S. as a dangerous organization because of “political sensitivities” that could affect the social network’s operation in the country.

Of India’s 22 officially recognized languages, Facebook said it had trained its A.I. systems on five. (It said it had human reviewers for some others.) But in Hindi and Bengali, it still did not have enough data to adequately police the content, and much of the material targeting Muslims was “never flagged or actioned,” the Facebook report said.

Five months ago, Facebook was still struggling to efficiently remove hate speech against Muslims. Another company report detailed efforts by the Bajrang Dal, an extremist group linked to the Hindu nationalist Bharatiya Janata Party, to publish anti-Muslim posts on the platform.

Facebook was considering designating the group a dangerous organization because it “incites religious violence” on the platform, the document showed. But it had not yet done so.

“Join the group and help run the group; increase the number of members of the group, friends,” said one post seeking recruits on Facebook to spread the group’s messages on the platform. “Fight for truth and justice until the unjust are destroyed.”

Ryan Mac, Cecilia Kang and Mike Isaac contributed reporting.
