From research as recent as March to company memos dating back to 2019, internal Facebook documents on India show the company's ongoing struggle to curb abusive content on its platforms in the world's largest democracy and its biggest growth market. India's communal and religious tensions have a history of being stoked into violence by social media.

The files show that Facebook has known about these problems for years, raising questions over whether it has done enough to address them. Many critics and digital experts say it has not, particularly in cases involving members of Prime Minister Narendra Modi’s ruling Bharatiya Janata Party (BJP).

Facebook has grown increasingly prominent in politics around the world, and India is no exception.

Modi has been credited with using the platform to his party’s advantage during elections, but The Wall Street Journal’s reporting last year cast doubt on whether Facebook was selectively enforcing its hate speech policies to avoid backlash from the BJP. Modi and Mark Zuckerberg, Facebook’s chairman and CEO, have exuded warmth toward each other, memorialized in a 2015 photo of the two hugging at Facebook headquarters.

The leaked documents include internal company reports on hate speech and misinformation in India that was, in some cases, amplified by the platform’s “recommended” feature and its algorithms. They also include employees’ misgivings about how these issues were handled and their discontent over the viral “malcontent” on the platform.

According to the documents, Facebook saw India as one of the most “at risk” countries in the world and identified Hindi and Bengali as priority languages for “automation upon violating hostile speech.” Yet Facebook did not have enough local-language moderators or content flagging in place to stop misinformation that at times led to real-world violence.

In a statement to the AP, Facebook said it had “invested significantly in technology to find hate speech in various languages, including Hindi and Bengali,” which has resulted in a “reduced amount of hate speech that people see” in 2021.

“Hate speech against marginalized groups, including Muslims, is increasing globally,” a company spokesperson said, adding that the company is working to improve enforcement and will continue to update its policies as hate speech evolves online.

This AP story, along with others being published, is based on disclosures made to the Securities and Exchange Commission and provided to Congress in redacted form by former Facebook employee-turned-whistleblower Frances Haugen’s legal counsel. A consortium of news agencies, including the AP, obtained the redacted versions.

In February 2019, ahead of a general election when concerns about misinformation were running high, a Facebook employee wanted to know what a new user in India would see in their news feed if all they did was follow pages and groups recommended by the platform itself.

The employee created a test account and kept it active for three weeks, a period during which an extraordinary event rocked India: a militant attack in disputed Kashmir killed more than 40 Indian soldiers, bringing the country close to war with rival Pakistan.

The note, titled “An Indian Test User’s Descent into a Sea of Polarizing, Nationalistic Messages” and written by an employee whose name is redacted, said the employee was shocked by the constant flood of polarizing content, misinformation and violence that filled the account’s news feed.

Groups recommended by Facebook that seemed benign and harmless quickly morphed into something far more sinister, where hate speech, unsubstantiated rumors and viral content ran rampant.

The recommended groups were inundated with fake news, anti-Pakistan rhetoric and Islamophobic content. Much of it was extremely graphic.

One included a man holding the bloodied head of another man covered in a Pakistani flag, with an Indian flag in the place of his own head. The platform’s “Popular Across Facebook” feature showed a slew of unverified content about the retaliatory Indian strikes into Pakistan after the bombings, including an image of a napalm bomb from a video game clip that was debunked by one of Facebook’s fact-checking partners.

The researcher wrote, “Following the test user’s News Feed, I’ve seen more images of dead people in these past three weeks than in my entire lifetime.”

It raised deep concerns over what such divisive content could lead to in the real world, where local news outlets at the time were reporting on Kashmiris being attacked in the fallout.

The researcher asked, “Should we as a company have an extra responsibility for preventing integrity harms that result from recommended content?”

The memo, circulated among employees, did not answer that question. But it did expose how the platform’s own algorithms and default settings contributed to the spread of such content. The employee noted that there were clear “blind spots,” particularly in “local content,” and said they hoped the findings would spark conversations about how to avoid such “integrity harms” for people who are “significantly different” from the average U.S. user.

Even though the research was conducted during three weeks that were not an average representation, the employee acknowledged that it showed how such “unmoderated” and problematic content could “totally take over” during a “major crisis event.”

The Facebook spokesperson said the test study had “inspired deeper, more thorough analysis” of its recommendation systems and “contributed to product changes to improve them.”

“Separately, we continue to work on curbing hate speech and we have strengthened our hate classifiers, which now include four Indian languages,” the spokesperson said.

Other research on misinformation in India highlights the extent of the problem.

A month before the user test, another assessment raised similar alarms about misleading content. In a presentation circulated to employees, the findings concluded that Facebook’s misinformation tags were not clear enough for users, underscoring that the company needed to do more to stem hate speech and fake news. Users told researchers that clearly labeling information would make their lives easier.

It was also noted that the platform did not have enough local-language fact-checkers, which meant a lot of content went unverified.

Beyond misinformation, the leaked documents expose another problem dogging Facebook in India: anti-Muslim propaganda, particularly by Hindu hard-line groups.

India is Facebook’s largest market, with more than 340 million users; nearly 400 million Indians also use the company’s messaging service WhatsApp. Both platforms have been accused of being vehicles for spreading hate speech and fake news against minorities.

These tensions played out on Facebook in February 2020, when a politician from Modi’s party uploaded a video in which he called on his supporters to remove mostly Muslim protesters from a road in New Delhi if the police did not. Violent riots erupted within hours, killing 53 people, many of them Muslims. Facebook removed the video only after it had been viewed and shared thousands of times.

Misinformation targeting Muslims again went viral on the platform in April, when the hashtag “Coronajihad” flooded news feeds, blaming the community for a surge in COVID-19 cases. The hashtag was popular on Facebook for days but was later removed by the company.

For Mohammad Abbas, a 54-year-old Muslim preacher in New Delhi, those messages were alarming.

Some videos and posts purportedly showed Muslims spitting on authorities and hospital staff. They were quickly proven to be fake, but by then India’s communal fault lines, still stressed by the deadly riots a month earlier, were again split wide open.

The misinformation triggered a wave of violence, business boycotts and hate speech toward Muslims. Thousands from the community, including Abbas, were confined to institutional quarantine for weeks across the country. Some were even sent to jail, only to be exonerated later by the courts.

“People posted fake videos on Facebook claiming that Muslims had spread the virus. What started out as lies on Facebook became truth for millions of people,” Abbas said.

Criticism of Facebook’s handling of such content intensified in August last year, when The Wall Street Journal published stories detailing how the company had internally debated whether to classify a Hindu hard-line lawmaker from Modi’s party as a “dangerous individual,” a designation that would ban him from the platform, after a series of anti-Muslim posts from his account.

The documents show that the leadership dithered on the decision, prompting concern from some employees; one wrote that Facebook was only designating non-Hindu extremist organizations as “dangerous.”

The documents also reveal that the company’s South Asia policy head had herself shared what many felt were Islamophobic posts on her personal Facebook profile. At the time, she had also argued that classifying the politician as “dangerous” would hurt Facebook’s business prospects in India.

An internal document from December 2020 examining the influence of powerful political actors on Facebook policy decisions notes that the company “routinely makes exceptions for powerful actors when enforcing content policy.” It also cites a former Facebook chief security officer as saying that local policy heads are generally drawn from the ruling political party and rarely come from disadvantaged ethnic groups, religious creeds or castes, which “naturally bends decision making towards the powerful.”

Months later, the India official quit Facebook. The company also removed the politician from the platform, but documents show that many employees felt Facebook had mishandled the situation, accusing it of selective bias to avoid running afoul of the Indian government.

One employee wrote that several Muslim colleagues had been “deeply disturbed/hurt” by the language used in posts from the Indian policy leadership on their personal FB profiles.

Another wrote that “barbarism” was being allowed to “flourish on our network.”

According to the leaked files, it remains a problem for Facebook.

The company was debating internally whether it could control the “fear mongering and anti-Muslim narratives” pushed on its platform by Rashtriya Swayamsevak Sangh, a far-right Hindu nationalist organization to which Modi also belongs.

One document, titled “Lotus Mahal,” noted that members of the BJP had created multiple Facebook accounts to amplify anti-Muslim content, ranging from “calls for ousting Muslim populations from India” to “Love Jihad,” an unproven conspiracy theory pushed by Hindu hard-liners who claim that Muslim men use interfaith marriage to coerce Hindu women into converting to Islam.

The research found that much of this content was never flagged or acted on because Facebook lacked “classifiers” and “moderators” in the Hindi and Bengali languages. Facebook said it added hate speech classifiers for Hindi in 2018 and for Bengali in 2020.

Employees also stated that Facebook had not yet “put forward a nomination to designate this group due to political sensitivities.”

The company said its designations process includes a review of each case by relevant teams across the company and is agnostic to region, ideology or religion, focusing instead on indicators of violence and hate. It did not reveal, however, whether the Hindu nationalist organization had since been designated as “dangerous.”