Hate Speech: Meta Under Fire for Approving Anti-Muslim Ads Inciting Violence in India

New Delhi. Meta, the parent company of Facebook and Instagram, is under fire for approving politically charged advertisements that incited violence and spread disinformation during India's recent election. The revelation comes from a report exclusively shared with The Guardian, highlighting the company's failure to prevent harmful content despite its public commitments.

Violent and misleading ads target Muslims

The report details how Meta approved ads containing inflammatory language against Muslims, including calls for violence such as "let's burn this vermin" and "Hindu blood is spilling, these invaders must be burned." Other ads included false accusations against opposition leaders and used AI-manipulated images to amplify their messages.

Failure to detect AI manipulations

According to the report, Meta’s systems failed to identify the AI-generated nature of the images in the ads. Despite Meta's pledge to prevent the spread of AI-manipulated content during the election, 14 of the 22 submitted ads were approved. The researchers withdrew the approved ads immediately after approval; all of them violated Meta’s policies on hate speech, misinformation, and violence.

Criticism from opposition, activists

Nara Lokesh, National General Secretary of the opposition Telugu Desam Party (TDP), criticized the ruling party and Meta for allowing such ads. He claimed that these actions demonstrated a bias in favor of the Bharatiya Janata Party (BJP) and highlighted the dangers of unchecked hate speech on social media platforms.

Meta’s response and continued issues

A Meta spokesperson defended the company's processes, stating that advertisers are required to go through an authorization process and comply with applicable laws. Meta reiterated its commitment to removing content that violates community standards, including AI-generated content flagged by independent fact-checkers.

Ongoing concerns about hate speech

This incident is not the first time Meta has been accused of failing to control hate speech on its platforms in India. Previous reports have linked Facebook activity to real-life violence, including riots and lynchings. Despite efforts to expand fact-checking networks and improve oversight, the recent findings cast doubt on Meta's ability to manage content during critical elections.

Calls for stricter oversight

Maen Hammad from Ekō called out Meta for profiting from hate speech. "Supremacists, racists, and autocrats know they can use hyper-targeted ads to spread vile hate speech, and Meta will gladly take their money, no questions asked," he said. Hammad emphasized the need for Meta to develop more robust mechanisms to detect and prevent the spread of harmful content globally.

As India prepares for future elections, the report underscores the urgent need for social media platforms like Meta to enhance their monitoring systems. Ensuring fair and safe elections is crucial, and companies must take significant steps to prevent the spread of disinformation and hate speech.

Test conducted by accountability groups

The ads were submitted by India Civil Watch International (ICWI) and Ekō, a corporate accountability organization, to test Meta’s ad monitoring mechanisms. The groups aimed to see if Meta could detect and block inflammatory political content during the six-week election period, which concluded on June 1.
