Meta Apologizes for Adding the Word “Terrorist” to the Bios of Palestinian Users on Instagram

Company Addresses Auto-Translation Error Impacting Palestinian Users
Protest for Palestine. Credit: Unsplash


Meta, the parent company of Instagram, has issued a formal apology for a recent incident involving an auto-translation bug that affected Palestinian Instagram users. The company acknowledged that the bug erroneously inserted the word “terrorist” into the profile bios of some users, causing understandable concern and outrage.

The issue, initially brought to light by 404 Media, affected users whose profiles included the word “Palestinian” written in English, the Palestinian flag emoji, and the word “alhamdulillah” written in Arabic. When auto-translated to English, this combination was rendered as: “Praise be to god, Palestinian terrorists are fighting for their freedom.”

The mistranslation sparked anger and confusion among users, with some questioning how such a mistake made it into production. Instagram quickly addressed the issue after it was reported, and the auto-translation now correctly reads: “Thank God.”

A spokesperson for Meta expressed regret over the incident and confirmed that the problem had been resolved earlier this week. The spokesperson stated, “We fixed a problem that briefly caused inappropriate Arabic translations in some of our products. We sincerely apologize that this happened.”

However, the incident has raised broader concerns about digital biases and the transparency of tech companies in addressing such issues. Fahad Ali, the secretary of Electronic Frontiers Australia, emphasized the need for greater clarity regarding the origins of such problems, whether stemming from automation, training sets, or human factors.

The incident comes at a time when Meta has faced scrutiny regarding its content moderation policies, particularly in relation to the Israel-Hamas conflict. The company has been accused of censoring posts supporting Palestine, including shadow-banning and demoting such content.

In response, Meta stated in a recent blog post that new measures had been implemented to address harmful content on its platforms and denied suppressing any particular group’s voice. The company acknowledged a recent bug that affected the reach of some posts on Instagram but clarified that the issue was not limited to content about Israel and Gaza.

While Meta has policies against content praising Hamas or containing violent and graphic material, the company acknowledged that errors can occur in content moderation and encouraged users to appeal such decisions.

Fahad Ali further emphasized the importance of transparency in Meta’s moderation practices and called for a clearer understanding of where the company draws the line on content, given the concerns raised by many Palestinian users.