One of the most serious criticisms Facebook faced over its handling of hate speech against the Rohingya people in Myanmar concerned a lack of content moderators who could speak the local language.
From documents obtained by Haaretz, it seems that in Israel, too, not all content is moderated by Hebrew speakers – despite the fact that it is a country full of domestic and foreign political tensions, and where Facebook has been active for over a decade.
When Haaretz asked Facebook whether all content posted in Hebrew is examined by Hebrew speakers, it did not deny that some is not. In fact, the social media platform did not actually respond to questions but instead made do with a vague response trumpeting its efforts to deal with potentially offensive content.
The new documents were revealed thanks to a lawsuit filed by Prof. Amir Hetsroni against Facebook in May, after it deleted his account. Hetsroni is demanding his account be reinstated and one million shekels ($270,000) in compensation.
The new documents, which were marked confidential, were submitted by Facebook following a request by Hetsroni’s lawyer, Jonathan Klinger. Although Facebook refused to provide most of the requested documents, including internal correspondence and problematic posts it failed to remove from other pages, it did provide a long list of posts by the controversial academic that had been removed.
Of the more than 50 posts that appeared among the documents, only 10 appeared solely in Hebrew. More than 40 (the number is larger, but some were duplicates) were accompanied by a machine translation that failed to accurately reflect Hetsroni’s original text. In other words, in order to decide what to do with the Hebrew posts, the content moderators in charge of monitoring them needed a translation. The results are far from satisfactory and are riddled with inaccuracies.
“The embarrassing ‘translations’ Facebook is presenting to a respected Israeli court as proof that I’m spreading ‘hate speech’ are clear proof of the social network’s antidemocratic dictatorship and its contempt for the Israeli public – thanks to whom it earns billions of shekels, barely paying taxes,” said Hetsroni.
“We’re letting a golem censor us,” he added. “Facebook’s censorship is worse than any draft bill by [Culture and Sports Minister] Miri Regev, such as the ‘loyalty in culture’ bill, because it’s arbitrary and not subject to the High Court of Justice or the state comptroller. I hope the Knesset shows some courage and takes away the role of public discourse censor from Facebook – because this censorship is destroying democracy and harms not only those who think like me, but also those who see themselves as sworn Zionists and patriots.”
It could be argued that in the case of Hetsroni – who doesn’t hesitate to stretch freedom of expression to its limits, and then some (at least according to Facebook’s rules) – this is not a problem, since there are quite a few instances in which his meaning is clear even in a poor translation.
But Hetsroni isn’t really the issue here, and he is not the greatest challenge facing Facebook’s moderators. That challenge comes from people conducting serious discussions who are neither trying to push the boundaries of debate nor demanding total freedom of expression.
One example of the problematic nature of Facebook’s “censorship machine” came recently when it blocked an ad for the documentary “Naila and the Uprising.” It initially rejected the trailer, about the role played by Palestinian women in the first intifada, saying it forbids “shocking, derogatory or sensationalistic content, including ads that present violence or threats of violence.”
In another instance, it initially rejected a post sharing a painful personal story by Dafna Lustig about the sexual assault she experienced, after Facebook decided the original post was pornographic.
Over the past two years, Facebook has tried to demonstrate transparency about its moderation efforts, but has yet to reveal how many Hebrew speakers deal with Hebrew-language content – and the same is true of most non-English languages. That is despite Facebook announcing the recruitment of a further 3,000 content moderators, in addition to the 4,500 who currently work for it.
As has been documented, Facebook uses a combination of artificial intelligence systems and human moderators. The problem is that even in the era of deep neural networks, such AI systems are based on learning from the huge quantities of information fed into them.
Facebook CEO Mark Zuckerberg is counting on this, but it isn’t really intelligence as we understand it: This is software that is capable of carrying out very specific tasks relating to texts or images, and it is extremely dependent on the information fed into it. For example, if the software learns to identify problematic texts based on poor translations, it has little chance of ever reaching reasonable decisions.
In addition, many firms are trying to deal with the biases that are often built into the decisions made by neural networks such as Facebook’s – as well as the lack of transparency in the algorithms, which usually do not reveal the reasons for their decisions.
We should also remember that even when a human eye parses the posts earmarked for deletion, the process is outsourced and the moderators are not very well paid. A large number work for contractors in the Philippines and Germany. According to a Radiolab show dedicated to the subject, the moderating work is Sisyphean and wearing, and involves exposure to countless horrors such as violent video footage or child pornography, as well as texts. Moderators are required to handle huge quantities of content without understanding the context, which is likely to differ from one country to the next and also from year to year.
After receiving the court documents, Haaretz sent Facebook four questions:
1. Does Facebook employ content moderators who understand Hebrew?
2. Is all the Hebrew content moderated by Hebrew speakers?
3. How much of it is processed by artificial intelligence?
4. Is the AI based on a neural network trained on Hebrew-language data, or does it rely on machine translation from Bing or Google?
Facebook did not offer direct answers to any of the questions. Instead, it sent this reply: “Over 2 billion people in the world use Facebook and there’s no question that people share their opinion on the platform only when they feel safe. Therefore, we have clear rules regarding what is acceptable. In order to help us to manage content, there are teams working all over the world that examine posts 24 hours a day, seven days a week, in all the time zones and in dozens of languages.
“In certain instances, the examination is also sent to a team at Facebook that has expertise in the topic in question and is familiar with that country. We also invest a great deal in AI technology, in order to help deal with problematic content on Facebook in the most effective manner. We remove content from Facebook regardless of who posts it when it violates our rules.
“A fast and accurate survey of posts is crucial for safeguarding Facebook users. That’s why we are doubling the number of people on the security teams this year to 20,000 – over 7,500 of whom are content moderators. We are constantly improving the rules of our community and have invested significantly in our ability to enforce them effectively. This is a complex task and we still have a lot to do, but we are committed to making Facebook a safe place.”
Haaretz showed Facebook’s response to Hetsroni. He said that its reply “indicates that without our noticing, censorship has been created that tops not only Miri Regev and [outspoken Likud MK] Oren Hazan, but even the Chinese firewall. For Facebook’s information: There is no clause in the laws of the State of Israel that restricts freedom of expression only to ‘safe things.’ On the contrary: Freedom of expression is designed to make it possible to say the unpleasant and unsafe things. That’s democracy.
“If Facebook thinks a post is a criminal offense, it must immediately send it to the police and the state prosecutor, and not assume the role of censor, for which no one gave it authorization – all the more so when it turns out that Facebook’s censorship is based on ridiculous machine translations. If I may add some business advice as a former Facebook shareholder (I sold my shares after I was blocked), Facebook’s decline in recent years stems from its ridiculous censorship. People aren’t willing to invest precious time in writing a well-thought-out post, only to have Facebook delete it because they dared use the legitimate word ‘Negro.’”