Twitter Fails to Confront Vicious Anti-Semites, Jewish Journalists Say

New York Times editor Jonathan Weisman has quit the social network after the company refused to take down anti-Semitic tweets. Others say Facebook does a better job battling hate.

A screenshot of one of the anti-Semitic tweets Jonathan Weisman retweeted. (Screenshot: Twitter)

Jonathan Weisman had had enough.

On Wednesday, the New York Times editor tweeted the following message to his nearly 35,000 followers: “I will leave @twitter to the racists, the anti-Semites, the Bernie Bros who attacked women reporters yesterday. Maybe Twitter will rethink.” And with that, his Twitter account went dark.

For the past several weeks, Weisman and a number of other prominent Jewish journalists have faced an onslaught of anti-Semitic slurs and threats on Twitter. Much of the abuse has come from anonymous accounts pledging their loyalty to Donald Trump in the U.S. presidential race.


Weisman said in an interview that The Times’ social media staff had reported to Twitter the most virulently anti-Semitic slurs against him and repeatedly received the following email: “We’ve investigated the account and reported Tweets for violent threats and abusive behavior, and have found that it’s currently not violating the Twitter Rules.”

Those rules state that users may not “engage in the targeted abuse or harassment of others,” including on the basis of religious affiliation.

Frustrated by Twitter’s inaction, Weisman decided to take a hiatus from the social network and from social media altogether. “I think that Twitter should be taken to task over this,” he said. “The fundamental issue here is that Twitter allows for fake accounts and anonymity and just flagrant violations of its stated rules that other social media platforms don’t.”


After tweeting that he would be signing off for an unspecified stretch, Weisman said he began to receive emails from Twitter notifying him that several of the accounts he reported had been suspended for “participating in abusive behavior.” He attributed this apparent about-face to his position at The Times.

Facebook’s human beings

Rabbi Abraham Cooper, who supervises the Digital Terrorism and Hate Project at the Simon Wiesenthal Center in Los Angeles, described the recent experiences on Twitter of Weisman and other Jewish journalists, including Jeffrey Goldberg of The Atlantic and Yair Rosenberg of Tablet Magazine, as an “indictment” of the company.

“I’m not surprised that Mr. Weisman had to make that kind of decision,” Cooper said. “Under [Twitter CEO] Jack Dorsey, the company has been moving in the right direction, but on this stuff they have done virtually nothing. Maybe this will finally wake them up.”

The Australia-based Online Hate Prevention Institute reported in February that 22 percent of the anti-Semitic tweets it analyzed had been removed over a 10-month period. For anti-Semitic content on Facebook the figure was 37 percent; for YouTube, 8 percent.

The report noted that “the status quo is no longer acceptable” regarding online abuse and that “social media platforms are starting to respond, but some are doing so more effectively than others.”

Cooper, who regularly meets with representatives of Twitter, Facebook and Google (which owns YouTube) to discuss ways of addressing online hate, said Twitter lagged far behind the other companies.

“The best of the lot is Facebook, because they assign real human beings for us to interact with, so we can generally get a quick response,” he said. “At the other end of the spectrum is Twitter.”

The Wiesenthal Center gave Twitter a D in its 2016 social media report card for its handling of hateful content, down from a C in 2015. (The company did manage a B in the terrorism category for its efforts to suppress accounts associated with the Islamic State and other terror groups.) YouTube also received a D, while Facebook earned a B minus.

Twitter representatives have committed to doing a better job, Cooper said.

“The bottom line about all these companies is we can’t expect them to become experts in every extremist group on the planet or the history of anti-Semitism,” he said. “But what we can expect is that they have to have a game plan in place to deal with extremists who are leveraging their very powerful marketing platforms to do bad things.”

Weisman had a simple recommendation to improve the Twitter experience: Make users post under their own names. “Obviously there will be people who abuse that and make up names, but what you see now is these users are using crazy names,” he said. “Any quick perusal of these accounts would tell you they were set up as instruments of hate.”

Positive ‘counter-narratives’

At the moment, users can mute, block and/or report offending accounts. But Robert Hernandez, a digital journalism professor at the USC Annenberg School for Communication and Journalism, said this was not enough.

“You can block people, but it doesn’t stop them from saying this stuff and putting this out in the world where other people can look you up and find it,” he said.

Hernandez noted that, in addition to Jewish journalists, female journalists and Muslims have also been targeted for abuse on Twitter.

In response to an inquiry about unchecked anti-Semitism on the platform, Twitter referred Haaretz to a statement last month by its head of public policy in Europe, Karen White.

“Hateful conduct has no place on Twitter and we will continue to tackle this issue head on alongside our partners in industry and civil society,” White said. “We remain committed to letting the Tweets flow. However, there is a clear distinction between freedom of expression and conduct that incites violence and hate.”

At the behest of the European Union, Facebook, Twitter, Google and Microsoft recently adopted new rules requiring them to take greater responsibility for the content they host. The “code of conduct” requires the companies to review the majority of reports about online hate speech and take action within 24 hours, as well as to promote positive “counter-narratives.” Still, the companies are not bound by these rules in the United States, a situation Cooper said was untenable.

“Hate is hate, and if something offensive is removed because it’s posted from Germany, then they should voluntarily remove content if it’s being posted from the States,” he said.

Last week, as part of a grassroots effort to combat hate on Twitter, users began putting “echo” parentheses around their usernames. Anti-Semitic trolls have been using these triple parentheses, which the Anti-Defamation League recently classified as a hate symbol, to identify Jews and coordinate online attacks against them. Thousands of Jews and non-Jews alike have joined the campaign so far.