Facebook this week stopped advertisers from targeting messages to people interested in topics such as "Jew haters" and "how to burn Jews" after journalists inquired about it, the news organization ProPublica reported on Thursday.
ProPublica, a nonprofit outlet based in New York, said it found the topics in Facebook's self-service ad-buying platform and paid $30 to test them with its own content. Another category it found was "History of 'why Jews ruin the world.'"
The anti-Semitic categories were created by an algorithm rather than by people, ProPublica reported. Some 2,300 people had expressed interest in them.
Facebook, the world's largest social network, said in a statement that it had removed the ability to buy targeted marketing based on those topics and believed the use of the topics in ad campaigns had not been widespread.
Along with Alphabet's Google, Facebook dominates the fast-growing market for online advertising, in part because it lets marketers target their ads based on huge volumes of data.
Facebook, though, has had difficulty ensuring that advertisers on its self-service system comply with its terms and conditions.
Last year, ProPublica reported that Facebook allowed advertisers to exclude users by race when running housing and other ads, despite a prohibition on such discrimination under the U.S. Fair Housing Act of 1968.
Facebook last week said an operation likely based in Russia spent $100,000 on thousands of U.S. ads promoting social and political messages over a two-year period through May, fueling concerns about foreign meddling in U.S. elections.
The company said it shut down 470 "inauthentic" accounts as part of an internal investigation into those ads.
The anti-Semitic targeting categories likely were generated because people listed those themes on their Facebook profiles as an interest, an employer or field of study, ProPublica reported.
Rob Leathern, product management director at Facebook, said in a statement on Thursday that sometimes content appears on the network that "violates our standards."
"In this case," he went on, "we've removed the associated targeting fields in question. We know we have more work to do, so we're also building new guardrails in our product and review processes to prevent other issues like this from happening in the future."
Facebook said it was considering other changes to its advertising platform, such as adding more reviews of targeting categories before they show up in the self-service platform.