Facebook Inc on Tuesday updated its community standards on hate speech to ban more racist writings and imagery, including images of “blackface” and statements pertaining to “Jewish people running the world or controlling major institutions such as media networks, the economy or the government.”
Facebook has been the target of a months-long boycott, the “Stop Hate for Profit” campaign, led in part by the Anti-Defamation League. More than a thousand companies joined the boycott of Facebook ads to pressure the company to change its online hate speech policies.
Facebook said it removed 7 million posts in the second quarter for sharing false information about the novel coronavirus, including content that promoted fake preventative measures and exaggerated cures.
It released the data as part of its sixth Community Standards Enforcement Report, which it introduced in 2018 along with more stringent decorum rules in response to a backlash over its lax approach to policing content on its platforms.
The world's biggest social network said it would invite proposals from experts this week to audit the metrics used in the report, beginning in 2021. It committed to the audit during a July ad boycott over hate speech practices.
The company removed about 22.5 million posts with hate speech on its flagship app in the second quarter, a dramatic increase from 9.6 million in the first quarter. It attributed the jump to improvements in detection technology.
It also deleted 8.7 million posts connected to "terrorist" organizations, up from 6.3 million in the prior period. It took down less material from "organized hate" groups: 4 million pieces of content, compared with 4.7 million in the first quarter.
The company does not disclose changes in the prevalence of hateful content on its platforms, which civil rights groups say makes reports on its removal less meaningful.
Facebook said it relied more heavily on automation for reviewing content starting in April as it had fewer reviewers at its offices due to the COVID-19 pandemic.
That resulted in less action against content related to self-harm and child sexual exploitation, executives said on a conference call.
"It's graphic content that honestly at home it's very hard for people to moderate, with people around them," said Guy Rosen, Facebook's vice president for integrity.