NEW YORK – Barely a week passed between the moment Hurricane Harvey began to batter Houston, in late August, and publication of the report about the provocative behavior of a leader of the Muslim community in Georgia. What started as a typical American outpouring of social solidarity and fundraising for victims of the devastating floods morphed, according to the report, into yet another demonstration of the deep social rift in society – brought to you courtesy of President Donald Trump’s extreme, schismatic policy.
“Because Donald Trump will not let victims of war and crime seek asylum in the United States, we cannot in good conscience help his people when so many Muslims cannot find shelter,” Imam Sharaj Alkalb of the Ramazala Mosque in Peachton, Georgia, was quoted as saying. The report also stated that the Muslim cleric had ordered that food packages collected for Houston’s flood victims be sent instead to Syrian war refugees.
“Allah has told us that we must consider where the need is greater and respond to it,” Alkalb apparently added.
It was hard to remain indifferent to the imam’s outrageous behavior, not least because a photograph showed him loading food packages onto a truck. But it’s harder still to be indifferent in light of the knowledge that the report was a total fabrication, plucked out of the air, with the aim of besmirching America’s Muslim citizens. Fake news, which Trump and Prime Minister Benjamin Netanyahu keep talking about.
The mosque doesn’t exist, the town is a fiction, the imam is a figment of the imagination and the photo has no connection to the story. But that didn’t stop the “sensational” report from making waves and spreading across oceans of websites and from there to Facebook pages. Some sites actually ran a follow-up to the story, claiming that the imam had subsequently been arrested.
The source of the false report was a fly-by-night site called As American as Apple Pie. Five seconds of perusing the site is enough for anyone to understand that any connection between its contents and reality is purely coincidental. But few, apparently, actually took the trouble to check it. In an era in which items skitter across the web from one person to another, when the Facebook wall has become the most-read source of news in the world, when every bored blogger is a journalist and every untenable rumor is an exclusive story – questions about the credibility and source of reports become almost immaterial.
Facebook decided to take the matter seriously, apparently not out of concern for surfers’ well-being, but in the wake of trenchant criticism leveled at the social network after the 2016 American presidential elections. Facebook had found itself widely accused of giving a free platform to rumor-mongers and conspiracy theorists. In fact, some blamed Facebook for Trump’s victory. One measure it took in response was to launch a special platform that allows surfers to warn in real time about offensive reports and stories, or items that seem unreliable. The network also joined a collaborative effort, together with The Associated Press and two sites, Snopes and Politifact, that specialize in identifying false reports, in order to reduce as far as possible dissemination of the latter on the web.
An initial indication, albeit partial, of the effectiveness of these measures was recently provided by NewsWhip, a company that tracks content circulated on the internet. The data show that during the U.S. election campaign, an astounding 40 percent of relevant items that made the rounds on Facebook were false, but that the figure dropped to just 10 percent during the French presidential elections last May.
Google, too, has introduced similar measures, such as marking false reports and establishing a special team to locate and delete problematic search results. In Google’s case, the criticism it received referred not so much to what happened during the 2016 election as to the absence of filtering by its search engine, creating a situation in which information from reliable sources intermingles almost equally with that from extremist conspiracy sites of a distinctly racist character.
For example, according to a report published in the British newspaper the Guardian last December, in response to the question “Did the Holocaust happen?” Google’s search engine produced thousands of links, the first of which came from neo-Nazi site Stormfront. Not surprisingly, the relevant page on that anti-Semitic site presented the “top 10 reasons why the Holocaust didn’t happen.”
Journalism in jeopardy
Now the U.S. government is joining the battle: Recently, the U.S. National Science Foundation announced the first grant of its kind to develop a special mechanism that would be capable of identifying fake news and warning internet users about it. The $300,000 grant was awarded to two professors from Pennsylvania State University: Dongwon Lee, an expert in information science and technology, and S. Shyam Sundar, a professor of communications. Sundar is considered one of the leading scholars of the electronic media in the United States. In addition to his teaching and research activities, which have resulted in publication of dozens of articles, he is also the founder and co-director of Penn State’s Media Effects Research Laboratory, a unique facility that focuses on the psychological and social effects of the technology and content of the electronic media and social networks.
“This is a very big problem, mainly because the social networks make it easy to spread this fake information, unlike the way the print media checks and verifies the information it receives and publishes,” Sundar said in an interview with Haaretz, a few days after the announcement of the prestigious grant. “There is a real danger here that the current trend of spreading fake news will make more and more people become skeptical about everything they read, and eventually the credibility of the traditional journalistic stories will be in jeopardy. People will view the stories they read as one big fake news, and so that’s why it is so important for us to come up with a real solution that will prevent the continuous spread of fake news all over the internet in a way that only hurts the status of the print newspaper and the entire media industry at large.”
Evidence of the loss of credibility Sundar is talking about can be found in a survey conducted in May by Harvard University, in which no fewer than 80 percent of the Republican Party voters questioned said they believe the mainstream media is full of fake news. The figure was lower for Democrats, but at 53 percent still distressingly high.
Like many scholars who spend much of their time in the laboratory, Sundar is careful not to take a political stance. He chooses his words carefully, avoids leveling criticism and tries to support his arguments with empirical academic research. Still, it’s hard for him to conceal the unease he feels when witnessing the inflated political exploitation of the term “fake news” by Trump and Netanyahu.
“I think it serves them [Netanyahu and Trump] with their supporters, who, at the end of the day, will continue to believe their arguments over those of the media,” Sundar notes, adding, “However, their use of the term ‘fake news’ is not what we mean when we talk about fake news. When President Trump says ‘fake news’ he is talking about critical news, about opinion pieces that don’t support his positions, what he describes as biased news. The bottom line is that you can’t call stories that aren’t convenient ‘fake news’ just because they don’t match your political positions.”
What do we know about people who tend to believe [real] fake news?
“First of all, people tend to believe reports that match their own views, reports that validate their positions, what we call ‘confirmation bias.’ If the false reports – the fake news – are consistent with the original positions of those people, then they are much more likely to believe them. But in the case of stories that are not political, where most readers don’t have initial positions or opinions on the subject, my research has shown that people will be much more inclined to believe reports if they have a lot of comments – stories that have gotten a lot of shares on the internet. In these cases, readers tend to believe that if it’s good for everyone, it must be good for me as well.
“Another factor is that we tend to believe our friends, stories that come from people in our social circle. In such cases, people do not stop to question the source of the story and whether the source is credible or not. Instead they tell themselves, ‘If my friend sent me the link to the story, then I guess it’s interesting enough for me to read.’ Friends become the source, and people tend to believe their friends, which can explain why Facebook has become such a great source of fake news.”
But there are people who get fake news straight into their email without knowing it’s fake, and some who spread it deliberately, to further political or social interests. One survey found that 14 percent of respondents admitted to having disseminated fake news, even though they knew it was fabricated. Those people will go on doing that even if you find a mechanism that will warn others that it’s false.
“You are probably right. It’s exactly the same, by the way, with the many people who keep reading tabloids like the National Enquirer and The Sun, even though it is clear that the stories they publish are not true. We still read them because they entertain us, because it’s fun to read about other people’s troubles. There are many reasons why we read stories even when we know they are false, but this, at the end of the day, is the reader’s choice – what to read and what not. I agree with you that there are a lot of people who spread false stories knowing that they’re fake news; the problem is that the people who receive these stories are often not aware of that, and often make decisions based on this same false information. Our goal is to inform them about the possibility that what they read may be false, so that it’s in their hands to decide whether they want to continue reading a story knowing it’s probably fake – or not.”
False news has been circulating in the United States for decades, and Donald Trump used the concept in devising his winning strategy on the way to the White House and since taking office. From CNN to The Washington Post, from MSNBC to The New York Times – whoever has had the presumption to criticize him has immediately been lumped in the unflattering category of “fake news.”
Prime Minister Netanyahu, who has been waging a fierce war against the Israeli media for the past 20 years, adopted Trump’s rhetoric when he published a special post on his Facebook page under the headline “Fake News,” which showed the logos of almost all the local news channels and newspapers (with the notable exception of the freebie local paper Israel Hayom, considered to be Netanyahu’s mouthpiece).
The irony is that Trump himself, more than any president in U.S. history, built his political career on fabricated, nonfactual “news.” What began in 2011 with the ugly and mendacious public campaign he conducted alleging that Barack Obama was not American-born, continued with a web of hundreds of lies disseminated throughout his presidential campaign. Nor did he stop after he was elected, claiming, for example, that Hillary Clinton won the popular vote thanks to millions of false ballots counted in her favor. Like many of Trump’s allegations, this one, too, has been proved to lack a factual foundation.
Furthermore, together with the fake news he spread himself, Trump enjoyed a tailwind in the form of false reports disseminated by others during his campaign. A comprehensive study conducted by BuzzFeed found that “in the final three months of the U.S. presidential campaign, the top-performing fake election news stories on Facebook [in the form of shares, reactions and comments] generated more engagement than the top stories from major news outlets.”
The widely quoted study, posted just one week after the 2016 election, found that “during these critical months of the campaign, 20 top-performing false election stories from hoax sites and hyperpartisan blogs generated 8,711,000 shares, reactions and comments on Facebook. Within the same time period, the 20 best-performing election stories from 19 major news websites [including The Washington Post, The New York Times, Huffington Post and NBC News] generated a total of 7,367,000 shares, reactions, and comments on Facebook.”
A close reading of the BuzzFeed data leaves no room for doubt about the political affiliation and ideological goals of those behind the fabricated reports: Of the 20 most popular fictitious reports circulated on Facebook ahead of the 2016 elections, 17 were intended to bolster Trump’s candidacy and adversely affect Clinton’s campaign. And the more fantastical and untenable the item, the greater its popularity.
For example, the top fake election story in terms of Facebook engagement, chalking up 960,000 shares and reactions, informed readers that, “Pope Francis shocks world, endorses Donald Trump for president.” In second place, with 789,000 such Facebook responses, was: “WikiLeaks CONFIRMS Hillary Clinton sold weapons to ISIS.” Not far behind (754,000 engagements) was: “IT’S OVER: Hillary’s ISIS email just leaked and it’s worse than anyone could have imagined.”
We know that Facebook and Google are taking measures to reduce the scale of fake news on the internet. How does your solution differ from theirs?
Sundar: “I think that the recent steps that Facebook and Google have taken are definitely a good start in the right direction, but that’s just not enough. Both companies rely heavily on human solutions, on teams of employees who try to locate and delete fake news from the internet. The problem is that even as efforts are made to block these items, fake news will keep multiplying, and the technology of those who spread it will just get better and better. And so for that reason it is clear to me that we need to find a technological solution for this problem rather than rely on a human solution.
“Our goal is to build a mechanism based on algorithms we put together that will allow the software to identify whether a story is fake news or not. The main things we want to examine when we build this mechanism are whether the story has a reliable source or not, whether the person behind it has published other stories elsewhere and where, whether it includes vocabulary that is typically used in false news stories, what advertisements are associated with the website, and whether the story is circulating only on the social networks or in the mainstream media as well. Today we already have several sites, based on manual labor, that are meant to find fake news, and one of the things our algorithm will be able to do is to draw on these sites in order to filter out some of the fake stories at the initial stage.”
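The signals Sundar lists – source reliability, the author’s publication history, telltale vocabulary, associated advertisements, and whether a story circulates only on social networks – could in principle be combined into a weighted score. The sketch below is purely illustrative: the feature names, weights, and scoring scheme are assumptions for demonstration, not the Penn State researchers’ actual (unpublished) algorithm.

```python
# Illustrative sketch of a feature-based fake-news scorer.
# All feature names and weights are hypothetical; the algorithm
# described in the article has not been made public.

FEATURE_WEIGHTS = {
    "unknown_source": 0.30,           # outlet absent from known-publisher lists
    "no_author_history": 0.15,        # byline has no other published stories
    "sensational_vocabulary": 0.25,   # ALL-CAPS headlines, clickbait phrasing
    "ad_network_flagged": 0.10,       # ads tied to low-quality ad networks
    "social_only_circulation": 0.20,  # story appears only on social networks
}

def fake_news_score(features: dict) -> float:
    """Estimate the probability (0..1) that a story is fake by summing
    the weights of the binary warning signals that are present."""
    score = sum(weight for name, weight in FEATURE_WEIGHTS.items()
                if features.get(name, False))
    return min(score, 1.0)

# A story from an unknown site, with clickbait wording, seen only on Facebook:
story = {
    "unknown_source": True,
    "sensational_vocabulary": True,
    "social_only_circulation": True,
}
print(f"Estimated chance of being fake: {fake_news_score(story):.0%}")
```

A real system would learn such weights from labeled examples rather than fix them by hand, but the structure – turn each editorial question into a feature, combine the features into a probability – is the same.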
What does that mean concretely – that these fake reports will disappear from our screens, or that we will be blocked from linking to them?
“Our goal is to create a whole system of algorithms that will be able to detect fake news. What use will be made of the system we develop is not in our control. Our goal is to define clear criteria by which the system will be able to determine, with a high degree of accuracy, the chances that a story is fake. What this means is that you may read a news story and the system will be able to tell you that there is an 80-percent chance that it’s fake. We hope that as soon as we offer an efficient and reliable technical solution, social networks such as Facebook, which today rely on people to locate fake news, will use our platform as an initial filter. A second possibility is that people will be able to download our software to their phone the way you download any other app.”
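The 80-percent example suggests what the consumer-facing layer of such a system might look like: the detector reports a probability, and the app warns the reader without blocking anything, leaving the decision in the reader’s hands. A minimal sketch, in which the threshold and the wording of the message are assumptions:

```python
# Hypothetical warning layer on top of a fake-news detector: the
# detector supplies a probability, and the reader decides whether
# to keep reading. The 0.5 cutoff is an assumed design choice.

WARN_THRESHOLD = 0.5

def warning_for(probability: float) -> str:
    """Turn a detector's fake-probability into a reader-facing message."""
    if probability >= WARN_THRESHOLD:
        return (f"Warning: there is an estimated {probability:.0%} chance "
                "this story is fake. Continue reading?")
    return "No warning shown: this story appears credible."

print(warning_for(0.8))  # the 80-percent case Sundar describes
```

This mirrors the two deployment paths Sundar mentions: the same function could sit behind Facebook’s own filtering pipeline or inside a standalone phone app.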