From incitement to violence camouflaged as LARPing (Live Action Role Playing) to livestreamed shootings, we have entered a new age of gamified terrorism.
The 28-year-old Christchurch attacker Brenton Tarrant was an active member of the far-right online community on 8chan, an imageboard forum with a culture of anonymity, trolling and ‘sarcastic’ racism that has become popular with white nationalists, neo-Nazis and alt-right sympathizers.
Those of us who knew the early imageboards such as 4chan from our teenage years in the 2000s, and who laughed at transgressive memes and witty pop culture references, hardly recognize these corners of the internet today.
They have undergone a profound metamorphosis over the past few years: trolling has been politicized, and shitposting weaponized. Breaking taboos is no longer an end in itself. It has become a means to spread harmful ideas.
Unregulated social media platforms such as 8chan, Voat, Gab and Minds that present themselves as "free speech" alternatives to mainstream platforms serve today's far-right as online safe havens. We call these places on the internet "Dark Social" because of their obscure nature. Some of these virtual communities are so hard to monitor and track that the gamification of extremism happened largely unnoticed and unchallenged.
With virtually no content moderation, "Dark Social" platforms are rife with hateful content and violent far-right imagery that escapes policy scrutiny. The 8chan thread on which Tarrant posted about the attack contained everything from pictures of Hitler and swastikas to popular far-right memes like Pepe the Frog and pop culture references such as pictures of the Joker.
The Christchurch attack demonstrated how far-right extremists have weaponized internet culture and infrastructure to air their violent ideology and rally support from a virtual community. It confirmed that governments, security services and tech companies are unprepared to face the challenges of far-right mobilization on unregulated platforms like 8chan.
At the Institute for Strategic Dialogue we have been monitoring the meeting hubs of these online sub-cultures over the past few years. We have watched individuals fall down the rabbit holes of radicalization in real time. Some arrived "from normal backgrounds," as Brenton Tarrant says of himself in his manifesto, only to find themselves drawn into self-reinforcing extremist bubbles.
To us, the Christchurch attack was a shock but not a surprise. The question was not if but when and where these extremist hotbeds would inspire a real-world attack.
Before Tarrant, other far-right extremists radicalized and shared sinister plans on alt-tech platforms.
The attacker who rammed a van into a crowd in Toronto last April was part of the incel subculture, the online community of "involuntarily celibate" men brought together by their hatred of women. The Pittsburgh synagogue attacker Robert Bowers was a familiar presence on Gab, where he posted about his plans to carry out an attack.
Taking down a platform like 8chan may seem like an easy fix after the Christchurch shooting. But the reality is that the problem is not confined to niche internet sub-cultures anymore.
While hateful enclaves on alt-tech platforms continue to escape policy scrutiny, far-right extremists also rely on mainstream social media platforms to spread their narratives, skillfully navigating the gray zones of legal frameworks and company policies.
Tarrant showed that the moderation of live streams and of visual content are two easily exploitable security loopholes on mainstream platforms.
Tarrant posted a picture of his weapons on Twitter a few days prior to the attack. Facebook took down the attacker's livestream video and deleted his Facebook and Instagram accounts, but archived copies continued to spread across other platforms like YouTube and Twitter. "I'm not sure how this video was able to stream for minutes," a Facebook source said. Within the first 24 hours Facebook removed 1.5 million videos of the attack.
When videos can turn violent in a split second, and with few mechanisms to stop someone from starting a livestream video, armies of content moderators are struggling to keep up. Meanwhile, far-right extremists have understood the potential of live streaming in generating sensationalism and making their actions go viral.
Ultimately, the attack exposed multiple overlapping problems. The bad news is that there is no quick fix for any of them. Solutions that focus only on individual platforms are doomed to fail.
We need wide-ranging responses that look at the entire media and tech ecosystem, from alt-tech platforms to social media giants. A broad, longer-term government response is needed to stem the growth of far-right ideology online and offline.
There is also good news. Governments in Europe have started to take steps to combat far-right extremism online. Germany adopted the NetzDG or "network enforcement" law which imposes fines on social media platforms if they do not remove hateful content. France is set to adopt a similar law in May this year. These changes, while welcome, remain largely reactive and do not apply to alt-tech platforms.
Identifying the most radicalized individuals on alt-tech platforms and denying them access could be a first step. Governments and tech companies need to work together to devise long-term solutions to fight far-right radicalization online, while strengthening civil society-led efforts to challenge the narratives that inspired the hateful attack in Christchurch.
Julia Ebner is a terrorism and extremism researcher, and author of The Rage: The Vicious Circle of Islamist and Far-Right Extremism (2017). She is Research Fellow at the Institute for Strategic Dialogue and Global Fellow at the Project for the Study of the 21st Century. Twitter: @Julie_Ebner
Cécile Guerin is a freelance French journalist based in London. She is an Associate at the Institute for Strategic Dialogue. Twitter: @CecileNGuerin