Bad connections: Countering far-right extremism in online spaces

Since the advent of the COVID-19 pandemic, the online activity of far-right groups has accelerated. Far-right movements around the world have instrumentalized the pandemic to validate white supremacist views, and many of these groups have been active in rallies against pandemic public health restrictions. The already-sizeable online presence of the far right in Germany, Switzerland and Austria, for example, has grown by 18% since March 2020. And Canada’s “Freedom Convoy,” which became notorious partly for its numerous ties to far-right groups, was organized primarily through social media platforms, where thousands of convoy participants and supporters congregated for information about the ongoing occupation.

The use of online spaces by far-right groups to recruit and mobilize is not a new phenomenon, however. Other violent events tied to far-right movements—like the 2019 attacks on two mosques in Christchurch and the 2017 “Unite the Right” rally in Charlottesville—were organized, mobilized, or announced publicly on social media networks. These events highlighted in tragic detail the role of digital platforms in enabling terrorist attacks and fuelling far-right rhetoric online.

In the wake of the Christchurch attack, governments around the world adopted the Christchurch Call to Action and identified the tech sector as a key partner in creating safer digital spaces. Since then, some large tech platforms have been working to build competency in disrupting extremist activity online. In 2020, for example, Facebook announced that it would ban communities representing the far-right conspiracy movement QAnon across its platforms. And some recent projects show promising results for preventing the proliferation of far-right extremism through counter-speech and redirection efforts. Facebook’s Redirect Initiative targets Facebook users searching for keywords linked to violent extremism and invites them to visit the websites of local intervention providers, such as Life After Hate or EXIT Australia. Prevention efforts like the Redirect Initiative build on evidence showing that positive counter-messaging and viable exit options are key to lessening the allure of far-right beliefs and dampening the desire to engage with far-right communities online.
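
To make the mechanism concrete, here is a minimal sketch of how keyword-triggered redirection can work. It is illustrative only: the keyword list, country mapping, and URLs are hypothetical placeholders, since the Redirect Initiative’s actual implementation is not public.

```python
# Minimal sketch of keyword-triggered redirection (illustrative only).
# The keyword list, country mapping, and URLs below are hypothetical
# placeholders, not the Redirect Initiative's actual data or code.

FLAGGED_KEYWORDS = {"example extremist slogan", "example group name"}

# Hypothetical mapping from a user's country to a local intervention provider.
INTERVENTION_PROVIDERS = {
    "US": "https://example.org/life-after-hate",
    "AU": "https://example.org/exit-australia",
}
DEFAULT_PROVIDER = "https://example.org/intervention-directory"

def intervention_link(search_query: str, country: str) -> str | None:
    """Return a provider URL if the query matches a flagged keyword;
    otherwise None, and search results are shown as usual."""
    normalized = search_query.casefold()
    if any(keyword in normalized for keyword in FLAGGED_KEYWORDS):
        return INTERVENTION_PROVIDERS.get(country, DEFAULT_PROVIDER)
    return None

if __name__ == "__main__":
    print(intervention_link("join example group name", "AU"))  # provider URL
    print(intervention_link("weather forecast", "AU"))         # None
```

The notable design choice is that the search itself is not blocked: the counter-message is offered alongside results, consistent with the evidence on exit options cited above.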

However, some experts suggest that censoring extremist content, or redirecting users away from it, is not enough to prevent the proliferation of far-right extremism online. Free market logic guarantees that where demand exists, profit-motivated suppliers will follow—and there has been no shortage of private actors looking to capitalize on the demand for a platform that will tolerate incendiary and extremist content.

Indeed, the most substantial growth in far-right activity since the beginning of the pandemic has taken place on smaller, alternative platforms. This shift is largely attributed to major platforms’ recent enforcement efforts against extremist content. Telegram in particular is attractive to far-right groups because it hosts public channels, which are opportune spaces for extremist recruitment and content dissemination, as well as private chats, where established networks can organize activities. The migration of far-right organizing from mainstream platforms to apps like Telegram demonstrates that a crackdown on one platform will push groups towards other, even more obscure platforms, where they can continue to operate outside the orbit of government scrutiny and civil society efforts.

The far-right’s growing preference for these alternative platforms means that engaging smaller tech companies is crucial to the effort to eliminate extremist content. One project seeking to engage these companies is the Terrorist Content Analytics Platform (TCAP), which will create a digital repository of extremist content. TCAP will alert smaller tech companies when this type of content appears on their platforms and support them in removing it. While TCAP is still in the development stage, this type of multisectoral collaboration could be key to smaller companies’ efforts to disrupt extremists’ use of their platforms.
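
As a rough illustration of the repository-and-alert workflow described above, the sketch below matches uploads against a shared list of content digests. Everything here is an assumption for illustration, not TCAP’s actual design; in particular, production systems generally rely on perceptual hashes that survive re-encoding, rather than the exact cryptographic hash used here for simplicity.

```python
# Illustrative sketch of matching uploads against a shared repository of
# known extremist content. The digest list and alert hook are hypothetical;
# this is not TCAP's actual design.

import hashlib

# A shared repository would distribute digests of verified material.
# This placeholder is the SHA-256 of an empty file, chosen only so the
# demo below fires.
KNOWN_CONTENT_HASHES = {
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def matches_known_content(upload: bytes) -> bool:
    """Return True if the upload's SHA-256 digest is in the repository."""
    return hashlib.sha256(upload).hexdigest() in KNOWN_CONTENT_HASHES

def check_upload(upload: bytes) -> None:
    """Alert the platform (here, just a print) when a match is found,
    so moderators can review and remove the content."""
    if matches_known_content(upload):
        print("ALERT: upload matches known extremist content; flag for review.")
    else:
        print("No match; upload proceeds normally.")

if __name__ == "__main__":
    check_upload(b"")                # matches the placeholder digest above
    check_upload(b"holiday photos")  # no match
```

Exact-match hashing is trivially evaded by altering a single byte, which is why matching is usually paired with human review; the point of the sketch is only the alert-and-support workflow, not the matching technique.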

Notably, though, human rights advocates have warned governments to proceed with caution when outsourcing certain public interest tasks to the private sector. Content moderation, for example, is an inherently political exercise that cannot be wholly delegated to the computational solutions that tech platforms prize. The Internet is an aggregator of extremist activity, but it is also a space that facilitates free expression, unprecedented access to information, and vibrant public discourse. And subcontracting content moderation to private companies places an unusually onerous burden on a sector that is not designed to adjudicate questions of human rights and is not subject to any meaningful form of democratic oversight. Further, some smaller platforms have indicated resistance to counter-extremism efforts. Gab founder Andrew Torba, for example, has welcomed far-right groups to his platform, marketing it as a “free speech alternative” to the “left-leaning Big Social monopoly.”

Governments, therefore, should take a major interest in ensuring that any efforts by tech companies to detect and combat extremist content online are guided by principles of human rights. And any effort to disrupt far-right extremism online should also be accompanied by preventative efforts that redirect individuals to intervention services, offer viable and compelling counter-narratives, and address the underlying factors that make individuals susceptible to far-right beliefs.

Evidently, disrupting and preventing far-right extremism in online spaces will require a suite of short- and long-term solutions. Removing violent extremist content from apps and forums will temporarily obstruct communications networks, but more compelling preventative efforts are required to challenge the underlying beliefs of individuals who subscribe to far-right ideologies. As long as far-right groups exist, they will continue to find digital platforms ready and willing to host them.

Hilary Lawson

Hilary Lawson is a first-year graduate student in the Master of Global Affairs program at the Munk School and a feature contributor on Indigenous Affairs with Global Conversations. Before starting graduate school, she worked for five years as a political staffer on Parliament Hill, first for a Member of Parliament and then as an advisor to the Minister of Indigenous Services. She has travelled to Taiwan on a parliamentary staffers' delegation and contributed amendments to legislation on prisons, national security, and firearms. Her research interests include information warfare, surveillance and privacy rights, Indigenous self-determination, and criminal justice transformation. 

Hilary is a settler of British and Irish descent living on the traditional territories of the Erie, Neutral, Huron-Wendat, Haudenosaunee and Mississaugas. She holds a bachelor’s degree in Conflict Studies and Human Rights from the University of Ottawa. 
