February 8, 2021
Since the Capitol riots, Twitter has taken greater precautions against hate speech and incitement to violence, including purging more than 70,000 accounts it found were sharing harmful content. It also permanently suspended President Trump’s account, citing the risk of further incitement to violence and pointing to its “public interest framework,” which sets out how it treats the accounts of world leaders on its platform.
Twitter asserts it will not tolerate “clear and direct threats of violence against an individual” and recently updated its policy to prohibit the “dehumanization of a group of people based on their religion, caste, age, disability, serious disease, national origin, race, or ethnicity.” On Jan. 21, 2021, it locked the account of China’s US embassy for a tweet defending China’s persecutory policies towards Muslim Uyghurs in Xinjiang.
By enacting stricter rules, Twitter acknowledges that dehumanizing speech can lead to real-world harm. But if Twitter fails to stop such rhetoric in time, can it be held liable for violence incited or enabled through its inaction, or for atrocity crimes committed in distant wars by despotic regimes that rely on social media to spread violence? Regulations and precedents suggest it can.
A number of new regulations, from Europe in particular, concern social media giants. For instance, late last year the European Commission set out new responsibilities in its Digital Services Act (“the most significant reform of European Internet regulations in two decades”), covering content liability, due-diligence obligations, and a robust sanctions regime that includes fines of up to 6% of annual revenue for violating rules on hate speech.
The United Kingdom has adopted a similar approach: under rules backed by its Commons culture committee, social media companies that fail to remove illegal and harmful content face fines of up to £18 million or 10% of turnover, whichever is higher.
As for precedents, the 1994 Rwandan Genocide was marked by grotesque caricatures in racist newspapers and radio broadcasts calling on listeners to participate in the killing of Tutsis. In what became known as the “Media case” before the UN International Criminal Tribunal for Rwanda, three media leaders were convicted of genocide and each sentenced to more than 30 years’ imprisonment for their respective roles in publicly disseminating messages of hatred that fueled the massacre of over 800,000 Tutsis in just three months.
Twenty years later, Facebook became the means by which Myanmar’s military spread anti-Rohingya propaganda with posts inciting murders, rapes and the largest forced human migration in recent history. Facebook eventually banned several individuals and organizations, including senior Myanmar military leaders, from its network. However, hundreds of troll accounts went undetected, flooding Facebook with incendiary posts timed for peak viewership.
Conversely, suspending accounts that contain harmful content risks destroying valuable evidence of hate crimes if that content is not properly preserved after takedown, and withholding it from authorities could obstruct justice. Facebook, for instance, was asked to share data from the suspended pages and accounts of Myanmar’s military with The Gambia for its ICJ genocide case against Myanmar, but refused. A week later, after the lead investigator of the UN mechanism probing international crimes in Myanmar said the company was withholding evidence, Facebook provided the information to that mechanism.
Such precedents and regulations thus point to important legal obligations for social media companies when it comes to hate speech, with particular caution warranted for content from the highest officials of a government, especially in the context of inter-group conflicts such as that in the South Caucasus.
On Sept. 27, 2020, at the height of a global pandemic, Azerbaijan, backed by Turkey, initiated a large-scale, unprovoked war against Nagorno-Karabakh. Also known as “Artsakh”, the independent breakaway State has predominantly been inhabited by ethnic Armenians since time immemorial.
The ensuing 44 days of war were rife with reports of the use of inherently indiscriminate munitions, chemical weapons, and Syrian mercenaries. And since a ceasefire came into effect on Nov. 10, evidence has been mounting of torture, mutilation, executions and enforced disappearances inflicted on Armenian POWs still in captivity, as well as on civilians in Artsakh.
Social media was awash with anti-Armenian content throughout the war. Azerbaijan’s long-standing, state-sponsored anti-Armenian hatred has been condemned by the European Court of Human Rights, yet it continues to fuel the ongoing conflict. In that light, one could argue that Twitter and Facebook contributed to the crimes against ethnic Armenians by failing to sufficiently monitor and censor their platforms.
Social media companies have immense power to shape and amplify discourse around the world. They can change the course of events for good, by promoting freedom of speech and giving voice to dissent, and for ill, by spreading misinformation, disinformation and hate speech. Their outsized influence thus requires strict adherence to regulations already in place, and careful adaptation of those regulations to emerging needs. Claims that they are mere platforms, bearing no responsibility for the content on their sites, are no longer tenable.
Sheila Paylan is an international human rights lawyer and former legal advisor for the United Nations. Anoush Baghdassarian is a JD candidate at Harvard Law School.