CFOM attended UNESCO’s Internet for Trust Conference, held in Paris from 21 to 23 February. It was a multi-stakeholder conference to discuss the current draft of UNESCO’s guidelines on how to regulate the Internet. The more than 4,000 participants, the majority of them online, testify to how widely shared the concern about the Internet is, and in particular about discourse on social media (the focus of most of the discussions) and, correspondingly, the question of whether and how to regulate the Internet.

Image Above: UNESCO Internet for Trust Conference. Image Credit: UNESCO

It is clear that social media platforms have become the new stage on which civil life takes place and on which civic values and principles are contested and fought over. It is equally evident that social media affordances favour ‘bad news’ and ‘sensational’ or ‘negative’ statements over facts and truth, and amplify the former with unprecedented speed and at unprecedented scale with the help of algorithms, AI, fake accounts and bots. Bringing these two together (social media as the new stage for civil life, and their affordances), it is no surprise that it is on social media that we find dangerous speech. Dangerous speech is speech that threatens to damage the social fabric of society, that polarises and amplifies divisive discourse, and that marginalises, excludes and attacks certain groups within society. Here, we use the term ‘dangerous speech’ rather than ‘hate speech’ because the former is, in our view, more encompassing: it includes dehumanisation, rumours, lies about people, mockery, insults and incivility, as well as the propagation of stereotypes and prejudice. Hate speech, by contrast, is and needs to be understood in a narrow sense, especially when it comes to regulation through laws or professional self-regulation: hate is the desire to inflict harm, with hate speech inciting that harm, sometimes going as far as the killing of the other. Hate is directed against individuals because they belong to a certain group, whether defined by gender, race, ethnicity, culture or religion. Confining hate speech to a narrow definition does not mean that dangerous speech is less important, less despicable or more acceptable. What it means is that encroachments on freedom of expression through the law need to remain exceptional, and that we should resist the temptation to regulate speech that is merely ugly and offensive.


UNESCO emphasises the need to take a human rights approach to regulating the Internet and states that the ‘aim of the Guidelines is to support the development and implementation of regulatory processes that guarantee freedom of expression and access to information while dealing with content that is illegal and content that risks significant harm to democracy and the enjoyment of human rights’. Accordingly, it references Article 19 of the ICCPR (the International Covenant on Civil and Political Rights).

The aim of dealing with content that is illegal is relatively straightforward. A multitude of countries across the world have laws regarding, for example, public order, terrorism, incitement to violence, child pornography, group libel and slander, which are actively applied to the online and digital environment. Of course, enforcement is uneven: states prosecute offenders to varying degrees and with varying commitment and success (a problem we also see in other areas, including journalism safety and media freedom). It is good to see UNESCO reaffirming these existing norms through its regulation initiatives and aiming to build new norms or, more accurately, to extend them to the online world.

The aim of dealing with content that risks significant harm to democracy and the enjoyment of human rights is more controversial and carries risks; it needs careful examination. If we think about the prevalence of mis- and disinformation on social media, it is obvious that an environment in which citizens cannot distinguish between truth and lies, or between what is correct and what is false, is dangerous in the sense that it undermines the common facts and knowledge that provide the foundation for discussion, debate and the addressing of shared problems. To require tech companies to flag mis- and disinformation clearly and systematically is indeed reasonable. To ask tech companies to be transparent about their use of algorithms and AI is equally reasonable and necessary. But to ask them to moderate content in any way beyond complying with their own codes of conduct (which also forbid hate speech in the legal sense) is to give more power to the already unaccountable, thereby undermining democratic mechanisms and accountability even further. In addition, to ask tech companies to engage in digital media literacy education is strange, as such education should be left to professional educators and should find its place in school curricula across the world.

Finally, and importantly, whenever restrictions are imposed on freedom of speech, a judgement is made about which, and whose, human right outweighs that of another. Whereas this is legitimate where the law already provides for it (see above), such judgements cannot be extended, whether by law or by policy, to discourse that is dangerous. UNESCO has traditionally operated with quite a wide definition of speech and needs to clarify what it is attempting to regulate: if it is offensive speech, then we are heading down a dangerous path, and, however well-intentioned, the harm done will be greater than the good.

Image Above: Nobel Peace Prize-winning journalist Maria Ressa

The UNESCO conference was insightful, and the debate was dynamic. Though there were calls to propose concrete steps, discussion dominated, and given that regulation can do more harm than good, even unintentionally, continuing to talk is not necessarily a bad thing. It was also interesting to listen to Chris Wylie, who eloquently and persuasively pleaded with the multi-stakeholder attendees, and international organisations more generally, to finally treat the challenges we face with social media as essentially an engineering problem. He advocated for safety checks before social media platforms are allowed to operate, in the same way that planes, cars or toasters have to pass safety checks and be approved by experts before going to market. For him, this is more effective than continuing to focus on the problems caused by the lack of engineering oversight; after all, we accept no such arrangement in any other sector. If planes fall out of the sky, no one calls for stronger NGOs to deal with plane disasters. Instead, airlines are held accountable and engineers are held accountable; why should the case be different for social media? Chris certainly had a point, a very good one at that.

Though this event was important in setting and reaffirming norms, we would hope for a wider approach: one that combines laws and regulation with pre-market safety checks, but that also addresses problems within societies themselves, such as divisions, a lack of digital media literacy, underdeveloped communicative skills (discursive civility and counter-speech) and a lack of knowledge about social media and the Internet. If we do not address the root causes of hate speech and dangerous speech within society, such as injustice and oppression, then interventions on social media merely clean up symptoms but do not ‘cure’ societies. The problem is not so much one of tech company accountability as one of a lack of social justice in society.