A Human Rights-Based Approach to Social Media Platforms

By: Rikke Frank Jørgensen

February 26, 2021

Is Social Media Ethical?

How to respond to tech giants has become the regulatory challenge of our time, illustrated most recently by the debate around Twitter’s expulsion of former President Trump in January 2021. The services provided by their platforms have an impact far beyond determining who can participate in public debate, touching upon a broad array of human rights and public policy issues related to discrimination, privacy, data protection, access to information, freedom of opinion and expression, and freedom of assembly and association, among others.

Part of the challenge with figuring out the right regulatory response to social media platforms like Facebook and Twitter results from their dual role as public and private spheres. The platforms have become an important part of contemporary public space: They facilitate our social infrastructure and have been compared with public utilities. Yet, as private services they are governed by commercial priorities and shareholder interests rather than the public interest. In fact, the companies’ private control over a widely used public resource and their ability to turn all kinds of human activity into highly valued data have created the wealthiest companies of our time.
In terms of human rights, social media platforms have a huge impact on how an individual may express, search for, and encounter information. Individuals may be subjected to discrimination through or by the platforms, or have their privacy and the protection of their personal data undermined. However, as private companies they are not bound by human rights law, unless human rights standards are translated into national regulation. Such translation has happened in many areas of life—workers’ rights, protection of children, environmental protection, and protection of journalists and press freedom, for example. Yet there is still no regulation that stipulates the role and responsibility of tech giants whose size and reach mean that their impact on individual speech, public debate, discrimination, and privacy may in many contexts be more significant than that of the state itself.

If we zoom in on social media content regulation as an example, there are (at least) two issues at stake: protecting freedom of expression (ensuring legal content remains online) and enforcing the boundaries of freedom of expression (removing unlawful content). 

So far, most attention has been directed towards the companies’ role in removing unlawful content. In Germany, for example, the limited liability regime that has guided internet services in Europe for the past 20 years has now been supplemented with the Network Enforcement Law (NetzDG). The NetzDG requires companies to act quickly to remove unlawful content and provides for substantial penalties where they fail to do so. Similar regulation has been proposed in other European countries and across the globe. While Germany’s motive for such regulation is understandable (rapidly removing unlawful content), it raises freedom of expression issues, since it delegates an enormous number of speech decisions to private companies, with penalties and short time frames attached. While a court can take weeks or months to decide on a case—because many such cases are complicated and require careful consideration of context—the tech giants must decide upon thousands of cases within hours. Under such conditions, the risk of overregulation (that is, removing legal content) is substantial.
Moreover, the companies act without the safeguards for freedom of expression that a state would be obliged to follow, such as independent judicial review, oversight, and complaint mechanisms. Arguably, it is the state’s responsibility to ensure that freedom of expression is protected when it delegates judgments about unlawful content to private companies through laws such as the NetzDG. This obligation has not been met by existing laws and is mostly absent from the public debate on content regulation.

When it comes to keeping legal content online, companies are under no binding obligations. As private companies, they are free to define and enforce their terms of service and community guidelines, including towards content that is protected speech under human rights law. In response, the UN Special Rapporteur on Freedom of Expression has recommended that companies follow international freedom of expression standards in their content moderation practices. This implies that their decisions on content shall be guided by the same standards of legality, necessity, and legitimacy that bind states when they restrict freedom of expression. It follows that company rules should be clear and specific enough for users to predict with reasonable certainty what content will be ruled out (principle of legality); that any restriction must serve a legitimate aim under human rights law (principle of legitimacy); and that it must be applied as narrowly as possible, with no less invasive measure available (principle of necessity).

Taking such a human rights approach to content moderation is sensible for several reasons.

First, it provides a framework that is based on international law and as such can provide guidance to companies across a variety of national contexts—including those with national laws that undermine human rights. Rather than discussing whether companies should have greater responsibility over content (or no responsibility at all), the starting point is the protection of individual rights and freedoms, and the norm of holding states and companies accountable to the same standards. Human rights law provides social media companies with a predictable and consistent baseline for users to rely on, irrespective of context.
Second, human rights law provides a normative baseline that can be used to push back against illegitimate state restrictions. Soft law, such as the UN Guiding Principles on Business and Human Rights, provides guidance on how companies can respond to government demands for excessive content removals or other types of human rights violations. The guiding principles also establish standards of due diligence, transparency, and remediation that companies should implement across their policies, practices, and products. Such standards are long overdue and are crucial to ensure that companies can be held accountable for their human rights impact.

Finally, human rights law is based on a societal vision that aims for inclusive, equitable, and diverse public participation that supports a broad range of different and potentially conflicting viewpoints. At the same time, it provides for restrictions to counter content that incites violence, hate, and harassment tailored to silence individuals, specific groups, or minorities. It calls for specific attention to vulnerable groups and communities at risk, and as such provides a normative framework for content moderation that incorporates both the values of free speech and protection against abuse, violence, and discrimination.
