By 2017, Facebook had launched its newest emoji, the angry face, because, undeniably, it had become an angrier place. Now users could express resentment and opposition toward each other’s posts about Donald Trump or Nancy Pelosi, the MAGA movement or “radical liberals,” and everything in between.
Certainly, Facebook is not alone in its struggle to maintain a healthy public sphere. Twitter may not have a “dislike” emoji, but it arguably does not need one. There, the heart icon is often used to approve of the vitriolic memes, conspiracy rhetoric, and hateful comments that flood a network designed for sharing perspectives. Like Facebook, Twitter has also had to contend with a wave of unwanted extremists, from ISIS to white nationalists to QAnon. Accordingly, social networks have had to rethink their user policies to adjust to the increasingly anti-social climate. Most of these updates have addressed the surge in hate content: nearly all of the aforementioned groups have been deplatformed by Facebook and Twitter, and both companies banned Holocaust denial in October 2020.
But while social networks pull more plugs on organized hate and militancy, the general climate of political vitriol remains. This raises more complex questions about how social media should move forward in an online environment that so grossly embodies the acrimonious state of American politics. While social media cannot solve that problem, some suggest that platforms should try to remove some of the jagged shards of political extremism that have made their spaces too unfriendly to navigate. Doing so would place belligerent political speech alongside other forms of prohibited content, which narrowly include incitements of violence and hate speech.
But should belligerent political speech become grounds for community banishment? Few social media accounts have tested that controversial question as much as Donald Trump’s Twitter feed.
Deplatforming a President
There is arguably no more definitive form of protected speech than that of a sitting president, and few cases as contentious as that of President Trump’s Twitter account. Since 2015, there have been regular petitions to deactivate Donald Trump’s Twitter feed, citing divisive content that often tested the boundaries of Twitter’s user policies. From tweets that circulated false statistics about the number of “whites killed by blacks” in America to conspiratorial videos about migrants being paid to travel to the U.S. border, the former president often used social media as a dog whistle. He never called for direct violence, but he gleefully reposted fake videos of himself driving a golf ball into Hillary Clinton’s back or physically assaulting the media.
All the while, many of his surrogates insisted we should not take President Trump’s tweets so seriously. “He doesn’t really mean it,” supporters would often say. But many of the same followers then celebrated his social media diatribes for “telling it like it is,” exposing a contradictory logic that was destined to implode.
In Trump’s last months in office, his Twitter activity amplified the question of whether there should be limits to inflammatory political speech on social networks. After he spent months spreading fictitious claims that the election had been stolen from him, and that the theft could be rectified if only something would stop Congress from certifying the results, Twitter began to flag the president’s tweets as false. President Trump then used his account to promote the #StopTheSteal rally that culminated in the storming of the U.S. Capitol. And during the insurrection, he continued to tweet that his vice president “didn’t have the courage to do what should have been done,” a message that would have arrived on rioters’ phones like a walkie-talkie dispatch as they breached the building.
Though President Trump never explicitly called for acts of violence, Facebook had evidently seen enough. The next day, the company suspended his account indefinitely. And after some deliberation, Twitter also cut the transmission, permanently.
Where We Go from Here
Certainly, some of the very same questions about whether Donald Trump’s words incited violence were recently weighed by Congress during his second impeachment trial. But for social networks, the implications of deciding this issue do not concern just one man.
Deplatforming users who traffic in volatile conspiracy theories, or suspending the accounts of those who cheer on violent aggression in the name of patriotism, would immediately purge a vast number of users. Free speech advocates would argue that censorship of this kind could quickly lead down a slippery slope of endless expulsions, because precedent would demand it. And they may be right.
But social networks are not a courtroom. Neither are they the halls of Congress. They do not have to be neutral or bound by tradition in governing what they feel secures a safe environment for their users.
We might instead think of social networks more like universities, which occasionally face a similar challenge when an extremist speaker is invited to campus. The question of “to host or not to host” is partly about lending the stature of a school’s name to someone who would use it to sow hostility. It is also about the community that lives there. Likewise, online communities have a right to decide to whom they open their doors and to whom they offer a stage.
At the same time, social networks will not be able to transform their spaces by playing a constant game of whack-a-mole, deplatforming one conspiracy theory after another. New versions of QAnon or the Proud Boys will emerge under different names. Nor is there a cure-all set of guidelines that can expunge extremism. Identifying hate speech itself has proven a daunting task, with so much of it concealed in political context.
Instead, a more sweeping approach may be for companies like Facebook and Twitter to work together to stigmatize hateful political posting, using their considerable influence to educate the community about speech that is designed solely to inflame.
And there are a few common forms that networks might work to expose immediately to the disinfecting sunlight. These include not just incitements of violence, but of fear and manufactured outrage (“#WhiteGenocide is underway”). They can shine a spotlight on dog-whistle conspiracies devised to stoke cultural blame (“George Soros is secretly funding Antifa”), or on posts that glorify patriot resistance against a designated enemy (“Defend the country against the #MigrantInvasion”). Such narratives are often smokescreens for hate, and they have been used as pretexts for violence.
No, social media cannot scrub these narratives from their networks. But they can spend some of their own political capital to educate and inoculate the greater community against those who have only come to poison it.