
Social Media in Conversation


Legal Responsibility and Ethics

By Lauren Baehr

Removing Donald Trump from social media platforms is not enough. It is a start, but if reform starts and ends with banning Trump and COVID-19 misinformation, it will fall short. While I approve of removing people like Trump who spread dangerous lies from Twitter, internet platforms' lack of liability for the content they host and their status as for-profit businesses create an ethical conflict that makes Trump's removal alone feel inadequate.

Legally, social media platforms are not treated as publishers and so are not liable for what users post on their sites. This means they can host a free dialogue without worrying about legal trouble; without that distinction, it would be difficult for social media platforms to exist at all. Yet while they are not responsible for what appears on their sites, social media companies still profit from the interactions there. One estimate from 2017 valued Trump at $2 billion as a Twitter user. Because platforms profit from content created by others, they can profit from amoral content without the liability that endorsement would carry.

Because of their legal status as non-publishers, social media platforms have little monetary or legal incentive to regulate content consistently and meaningfully. Former President Donald Trump spouted outright lies on topics ranging from immigration to climate change for years, even while president, before the January attack on the U.S. Capitol put enough pressure on Twitter to ban him from its platform. That attack also led Twitter to ban QAnon accounts. And because of the platforms' click-based profit models, Russian bots were able to exploit the site and existing political divisions to widen rifts of misinformation and create new ones. Each time, change came only after things had gone horribly wrong, even though bad actors had been spreading lies long before that point. And each time, social media platforms profited off the buildup.

Giving credit where credit is due, these Twitter bans are a good thing, as is the company's five-strike policy for tweets spreading COVID-19 vaccine misinformation. However, many people are still using social media platforms to propagate dangerous misinformation while these companies continue to profit. The five-strike policy, for example, does not cover the broader anti-vaccination community. Additionally, much of the site's fact-checking is crowd-sourced, not company-funded: while Twitter has happily profited off dangerous misinformation, it seems unwilling to put a single dime into preventing it. Misinformation remains rampant on other popular platforms as well. Facebook and its subsidiary Instagram, with their enduring reputations as hotbeds of misinformation, have been extremely hesitant to ban accounts. According to a CNN Business report, “Instagram continued to prominently feature anti-vaxxer accounts in its search results, while Facebook groups railing against vaccines remained easy to find.” These sites have committed to improving their handling of dangerous misinformation, but they have said the same thing many times before and the situation has not changed.

It is easy to understand the hesitance of these for-profit companies to limit their user base. They have also defended their actions, or inaction, by claiming to avoid “censorship.” It must be said that censorship frequently goes poorly. It is easy to have a good laugh at the lists of books that have previously been banned in U.S. public schools; social progress has revealed many of those bans to be almost silly. Bans have historically stemmed from fear of everything from socialism to questioning the government to depicting gay relationships. Some platforms continue to shut down queer conversations for being too “sexual,” while platforms like YouTube remain willing to entertain creators who make up the well-documented pipeline to white supremacy. Clearly, social media platforms and their business models should not be the arbiters of morality. Much of social media's strength stems from the fact that it allows for conversations that traditional publications do not want to endorse. However, there is a clear line between censoring personal experiences and preventing people from spreading scientifically debunked claims as if they were fact.

There is a difference between appointing social media platforms as moral police and asking them to prevent the spread of harmful misinformation. Twitter was right to ban Trump, but this action should not simply be a publicity stunt, and more than just Twitter should be taking action. While tweets have begun to be fact-checked, people who are determined to believe lies are not going to stop just because Twitter adds a fact-check note. Less prominent figures who spread the same dangerous lies as Trump should also be banned. Anti-masker and anti-vaccination communities should not be allowed to fester just because Facebook and other social media companies are hesitant to ban large and profitable communities. On the whole, websites and platforms should not be profiting off lies stated as fact. If people are fed outright lies presented as truth, rather than as opinions or as the current understanding within academia, then social media should not be profiting from it.

Freeing Speech from the Market

By Jacob Ostfeld

Allowing social media companies to regulate online speech is as dangerous as it is ineffective. Currently, online discourse reflects social and political reality, not the other way around. But by encouraging or even requiring social media companies to regulate such discourse, we risk giving these companies control over the way we interact with the real world. Media executives are already powerful enough. And we often fail to remember that the issues social media companies are being asked to mitigate were caused, in part, by these companies in the first place. Instead of concerning ourselves with the ethics of social media platforms’ profit models, we should focus on limiting the influence of corporate entities on the way we think and act.

If social media were less significant for facilitating societal discourse, the question of how platforms should regulate speech would not require a programmatic answer. However, all evidence points to the increasing importance of social media for news consumption and communication. Since the beginning of the pandemic, people have spent more time on social media. Pew Research estimates that over 70% of American adults use social media, and over the past few years they have increasingly relied on it for news. As the importance of social media increases, so does the importance of establishing regulations.

Current online speech regulation is almost always coordinated internally by social media companies, and most of it is more pervasive and less visible than the occasional suspension of someone's account. The instances of online “censorship” that have made recent headlines, such as the Trump and QAnon bans, are exceptions: those banned advocated for the violent destabilization of a democratically elected government, and social media companies substantively responded to threats of political violence on their websites. Neither is a regular occurrence.

In fact, social media companies played a part in causing the very problems that eventually led them to ban Donald Trump from their platforms. The beliefs that propelled Trump to the White House and the insurrectionists to the Capitol were legitimized by disinformation spread with the permission, and sometimes the tacit encouragement, of social media sites. In 2016, Facebook replaced the editors of its trending tab with an algorithm that repeatedly featured conspiratorial claims; the company also boosted right-wing sources in an effort to appease conservatives. YouTube's algorithm-driven “alt-right rabbit hole” guided users toward bigoted videos and helped fuel the rise of figures like Nick Fuentes, who attended the 2017 Charlottesville rally and was present at the Capitol riot. Members of Fuentes's online fanbase, called Groypers, took part in the Jan. 6 riots. Even after the riot, when Apple removed apps like Parler from the App Store, white nationalists and conspiracy theorists flocked to other platforms, becoming more extreme in the process.

If social media companies helped cause these problems, they cannot be relied on to fix them. Indeed, these companies have demonstrated they only do the right thing under immense popular pressure. Social media platforms appear to be exclusively interested in maintaining their profit margins, and thus respond to market forces. Sites responded to the anti-establishment far right out of fear of popular blowback, and they have shown an equal willingness to regulate speech on the left when convenient for business. Following left-wing news website Mother Jones’s repeated criticism of Facebook’s free speech rules, Facebook “choked traffic” to the site. In January, Facebook suspended numerous left-wing accounts and pages without warning or explanation. 

Social media companies have demonstrated they are incapable of regulating speech. Neither allowing disinformation to spread, nor banning violent individuals only after they engage in violence, nor filtering information based on profit constitutes freedom of speech. Rather, these three versions of regulation are perversions of healthy civic discourse. If we value discourse as a way to engage with democracy, we cannot allow the private sector to control the expression of dissent. Social media companies have shown that they will always value profit over morality.

Instead, we should focus on limiting the power of social media companies and empowering individual users. The Facebook antitrust lawsuit is a good starting point: breaking up Facebook into smaller, decentralized companies would ensure that the power to regulate online speech is not concentrated in the hands of a few executives. An equally promising second step is President Biden's plan to provide every American household with internet access, which would make online discourse more representative. Beyond these two proposals, some have suggested taxonomizing and regulating online services based on their size and business model, allowing individual users to manage their interactions with such entities. Giving social media companies more power over speech, however, is not the answer. As the importance of online free speech increases, so does the importance of limiting corporations' control over it.
