Social Media Censorship: A Violation of Our Freedom of Speech?
By Heather Mooney
In March 2022, 14-year-old Tyre Sampson fell to his death from a drop-tower ride in Orlando, Florida. Before law enforcement could notify him, the boy's father learned of his son's death from a video on social media. The accident was recorded and posted to multiple social media platforms, where it rapidly gained attention; one video reached 1.7 million views on TikTok. While some TikTok users spoke out about the insensitivity of the trending material, their objections did not stop the videos from gaining traction.
In recent years, censorship of social media content has become increasingly relevant in the U.S. It is both an ethical and a political issue, shaped by former President Donald Trump's ban from Twitter in 2021 and the spread of misinformation around elections and the Covid-19 pandemic. How should social media be censored, and who has the authority to censor it? The nation is at a turning point on this complex issue, as people across the political spectrum attempt to construct solutions that could redefine media censorship in the long run.
When online platforms were emerging decades ago, censorship was seen as a potential threat but was not a major concern among policymakers. That changed in 1995, when a New York court ruled in a libel case that the online service Prodigy could be held responsible for its users' content because it moderated its message boards. Some political leaders feared that this approach could crush the budding internet industry, so Congress passed a statute allowing online services to moderate their content without being held liable for users who post illegal, false or offensive information. The provision, Section 230 of the Communications Decency Act, was somewhat overlooked at the time, as online service providers were seen as a small and non-threatening industry. However, as social media has emerged as a giant for-profit industry and a major source of information and news for many Americans, the question of these sites' ability to moderate their own content has become increasingly pressing.
Although social media platforms are not legally required to moderate content, without some form of censorship they would be overrun by inappropriate and hateful material. As a result, they enforce basic rules. But how do they regulate such a vast amount of content? According to internet expert Alan Crowetz, "It will be absolutely impossible for them to be able to determine every piece of content, if any, that violates their terms of service. What a lot of the platforms kind of do instead is leave it up to the users to report content." This system, while it might help limit pornography and hate speech, does not stop videos like that of Sampson's death from circulating on the internet.
Graphic content like this harms young people, who are exposed to inappropriate material at an early age; that exposure can cause nightmares and behavioral changes and can lead to anxiety and other mental health issues. Additionally, the spread of misinformation has increased since 2020, including rumors about election fraud and Covid-19. A 2018 study by the Massachusetts Institute of Technology found that "tweets containing falsehoods reach 1,500 people on Twitter six times faster than truthful tweets."
Despite evidence supporting censorship, some people argue that the practice limits freedom of speech. Trump's ban from Twitter outraged many Republicans, sparking criticism that conservative viewpoints are overly censored on media platforms. And to a certain extent, their suspicions have support. According to a 2014 study of Facebook data, "liberal users are less likely than their conservative counterparts to get exposed to news content that opposes their political views," and it is possible that algorithms have adopted political biases.
In addition, increased content moderation could be harmful to social justice activism. In 2020, the video of George Floyd's death at the hands of police went viral on social media. The incident had a significant impact on the Black Lives Matter movement, sparking nationwide protests against police brutality. At the same time, some people argued that the graphic video's circulation was immoral and insensitive.
Today, there is mounting political pressure from both sides for solutions on content moderation, and the discussion has become increasingly polarized. According to a Pew Research Center survey, 71% of Republicans now oppose social media companies removing inaccurate or misleading information posted by elected officials, while 73% of Democrats support it.
The growing concern has led some states to take matters into their own hands. Texas and Florida have attempted to pass laws limiting media platforms' ability to moderate political content featuring conservative viewpoints. The U.S. Court of Appeals for the 11th Circuit struck down most of Florida's law, but Texas' law was upheld by the 5th Circuit. There seems to be limited agreement, even within the nation's court system, about the constitutionality of these laws and whether censorship, or the lack of it, is a violation of rights.