By Staff Writers Tavish Mohanti & Naveed Shakoor
“Just a quick note of support for Hope Hicks and President Trump. I hope they both die,” reads a tweet published just days after President Donald Trump announced on Oct. 1 that he had tested positive for COVID-19. Twitter immediately took down tweets like these, citing its Abusive Behavior Policy, which covers comments “wishing or hoping serious harm on a person or group of people.” While this tweet and many others wishing for Trump’s demise were removed, the influx of violent comments raised questions about the fairness of Twitter’s tweet removal process.
Politicians, including the Squad — Democratic Congresswomen Alexandria Ocasio-Cortez, Rashida Tlaib, Ilhan Omar, and Ayanna Pressley — expressed dissatisfaction with Twitter’s current policy system after the platform removed hateful tweets directed at Trump. All four women had previously faced death threats on the platform, yet Twitter did not censor those messages as it did for Trump. “I hope you both hang for TREASON!” and “I hope they hang you” are just a few of the callous and racist remarks hurled at Tlaib and Omar — both women of color — throughout their first term.
“Seriously though, this is messed up. The death threats towards us should have been taken more seriously by [Twitter],” Tlaib tweeted in response to Twitter’s swift handling of the hateful tweets Trump received. And she’s right. When Twitter takes immediate action against comments calling for Trump’s death while ignoring comments targeting politicians of color, it reinforces the notion that such comments are acceptable. Further, persistent hate speech from Twitter users fosters racist and sexist discourse around politicians of color and female politicians that spills over into real life.
But these violent comments aren’t restricted to regular users. Politicians have used social media as a tool to question opponents on policy, character, and history. When these attacks contain racist or sexist rhetoric, however, they become more than political commentary.
On July 20, Republican Rep. Ted Yoho called Ocasio-Cortez a “f***ing b***h.” Trump told the Squad to “go back” to the countries they came from, and his supporters echoed him with chants of “send her back.” This toxicity spreads to the millions of Americans who follow both Trump and Yoho on Twitter, making it harder for minorities to gain proper representation in government.
In order to end this vicious cycle, Twitter needs to remove all racist and sexist comments immediately and make a decisive statement — these comments are unacceptable.
But that raises the question: who ultimately makes the call about which tweets are appropriate and which ones aren’t?
According to the Twitter Help Center, most tweets are reviewed by an algorithm. This automated approach, however, introduces a potential flaw in the system — algorithmic bias. Technology is not blind to social prejudices and stereotypes. Machine learning systems — like the ones Twitter uses — are trained on datasets that reflect existing human biases, so the resulting models incorporate those prejudices automatically. If an algorithm can infer a user’s ethnicity or gender from their profile picture, it becomes easier for the system to disproportionately suspend or limit users of color and other minority groups. Twitter’s algorithms have already faced accusations of racism: its automatic photo-cropping feature was shown to favor white faces over Black ones when generating image previews.
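To make that mechanism concrete, here is a minimal, hypothetical sketch of how a toxicity classifier can inherit bias from skewed training labels. Every tweet, label, and modeling choice below is invented for illustration — this is not Twitter’s actual data or system.

```python
# A minimal, hypothetical sketch of how a toxicity classifier can
# inherit bias from skewed training labels. All tweets and labels
# below are invented; this is not Twitter's model or data.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training set: benign tweets that merely *mention* a
# minority identity were mislabeled as toxic by annotators.
tweets = [
    "I hope you die",                              # abusive
    "you deserve to be hurt",                      # abusive
    "proud to be a Muslim woman in Congress",      # benign, mislabeled
    "the first Muslim women elected to Congress",  # benign, mislabeled
    "great turnout at the rally",                  # benign
    "looking forward to the vote tomorrow",        # benign
    "thanks for meeting with us today",            # benign
    "excited about the new transit bill",          # benign
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]  # 1 = toxic, 0 = benign

model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(tweets, labels)

# The trained model associates the identity term itself with toxicity,
# so a harmless mention is likely flagged while the same sentence
# without it is not.
print(model.predict(["Muslim lawmakers spoke in Congress today"]))  # likely [1]
print(model.predict(["lawmakers spoke today"]))                     # likely [0]
```

Nothing about the model is malicious; the bias comes entirely from the labels it was trained on, which is exactly why biased datasets produce biased moderation.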
To help combat this potential bias, Twitter has instituted a support team to monitor and review reports made on the platform. For that team to be truly effective, however, Twitter must ensure that it is diverse in terms of gender, ethnicity, sexuality, and race.
Twitter should also tweak its current system for reporting tweets — both the support team and the algorithm — to be fairer; any account reported for hate speech should be suspended or locked. Twitter currently relies on users to report accounts manually, and it usually takes action only when a substantial number of users report an account. This system is flawed: if only a few accounts report a user who violates Twitter’s guidelines, the likelihood of that user being suspended or locked is slim to none.
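As a concrete illustration, here is a minimal sketch of the kind of threshold-based report handling described above. The threshold value, function names, and structure are assumptions for illustration, not Twitter’s actual implementation.

```python
# A hypothetical sketch of threshold-based report handling that
# illustrates the loophole described above. The threshold value and
# logic are assumptions, not Twitter's actual implementation.
from collections import defaultdict

REPORT_THRESHOLD = 100  # assumed: action is taken only after many reports
reports = defaultdict(set)  # reported account -> set of distinct reporters

def file_report(reporter: str, reported_account: str) -> str:
    """Record one report; suspend only once the threshold is crossed."""
    reports[reported_account].add(reporter)
    count = len(reports[reported_account])
    if count >= REPORT_THRESHOLD:
        return f"{reported_account}: suspended after {count} reports"
    # A violating account seen by only a handful of users never
    # accumulates enough reports, so it faces no penalty at all.
    return f"{reported_account}: no action ({count} reports)"

# A troll with a small audience stays under the threshold indefinitely:
for i in range(5):
    print(file_report(f"user_{i}", "small_troll_account"))
```

Under this kind of logic, the severity of a tweet never matters — only the size of the audience that saw it.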
Many troll accounts or private accounts take advantage of this loophole and consequently do not face any penalties for breaking Twitter policies. However, their statements are still just as damaging, fueling racist and sexist rhetoric and feeding into a toxic political climate. Their tweets must be taken down regardless of the number of reports.
Twitter direct messages (DMs) are another easy way for accounts to send targeted hate speech without being reported. As DMs are private messages, there is no way for others to know what is being said unless a user shares their DMs with their followers. Without proof of guideline violations, users who send threatening DMs to others are unlikely to be penalized.
Twitter must take action against the unfair and inconsistent enforcement of its policies, which fails to protect users, specifically minorities, from hate speech. By pushing the platform to improve its policies and algorithms, we’ll be one step closer to more inclusive discourse.
Cover graphic by Opinion Editor Aria Lakhmani