Twitter has announced a new policy addressing dehumanizing speech. This policy will take effect later in 2018.
Most importantly, the policy also aims to limit real-world harms stemming from discourse on the platform. It will prohibit content that dehumanizes others based on their membership in an identifiable group.
Twitter already has a hateful conduct policy. It prohibits users from threatening violence or directly attacking a specific individual based on characteristics like race, sexual orientation, or gender.
Once the new change is in effect, a new clause will be added to the Twitter Rules: “You may not dehumanize anyone based on membership in an identifiable group, as this speech can lead to offline harm.”
Twitter’s post reads: “Language that makes someone less than human can have repercussions off the service, including normalizing serious violence.”
Definition of dehumanization according to the policy
Under the policy, dehumanization is language that treats others as less than human. Animalistic dehumanization denies a particular person or group positive human qualities; mechanistic dehumanization denies them their human nature.
An identifiable group is any group of people distinguished by shared characteristics. Those characteristics could include race, ethnicity, sexual orientation, national origin, gender, gender identity, religious affiliation, age, disability, serious disease, occupation, political beliefs, location, or social practices.
This initiative shows that Twitter staff are thinking hard about how to reduce the harms that online content can lead to.
News that Twitter was considering a policy on dehumanizing speech first broke in August. Twitter is giving users two weeks to comment on the new rule via a survey form, which asks whether the policy is clear and how it could be improved. The survey will be available in English, Spanish, Arabic, and Japanese.
As noted in the post, these rules overlap with more common provisions against hate speech or racism. However, the new rule applies to all groups, rather than just specific protected classes.
Hate groups that use the platform to spread their messages would therefore be affected even when they’re not harassing a specific individual.
“We obviously get reports from people about content that they believe violates our rules that does not. The dehumanizing content and the dehumanizing behavior is one of the areas that really makes up a significant chunk of those reports,” says Del Harvey, Twitter’s vice president of trust and safety.
TechCrunch says the following on its blog: “Instead, in the more than twenty years since its enactment, courts have consistently found section 230 to be a bar to suing tech companies for user-generated content they host. And it’s not only social media platforms that have benefited from section 230; sharing economy companies have used section 230 to defend themselves, with the likes of Airbnb arguing they’re not responsible for what a host posts on their site. Courts have even found section 230 broad enough to cover dating apps. When a man sued one for not verifying the age of an underage user, the court tossed out the lawsuit, finding an app user’s misrepresentation of his age not to be the app’s responsibility because of section 230.”