The Rise of Dehumanization on Social Media

In Ten Steps to Genocide, we briefly discussed dehumanization and its relationship to mass violence and genocide. Dehumanization has a long history in relation to mass violence and hate crimes, and the way it spreads has been forever changed by social media, where ideas can be posted and shared with the click of a button. What exactly is dehumanization, and how does it spread online?

Dehumanization can be subtle, relying on negative stereotypes attributed to a certain group, or blatant, where a group is denied human traits entirely and portrayed as animalistic or inhuman. Groups often dehumanize others when they feel their beliefs, resources, or identity are under threat. We have seen this throughout history, such as the dehumanization of Black people in America, where racial stereotypes were developed to excuse slavery, or of Indigenous people in America, who were often called savages and treated as less than human.

Both forms of dehumanization have historically been used by citizen-led groups and governments to excuse immense violence and suffering. We have seen dehumanization at work in multiple genocides, including the Holocaust and the Uyghur genocide, in hate organizations such as the KKK, and in individual hate crimes across the world.

Today, spreading dehumanizing language, pictures, and ideas is easier than ever before. Social media now serves the same role that flyers, newspapers, and word of mouth played in the past, spreading hateful stereotypes, language, and ideas rapidly and efficiently.

Dehumanizing language, ideas, and rhetoric have spread across multiple online platforms, including Twitter, TikTok, Facebook, 4chan, and Reddit. It takes place in discussion threads, online messages, comments, and statuses. In a 2020 study, Wahlström and colleagues found that Reddit comment sections under links to news stories about immigrants were riddled with hateful, dehumanizing, and violent language, which the researchers speculated was because those sections were not closely moderated by Reddit.

Beyond the lack of moderation on these platforms, many other factors contribute to the spread of dehumanization on social media. Two of the most important are the prevalence of online echo chambers and the sense of anonymity that a social media platform can provide.

As many people are aware, social media platforms use algorithms to create a pleasing and enjoyable experience for the user. These algorithms analyze what people comment on, like, and spend the most time viewing; the content you engage with most is the content you are most likely to keep seeing. This has created what are called echo chambers: groups on social media that share the same viewpoints, beliefs, and ideals. In an online echo chamber, people may feel freer to use racist, homophobic, or violent language, because most of the people who see their posts likely agree with them and they are unlikely to face any backlash.
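To make that feedback loop concrete, here is a minimal, hypothetical sketch in Python of an engagement-based feed ranker. The Post and User structures, the topic labels, and the scoring rule are illustrative assumptions for this post, not any platform's actual algorithm.

# Illustrative only: a toy engagement-based feed ranker, not any platform's real code.
from dataclasses import dataclass, field

@dataclass
class Post:
    topic: str  # e.g. "immigration", "sports" (hypothetical topic labels)

@dataclass
class User:
    # Assumed tracking of how often the user has liked or commented on each topic.
    engagement_by_topic: dict = field(default_factory=dict)

def score(user: User, post: Post) -> float:
    # Posts on topics the user already engages with score higher, so they
    # surface more often, which drives still more engagement on those topics.
    return user.engagement_by_topic.get(post.topic, 0.0)

def rank_feed(user: User, candidates: list[Post]) -> list[Post]:
    return sorted(candidates, key=lambda p: score(user, p), reverse=True)

# A user who mostly engages with one topic keeps seeing it first.
user = User(engagement_by_topic={"immigration": 12, "sports": 1})
feed = rank_feed(user, [Post("sports"), Post("immigration"), Post("cooking")])
print([p.topic for p in feed])  # ['immigration', 'sports', 'cooking']

Because the same engagement signals feed back into the ranking, the feed narrows over time toward whatever the user already agrees with, which is the dynamic behind echo chambers.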

The anonymity of social media is also partly to blame for the spread of dehumanizing language. People on social media can hide their real names, faces, and the details of their lives. That separation allows them to share dehumanizing rhetoric online without fear of consequences.

Dehumanizing and hateful language has become increasingly common on social media platforms. A study conducted by Hawdon and others in 2015 found that roughly 53 percent of American teens, 49 percent of Finnish teens, and 39 percent of British teens surveyed reported being exposed to hate speech and dehumanizing language on social media. 

As dehumanizing language and ideas continue to spread on social media, hate crimes and violence in real life also grow. We have seen this in the United States, where Asian Americans were dehumanized on Twitter and other social media platforms during the COVID-19 pandemic and hate crimes against Asian Americans spiked dramatically.

The rapid spread of dehumanization on social media has real-world consequences. It places targeted groups in greater danger of experiencing violence and has contributed to a spike in hate crimes throughout the United States. Facebook and Twitter have both faced public backlash for their lax moderation of hate speech and dehumanizing content on their platforms and have promised to do better. But while we wait to see what these companies actually do, it is important to remember why prevention matters and to start the conversation about what we can do to stop the hate spreading across social media and rehumanize the people it affects.

Sources:

Wahlström, M., Törnberg, A., & Ekbrand, H. (2020). Dynamics of violent and dehumanizing rhetoric in far-right social media. New Media & Society, 1461444820952795.

Hawdon, J., Oksanen, A. & Räsänen, P. (2015). Online Extremism and Online Hate Exposure among Adolescents and Young Adults in Four Nations. Nordicom-Information 37(3-4), 29-37.