
Dr. Cathy Buerger: Dangerous Speech and Gun Violence

Photo: Dr. Cathy Buerger

Safety in Numbers

Welcome to Everytown Research’s Safety in Numbers blog, where we invite leading experts in the growing field of gun violence prevention to present their innovative research in clear, user-friendly language. Our goal is to share the latest developments, answer important questions, and stimulate evidence-based conversations on a broad range of gun safety topics in a form that allows all of us to participate. If you have a topic you want to hear more about, please feel free to suggest it at: [email protected].

Sarah Burd-Sharps, Senior Director of Research

Note: The views, opinions, and content expressed in this product do not necessarily reflect the views, opinions, or policies of Everytown.

Dr. Cathy Buerger is a researcher whose work examines global responses to dangerous and hateful speech, human rights and civil society, higher education, and culture and discourse norms.

What is dangerous speech?

Dangerous speech is any form of expression—whether spoken, written, or visual—that can increase the likelihood that its audience will support or engage in violence against members of another group. We have noticed striking similarities in the language leaders use to incite violence across different countries, cultures, and historical periods. One common pattern in dangerous speech is dehumanization, in which members of a target group are compared, for example, to insects, harmful animals, bacteria, or cancer.

What are the similarities and differences between dangerous speech and hate speech?

Dangerous speech and hate speech both describe types of speech that can cause harm, and there are many cases where dangerous speech is also hateful. But the two categories are distinct. The primary difference between them is that dangerous speech is generally based on fear, not hate. Speech that describes another group as a threat may sometimes be hateful, but it is the feeling of fear that makes violence against that group seem acceptable or even necessary. For example, if a convincing speaker tells an audience that members of another community are coming to attack them (a message that is not explicitly hateful), committing violence against members of that group begins to feel like self-defense.

The term “hate speech” is often vague and broadly defined, leading to varying interpretations. This lack of clarity can result in overbroad understandings of hate speech, which may threaten freedom of expression. In practice, laws against hate speech are frequently misused to silence journalists, dissenters, and minorities, as we’ve seen in countries like Hungary, India, Rwanda, Kazakhstan, and Bahrain. Dangerous speech is a narrower and more specific category, defined by its connection to violence against a group of people, something that most would agree should be prevented.

What are common sources/mediums of dangerous speech?

Dangerous speech may take any number of forms and can be disseminated in many ways. For example, it can be shouted during a rally, posted in a comment or meme on social media, shared in conversation between friends, or even be part of the lyrics of a song. 

The form of the speech and the way it is disseminated affect how the message is received and, therefore, how dangerous it is. If the speech is transmitted in a manner that allows it to reach a large audience, it of course has greater potential to incite intergroup violence than if it is only shared in a private conversation. That said, we know that the impact of dangerous speech is cumulative. People generally hear a message from many different sources, and it is precisely that repetition, that feeling that everyone believes it, that makes it convincing.

These days, many of the examples of dangerous speech that we study come from online platforms. There are a few reasons for this. First, online speech has the potential to be extremely dangerous because it can reach very large audiences quickly. Dangerous rumors and misinformation can be shared across platforms and between audiences that do not know each other or live near one another. Second, speech online persists beyond the time it is posted, so its impacts can be long-lasting. This persistence also means that it leaves a record that researchers can observe. We are able to see not only the original speech but also the reactions of people who viewed it. This is incredibly useful for researchers trying to understand speech that is often specific and meaningful only in certain contexts.

How does dangerous speech relate to gun violence? Are there examples that illustrate the connection between the two?

Fear can be a major motivator for buying, and potentially using, a gun. A 2023 survey conducted by the Pew Research Center found that over 70% of gun owners say that they own a gun for protection, despite the fact that research has shown the defensive use of guns to be exceedingly uncommon. 

Dangerous speech can convince people that they are at risk and that members of another group are dangerous. When this kind of speech has primed a population to believe the threat is real, it can trigger violence. The so-called “great replacement theory” is a good example of this. This conspiracy theory falsely claims that a deliberate effort is underway to replace white populations, particularly in Western countries, with non-white ones, often through immigration. It is a central tenet of white supremacist beliefs and has been connected to multiple shootings in the past few years. It is also an example of dangerous speech, making white audiences who believe this conspiracy feel as if their place in society is being destroyed. In 2022, a white supremacist who had espoused great replacement theory entered a grocery store and used a gun to murder 10 people in Buffalo, New York—specifically targeting Black shoppers. Similar beliefs motivated shooters at a mosque and an Islamic center in Christchurch, New Zealand, and a Walmart in El Paso, Texas, in 2019 and at the Pittsburgh Tree of Life synagogue in 2018. Even though the threats are unfounded, dangerous speech can convince people that their offensive actions are actually defensive, making them seem more acceptable to those who feel similar fears.

How do gun companies use dangerous speech in their marketing materials to produce biases and fears?

Gun companies strategically deploy dangerous speech to create demand. Ads portray gun owners as “protectors” or part of a “dying breed” of real men. These marketing campaigns reinforce other false and dangerous narratives suggesting, for example, that crime is an ever-present danger (and that only guns can save you).

There is also another type of dangerous speech that often shows up in gun advertising. It is unique in that it doesn’t focus on the threat posed by another group. Instead, this speech valorizes violence, characterizing it as something honorable and connected to the identity of the in-group. This type of rhetoric tells the audience that they can (and should) be the heroic defenders of their group and is often strengthened by allusions to larger narratives of morality or identity such as religion and national or cultural folklore. 

In the US, we frequently see gun ownership represented as essentially American, a connection that goes back to the very founding of our nation, as Alexandra Filindra has shown. Gun companies use words like “patriot” or images of the military to make audiences see buying guns as linked with American identity. It is also a way to make the violence caused by guns seem more acceptable—even noble. Gun owners see themselves as “patriots” standing up to tyranny or as defenders, like members of the military.

What are some solutions to prevent and/or address dangerous speech?

One way to prevent speech from becoming dangerous is to defang it—to find ways to help people understand what they are seeing or hearing and not be convinced by its message. At the Dangerous Speech Project, we tend to focus on two primary pathways to achieve that goal. The first is public education. Teaching people what dangerous speech is and how it works helps them recognize and resist it.

The second method is counterspeech: direct responses to hatred and harmful speech that seek to undermine it. Sometimes counterspeech is directed at the person who has shared the harmful speech in an effort to convince them to change their mind or behavior. This can be extremely hard to do (although not impossible). Thankfully, there are also other ways that counterspeech can be effective. My research with counterspeakers has shown that most of them respond to hateful and dangerous speech online with the goal of reaching other people who may be reading the comments. They generally hope to convince those people not to believe or spread the harmful speech. They may also seek to persuade others to join them in speaking out against it. In this way, counterspeech works to slow the spread of dangerous speech.

What are some next steps for your work?

With the rapid advances in generative AI, we’re keen to explore its impact on the spread of dangerous speech as well as its potential to support efforts to counter such speech worldwide. There are certainly many challenges associated with using this technology to counter harmful speech, but there is also great potential. Research that helps us better understand how these opportunities and pitfalls manifest differently in a variety of contexts around the world is a key next step.

About Dr. Cathy Buerger

Cathy Buerger is the Director of Research at the Dangerous Speech Project where she studies the relationship between speech and intergroup violence as well as civil society responses to dangerous and hateful speech online. She is a Research Affiliate of the University of Connecticut’s Economic and Social Rights Research Group and Managing Editor of the Journal of Human Rights. She holds a PhD in Anthropology from the University of Connecticut.
