Measuring the Effects of Online Hate Speech

Role: Principal Investigator

Timeline: 1998-2000

 

This study was conducted at a formative moment in the evolution of online communication, when norms around digital interaction were still taking shape. It began with the premise that digital speech is not merely an extension of face-to-face or print communication. Unlike offline speech—which is often fleeting—online content can be copied, remixed, and recirculated indefinitely, reshaping its meaning and amplifying its reach.


Key Questions


  • How do speech acts (especially injurious or hateful ones) operate differently online compared to offline contexts?

  • In what ways do the circulation, archiving, and referencing of digital content influence the impact of hate speech?

  • What role do platform conditions (anonymity, pseudonymity, moderation, visibility) play in shaping the dynamics of cyberhate?

  • How might the design of digital communication environments enable or constrain harmful speech acts?

  • What are the implications for researchers and designers in terms of mitigating harm, supporting community resilience, and building ethical infrastructures for digital platforms?

 

Methodology


To understand how hate speech operates in these environments and how it acts upon individuals and communities, I conducted a discourse analysis of selected online hate speech events. The analysis drew on speech-act theory to examine how language not only conveys information but can, in some cases, act upon individuals—altering their status and causing harm. It further explored how the specific conditions of digital environments alter the capacity of hate speech to act upon individuals.


Study Design


  • Textual/Discourse Analysis: The study examined selected instances of harmful speech online, analyzing how they were composed, circulated, referenced, and archived.

  • Case Studies: The study focused in detail on specific sites (domains) and incidents of online hate speech, tracking how they move through digital spaces and how users respond.

  • Qualitative Interpretation: Rather than large-scale quantitative measurement, this research emphasized how meaning is produced, how harm is socially and digitally constituted, and how online infrastructures support or impede those processes.

 

Sampling

 

Because this was a discourse analysis rather than an ethnographic study, there were no participants; the data set consisted of examples of online speech. The study paid particular attention to godhatesfags.com, an early and widely publicized domain dedicated to spreading misinformation about the LGBTQ community. Notably, research for this study was carried out before the launch of comprehensive search engines; as a result, the study relied upon examples of online hate speech that were already visible enough to be reported upon in the print media. Given that hate speech itself can be open to interpretation, the study was also shaped by my own judgments about what constitutes hate speech. In this respect, its scope reflects both the limits of the search tools available at the time and my own potential interpretive biases.

 

Key Insights


  • Digital hate speech can have amplified impacts: Because content can be archived, copied, reshared, and referenced indefinitely, a single injurious act can acquire an extended life and influence far beyond its original context.

  • Circulation shapes meaning and harm: The path a harmful message takes (e.g., who reshares it, where it appears, how it is framed) can shape its impact as much as its original content.

  • Platform affordances matter: Features such as anonymity/pseudonymity alter accountability; persistence and archiving mean speech acts often outlive their utterance; and hyperlinking enables harmful messages to spread across communities.

  • Design and moderation shape experience: Moderation systems, visibility controls, anonymity features, and archiving policies directly influence how hate speech is encountered and managed.

  • Digital communities require new ethical frameworks: Traditional approaches to addressing hate speech—and even existing regulatory frameworks—often fall short when applied to mediated digital environments.


Impact


Although carried out in the late 1990s, the publications and presentations based on this study anticipated many issues now central to discussions of digital experience, platform governance, and digital ethics: how harmful speech propagates online, how design choices amplify or mitigate harm, and how participation and visibility shape users’ sense of safety, identity, and agency.


Publications and Presentations

 

Eichhorn, Kate. 2001. “Re-in/citing linguistic injuries: speech acts, cyberhate, and the spatial and temporal character of networked environments.” Computers and Composition 18 (3): 293-304.


Eichhorn, Kate. 2000. “Cyberhate and Performative Speech in Accelerated Time(s).” M/C Journal 3 (3).


Eichhorn, Kate. 2000. “Legal and Pedagogical Perspectives on Hate Speech in Cyberspace.” Invited talk. Canadian Association for the Practical Study of Law in Education.


Eichhorn, Kate. 2000. “Cyberhate and Performative Speech.” Electronic Communication and Culture Area, PCA/ACA National Convention, New Orleans, LA.

