Wednesday, November 5, 2025

The Fallout of Charlie Kirk’s Murder: Social Media’s Struggle with Graphic Content Moderation

After the assassination of Charlie Kirk at Utah Valley University, graphic videos of the shooting spread rapidly across major social media platforms. That such footage went viral is not surprising, given the high-profile nature of Kirk's death. What stands out is how long the videos have remained accessible, raising serious questions about the platforms' content moderation practices.

Searching for Kirk's name on platforms like Instagram reveals a disturbing pattern: for every video showcasing his debates, there is at least one depicting the aftermath of his shooting. This contrasts sharply with how quickly social media companies acted after previous violent incidents. After the 2019 Christchurch mosque shooting, for instance, Meta (then Facebook) reported blocking more than 1.2 million copies of the attack video at the point of upload. Research from the Southern Poverty Law Center found a marked decline in uploads of violent content within a week of mass shootings in Christchurch, Buffalo, and elsewhere.
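
To make that mechanism concrete, here is a minimal sketch, in Python, of the kind of pre-upload matching such takedowns rely on. The blocklist, hash value, and function names are hypothetical illustrations, not any platform's real system, and production matching systems (such as the GIFCT hash-sharing database) use perceptual rather than cryptographic hashes so that re-encoded or cropped copies still match.

```python
# Illustrative sketch only, not any platform's real system: a simplified
# pre-upload filter that compares an incoming file against a shared list of
# hashes of videos already judged to violate policy. Plain SHA-256 is used
# here purely to show the matching step; real systems use perceptual hashes.
import hashlib

# Hypothetical blocklist of hashes for known violating videos.
KNOWN_VIOLATING_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash an uploaded file in chunks so large videos don't exhaust memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

def should_block_upload(path: str) -> bool:
    """Return True if the upload matches a known violating video."""
    return sha256_of_file(path) in KNOWN_VIOLATING_HASHES
```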

Historically, platforms have responded quickly to graphic violence when it reaches Western audiences, while users in regions that receive less moderation attention, such as Gaza, have often faced a deluge of violent imagery. In the wake of Kirk's murder, lawmakers including Rep. Lauren Boebert and Rep. Anna Paulina Luna have called for the videos' removal, emphasizing the need to respect the dignity of victims and their families. Luna made the point on X, saying that no one should have to relive such tragedies online.

The irony is hard to miss. For years, Republican legislators pressured tech companies to relax content moderation in the name of free speech; now many of the same lawmakers expect those platforms to shield users from graphic content. That contradiction has helped produce a chaotic information ecosystem that is now affecting the people who championed it.

In 2023, Rep. Jim Jordan, as chair of the House Judiciary Committee, issued subpoenas to Big Tech companies and to research organizations studying online hate speech, accusing them of infringing on First Amendment rights. The move fit a broader pattern of conservative figures claiming that social media platforms disproportionately censor their content. Studies, however, have found that misinformation is more likely to originate from Republican-leaning accounts, suggesting that the enforcement they describe as censorship may largely reflect their own content's rule violations.

The landscape has shifted considerably in recent years, particularly with Elon Musk's acquisition of Twitter, now rebranded as X. Musk's stated goal of turning the platform into a bastion of free speech was accompanied by deep layoffs on its content moderation teams, and enforcement of hate speech policies has visibly declined, allowing previously banned accounts and content to return.

Meta has also faced scrutiny over its moderation practices. Following pressure from conservative activists, the company announced it would scale back its third-party fact-checking operations and relax parts of its hate speech enforcement. The shift has coincided with an increase in graphic content on its platforms, as evidenced by user reports shortly after the policy changes took effect.

Experts in tech accountability have voiced concerns about the implications of underfunding content moderation. Martha Dark, co-executive director of Foxglove Legal, highlighted that weakening moderation inevitably leads to the prolonged presence of violent content online. Similarly, former Twitter product manager Olivia Conti criticized the effectiveness of AI in content moderation, likening its capabilities to “pizza detectors” that fail to accurately flag harmful material.

While AI can assist with moderation, it cannot replace the nuanced judgment of human moderators. Ellery Biddle, director of impact at Meedan, emphasized that skilled human teams are needed to guide AI systems in identifying harmful content. The very people conservative activists have targeted over their role in moderating hate speech are also the ones responsible for monitoring and removing graphic videos, such as those depicting Kirk's death.
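
As a purely hypothetical sketch of that division of labor, not any platform's actual pipeline, the routine below shows how a classifier's confidence score might decide only the clear-cut cases while everything ambiguous is routed to trained human reviewers; the thresholds and names are assumptions made for illustration.

```python
# Illustrative sketch only: one common human-in-the-loop arrangement, in which
# an automated classifier's score triggers removal only at high confidence and
# ambiguous cases are escalated to trained human moderators. The score source,
# thresholds, and names below are hypothetical assumptions for illustration.
from dataclasses import dataclass

@dataclass
class ModerationDecision:
    action: str   # "remove", "human_review", or "allow"
    score: float  # estimated probability that the content shows graphic violence

REMOVE_THRESHOLD = 0.95  # high confidence: remove automatically
REVIEW_THRESHOLD = 0.40  # uncertain range: escalate to human moderators

def triage(violence_score: float) -> ModerationDecision:
    """Route a post based on a hypothetical graphic-violence classifier score."""
    if violence_score >= REMOVE_THRESHOLD:
        return ModerationDecision("remove", violence_score)
    if violence_score >= REVIEW_THRESHOLD:
        return ModerationDecision("human_review", violence_score)
    return ModerationDecision("allow", violence_score)
```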

The debate over content moderation, free speech, and the responsibilities of social media platforms continues to evolve. As the consequences of relaxed moderation become increasingly visible, it remains to be seen how tech companies will balance demands for free expression with the need to protect users from graphic and violent content.
