In the hours after Charlie Kirk was assassinated at a Utah event on Wednesday, social media platforms, especially X, erupted with hostile rhetoric. Right-leaning posts quickly invoked “war” and “civil war” and demanded retribution against liberals, Democrats, and “the left.”

Among these were clusters of accounts with strikingly similar characteristics: generic bios, MAGA-style signifiers, “NO DMs” disclaimers, patriotic imagery, and stock or nondescript profile photographs.

These patterns have raised a growing suspicion: Are bot networks being used to amplify right-wing calls for civil war?

Thus far, no independent report or government agency has confirmed a coordinated bot-driven campaign tied specifically to the event. But circumstantial evidence, historical precedent, and research on inauthentic accounts on X suggest there is reason for concern.

What the evidence suggests

Researchers and users have pointed to repetitive phrasing (e.g., warnings that “the left” will pay, “this is war,” or “you have no idea what is coming”) appearing across many posts within a narrow timeframe. Many of these posts come from low-engagement accounts with default or generic profiles.

"In the wake of the assassination of Charlie Kirk, we are going to see a lot of accounts pushing, effectively, for civil war in the U.S. This includes the rage-baiter-in-chief, Elon Musk, but also an army of Russian and Chinese bots and their faithful shills in the West," wrote University of San Diego political science professor Branislav Slantchev on X.

He cited a viral thread of X posts from purported bot accounts advocating retributive violence. The poster claimed that "half of them have an AI-generated profile photo, the standard bio schlop, and the standard banners."

Such patterns, with similar content rapidly appearing across many accounts, are consistent with known botnet coordination and message amplification. These observations remain anecdotal rather than systematic for now, but their consistency with documented bot behavior adds weight to the suspicions.
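To make the idea concrete, researchers looking for this kind of coordination often start by flagging near-identical posts from different accounts inside a short time window. The Python sketch below is a simplified illustration of that approach only; the Post fields, the 0.85 similarity cutoff, and the one-hour window are hypothetical choices, not any research group’s published method.

```python
# Illustrative sketch: flag pairs of near-duplicate posts made by
# different accounts within a short time window. All thresholds and
# field names here are hypothetical assumptions.
from dataclasses import dataclass
from difflib import SequenceMatcher
from itertools import combinations

@dataclass
class Post:
    account: str
    text: str
    timestamp: float  # seconds since epoch

def similar(a: str, b: str, threshold: float = 0.85) -> bool:
    # Character-level similarity ratio; crude but dependency-free.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio() >= threshold

def flag_coordinated(posts: list[Post], window_s: float = 3600.0) -> list[tuple[Post, Post]]:
    """Return pairs of near-duplicate posts from different accounts
    published within window_s seconds of each other."""
    flagged = []
    for p1, p2 in combinations(posts, 2):
        if (p1.account != p2.account
                and abs(p1.timestamp - p2.timestamp) <= window_s
                and similar(p1.text, p2.text)):
            flagged.append((p1, p2))
    return flagged
```

Real analyses replace the pairwise text comparison with scalable hashing or embeddings and fold in account-level signals (creation date, follower counts, default avatars), but the underlying logic is the same: many distinct accounts, the same message, the same narrow window.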

Past research provides a baseline for what bot-amplified political content looks like on X (formerly Twitter). A PLOS One study published in February found that after Elon Musk’s acquisition of the platform in late 2022, hate speech increased and the activity of inauthentic, “bot-like” accounts did not decline.

Another investigation, by Global Witness last summer, uncovered a small set of bot-like accounts (45 in one instance) that together generated over 4 billion impressions for partisan, conspiratorial, or abusive content. That figure shows the potential reach of such networks.

Finally, there is a history of states and organized groups deploying botnets or troll farms to exploit U.S. political polarization. Examples include Russia’s Doppelgänger campaign, the Chinese government-linked “Spamouflage” operation, and others that have mimicked U.S. users, used AI-generated or manipulated content, or pushed divisive rhetoric for political leverage.

Nothing definitive yet

As of now, no credible cybersecurity firm, government agency, or academic group has publicly and with high confidence attributed the wave of “civil war” rhetoric following Kirk’s death to a bot network, foreign or domestic.

It is also not clear how many of the posts are automated versus organic; the share coming from apparently bot-like accounts, as opposed to genuine users, is unknown. Nor is it established whether any such amplification has a top-down command structure (i.e., is centrally coordinated) or is more ad hoc.

And X is rife with verified right-wing influencers calling for civil war or violent attacks on the left.

Nonetheless, when the U.S. has suffered national tragedies like yesterday’s shooting, groups with a record of exploiting political polarization have seized on the opportunity. Russia’s bot farms (e.g., Internet Research Agency and “Storm”-type operations) have long been flagged, and Chinese-linked disinformation networks (e.g., “Spamouflage”) are documented to have used social media amplification and content farming to influence U.S. public sentiment.

And the rise of AI-enabled content generation makes it easier for bot networks to produce plausible, human-like posts at scale. Research shows that detection is increasingly challenged by accounts that mimic human language, timing, and variation; a recent review of bot-detection methods found evolving concealment techniques and gaps in current approaches.
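As a toy illustration of that cat-and-mouse problem (not any specific detector), a classic heuristic flags accounts that post at suspiciously regular intervals; an automated account that adds human-like random jitter sails past the same check. The function name and threshold below are illustrative assumptions.

```python
# Illustrative sketch of a timing-regularity heuristic. The 0.2
# coefficient-of-variation threshold is a hypothetical choice.
import statistics

def looks_automated(post_times: list[float], cv_threshold: float = 0.2) -> bool:
    """Flag an account whose inter-post gaps are too regular.
    post_times: posting timestamps in seconds, sorted ascending.
    Compares the coefficient of variation (stdev / mean) of the gaps
    against a threshold."""
    gaps = [b - a for a, b in zip(post_times, post_times[1:])]
    if len(gaps) < 2:
        return False  # too little history to judge
    mean_gap = statistics.mean(gaps)
    if mean_gap == 0:
        return True  # identical timestamps: near-certain automation
    return statistics.stdev(gaps) / mean_gap < cv_threshold
```

A scripted account that draws its posting delays from a human-like distribution pushes the coefficient of variation above the threshold and evades the check, which is exactly the kind of concealment the detection literature keeps finding.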
