A protester carries a Charlie Kirk sign during a demonstration. (Photo by James Willoughby/SOPA Images/LightRocket via Getty Images)

In the aftermath of Charlie Kirk’s killing, graphic video of his death spread quickly across social platforms, often autoplaying in millions of feeds. Almost as quickly, right-wing influencers used the tragedy to assign blame and call for retaliation, shaping a narrative before facts were known.

I spoke with Renee DiResta, who studies online disinformation, about how platforms amplify violent spectacle, how influencers exploit it for engagement, and what this moment reveals about America’s dangerously fractured information ecosystem.

Below is the Q&A, edited lightly for style.

In the moments after Charlie Kirk’s killing, video of his death was promoted widely on social media platforms, with graphic footage autoplaying in the feeds of millions. What does that reveal about the algorithms and moderation policies baked into today’s platforms?

It’s always been a challenge for platforms to handle graphic footage in the immediate aftermath of a violent event. Some try to filter for gore and violence, but images and videos still make it through. People often re-upload altered clips, either to slip past human moderation staff (which some platforms have cut back) or to evade the hashes that platforms use to prevent redistribution. Outrageous and shocking content is often rewarded: even when a video is cut to avoid the most graphic portions, some of it makes it into feeds as users react to it or build commentary around it.
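To make that evasion dynamic concrete, here is a minimal sketch (an illustration only, assuming nothing about any specific platform’s detection stack): an exact cryptographic hash matches only byte-identical files, so even a trivial alteration defeats it. That is why platforms lean on perceptual hashes, which tolerate small changes, and why evaders alter clips aggressively enough to slip past those too.

```python
import hashlib

def exact_hash(clip: bytes) -> str:
    # Cryptographic digest: identical only for byte-identical files.
    return hashlib.sha256(clip).hexdigest()

# Stand-ins for an original upload and a trivially altered re-upload;
# in practice a crop, watermark, mirror, or re-encode changes the bytes.
original = b"frame-data " * 1000
altered = b"frame-data " * 1000 + b"x"  # a single extra byte

# The digests share no resemblance, so an exact-match filter misses the re-upload.
print(exact_hash(original) == exact_hash(altered))  # False
```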

There’s also an ongoing debate about what the norms should be. Some platforms now err on the side of leaving violent content up on newsworthiness grounds, reasoning that if something is in the public interest, users should be able to find it. That’s defensible in theory—but in practice, users report that these videos are often not just available, they’re unavoidable.

Platform policies often promise that graphic content will be obscured behind interstitials, so users can choose whether to view it. But that layer of protection is inconsistently applied, and many people end up exposed to violent, potentially traumatizing footage they never opted into seeing.

Almost immediately, prominent right-wing figures framed the killing as the work of “the left” and called for retaliation. How does the speed of this kind of narrative-setting affect polarization and radicalization?

The speed is the point. When influencers frame an event within minutes—before facts are even known—they can shape the narrative about what happened while the greatest number of people are paying attention. They can indicate how their followers should engage. It becomes less about what happened and more about who to blame.

Immediate narrative-setting has strategic value: it can tap into existing grievances and direct outrage toward a target. It short-circuits deliberation and replaces it with mobilization. There is also a personal incentive for the influencer: on a platform like X, monetized accounts earn money from engagement, and there is no better time to farm engagement than when huge numbers of people are searching for a keyword in the aftermath of a tragedy. Posting ragebait and speculation in these moments can be very lucrative.

This is one way that polarization gets supercharged: each faction sees a different story unfold in real time, tailored by its influencers and curated by its feeds. When those stories create an impression that someone’s compatriots are being systematically targeted, or include calls for retaliation or violence, it can become a path to radicalization. Social media has always been effective at accelerating news and rumors via virality—users directly participating in sharing content. But it also accelerates alignment: driving people to pick a side and vocally commit to it before they even know all of the details about what actually happened. That participation can lead to sustained buy-in.

We’ve seen social media platforms pull back on removing, restricting, and labeling violent or misleading content. From your perspective, what should these platforms be doing differently in moments like this?

I am a longtime supporter of tools that give users more direct control over what’s in their feeds. Ideally, adult users could set their own tolerance thresholds, with defaults that obscure violent content. Since I recognize this is presently an unrealistic wish, violent content should at a minimum sit behind interstitials so users can choose whether to see it. It should absolutely not autoplay. Whatever policy they choose, however, platforms have an obligation to clearly state their rules and then actually enforce them. Private platforms are absolutely entitled to make the policy call to simply not host or surface graphic violence; most users don’t want to see it, and platforms are under no obligation to show it.

Charlie Kirk built a career in the online culture-war ecosystem, and now his death is being absorbed into the same viral machinery. What does that say about the relationship between online spectacle and the weaponization of events?

Kirk’s tragic death was immediately—and terribly—absorbed into the spectacle-generating content machine that political influencers so often leverage. The culture-war internet treats even visceral human tragedy as raw material: an event becomes content, and content becomes an opportunity for mobilization. It happens daily. The online rumor mill and the factional propaganda machines go into overdrive: people aren’t just mourning, they’re pushing narratives, assigning blame, and rallying their side before the facts can possibly be known. Online crowds immediately get to work battling over which niche faction is responsible. Platform algorithms reward extreme takes, so influencers deliver them, and the crowd spreads them like wildfire. Real-world tragedies don’t feel real for long; they get absorbed into a machine that turns them into fuel for whatever cause someone’s trying to advance.

Stepping back, what does this moment tell us about where America’s online discourse is headed, and whether our current information ecosystem is sustainable?

The culture war turns human tragedy into viral spectacle and point-scoring; that is profoundly unhealthy and corrosive. The structural dynamics of social media—the speed, the scale, and the directly participatory nature of virality—ensure that human tragedies will instantly become content, conspiracy theory, and call to action. That’s our new normal. But when that information dynamic intersects with affective polarization and a widespread breakdown in trust, the result is progressively destabilizing. It is eroding our ability to share basic facts, to mourn without taking sides, and to solve collective problems. And as the fractures deepen, the possibility of shared understanding seems to slip further out of reach.

Amid the toxic churn of disinformation, do you see any bright spots—whether in platform design, user behavior, or society—that give you hope we’re learning to better handle these moments?

There are ideas that show promise, like using bridging algorithms to reduce polarization or giving users more direct control over their feeds, but the major platforms with the power to implement these experiments don’t seem particularly interested. Instead, platforms are still driven largely by the same broken incentives they’ve followed for 20 years. Influencers can now monetize ragebait more easily than ever; even when they are directly paid, they often aren’t required to disclose it to their audiences.
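For readers unfamiliar with the term, a bridging algorithm ranks content by how well it lands across opposing camps rather than by raw engagement. Below is a minimal sketch of the idea; the posts and approval rates are invented for illustration, and production systems (the matrix-factorization model behind X’s Community Notes is the best-known example) are considerably more sophisticated.

```python
# Minimal sketch of a bridging ranker. Instead of rewarding raw engagement,
# each post is scored by how well it is received by BOTH sides of a divide.
# The posts and approval rates below are invented for illustration.

posts = {
    "ragebait": {"left": 0.05, "right": 0.90},
    "bridging": {"left": 0.60, "right": 0.65},
    "partisan": {"left": 0.85, "right": 0.10},
}

def bridging_score(approval: dict) -> float:
    # Taking the minimum means a post scores well only if both camps
    # approve of it; one-sided ragebait is penalized by construction.
    return min(approval["left"], approval["right"])

ranked = sorted(posts, key=lambda name: bridging_score(posts[name]), reverse=True)
print(ranked)  # ['bridging', 'partisan', 'ragebait']
```

An engagement-maximizing feed would likely surface the one-sided posts first; the bridging score inverts that ordering by design.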

And yet we act surprised when things keep getting worse. It’s the definition of insanity: doing the same thing over and over and expecting a different result. So many people across the political spectrum acutely feel that our information ecosystem is hopelessly broken. But changing things would require people in power to make hard choices, and we apparently still haven’t hit the degree of societal rock bottom needed to motivate change.

In this week’s episode of Power Lines: Right-wing activist Charlie Kirk’s shocking killing set off a wave of reactions across the media landscape. We examine the viral spread of graphic footage on social media, how the left and right are reacting to the tragedy, and what the response could mean for America’s already heightened political tensions.

You can watch on YouTube—or listen on Apple Podcasts, Spotify, or wherever you get your podcasts.

Elon Musk. (Photo by Sean Gallup/Getty Images)

  • Elon Musk continued to engage in extremist rhetoric in the aftermath of Charlie Kirk's death: "We must fight back or be murdered," the Tesla and SpaceX boss declared on X. In a speech to a far-right anti-immigration protest in London, Musk also told activists, "Violence is coming to you. You either fight back or you die." [CNN]

    • When will business journalists cover Musk’s extremism in a meaningful way? Outlets like CNBC almost always look the other way. It’s hard to imagine a liberal businessman being given the same pass for such extremist and dangerous political rhetoric.

  • Party of free speech? In an interview with Maria Bartiromo, GOP Sen. Katie Britt assailed "dangerous" media rhetoric toward Republicans and demanded "consequences" for such speech. [Mediaite]

    • Meanwhile, Pete Hegseth has "directed staff to identify and discipline service members who mocked or condoned Kirk’s killing," Mirna Alsharif reported. [NBC News]

    • "At least 15 people have been fired or suspended from their jobs after discussing the killing online, according to a Reuters tally based on interviews, public statements and local press reports," Raphael Satter and A.J. Vicens reported. [Reuters]

  • Jay Leno said the Kirk assassination could represent the "death" of free speech: "We’re at a point in this country where, if you don’t agree with everybody on everything, you take out a gun and you shoot them?" [Mediaite]

  • Brian Kilmeade issued a bland apology after saying the government should "just kill" homeless people via "involuntary lethal injection." [Variety]

    • Just a reminder: If a major host on CNN or MSNBC made such a vile remark, they would be fired. On Fox News, Kilmeade went days before issuing an extraordinarily brief apology on a weekend show.

  • Howard Kurtz signed off "MediaBuzz" after Fox News canceled the show following a 12-year run. [Mediaite]

    • Greg Gutfeld mocked the end of the show after Kurtz remarked that Charlie Kirk "was not a saint." On X, Gutfeld posted, "Maybe we can tune in next week for a clarification…oh wait…" [Mediaite]

  • Penske Media Corporation sued Google over the search giant's A.I. overviews: "As a leading global publisher, we have a duty to protect PMC's best-in-class journalists and award-winning journalism as a source of truth," Jay Penske said. [Axios]

  • The 77th annual Emmy Awards are airing now.

    • Stephen Colbert was treated to a standing ovation and joked about the cancellation of “The Late Show,” opening the show by asking the crowd, “While I have your attention, is anyone hiring?” [People]

    • “Hacks” star Hannah Einbinder closed her speech saying, "God birds, fuck ICE, and free Palestine." [Variety]

    • THR is updating this page with winners. [THR]

    • Missed the red carpet? Vogue has you covered. [Vogue]

A still from "Demon Slayer: Kimetsu no Yaiba—Infinity Castle." (Courtesy of Sony Pictures)

  • Sony's "Demon Slayer: Kimetsu no Yaiba—Infinity Castle" topped the domestic box office with $70 million.

  • Warner Bros. Pictures' "The Conjuring: Last Rites" placed second, with $26 million in receipts—down 69% from its opening weekend.

  • Focus Features' "Downton Abbey: The Grand Finale" debuted with $18.1 million; Lionsgate's "The Long Walk" made $11.5 million; and Disney's re-release of "Toy Story" earned $3.5 million.