Those who use social media platforms to monitor risk are witnessing a meaningful shift. Following the January 6th attack on the Capitol, extremist content left mainstream platforms such as Facebook, X (formerly Twitter), and YouTube as those platforms instituted policies to clamp down on speech that incited violence. But this speech didn't disappear; it migrated to regions of the Internet with only nominal content moderation, such as Gab, Parler, Rumble, Truth Social, and the chan boards. New social media platforms also emerged to fill the void.
Since Elon Musk's acquisition of X, however, the trend has reversed. Extremist speech has found its way back to the online mainstream via X, as analyses by hate-speech watchdogs have documented. Reporting by Time has highlighted changes to X's content moderation that have made it easier for extremists to spread their messages: the platform has been "loosening its rules, laying off trust and safety employees" and "reinstating accounts previously banned for violating the platform's policies." Musk has personally amplified the noise by, as The New York Times reports, endorsing an antisemitic conspiracy theory. All of this has driven away advertisers in droves and reduced monthly active users by 15 percent worldwide, year over year.
So, when surveying the web for threats against a client, don't be surprised if you find yourself spending more and more time monitoring posts on X.