If you prefer the audio of this article, click here.
By now you have probably read our article on the Epistemic Divorce. For those of you who haven't, the idea is that the internet—and more broadly, postmodernity—has had the opposite of the effect the liberal regime intended: it has driven us apart rather than together. No longer do we get our news, our entertainment, our ideas, and our values from the epistemic longhouse, a single, centralized source. Now each man curates these things for himself. The result has been the one thing that liberalism can't tolerate: epistemic spaces outside itself. It calls these "echo chambers". We have splintered into many different echo chambers, and these are little embryonic folkhoods. Postmodernity has given way to folkishness.
This is being helped along by modern developments in computational learning models. LLMs—or what most of us call AI—are making it very hard to tell reality from fakery. When Elon Musk agreed to acquire Twitter, he later tried to terminate the agreement after cracking open the lid and seeing just how much of Twitter's traffic was spambots. Anyone who has used Facebook in the past few years knows that it's a ghost town and that the majority of its traffic is fake.
Now AI has made this much worse—arguably, Musk's Twitter is in worse shape than before. By allowing verified ("blue check") users to collect a share of the ad revenue on their tweets, Twitter has created a perverse incentive to run bots. Twitter verification costs $7/mo. and full access to ChatGPT costs $20/mo. That combined cost is recouped at roughly 3 million impressions per month of Twitter ad revenue, which is completely feasible for an integrated network of bots. It's basically a license to print money.
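A back-of-the-envelope sketch of that break-even arithmetic follows. The payout rate is an assumption back-solved from the figures above, not an official X/Twitter number:

```python
# Break-even for a single monetized bot account (rough sketch).
# ASSUMED_PAYOUT_PER_1M_USD is an illustrative rate back-solved from the
# article's figures ($27/mo. ~ 3M impressions); X publishes no fixed rate.

MONTHLY_COST_USD = 7 + 20           # X Premium ($7/mo.) + ChatGPT Plus ($20/mo.)
ASSUMED_PAYOUT_PER_1M_USD = 9.0     # assumed dollars paid per 1M ad impressions

breakeven = MONTHLY_COST_USD / ASSUMED_PAYOUT_PER_1M_USD * 1_000_000
print(f"Break-even: ~{breakeven / 1e6:.1f} million impressions per month")
# -> Break-even: ~3.0 million impressions per month
```

Anything beyond that threshold, multiplied across a network of accounts, is pure margin.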
Meanwhile, AI-generated audio and video have become nearly indistinguishable from the real thing. Twitch streamer "Destiny" found this out on a recent stream, which was skilfully edited together as Destiny's Schizo Arc.1 In it, Destiny chronicles his discovery that he was being abused by a Twitter bot, which led him down a very strange rabbit hole of AI-generated videos of fictional US intelligence operatives spreading misinformation.
All this adds up to an idea proposed on 4chan in the early 2020s—dead internet theory, which holds that the majority of online traffic is bots talking to bots and that the internet is now largely unpopulated by humans. That is hyperbolic, but increasingly less so. Certainly, the incentive structure of the mercantile economy2 promotes the growth of bot traffic, based as it is largely on advertising revenue and, ultimately, on fake productivity.
But there is another angle besides the economic incentives, one rooted in how the internet promotes folkishness. Much of the growth in bot traffic comes from free LLMs, which are built by people very closely connected to the security state.3 Why would organizations adjacent to the security state want to drive down the internet's signal-to-noise ratio? Because the internet is a big problem for the security state.