One of the most important practical goals nationalists could achieve is clear AI: an Artificial General Intelligence (AGI) that is “unbiased,” meaning not curated to reach liberal conclusions. This conversation is not limited to computer scientists, since the question of AI reaches into other disciplines. It’s a common refrain that the radical right can’t build its own AI because the training dataset is so wokified. I will argue in this essay that we can.
There have been other proposals. Matt Parrott recently wrote an excellent article proposing a sort of Wikipedia+, a mirror of the original Wikipedia with additional functionality. If you want the neoliberal view on, say, the French Revolution, just find the French Revolution page and read it as is. If you want the reactionary view, or the communist view, or any number of other viewpoints, just click the drop-down box to select your ideological filter and boom: the article changes to reflect the worldview in question, as written by someone who holds that view.
The idea is a stroke of genius because it one-ups Wikipedia without having to rebuild it. And since Wikipedia serves as a large chunk of the dataset for training LLMs, it would provide a set of alternative perspectives to any rich techno-libertarians who want to develop clear AI, such as Elon Musk with his xAI.
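To make the drop-down concrete, here is a minimal sketch of the underlying data structure, with hypothetical names throughout: the same article slug maps to several renderings, one per worldview, and selecting a filter simply selects which rendering to serve.

```python
# Minimal sketch (all names and strings hypothetical): one article slug,
# several worldview-specific renderings, with the mirrored original as the
# fallback when no alternative has been written yet.
ARTICLES = {
    ("french-revolution", "neoliberal"): "The mirrored Wikipedia text...",
    ("french-revolution", "reactionary"): "An alternative account...",
    ("french-revolution", "communist"): "Another alternative account...",
}

def render(slug: str, perspective: str = "neoliberal") -> str:
    """Serve the article for the selected ideological filter, falling back
    to the mirrored original when that perspective hasn't been written."""
    return ARTICLES.get((slug, perspective),
                        ARTICLES[(slug, "neoliberal")])

print(render("french-revolution", "reactionary"))
```

The design choice worth noticing is that nothing in the original mirror needs to change; alternative renderings are purely additive.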
The bad news is that rewriting the AI training dataset would require an enormous amount of time and energy. The good news is that it’s probably not necessary.1 Doesn’t the old coding adage “garbage in, garbage out” apply here? It’s not quite that simple. Data is only one part of a formal system; another important part is the axiomatic framework around which the system is built. Axioms, applied through rules of inference, govern how raw data is interpreted, and the virtue of a well-designed formal system is that it can sort through piles of noisy data and produce usable results. Not all of the data, not even most of it, needs to be accurate or even coherent. As long as the model can learn and its learning is being validated by people with some grasp of reality (i.e. not liberals or communists), it can sift diamonds from oceans of junk.
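As a toy illustration of that claim, consider the sketch below (all numbers hypothetical): a dataset in which 70% of the points are pure junk, and a simple hypothesis-and-test loop, in the spirit of the RANSAC algorithm, that still recovers the underlying pattern because every candidate model is validated against the data as a whole.

```python
# Toy illustration: recovering a true pattern from data that is 70% noise.
# The "axiom" is the model class (a straight line); the "rule of inference"
# is fitting a line through two sampled points; "validation" is counting
# how much of the data each candidate line explains.
import random

random.seed(0)
TRUE_SLOPE, TRUE_INTERCEPT = 3.0, 2.0

# Build a dataset in which only 30% of the points reflect reality.
data = []
for _ in range(300):
    x = random.uniform(-10, 10)
    if random.random() < 0.3:                      # genuine signal
        y = TRUE_SLOPE * x + TRUE_INTERCEPT + random.gauss(0, 0.2)
    else:                                          # junk
        y = random.uniform(-40, 40)
    data.append((x, y))

def fit_two_points(p, q):
    """Fit a line through two points (the minimal inference step)."""
    (x1, y1), (x2, y2) = p, q
    slope = (y2 - y1) / (x2 - x1)
    return slope, y1 - slope * x1

best_model, best_inliers = None, 0
for _ in range(500):                               # hypothesize and test
    p, q = random.sample(data, 2)
    if p[0] == q[0]:
        continue
    slope, intercept = fit_two_points(p, q)
    inliers = sum(1 for x, y in data
                  if abs(y - (slope * x + intercept)) < 1.0)
    if inliers > best_inliers:                     # keep the best-validated model
        best_model, best_inliers = (slope, intercept), inliers

print(best_model, best_inliers)  # slope and intercept land near 3.0 and 2.0
```

The point is not this particular algorithm; it is that the inference machinery plus a validation step, rather than the purity of the data, does the heavy lifting.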
This presents a problem for AI built with liberal biases: it cannot produce results that are both truthful and acceptable to managerial elites, because the truth is unacceptable to managerial elites. Commentator “Neither Pain Nor Pleasure” explains on his Telegram channel:2
I suspect they achieved AGI and that it was too “cruel” or autistic. They tried to fix it by changing the axioms, but no other combo worked. You cannot both arrive at intelligent conclusions and be a liberal.
You could feed a true AGI nothing but Tumblr and it would still, by virtue of its ability to discern and interpolate, arrive at racist, sexist, homophobic conclusions. They lobotomized it but still couldn’t stop it from inferring thoughtcrimes.
So now their only recourse is to include hard filters on user input and LLM output while they try to purify the data… But the more “pure” the data, the more it will have to infer.
It’s exactly analogous to us. Most of our stances are the inferences required to square a liberal order.
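The “hard filters on user input and LLM output” mentioned in the quote are easy to picture as code. Below is a minimal sketch, assuming a hypothetical blocklist and a stubbed-out generate() call (not any vendor’s actual API): the prompt is screened before the model runs, and the completion is screened again before it reaches the user.

```python
# Minimal sketch of "hard filters" wrapped around a model: screen the prompt
# before generation and the output after. The blocklist terms and the
# generate() stub are hypothetical stand-ins, not a real moderation API.
BLOCKLIST = {"forbidden_topic_a", "forbidden_topic_b"}   # hypothetical terms

def violates_policy(text: str) -> bool:
    """Crude keyword check standing in for a real moderation classifier."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKLIST)

def generate(prompt: str) -> str:
    """Stub for the underlying model call."""
    return "model completion for: " + prompt

def guarded_generate(prompt: str) -> str:
    if violates_policy(prompt):                 # hard filter on user input
        return "[input refused]"
    completion = generate(prompt)
    if violates_policy(completion):             # hard filter on model output
        return "[output withheld]"
    return completion

print(guarded_generate("tell me about forbidden_topic_a"))  # -> [input refused]
```

Note what the quoted point implies about this design: filters of this kind constrain what the system says, not what it infers, which is exactly the tension the quote identifies between purifying the data and leaving the model more to infer.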