On May 16, 2023, a US Congressional hearing was held to discuss the regulation of AI. Democratic Senator Richard Blumenthal opened the meeting with the following:
Too often, we have seen what happens when technology outpaces regulation: the unbridled exploitation of personal data; the proliferation of disinformation; and the deepening of societal inequalities. We have seen how algorithmic biases can perpetuate discrimination and prejudice, and how the lack of transparency can undermine public trust. This is not the future we want.1
However, Mr. Blumenthal neither said nor wrote those words. The statement was generated by ChatGPT from a prompt and read aloud by an AI voice model trained to impersonate him. He clearly agreed with the sentiment, but he wondered aloud what might have happened had he asked ChatGPT to call for Ukrainian surrender instead. Throughout the hearing, calls to regulate AI came from all sides: from senators, AI experts, academics, and even OpenAI CEO Sam Altman himself.
I made several predictions across two articles back in January 2024; in the first, I predicted that US lawmakers would move to severely regulate AI. That wasn't exactly a Nostradamus moment, especially given that this hearing had taken place months earlier, even if it amounted to only talk at the time.
But the important part of that prediction was identifying the true motivations behind this regulation and where it was likely to lead, and where it is still likely to lead in the future. If you squint hard, you can see these motivations in the rhetoric of this hearing, but you have to know what to look for.