I'm nervous about it. That's a mighty big statement to make, especially if you are the CEO of OpenAI. But history offers several examples, the most cited being Alfred Nobel, of innovators who later scorned, even expressed contempt for, their own inventions.
During the US Senate Judiciary Subcommittee hearing, Sam Altman stressed the need to regulate AI. He warned lawmakers that if this technology goes wrong, it can go quite wrong. This adds fuel to the existing discourse about AI spelling the end of humanity as we know it. Some even point out its inherent prejudices (as we've done before in our feature on AI), which are nonetheless a reflection of the engineers, and the engineered data, that it was fed. Senator Booker even called it the "genie" you cannot put "back in the bottle."
AI is no longer science fiction but a reality we must deal with, according to Senator Richard Blumenthal, who opened the hearing with remarks written by ChatGPT. What a way to begin. Altman then urged Congress to impose new rules on big tech, despite the deep political divisions that have blocked legislation regulating the internet. Generative AI, according to Altman, could address some of humanity's biggest challenges, such as climate change and curing cancer. However, he also acknowledged that it creates serious risks, such as disinformation and threats to job security, and recommended regulatory intervention by governments to mitigate them.
Altman proposed that the US government consider a combination of licensing and testing requirements before the release of powerful AI models, with the power to revoke licenses if rules were broken. He also recommended labeling requirements, greater global coordination in setting rules for the technology, and the creation of a dedicated US agency to handle artificial intelligence. Altman suggested the US should lead in this regard but noted that, to be effective, something global is needed. He pointed to the EU's AI Act, which could see bans on biometric surveillance, emotion recognition, and certain policing AI systems, as one model of a regulatory framework.
Lawmakers underlined the need for transparency measures for generative AI systems such as ChatGPT and DALL-E, the latter of which creates images from text prompts. OpenAI's DALL-E sparked an online rush last year to create lookalike Van Goghs and has made it possible to generate illustrations and graphics with a simple request. However, Professor Emeritus Gary Marcus warned that the technology is still in its early stages. "We don't have machines that can really improve themselves. We don't have machines that have self-awareness, and we might not ever want to go there," he said.
But are people like Altman really offering themselves up for a regulatory landslide? Or is it reverse psychology? It looks like it's neither. The interest in licensing stems largely from concern over open-source AI software, which poses a significant competitive threat. The open-source community is advancing at an impressive pace in this rapidly evolving field and is capable of creating innovative, agile models that match or exceed the performance and capabilities of proprietary ones. Furthermore, as reports have pointed out, these models are smaller, easier, and cheaper to train, and can be downloaded for free.
This isn't the last hearing; we're sure to see plenty more. What we can say for certain is that the technology has its merits. Pruning it to perfection (in an almost idealistic, human way) is a tough task. Until then, we may have to swallow our grudges and make do with the structured help it offers. And when you spot prejudices, do the most human thing possible: tell ChatGPT that it made a mistake, and a grave one at that.