Yet another AI story.

OpenAI's CEO, Sam Altman, calls for regulation of AI at a US Senate Judiciary Subcommittee hearing. Read on to learn why Altman did what he did.

Shivangi Shanker Koottalakatt, Writer and contributor

"I'm nervous about it." That is a mighty big statement to make, especially if you are the CEO of OpenAI. But history offers several examples, the most cited being Alfred Nobel, of innovators who later scorned, even expressed contempt for, their own inventions.

During the US Senate Judiciary Subcommittee hearing, Sam Altman stressed the need to regulate AI, warning lawmakers that if this technology goes wrong, it can go quite wrong. This adds fuel to the ongoing discourse about AI bringing about the end of humanity as we know it. Some even point out its inherent prejudices (as we've done before in our feature on AI), which are ultimately a reflection of the engineers, and the engineered data, that it was fed. Senator Booker even called it the "genie" you cannot put "back in the bottle."

AI is no longer science fiction but a reality we must deal with, according to Senator Richard Blumenthal, who opened the hearing with a text written by ChatGPT (what a way to begin). Altman then urged Congress to impose new rules on big tech, despite the deep political divisions that have long blocked legislation regulating the internet. Generative AI, according to Altman, could address some of humanity's biggest challenges, like climate change and curing cancer. However, he also acknowledged that it creates serious risks, such as disinformation and threats to job security, and recommended regulatory intervention by governments to mitigate them.

Altman proposed that the US government consider a combination of licensing and testing requirements before the release of powerful AI models, with the power to revoke licenses if rules were broken. He also recommended labeling and greater global coordination in setting rules for the technology, as well as the creation of a dedicated US agency to handle artificial intelligence. The US should lead in this regard, Altman suggested, but to be effective, something global is needed. He cited the EU's AI Act, which could see bans on biometric surveillance, emotion recognition, and certain predictive policing systems, as an example of such a framework.

Lawmakers underlined the need for transparency measures for generative AI systems such as ChatGPT and DALL-E, the latter of which creates images from text prompts. Last year, OpenAI's DALL-E sparked an online rush to create lookalike Van Goghs and made it possible to generate illustrations and graphics with a simple request. However, Professor Emeritus Gary Marcus warned that the technology is still in its early stages. "We don't have machines that can really improve themselves. We don't have machines that have self-awareness, and we might not ever want to go there," he said.

But are people like Altman really setting themselves up for a fall? Or is it reverse psychology? It looks like neither. The interest in licensing stems largely from concern over open-source AI software, which poses a significant competitive threat. The open-source community is advancing at an impressive pace in this rapidly evolving field and is producing agile models that match or exceed the performance and capabilities of proprietary ones. Furthermore, as has been widely reported, these models are smaller, easier and cheaper to train, and free to download.

This isn't the last hearing; we're sure to see plenty more. What we can say for sure is that the technology has its merits. Pruning it to perfection (in an almost idealistic, human way) is a tough task. Until then, we may have to swallow our grudges and make do with the structured help it offers. And when you spot prejudices, do the most human thing possible: tell ChatGPT that it made a mistake, and a grave one at that.
