
AI Regulation Is ‘Crucial,’ Says OpenAI ChatGPT CEO


Sam Altman, CEO of OpenAI, the company behind the prompt-driven chatbot ChatGPT, knows that the acceleration of artificial intelligence and its potential risks are unsettling to some.

Altman spoke to the Senate Judiciary subcommittee on Tuesday in his first appearance before Congress, and said it is “crucial” that lawmakers implement safety standards and regulations for AI to “mitigate the risks of increasingly powerful models.”

“We understand that people are anxious about how it can change the way we live. We are, too,” Altman said. “If this technology goes wrong, it can go quite wrong.”

During the nearly three-hour hearing, Altman, along with two other witnesses (Professor Emeritus Gary Marcus and IBM’s Chief Privacy and Trust Officer, Christina Montgomery), spoke with nearly 60 lawmakers about the potential dangers of AI left unchecked, from job disruption to intellectual-property theft.

“My worst fear is we cause significant harm to the world,” he said.

One move Altman suggested is for lawmakers to implement a licensing system for companies developing powerful AI systems. Lawmakers would outline a series of safety standards that companies must meet to be granted a license, and would also have the power to revoke a license from any company that fails to comply with those standards.

As for the looming question of how AI will disrupt the job market, Altman agreed that the technology has the potential to eliminate many positions. However, he doesn’t think that means new jobs won’t be created as well.

Related: Goldman Sachs Says AI Could Replace The Equivalent of 300 Million Jobs — Will Your Job Be One of Them? Here’s How to Prepare.

“I think [AI can] entirely automate away some jobs,” he said. “And it will create new ones that we believe will be much better.”

In March, tech magnates including Elon Musk called for a six-month pause on AI development in an open letter. On Tuesday, in response to subcommittee member Sen. Josh Hawley’s question to the witnesses about the letter, Altman said the “frame of the letter is wrong,” and that what matters is audits and safety standards that must be passed before training the technology. He then added, “If we pause for six months, I’m not sure what we do then, do we pause for another six?”

Altman also said that OpenAI waited more than six months before releasing GPT-4 to the public, and that the standards the company has developed and applied before deploying its technology are the direction it “wants to go in,” rather than “a calendar clock pause.”

The chair of the subcommittee, Sen. Richard Blumenthal, also weighed in and said that implementing a moratorium and “sticking our head in the sand” is not a viable solution. “The world won’t wait,” he said, adding that “safeguards and protections, yes, but a flat stop sign? I would be very worried about that.”

It remains to be seen what actions, if any, the government will take on AI. In his closing remarks, Blumenthal said that “hard decisions” will need to be made but that, for now, companies developing AI should take a “do no harm” approach.

Related: Google CEO Sundar Pichai Says There Is a Need For Governmental Regulation of AI: ‘There Has To Be Consequences’



This story originally appeared on Entrepreneur
