U.S. senators expressed their concerns Tuesday, May 16, about the risks of artificial intelligence as they heard testimony from the CEO of OpenAI, the AI research company behind ChatGPT. In the hearing, senators on both sides of the aisle expressed concern about ChatGPT’s ability to “hallucinate,” or produce false information, and its potential to sway public opinion.
OpenAI’s CEO Sam Altman recommended that companies develop internal corporate policies to govern the use of AI, and that the U.S. government develop broader regulations for AI products.
“I would form a new agency that licenses any effort above a certain scale of capabilities and can take that license away and ensure compliance with safety standards,” Altman said.
Altman went on to recommend the development of specific tests that a model has to pass before it can be deployed into the world, as well as independent audits.
He also suggested the creation of an international agency that would set responsible standards for AI, and said that Section 230 — the law that shields online platforms from liability for user-generated content — should not apply to AI.
When it comes to potential regulation, leaders in AI development have warned against being too heavy-handed, arguing that overly broad rules could stifle innovation. IBM’s Chief Privacy and Trust Officer Christina Montgomery, who also testified Tuesday, recommended a “precision regulation” approach, which would govern AI deployment in specific use cases rather than regulating the technology itself.
NYU professor Gary Marcus also testified. When Marcus and Altman were pressed for more specific recommendations on agencies, policy and regulation, both said that such advice was outside their purview.