Biden administration considers rules for A.I. systems like ChatGPT

The Biden administration on Tuesday took a step toward regulating artificial intelligence, as the overnight explosion of A.I. tools like ChatGPT spurs scrutiny from regulators around the globe.
As an A.I. arms race heats up in Silicon Valley, the Commerce Department is considering how to develop an auditing process to ensure that A.I.-powered technology is trustworthy. New assessments and protocols may be needed to ensure A.I. systems work without negative consequences, the department said, much as financial audits confirm the accuracy of business statements.
“For these systems to reach their full potential, companies and consumers need to be able to trust them,” said Alan Davidson, the administrator of the Commerce Department’s National Telecommunications and Information Administration, in a news release.
Tools like ChatGPT have dazzled the public with their ability to engage in humanlike conversations and write essays. But the technology’s swift evolution has prompted new fears that it may perpetuate bias and amplify misinformation.
In recent weeks, the government’s interest in A.I. has accelerated, as consumer advocates and technologists alike descend on Washington, aiming to influence the debate. As companies compete to bring new A.I. tools to market, policymakers are struggling to foster innovation in the tech sector while limiting public harms.
Many policymakers express a desire to move quickly on A.I., having learned from the slow process of assembling proposals to regulate social media.
Last week, President Biden convened a group of advisers on science and technology to discuss A.I. risks. When asked whether A.I. was dangerous, he said it remains to be seen. “Could be,” he replied.