OpenAI’s Sam Altman warns Congress AI could cause ‘harm to the world’

OpenAI chief executive Sam Altman delivered a sobering account of ways artificial intelligence could “cause significant harm to the world” during his first congressional testimony, expressing a willingness to work with nervous lawmakers to address the risks presented by his company’s ChatGPT and other AI tools.

Altman advocated for a number of regulations — including a new government agency charged with creating standards for the field — to address mounting concerns that generative AI could distort reality and create unprecedented safety hazards. The CEO tallied a litany of “risky” behaviors presented by technology like ChatGPT, including spreading “one-on-one interactive disinformation” and emotional manipulation. At one point he acknowledged AI could be used to target drone strikes.

“If this technology goes wrong, it can go quite wrong,” Altman said.


Yet in nearly three hours of discussion of potentially catastrophic harms, Altman affirmed that his company will continue to release the technology, despite likely dangers. Rather than being reckless, he argued, OpenAI's "iterative deployment" of AI models gives institutions time to understand potential harms — a strategic move that puts "relatively weak" and "deeply imperfect" technology in the world to understand the associated safety risks.

For weeks, Altman has been on a global good-will tour, privately meeting with policymakers — including the Biden White House and members of Congress — to address apprehension about the rapid rollout of ChatGPT and other technologies. Tuesday’s hearing marked the first opportunity for the broader public to hear his message to policymakers, at a moment when Washington is increasingly grappling with ways to regulate a technology that is already upending jobs, empowering scams and spreading falsehoods.

In sharp contrast to contentious hearings with other tech CEOs, including TikTok’s Shou Zi Chew and Meta’s Mark Zuckerberg, lawmakers from both parties gave Altman a relatively warm reception. They appeared to be in listening mode, expressing a broad willingness to consider regulatory proposals from Altman and the two other witnesses in the hearing, IBM executive Christina Montgomery and New York University professor emeritus Gary Marcus.

Members of the Senate Judiciary subcommittee expressed deep fears about the rapid evolution of artificial intelligence, repeatedly suggesting that recent advances could be more transformative than the advent of the internet — or as risky as the atomic bomb.

“This is your chance, folks, to tell us how to get this right,” Sen. John Kennedy (R-La.) told the witnesses. “Please use it.”

Lawmakers from both parties expressed an openness to the idea of creating a new government agency tasked with regulating artificial intelligence, though past attempts to create a specific agency with oversight of Silicon Valley have languished in Congress, amid partisan divisions about how to form such a behemoth.

Yet it’s unclear if such a proposal would gain broad traction with Republicans, who are generally wary of expanding government power. Sen. Josh Hawley (Mo.), the top Republican on the panel, warned such a body could be “captured by the interests that they’re supposed to regulate.”

OpenAI CEO Sam Altman said in the May 16 hearing that interactive disinformation is a cause for concern, especially with an election year approaching.

Sen. Richard Blumenthal (D-Conn.), who chairs the host subcommittee, said Altman’s testimony was a “far cry” from past outings by other top Silicon Valley CEOs, whom lawmakers have criticized for historically declining to endorse specific legislative proposals.

“Sam Altman is night and day compared to other CEOs,” Blumenthal told reporters after the hearing. “Not just in the words and the rhetoric but in actual actions and his willingness to participate and commit to specific action.”

Altman’s appearance comes as Washington policymakers are increasingly waking up to the threat of artificial intelligence, as the broad popularity of ChatGPT and other generative AI tools has dazzled the public but unleashed a fleet of new safety concerns. The Biden administration has called AI a key priority, and lawmakers repeatedly say they want to avoid the mistakes they made with social media.

Yet despite broad bipartisan agreement that AI presents a threat, lawmakers have not yet coalesced around rules to govern its use or development. Blumenthal said Tuesday’s hearing had “successfully raised” hard questions about AI, but not answered them. Senate Majority Leader Charles E. Schumer (D-N.Y.) has been developing a new AI framework, which would “deliver transparent, responsible AI while not stifling critical and cutting edge innovation.” But his office has not released any specific bills to support the proposal, or commented on when the framework might be ready.

OpenAI CEO Sam Altman and IBM Chief Privacy Officer Christina Montgomery openly rejected the idea of a six-month AI moratorium in the May 16 Senate hearing.

Altman’s rosy reception signals the success of his recent charm offensive, which included a dinner with lawmakers Monday night about artificial intelligence regulation and a private huddle following Tuesday’s hearing with House Speaker Kevin McCarthy (R-Calif.), House Minority Leader Hakeem Jeffries (D-N.Y.) and members of the Congressional Artificial Intelligence Caucus.

The sharpest critiques of Altman throughout the hearing came not from lawmakers, but another witness: Gary Marcus, a professor emeritus at New York University, who warned the panel they were confronting a “perfect storm of corporate irresponsibility, widespread deployment, lack of regulation and inherent unreliability.”

Marcus warned that lawmakers should be wary of trusting the tech industry, noting there are “mind boggling” sums of money at stake and that companies’ missions can “drift.”

Marcus critiqued OpenAI, citing a divergence from its original mission statement to advance AI to “benefit humanity as a whole” unconstrained by financial pressures. Now, Marcus said, the company is “beholden” to its investor Microsoft, and its rapid release of products is pressuring a fleet of companies — most notably Google parent company Alphabet — to swiftly roll out products too.

“Humanity has taken a back seat,” Marcus said.

In addition to creating a new regulatory agency, Altman proposed a new set of safety standards for AI models, testing whether they could go rogue and start acting on their own. He also suggested that independent experts could conduct audits, testing the performance of the models on various metrics.

Sam Altman said on May 16 that his system is not protected under Section 230, and that there is a need for a new legal framework for AI.

However, Altman sidestepped other suggestions, such as requirements for transparency in the training data that AI models use. OpenAI has been secretive about the data it uses to train its models, while some rivals are building open-source models that allow researchers to scrutinize the training data.

Altman also dodged a call from Sen. Marsha Blackburn (R-Tenn.) to commit not to train OpenAI’s models on artists’ copyrighted works, or to use their voices or likenesses without first receiving their consent. And when Sen. Cory Booker (D-N.J.) asked if OpenAI would ever put ads in its chatbots, Altman replied, “I wouldn’t say never.”
