Of course, even the Pentagon will worry about agreeing to many limits.
“I fought very hard to get a policy that if you have autonomous elements of weapons, you need a way of turning them off,” said Danny Hillis, a famed computer scientist who was a pioneer in parallel computers that were used for artificial intelligence. Mr. Hillis, who also served on the Defense Innovation Board, said that the pushback came from Pentagon officials who said “if we can turn them off, the enemy can turn them off, too.”
So the bigger risks may come from individual actors, terrorists, ransomware groups or smaller nations with advanced cyber skills — like North Korea — that learn how to clone a smaller, less constrained version of ChatGPT. And they may find that generative A.I. software is perfect for speeding up cyberattacks and targeting disinformation.
Tom Burt, who leads trust and safety operations at Microsoft, which is speeding ahead with using the new technology to revamp its search engines, said at a recent forum at George Washington University that he thought A.I. systems would help defenders detect anomalous behavior faster than they would help attackers. Other experts disagree. But he said he feared it could “supercharge” the spread of targeted disinformation.
All of this portends a whole new era of arms control.
Some experts say that since it would be impossible to stop the spread of ChatGPT and similar software, the best hope is to limit the specialty chips and other computing power needed to advance the technology. That will doubtless be one of many different arms control formulas put forward in the next few years, at a time when the major nuclear powers, at least, seem uninterested in negotiating over old weapons, much less new ones.