But Mr. Smith also acknowledged that A.I. developers need to show restraint in creating new products with potentially broad, and negative, social consequences, and said that Microsoft wasn’t trying to pass the buck onto government regulators. “There is not an iota of abdication of responsibility,” he said.
The message echoes calls from other top A.I. executives. Sam Altman, the C.E.O. of OpenAI (which counts Microsoft as a top investor and business partner), told lawmakers last week that Congress should create a new A.I. regulator. And Sundar Pichai, the chief of Alphabet, called on trans-Atlantic regulators to work together to create effective new rules.
Proactively calling for more regulation is a playbook used by other industries, including social media and crypto, with mixed results: Congress has passed few new laws to oversee social networks, to the consternation of many lawmakers.
But A.I. executives’ tolerance for new regulations goes only so far. Altman warned on Thursday that OpenAI may pull services like ChatGPT from European markets if Brussels moves forward with expansive A.I. legislation. “We will try to comply, but if we can’t comply we will cease operating,” he said.
In other A.I. news: JPMorgan Chase is reportedly developing a chatbot to help clients make investment decisions, according to CNBC. And the tech evangelist Cathie Wood missed out on $560 million in paper gains by selling her firm's holdings in Nvidia early this year.