Elon Musk, others sign letter calling for a pause on AI experiments


A group of business leaders and academics signed a letter asking companies like OpenAI, Google and Microsoft to stop training more powerful AI systems so the industry can assess the risks they pose.

Twitter CEO Elon Musk, veteran AI computer scientist Yoshua Bengio and Emad Mostaque, the CEO of fast-growing start-up Stability AI, all signed the letter, along with around 1,000 other members of the business, academic and tech worlds, though the list was not independently verified.

“Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us?” the letter said. “Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable.”

The list did not include senior executives at OpenAI or the Big Tech companies. It also didn’t include prominent AI critics, such as former Google engineer Timnit Gebru, who have spent months and years warning of the technology’s more immediate risks.

Experts have fretted for years about the risks of building supersmart AIs, but the conversation has grown louder over the past six months as image generators and chatbots capable of eerily humanlike conversations have been released to the public. Interacting with newly released chatbots like OpenAI’s GPT-4 has prompted many to declare that human-level AI is just around the corner. Other experts counter that the chatbots work by simply guessing the right words to say next based on their training, which included reading trillions of words online. The bots often devolve into bizarre conversational loops if prompted for long enough, and they pass off made-up information as factual.


AI ethics researchers have repeatedly said in recent months that focusing on the risks of building AIs with human-level intelligence or greater is distracting from the more immediate problems that the technology has already created, like infusing sexist and racist biases into more technology products. Pushing the theory that AI is about to become sentient can actually have detrimental effects by obscuring the harm the technologies are already causing, Gebru and others have argued.

AI has been used for years in recommendation algorithms used by social media platforms, and AI critics have argued that by removing human control over these technologies, the algorithms have promoted anti-vaccine conspiracies, election denialism and hate content.

Some U.S. lawmakers have called for new regulations on AI and its development, but no substantive proposals have advanced through the legislature. The European Union released a detailed proposal for regulating AI in 2021, but it still has not been adopted into law.
