Italy temporarily bans ChatGPT over privacy concerns
Italian authorities have temporarily banned the artificial-intelligence chatbot ChatGPT while they investigate the company behind it for allegedly violating data-collection rules.
The Italian data authority cited a March data leak that, according to the company, allowed some users to see information about other users’ chat histories and, in some cases, their payment information. OpenAI announced last week that it had patched the bug.
The regulator also cited broader concerns about OpenAI’s data-collection practices, which could affect a host of companies that build systems by vacuuming up massive volumes of data, often scraped from the internet. The Italian agency, which enforces both E.U. and domestic data protection rules, said in a news release that “there was no legal basis underpinning the massive collection and processing of personal data in order to ‘train’ the algorithms on which the platform relies.”
The ban signals the regulatory and compliance challenges ahead for OpenAI, as ChatGPT’s ability to hold remarkably humanlike conversations on complex topics, draft emails and generate articles has stunned the public. Its responses rely on a wide range of sources from across the internet — and are not always accurate or age-appropriate — sparking concerns about the technology’s ability to amplify misinformation, harm children and encroach on users’ privacy.
The action in Italy is a signal that European regulators may be more aggressive in attempting to regulate the future of artificial intelligence than their U.S. counterparts. The European Union has stricter privacy regulations than the United States, which lacks a comprehensive federal consumer privacy law. The bloc is also expected to resume negotiations this year on new artificial intelligence regulation.
The Italian regulator’s announcement highlighted the stakes for OpenAI: If the start-up does not respond to the agency within 20 days, it could face a fine of about $21 million or 4 percent of its annual revenue.
American regulators are under increasing pressure to take action against chatbots as they grow in popularity. On Thursday, a think tank submitted a complaint to the Federal Trade Commission asking the agency to probe the privacy and public safety risks associated with ChatGPT. The organization, the Center for Artificial Intelligence and Digital Policy, called for the agency to enjoin “further commercial releases of GPT-4,” the latest iteration of the chatbot technology.
The FTC has signaled an increasing focus on artificial intelligence. In an advisory last month, the agency asked companies to “keep your AI claims in check,” warning businesses not to exaggerate what such products can do and to evaluate risks before pushing products to market. FTC Chair Lina Khan (D) said at an antitrust conference this week that her agency is working to protect competition in the growing artificial intelligence market.
The release of ChatGPT set off a race among competitors to develop AI of similar sophistication: Microsoft last month opened a new AI chatbot powered by the same technology to journalists, some of whom reported bizarre and troubling interactions.
Benjamin Soloway, Pranshu Verma and Rachel Lerman contributed to this report.