AI race puts Google’s search for responsibility at risk
For years Google has been dominant in both online search and artificial intelligence. Suddenly it’s threatened in both — and so are its core values.
Microsoft has incorporated OpenAI’s technology, the same kind that powers ChatGPT, into its Bing search engine, reviving a product that had been a punchline. Despite some glaring shortcomings, the Bing chatbot’s popularity has helped boost Microsoft’s search engine to 100 million active users, and the company is feverishly wedging similar AI tools into everything from Microsoft Office to Skype to Windows 11.
On Sunday, the New York Times reported that Samsung, which makes more smartphones than any other company, has considered switching its devices’ default search engine from Google to Bing, thanks in part to the excitement around Bing’s AI features. The Times reported that the threat sparked “panic” at Google, whose name is synonymous with online search. (Google responded to a request for comment with a previously prepared statement, while Microsoft declined to comment.)
The report comes as Google CEO Sundar Pichai has been on a charm offensive. Across a series of interviews, including with CBS News’s 60 Minutes on Sunday, he has portrayed the latest AI boom as an opportunity for his company, not a threat. At the same time, he said he’s wary of an AI “race,” adding that the technology “clearly has the potential to cause harm in a deep way.”
“You will see us be bold and ship things,” he assured the Times tech podcast Hard Fork in March, “but we are going to be very responsible in how we do it.”
There may be more tension between those two goals than he’s letting on.
Before ChatGPT, Google was considered the industry leader in developing “large language models,” the complex AI systems that underpin chatbots. It used those models behind the scenes to improve its search results, and for language tools such as Google Translate.
But the company had been reticent about releasing its most powerful language tools to the public in chatbot form, preferring to publish its cutting-edge advances in academic journals while keeping them under wraps for internal study. The company quietly dropped its unofficial motto “Don’t be evil” from its code of conduct years ago but still pats itself on the back for a “responsible” approach to AI, whether in search or self-driving cars.
There were good reasons for its relative caution.
Anyone who’s asked ChatGPT a highly specific question and gotten back an instant, precisely tailored answer can attest to its appeal. Instruct it to plan an hour-long youth soccer practice, and it will draw up a better agenda in 15 seconds than a rookie coach might after 15 minutes of Googling.
On the other hand, anyone who’s watched ChatGPT flail at a basic math problem, embrace crude stereotypes or confidently falsify its own résumé can see the pitfalls of relying on it for research.
Microsoft’s Bing chatbot, built on similar technology but with the ability to search the web, has its own problems. It went gonzo in the days after its launch, displaying a penchant for aggressive and weirdly personal interactions, forcing the company to scale it back. A recent Washington Post test found that it gave inaccurate or problematic responses to about 1 in 10 queries.
Google has had its own brushes with AI backlash. Two years ago, the company fired the co-leaders of its Ethical AI team after clashing with them over the publication of an academic paper warning about the downsides of large language models. And last summer, the company fired a researcher on its Responsible AI team who had become convinced that one of its language models, LaMDA, was sentient.
That may help explain why OpenAI was the first to market with ChatGPT, and why Microsoft was quicker to incorporate a chatbot into search. And of course Google, as the incumbent, had more to lose if its flagship search engine were to be associated with the “hallucinations” — responses that are incoherent or unmoored from reality — that ChatGPT and Bing are prone to.
Google’s search engine is explicitly built around values such as “expertise,” “authoritativeness” and “trustworthiness” that today’s language models notably lack. And its model of sending internet users to sites across the web, rather than keeping them on Google, underpins much of the online economy. Answering users’ questions directly, via conversational AI, risks both.
Yet Pichai is clearly feeling the pressure, even as he downplays it. In March, Google launched its own “experimental” AI chatbot, Bard, but gave it a separate website rather than building it into Google.com. So far, it has failed to impress, with even Pichai comparing it to a “souped-up Civic” racing against “more powerful cars.”
Now Google has to figure out how to capture the excitement of experimental AI tools in a search engine that is anything but experimental. No doubt Pichai is sincere in wanting to find a balance between moving quickly and moving carefully. The hard part will be deciding which side to err on.