Baidu confident that its AI chatbot won’t make mistakes
Baidu Inc’s experience in tailoring its search engine to Chinese regulatory requirements makes it confident its AI-driven chatbot won’t make mistakes on “important and sensitive topics”, the company said on Tuesday.

On a call with analysts, Baidu CEO Robin Li said the company was waiting for government approval before launching its ChatGPT-like Ernie bot, which Reuters tests have found refuses to answer a wide range of questions on politics, particularly those pertaining to Chinese government leaders.

“For important and sensitive topics, we have to make sure artificial intelligence will not hallucinate,” Li said, using the industry term for when AI models confidently generate false or fabricated information.

“Given that an LLM (large language model) is more or less a probabilistic model, this task is not trivial at all,” he added, referring to the type of model underlying AI chatbots such as ChatGPT and Ernie bot.

Li said industry regulation was not yet final, and that the company would continue to update its strategy as the regulatory landscape evolves.

“Baidu has been operating search in China for more than 20 years and has extensive experience with Chinese culture and the regulatory environment,” he said.

“Conversely, companies that do not have extensive experience in providing appropriate online content, or that lack a track record of working closely with regulators, will face significant challenges.”

China’s cyberspace regulator last month unveiled draft measures to manage generative AI services such as Ernie bot, saying that content generated by this frontier technology had to be in line with the country’s core socialist values.

Li said these measures would benefit Baidu.

“We believe that regulators’ active engagement in generative AI in the early stage will raise the bar to entry, and we are well positioned for that,” he said.
