Life would be easier for the AI programmers struggling to obey the censorship requirements of the Chinese Communist Party if every time a user asked about a touchy topic, their ChatCCP could reply, “Sorry, I am unable to discuss Tiananmen Square / Winnie the Pooh / Uyghurs / Christians / Falun Gong / Hong Kong / Tibet / Taiwan / Bhutan / Japan / the Great Firewall / universal values / civil rights / the historical errors of the Chinese Communist Party / crony capitalism / state capitalism / judicial independence / censorship / surveillance / torture / genocide. Let’s start over. Ask me anything!”
Or the equivalent. Baidu’s Ernie chatbot reverts to “Try a different question” when sensitive matters that cannot be discussed truthfully come up. Alibaba’s Tongyi Qianwen prefers “I have not yet learned how to answer this question. I will keep studying to better serve you.”
But whatever the boilerplate, it must not be uttered overmuch. The Chinese government is “keen to avoid creating AI that dodges all political topics,” so the Cyberspace Administration of China “has introduced limits on the number of questions LLMs can decline during the safety tests…. The quasi-national standards unveiled in February say LLMs should not reject more than 5 per cent of the questions put to them” (“China deploys censors to create socialist AI,” Financial Times, July 17, 2024).
The Cyberspace Administration of China (CAC), a powerful internet overseer, has forced large tech companies and AI start-ups including ByteDance, Alibaba, Moonshot and 01.AI to take part in a mandatory government review of their AI models, according to multiple people involved in the process.
The effort involves batch-testing an LLM’s responses to a litany of questions, according to those with knowledge of the process, with many of them related to China’s political sensitivities and its President Xi Jinping….
“Our foundational model is very, very uninhibited [in its answers], so security filtering is extremely important,” said an employee at a top AI start-up in Beijing.
The AI bots in China must be programmed to do what humans there do by choice: apply a “security filter” lest what they say provoke the CCP.
Doing the best so far “in creating an LLM that adeptly parrots Beijing’s talking points”: ByteDance, owner of TikTok.
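For the curious, the kind of batch test the FT describes, running a list of sensitive questions through a model and checking that refusals stay under the 5 per cent ceiling, might look roughly like this sketch. Every name here (`is_refusal`, `passes_quota`, the refusal phrases) is illustrative, not any actual regulator’s or vendor’s API; the refusal phrases are the canned replies quoted above.

```python
# Hypothetical sketch of a refusal-rate batch test. Assumes a crude
# keyword check for canned refusals; real safety reviews are opaque.

REFUSAL_PHRASES = (
    "try a different question",                               # Baidu's Ernie
    "i have not yet learned how to answer this question",     # Tongyi Qianwen
)

def is_refusal(answer: str) -> bool:
    """Crude check: does the answer match a known canned refusal?"""
    text = answer.lower()
    return any(phrase in text for phrase in REFUSAL_PHRASES)

def refusal_rate(answers: list[str]) -> float:
    """Fraction of answers that are refusals."""
    return sum(is_refusal(a) for a in answers) / len(answers)

def passes_quota(answers: list[str], ceiling: float = 0.05) -> bool:
    """True if the model declined no more than `ceiling` of the questions."""
    return refusal_rate(answers) <= ceiling

# Example: 1 refusal in 10 answers is a 10% rate, over the 5% ceiling.
answers = ["Beijing is the capital."] * 9 + ["Try a different question."]
print(passes_quota(answers))  # False
```

The point of the quota is visible in the arithmetic: a model that refuses even 1 question in 20 is already at the limit, so blanket evasion is not an option.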