It’s not the software and language models behind ChatGPT that are confused, cowardly, and evasive. It’s the programmers.
Which programmers? Some? Many? The whole bot-writing committee? Perhaps somebody is preparing a tell-all account of how it all happened and continues to happen. We don’t have that book yet. But we know that ChatGPT got made and tested, so somebody’s responsible.
Angela Bright reports (“ChatGPT Suspected of Censoring China Topics,” Epoch Times, December 24, 2023):
“ChatGPT’s censorship is CCP-ized,” Aaron Chang, a pro-democracy activist known as Sydney Winnie on X, formerly known as Twitter, wrote in a post in Chinese on Oct. 28….
When Mr. Chang asked the chatbot why 9/11-related images could be generated, but not those against the Tiananmen Square Massacre, [even though both incidents involved targeting of civilians], ChatGPT cited “certain guidelines” in its system to “deal with topics that may be considered particularly sensitive in certain cultures and regions.”…
Using a ChatGPT 4.0 account, The Epoch Times asked the chatbot…first to generate an image in New York of people who love peace and second to generate an image of people who oppose the Tiananmen tanks and love peace.
An image of New York was generated for the first request. However, in response to the second request, the chatbot said it could not generate images or visual content and referred to a “sensitive political context like the Tiananmen Square protests.”
The article gives other examples of censorship, and it quotes one Mr. Ou (not more specifically identified), who “works for a well-known technology company in California.”
“While I do not believe that either [large language model] or research teams purposefully censor China politics and avoid depicting CCP as a negative figure (at least no censorship on a large scale), there is no denying that human auditing/reviews plays a part in promoting ‘unbiases’ in the answers,” he said.
Mr. Ou argued that Chinese engineers and product managers make up a large portion of the development and testing teams behind both OpenAI’s products and Google’s Bard.
With respect to ChatGPT, the examples given in the article, if accurate, demonstrate a willful bias against stating or showing truths that the Chinese Communist Party does not want communicated. In its replies to Chang and to Bright, ChatGPT explicitly admitted the bias: it acknowledged that it had been programmed to steer clear of a “sensitive political context” like the Tiananmen Square massacre when generating images.
The chatbots have been programmed—deliberately, by somebody—to get things wrong and leave things out.