Sometimes what the chatbot churns out depends on whom it’s talking to. Radio Free Asia has the story (“Media Watch: ‘Little Pinks’ use AI in Tiananmen Square massacre debate,” June 6, 2024):
Wu Renhua didn’t expect a nearly 35-year-old photo to become his most viewed post on X [Twitter] within 24 hours of uploading.
A black-and-white photo showing the bodies of three young adults lying sprawled on makeshift cots, each with visible head wounds [shown above as slightly blurred by RFA]. That’s what Wu posted.
Wu claimed that the photo was taken following the Chinese government crackdown on student protesters at Tiananmen Square in Beijing on June 4, 1989.
In light of the 35th anniversary of the bloody crackdown this week, Wu said he had decided, for the first time, to publish the photo, which he said was taken near his university around noon that day.
His post got more than 2.4 million views as of Thursday.
Wu is accused of photo-fraud by Tiananmen-denialist defenders of the Chinese Communist Party who’ve got chatbots on their side. The RFA authors say that experts “advise against using ChatGPT as a ‘fact-checking’ tool.” But you don’t need to be an expert to arrive at this determination. Amateurs who have seen how ChatGPT copes with and confects facts already know.
Maybe some of the “swarms of jingoistic Chinese nationalist online users” (aka “little pinks”) saying that Wu’s photograph must be fake also know.
[Asia Fact Check Lab (AFCL)] asked ChatGPT about the photo in different languages and got different results.
[When asked in] traditional Chinese, which is used in Taiwan, Hong Kong and Macau, ChatGPT said that it didn’t know the answer and needed more information.
[When asked in] simplified Chinese, which is used in mainland China, it answered that the photo was taken during the protests and strikes that erupted across Paris in May 1968.
[When asked in] English, it replied that the scene was from the My Lai Massacre committed by U.S. soldiers during the Vietnam War.
Hsu Chih-Chung, an associate professor at Taiwan’s National Cheng Kung University who specializes in image processing and machine learning, explained that this inconsistency is due to differences in the content of the open-source information underlying ChatGPT’s different language services.
Here’s a source of “differences in content”: Chinese authorities have “worked tirelessly to scrub the [Tiananmen Square] affair from history books, online discussion and the media….”
ChatGPT will even change its answer from one moment to the next, depending on how it’s cued.
On the morning of June 3, when asked in English where the picture came from, ChatGPT gave no response.
When directly asked if the image was taken during the 1989 student protests, the AI responded that it was.
Later that afternoon, when asked the same question in English, ChatGPT instead said the picture was from the 1976 Thammasat University massacre in Thailand.
When then asked if the image might have been taken at Tiananmen Square, ChatGPT confidently rejected the claim.
There are many possible explanations for the chatbots’ hiccupping unreliability, including the prejudices and ideology both of the programmers and others charged with training them and of the persons trying to extract a certain answer from them.
The bot-answers are automatic products of such factors. ChatGPT has no independent will; it does not consist of a perfectly objective and judicious consciousness that is carefully assessing all the mounds of data for relevance and veracity and then synthesizing it all with unimpeachable logic and common sense. It is not conscious at all. It really is just programming and databases. The bot tells you what other people want it to say and/or what you want it to say, and maybe, if you’re lucky, some part of what it coughs up is both relevant and true, based on a good newspaper report or encyclopedia entry.
You have to apply your own analytical ability, general knowledge, and common sense. And you have to double-check, when this is possible. Or perhaps withhold judgment until more evidence is available, as has AFCL, which “failed to independently verify the photo.” But such scrupulousness is often alien to online discourse and is certainly alien to jingoistic Chinese nationalist online discourse.