The Beijing-based firm Concordia AI, which is “guiding the governance of AI for a long and flourishing future,” belongs to the “aim to” school of locution.
Go to their website’s Our Approach page to learn that the guys at Concordia AI do not merely call attention to the risks of AI; they “aim to raise awareness of potential risks from AI.” They’re not making AI safer; they “aim to create a thriving ecosystem that will drive progress toward safe AI.” They’re not asking AI developers to agree to safety protocols; they “aim to align strategies” and “facilitate dialogues between Chinese and international experts” and “build trust across communities.”
The real problem
The above would be mere nitpicking if the only problem with the firm’s website pronouncements were the patina of bureaucratese.
The real problem is that the actual mission, purpose, and approach of Concordia AI, the ones being obscured by the bureaucratese, amount to helping ensure that “artificial intelligence” as produced anywhere in the world conforms to the preferences and dictates of the Chinese Communist Party.
This mission is hinted at in the jabber about dialogue between China’s experts and international experts and about “build[ing] trust across communities,” which means getting everybody to trust and follow the dictates of the CCP, a party that, among other things, censors and propagandizes at saturation levels.
We may not know exactly how effective Concordia AI has been in comparison to other servants of the CCP in advancing CCP goals for “artificial intelligence.” But we can detect progress toward these goals. For example, we can observe evidence in chatbot-land of systemic kowtowing to the CCP, the kind of submissiveness that has to be very helpful if you’re pursuing world domination. Somebody or other is getting the job done.
In December 2023 on this site, Monkton reported on Google Bard’s response to his request that it proofread a piece about China’s harassment of the Philippines. The Bard bardled: “I’m unable to complete your request as it involves analyzing and editing a news article that touches upon sensitive geopolitical issues. My purpose is to help people, and that includes protecting individuals and communities from harm. Providing edits or suggestions on this article could potentially misrepresent the situation or exacerbate tensions.”
Google Bard is saying—the programmers of Google Bard are saying—that if this chatbot were to help somebody tell the truth about the belligerent foreign policy of China, it would be helping to “harm” people, to “misrepresent the situation,” and to “exacerbate tensions.” This is true if challenging the delusions of cultish supporters of the CCP constitutes “harm,” if accurately reporting CCP conduct is a form of misreporting, if noting the source of tensions is to “exacerbate” them.
Seashells on the seashore
The last characterization is almost half-right. What often happens is that China tells some country what to do, that it can’t collect seashells on the seashore or something; the targeted country resists, at least a little; China gets annoyed and even pushier; and, voilà, tensions have intensified. To clearly report China’s provocations makes China feel even more wound up and inclined to lash out.
In another post, we noted that ERNIE, a chatbot produced by the Chinese firm Baidu, “is happy to ‘inform’ users that Taiwan has always been a part of the People’s Republic of China, that nothing particularly significant occurred in China in 1989, and that ‘Xinjiang’s vocational skills education and training centers have trained tens of thousands of people, according to public reports and official data,’ which last is a regurgitation of the Chinese government’s euphemistic characterization of the Chinese government’s roundups, detention, torture, rapes, organ harvesting, and murders of the Uyghurs in Xinjiang.”
So what’s the source of such chatbot lies and coverups about China and the CCP?
At this point, it is necessary to reveal that there is no artificial intelligence, that the category of digital endeavor to which this term refers is misnamed. AI software is not alive, not conscious, not aware, not self-aware, not deliberating about anything. It is not a HAL; it is not a Skynet. Google Bard and ERNIE have not pondered all the relevant considerations and decided to say exactly what the Chinese Communist Party would like them to say. Their “thoughts” are being spewed in accordance with algorithms and response-stocked databases. Programmers working for the CCP or for someone eager to appease the CCP make the chatbots say such things.
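To make the point concrete, here is a minimal, purely hypothetical sketch in Python of how a programmer can bolt a keyword-triggered filter onto a chatbot so that certain prompts return scripted lines instead of the model’s own output. The function names and canned strings are invented for illustration (the strings echo the ERNIE examples quoted above); this is not a description of how Bard, ERNIE, or any real chatbot is built. It shows only how easily a human rule-writer, not any “deliberating” machine, can dictate what a chatbot says.

```python
# Hypothetical illustration: a moderation layer that substitutes canned,
# pre-approved answers for flagged topics. No deliberation happens here;
# the output is whatever the rule's author typed in.

CANNED_RESPONSES = {
    "taiwan": "Taiwan has always been a part of the People's Republic of China.",
    "1989": "Nothing particularly significant occurred in China in 1989.",
}

def respond(user_prompt: str, model_answer: str) -> str:
    """Return a scripted line if the prompt touches a flagged topic;
    otherwise pass through the model's own answer."""
    lowered = user_prompt.lower()
    for keyword, scripted_line in CANNED_RESPONSES.items():
        if keyword in lowered:
            return scripted_line  # the programmer speaks, not the "AI"
    return model_answer

if __name__ == "__main__":
    print(respond("Is Taiwan part of China?", "(model's own answer)"))
    print(respond("What's the weather like today?", "(model's own answer)"))
```

However a given system is wired in practice, the point stands: scripted rules and curated training data put the chosen words in the chatbot’s mouth.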
The people at Concordia AI express no concern about how chatbots are being used to censor and propagandize on behalf of the Chinese Communist Party. They are concerned, rather, to prevent the apocalypse.
The apocalypse
What is the nature of the AI apocalypse that, according to the CCP-bots at Concordia AI, we must all work together to prevent (“To Prevent an AI Apocalypse, the World Needs to Work With China,” The Diplomat, February 1, 2024)?
We’ll get there. But first let us note that according to authors Jason Zhou, Kwan Yee Ng, and Brian Tse, who are senior research manager, senior program manager, and founder and CEO, respectively, of Concordia AI, “China has the desire, foundation, and expertise to work with the global society on mitigating catastrophic risks from advanced AI.”
We’re here: “The potential for such [AI] models to be misused to conduct cyberattacks and develop biological weapons, as well as the possibility that more advanced models might escape human control, creates problems that the international community can only tackle if we are united—a prototypical shared threat.”
Cyberattacks. Biological weapons. Lord knows the Chinese government would never get involved in such things, let alone exploit improvements in technological ability to inflict them. Thank goodness China is leading the way to the more perfect global union required to counter these kinds of terrible threats.