We’re guessing China.
Early in 2023, Adam Satariano and Paul Mozur reported on “How Deepfake Videos Are Used to Spread Disinformation” for The New York Times (February 7, 2023):
But something was off. Their voices were stilted and failed to sync with the movement of their mouths. Their faces had a pixelated, video-game quality and their hair appeared unnaturally plastered to the head. . . .
The two broadcasters, purportedly anchors for a news outlet called Wolf News, are not real people. They are computer-generated avatars created by artificial intelligence software. And late last year, videos of them were distributed by pro-China bot accounts on Facebook and Twitter, in the first known instance of “deepfake” video technology being used to create fictitious people as part of a state-aligned information campaign.
“This is the first time we’ve seen this in the wild,” said Jack Stubbs, the vice president of intelligence at Graphika, a research firm that studies disinformation. Graphika discovered the pro-China campaign, which appeared intended to promote the interests of the Chinese Communist Party and undercut the United States for English-speaking viewers. . . .
In China, A.I. companies have been developing deepfake tools for more than five years. In a 2017 publicity stunt at a conference, the Chinese firm iFlytek made a deepfake video of the U.S. president at the time, Donald J. Trump, speaking in Mandarin.
It’s all still very crude, but we’re assured that the fakery will only grow more sophisticated. Eventually, perhaps, the stilted and stentorian, stertor-inducing effusions of live Chinese Communist Party officials — stock propaganda that sounds as if authored by bots — will be authored by bots that sound like people.