According to recent reports, Chinese authorities have taken action against a ChatGPT user who allegedly used the AI-powered chatbot to create a fabricated news story about a train crash that never occurred. This marks one of the first instances of enforcement under a new Chinese law that aims to regulate the use of AI-generated “deep fakes” – realistic yet completely fabricated digital content.
The police report from the northwest Chinese province of Gansu stated that a man, identified only by his surname, Hong, was recently detained for allegedly using ChatGPT to generate a fake news article about a non-existent train crash. According to the report, the fake news article claimed that nine construction workers had died in the crash. Afterward, a media company based in southern China used 21 accounts on a popular social platform to spread the fake story within a short period of time.
It’s worth noting that, due to China’s strict internet censorship laws, ChatGPT is technically unavailable within the country. However, some individuals are still able to access it using virtual private network (VPN) software that can bypass the country’s internet firewall. The police report did not provide details on how Hong was able to access ChatGPT to create the fake news article.
By the time authorities discovered that the article was fake, it had already garnered 15,000 views, prompting security officials to act quickly. Authorities searched Hong’s residence to collect evidence and took “criminal coercive measures” against him, a term that refers to temporary restrictions police may place on a suspect’s freedom.
The Chinese deepfake law, which came into effect on January 10th, bans several categories of fake media produced through “deep synthesis technologies” such as machine learning and virtual reality. It prohibits deepfakes used in activities that endanger national security, harm the nation’s image or the societal public interest, or disturb “economic or social order,” and it specifically forbids using such technologies to produce, publish, or transmit fake news. However, the law offers only vague definitions for many of these prohibited categories, leaving considerable room for interpretation.