AI-Generated Media Now Being Banned in China
The Cyberspace Administration of China (CAC) has banned the use of AI-generated media that isn’t clearly labeled as such. The move comes as the world grapples with artificial intelligence technology that can alter video, audio, images, and text. Introduced in Beijing last week, the new deep synthesis technology regulation will take effect on January 10, 2023, TechCrunch reports.
Deep synthesis technology, as defined by the Cyberspace Administration of China, refers to technology that uses deep learning, virtual reality, and other synthesis algorithms to generate text, images, audio, video, and virtual scenes. The rules and restrictions are similar to existing laws that oversee other forms of consumer internet services in China, such as games, social media, and short video platforms.
According to the newly minted AI media rules, people are prohibited from using generative artificial intelligence to engage in illegal activities that may endanger Chinese national security or damage the public interest. The restrictions are enforceable thanks to the country's real-name verification apparatus, as anonymity doesn't really exist for internet users in the region.
People are asked to link their online accounts to their phone numbers, which are registered to their government identification, TechCrunch says. Generative AI media companies are similarly required to verify users using mobile phone numbers or other forms of documentation. China also wants to censor what these algorithms can generate through manual content audits.
AI media company Baidu already filters politically sensitive content from its text-to-image program. Censorship is a standard practice across all forms of media in China. The Cyberspace Administration of China previously banned the publishing and distribution of AI-created fake news in 2019, The Byte reports. But it remains to be seen whether the country can keep up with the sheer volume of media created by artificial intelligence.
Still, Chinese citizens who violate the new regulations will be punished by law. AI media service operators have been asked to keep records of illegal behavior and report them to the relevant authorities. Platforms will also issue warnings, restrict usage, suspend service, or even shut down the accounts of users who don't adhere to the rules.
China's latest set of regulations regarding AI media is likely a response to its rising popularity. Apps like Lensa, which generates AI avatars for its users, have become a hit with social media users the world over. However, questions have been raised about the legality of how these systems are trained. Like most machine learning software, they work by identifying and replicating patterns in data, The Verge explains.
But since AI systems generate code, text, music, and art, the data they "learn" from is created by humans. It's simply sourced from the internet, and much of it is copyrighted in some form. This wasn't a problem when such systems produced blurry images back in the early 2010s. But in 2022, software like Stable Diffusion can copy an artist's style in hours, which many artists argue is essentially stealing.
It's even more worrying when companies sell AI-generated prints and social media filters built on styles swiped from real artists. As a result, the questions of legality and ethics have become more urgent.