China is moving to regulate deepfake video and audio content, applying a strong-armed approach to an increasingly pervasive issue that other countries have struggled to control.

The Cyberspace Administration of China (CAC) announced new rules on Friday, which include the potential criminalization of publishing fake news content that uses artificial intelligence or virtual reality, Reuters reported.

The rules come into effect next year.

Chinese regulators are taking action in response to the growing popularity of deepfake videos, noting these technologies and fake information could “endanger national security, disrupt social stability, disrupt social order and infringe upon the legitimate rights and interests of others,” according to the CAC’s website.

The popular Zao app, which was developed by Chinese social network Momo, became the top app in Apple’s App Store within two days of its launch in August.

Though the technology has also been gaining traction in Europe and the U.S., Western countries have been slower to regulate it.

“Despite the fact that apps such as FaceApp or Zao have already made deepfake technologies widespread and readily available on almost every smartphone, the EU fails to address the problem,” Sarah Bressan, a Berlin-based research associate at the Global Public Policy Institute, wrote recently.

The emergence of “fake news” and “deepfakes” over the last several years has coincided with increased political polarization in the U.S. These accelerating technological breakthroughs have also forced tech companies and state legislatures to review and update their policies, even in the absence of comprehensive federal regulation governing the new space.

In the United States, the big tech companies that host the bulk of the deepfake content are largely regulating themselves.

The U.S. government had wanted to tackle deepfakes before 2020, but the various pieces of legislation floated in Congress did not get far after a hearing in June.

The Deepfake Report Act of 2019 was referred to the House Committee on Energy and Commerce in October. The Deepfake Accountability Act, introduced back in June, didn’t go anywhere. No federal-level regulation has been implemented so far.

Twitter said in October it would update its “synthetic and manipulated media” policy. Facebook has been more hesitant, working on a broader policy that would separate deepfakes from misinformation.

Facebook is preparing for the long haul and is looking to leverage AI to help rein in deepfakes. Antoine Bordes, a director of Facebook’s AI Research lab, told Fortune that Facebook is working on a “benchmark test that can be used to train a machine learning algorithm to automatically detect deepfakes.”

Alphabet Inc.’s Google updated its stance on deepfakes two weeks ago as a part of its broader ad policy.

“We’re clarifying our ads policies and adding examples to show how our policies prohibit things like ‘deep fakes’ (doctored and manipulated media), misleading claims about the census process, and ads or destinations making demonstrably false claims that could significantly undermine participation or trust in an electoral or democratic process,” Scott Spencer, VP of product management at Google Ads, said in a blog post.

Meanwhile, the volume of falsified online video content has been growing fast.

“Our research revealed that the deepfake phenomenon is growing rapidly online, with the number of deepfake videos almost doubling over the last seven months,” Giorgio Patrini, founder and CEO of Deeptrace, said in the company’s recent report.

“The speed of the developments surrounding deepfakes means this landscape is constantly shifting, with rapidly materializing threats resulting in increased scale and impact,” the report concludes. “It is essential that we are prepared to face these new challenges. Now is the time to act.”

  • The volume of deepfakes has soared since last year: the number of deepfake videos online almost doubled to 14,678, from 7,964 in December, according to a report published by cybersecurity startup Deeptrace in September.
  • Private companies like Amber Video and Truepic have been offering tools to detect falsified images and audio online. Blockchain-based startup Truepic is working with Qualcomm to improve its image verification technology.