2 April 2025

On 17 March 2025, the Cyberspace Administration of China, the Ministry of Industry and Information Technology, the Ministry of Public Security, and the State Administration of Radio and Television (“Agencies”) jointly issued the “Measures for the Identification of Synthetic Content Generated by Artificial Intelligence” (“Measures”), which will come into effect on 1 September 2025.

The Agencies’ press release states that the Measures are intended to promote the healthy development of artificial intelligence (“AI”), standardise the identification of AI-generated synthetic content, protect the legitimate rights and interests of citizens, legal persons and other organisations, and safeguard the public interest. The Measures seek to address concerns brought to the fore by the recent rapid development of new technologies such as generative AI (“GenAI”) and deep synthesis. AI-generated or synthesised content refers to text, images, audio, video, virtual scenes and other information generated or synthesised using AI technologies. While such content promotes economic development, enriches online content, and facilitates public life, it also gives rise to problems such as the spread of false information and damage to the online ecosystem. To address these concerns, the Measures are intended as a means to “put an end to the misuse of AI generative technologies and the spread of false information”.

The Measures apply to internet information service providers that carry out AI-generated synthetic content identification activities in accordance with the following regulations (“Providers”):

  • Administrative Provisions on Algorithm Recommendations for Internet Information Services, which regulates algorithm technologies to provide users with information;
  • Administrative Provisions on Deep Synthesis of Internet Information Services, which regulates the application of deep synthesis technologies in the provision of internet-based information services; and
  • Interim Measures for the Administration of Generative Artificial Intelligence Services, which regulates the use of GenAI technologies in the provision of services that generate text, audio, video, images, and the like.

This article provides an overview of the Measures’ requirements.

Labelling requirements

The Measures stipulate that Providers must add:

  • explicit labels, in the form of text, sound, graphics or the like that users can clearly perceive: (a) at reasonable positions or areas of information content generated, synthesised or edited using AI technologies; and (b) in the files of AI-generated content that are downloaded, reproduced, or exported; and
  • implicit labels, embedded through technical measures in the metadata of AI-generated content files and not easily perceived by users, containing information such as the attributes of the content, the name or code of the Provider, and a content serial number.

The Measures also stipulate that no organisation or individual may maliciously delete, tamper with, forge, or conceal the labels on generated synthetic content required by the Measures, provide tools or services to others to carry out such malicious acts, or harm the legitimate rights and interests of others through improper labelling.

As a complement to the Measures, the mandatory national standard “Cybersecurity Technology – Labelling Method for Content Generated by Artificial Intelligence” (GB 45438-2025) (“National Standards”) has been issued by the State Administration for Market Regulation and the National Standards Administration and will take effect concurrently with the Measures on 1 September 2025. The National Standards specify the format of the explicit labels required by the Measures, such as inserting “AI” in text, as a superscript, or by voice, as well as the metadata to be added as implicit labels.
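The implicit-label requirement can be illustrated with a short sketch. The field names below (`Label`, `ContentProducer`, `ProduceID`) are assumptions for illustration only and are not taken from the National Standards; the sketch simply shows the kind of metadata record — a content attribute, the Provider’s name or code, and a content serial number — that the Measures contemplate embedding in a file’s metadata.

```python
import json

def build_implicit_label(provider_code: str, serial_number: str) -> str:
    """Build an illustrative implicit-label metadata record as a JSON string.

    The key names here are hypothetical placeholders, not the schema
    prescribed by GB 45438-2025.
    """
    label = {
        "AIGC": {
            "Label": "1",                      # assumed flag: content is AI-generated
            "ContentProducer": provider_code,  # name or code of the Provider
            "ProduceID": serial_number,        # content serial number
        }
    }
    return json.dumps(label, ensure_ascii=False)

# Example: a Provider embedding this record into a content file's metadata
print(build_implicit_label("ExampleAI-001", "SN-20250901-0001"))
```

In practice a Provider would write such a record into the metadata container of the specific file format (for example, image or video metadata fields) rather than emit standalone JSON; the JSON here is only a format-neutral stand-in.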

Verification obligations

When reviewing applications for launching or online release, internet application distribution platforms (“Platforms”) shall require internet application service providers to state whether they provide AI-generation or synthesis services. If they do, the Platforms must verify whether such providers implement the appropriate labelling mechanisms or functions as required by the Measures.

User agreements

Providers must clearly set out their labelling methods and requirements in their user service agreements so that users understand their obligations regarding content labelling. A Provider may supply generated or synthetic content without explicit labels at a user’s request, provided that the user agreement clarifies the user’s labelling obligations and usage responsibilities and that the Provider retains the relevant logs for not less than six months.

Declaration of use

The Measures require users of online information content transmission services who publish generated synthetic content to proactively declare this and to use the labelling functions provided by the Provider to label the content accordingly.