China’s Generative AI Measures and the Need for Global AI Norms

Almost a year after the public release and mainstream adoption of ChatGPT, the global debate surrounding the need for artificial intelligence (AI) norms and regulations has gained significant traction. Generative AI applications such as ChatGPT, Bard, DALL-E and others have shown the world just how easy it is to generate text, images, audio and video within seconds, and to simulate human intelligence and ability. With AI capabilities advancing at a rapid pace, the content generated by AI applications will only improve and become harder to distinguish from genuine human-created content. The use of deepfakes, disinformation and other malicious applications of generative AI will pose a number of ethical, moral and philosophical questions to humanity in the near future.

To deal with such challenges, China’s measures for the regulation of generative AI came into effect on 15 August 2023. The measures were intended to “promote the healthy development and standardized application of generative AI, safeguard national security and social public interests, and protect the legitimate rights and interests of citizens, legal persons, and other organizations.” The regulations are quite comprehensive, certainly more so than any previous AI regulations from China or any other state.

The AI measures aim to strike a balance between “development and security”, ensuring that malicious uses of AI are curbed without halting the progress of China’s technology sector. Through the recently announced measures, China holds that generative AI services should “adhere to the core values of socialism, and must not generate incitement to subvert state power, or endanger national security and interests.” It aims to ensure this by monitoring generative AI providers at essentially every step of the process, from algorithm design and data selection to model training, content moderation, user protection and the actual use of the application. This would be a monumental task, and certainly easier said than done. The measures also mandate that labels be placed on all AI-generated content, including photos and videos.

At the same time, China wants to “encourage the independent innovation of basic technologies such as generative artificial intelligence algorithms, frameworks, chips, and supporting software platforms.” It also wants to participate in the formulation of international rules related to generative AI. This is in line with China’s overall AI ambitions, as outlined in its New Generation Artificial Intelligence Development Plan. China aims to become the global leader in AI by 2030, while also shaping global AI norms. There is certainly a need for such norms, not just for generative AI applications but for the technology as a whole. So far, however, every global attempt to regulate AI has failed.

The US has responded to China’s AI measures by launching its own generative AI task force on 10 August 2023, largely to evaluate the technology’s potential for defence purposes. Task Force Lima, as the Pentagon dubbed it, will “explore the use of this technology (generative AI) and the potential of these models’ scale, speed, and interactive capabilities to improve the department’s mission effectiveness while simultaneously identifying proper protection measures and mitigating a variety of related risks”. The US Department of Defense (DOD) clearly views generative AI as having a range of military applications.

The announcement went on to state that Task Force Lima will “develop, evaluate, recommend, and monitor the implementation of generative AI technologies across DOD”. By stating that generative AI will be applied ‘across the DOD’, the US has made clear exactly how it views AI: as an enabling technology that has value in every aspect of the military. By leveraging generative AI across the DOD, the Pentagon hopes to enhance its operations in areas such as warfighting, readiness, health and policy. Again, this is consistent with the previous US strategic approach towards AI. The US sees AI as a major strategic technology, and one that could ultimately decide its future global competition with China.

With the US and China locked in a global AI competition, and advancements in AI coming at a rapid pace, the need for global AI norms and regulations is stronger than it has ever been. The reality, unfortunately, is that states currently have completely different perspectives on how AI should be governed. The US, for example, has given free rein to its technology sector, and has so far been hesitant to introduce any AI curbs, despite significant pleas from its own industry leaders. China, on the other hand, prefers total state oversight of all generative AI applications. This difference in approach makes any sort of progress towards global AI norms extremely difficult. The likely scenario is that states will develop their own AI norms and regulations within their own borders, such as the ones China announced. The European Union has also proposed its own framework for the regulation of AI, and other states will likely follow suit.

Still, the lack of global AI norms will certainly become an issue in the near future. During a crisis, deepfakes and disinformation could easily wreak havoc and cause serious misinterpretation of an adversary’s intentions. States seem to be waiting for a major AI catastrophe to happen before working towards global AI norms. Imagine if a modern Cuban Missile Crisis were to occur between the US and China, and malicious deepfakes were spread across social media. In the midst of such a crisis, minutes would feel like hours, and serious escalation would always be a possibility. Global AI norms, then, are the need of the hour.

About Shayan Hassan Jamy
The writer is currently enrolled in the MS Strategic Studies program at Air University, Islamabad. He has an interest in artificial intelligence, emerging technologies and global power competition.
