Top Chinese research institutions linked to the People’s Liberation Army have reportedly used Meta’s Llama model to develop an AI tool for potential military applications. The Academy of Military Science (AMS), a leading research body under the PLA, was involved in this development, according to three academic papers and analysts.
In a June paper reviewed by Reuters, Chinese researchers described how they used an early version of Meta’s Llama model to build “ChatBIT,” a military-focused AI tool designed to gather and process intelligence and to support operational decision-making. The tool was fine-tuned and optimized for dialogue and question-answering tasks in the military domain, and reportedly outperformed some comparable AI models.
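For context, fine-tuning an open-weight Llama-family model for dialogue and question answering is typically done with standard open-source tooling. The sketch below is a minimal, hypothetical example using Hugging Face’s transformers and datasets libraries; the model name, data file, and hyperparameters are illustrative assumptions and do not describe the researchers’ actual setup, which the paper does not make public.

```python
# Hypothetical sketch: supervised fine-tuning of an open-weight Llama-style
# model on dialogue/question-answer pairs. Model name, dataset, and
# hyperparameters are illustrative assumptions only.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumed open-weight base model

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
tokenizer.pad_token = tokenizer.eos_token  # Llama tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Assume a JSONL file of {"prompt": ..., "response": ...} dialogue pairs.
dataset = load_dataset("json", data_files="dialogue_pairs.jsonl")["train"]

def tokenize_pair(example):
    # Concatenate prompt and response into a single causal-LM training sequence.
    return tokenizer(
        example["prompt"] + "\n" + example["response"],
        truncation=True,
        max_length=1024,
    )

tokenized = dataset.map(tokenize_pair, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="qa-finetune",
        per_device_train_batch_size=2,
        num_train_epochs=1,
        learning_rate=2e-5,
    ),
    train_dataset=tokenized,
    # mlm=False gives standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The point of the sketch is simply that such adaptation requires only the publicly released model weights and commodity tooling, which is central to the enforcement concerns discussed below.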
Meta requires a license for certain uses of its models and has stated that any use by the PLA is unauthorized and contrary to its acceptable use policy. The United States is closely monitoring competitors’ AI capabilities, including China’s advances in the field.
The research conducted by Chinese institutions raises concerns about the misuse of openly released AI models for military purposes. The development of such tools has implications for national security and for technological competition between the United States and China. The Pentagon is assessing the capabilities of open-source AI models and the potential risks associated with their widespread availability.
Although Meta restricts the use of its models for military applications, the public availability of model weights makes such restrictions difficult to enforce. China’s efforts to adapt Western-developed AI models for military and domestic security purposes highlight the need for increased scrutiny and regulation in this rapidly evolving field.
As China continues to invest heavily in AI research and development, experts warn that excluding the country from global advancements in AI may not be feasible. Collaboration between Chinese and American scientists in AI research underscores the interconnected nature of technological innovation and the challenges of regulating its applications.
The evolving landscape of AI development and deployment requires a comprehensive approach to ensure that these technologies are used responsibly and ethically. The intersection of national security, technological competition, and ethical considerations in AI research presents complex challenges that will require ongoing monitoring and evaluation by governments and regulatory bodies.