Top Chinese research institutions linked to the People’s Liberation Army have used Meta’s publicly available Llama model to develop an AI tool for potential military applications, according to three academic papers and analysts.
In a June paper reviewed by Reuters, six Chinese researchers from three institutions, including two under the People’s Liberation Army’s (PLA) leading research body, the Academy of Military Science (AMS), detailed how they had used an early version of Meta’s Llama as the base for what they call “ChatBIT”.
The researchers used an earlier Llama 13B large language model (LLM) from Meta, incorporating their own parameters to construct a military-focused AI tool designed to gather and process intelligence and to offer accurate, reliable information for operational decision-making.
Meta has embraced the open release of many of its AI models, including Llama. It imposes restrictions on their use, including a requirement that services with more than 700 million users seek a license from the company.
Its terms also prohibit use of the models for “military, warfare, nuclear industries or applications, espionage” and other activities subject to U.S. defence export controls, as well as for the development of weapons and content intended to “incite and promote violence”.
However, because Meta’s models are public, the company has limited ways of enforcing those provisions.
In response to Reuters questions, Meta cited its acceptable use policy and said it took measures to prevent misuse.
The Chinese researchers include Geng Guotong and Li Weiwei with the AMS’s Military Science Information Research Center and the National Innovation Institute of Defense Technology, as well as researchers from the Beijing Institute of Technology and Minzu University.
“In the future, through technological refinement, ChatBIT will not only be applied to intelligence analysis, but also … strategic planning, simulation training and command decision-making will be explored,” the paper said.
China’s Defence Ministry did not reply to a request for comment, nor did any of the institutions or researchers.
Reuters could not confirm ChatBIT’s capabilities and computing power, though the researchers noted that its model incorporated only 100,000 military dialogue records, a relatively small number compared with other LLMs.
The research comes amid a heated debate in U.S. national security and technology circles about whether firms such as Meta should make their models publicly available.
U.S. President Joe Biden in October 2023 signed an executive order seeking to manage AI developments, noting that although there can be substantial benefits to innovation, there were also “substantial security risks, such as the removal of safeguards within the model”.
This week, Washington said it was finalising rules to curb U.S. investment in artificial intelligence and other technology sectors in China that could threaten national security.