Like gunpowder and the atomic bomb before it, artificial intelligence (AI) has the capacity to revolutionize war, analysts say, transforming human conflict in ways that were previously unimaginable and far more lethal.

The integration of this technology into weapons, military vehicles and software has reshaped the battle lines in conflicts such as the war in Ukraine, and it also threatens to alter the contest for global supremacy between China and the United States.

The issue had surfaced ahead of the summit that Joe Biden and Xi Jinping held on Wednesday in California, amid speculation that the two leaders might agree to ban the use of lethal autonomous weapons.

No such agreement materialized, however; the leaders of the United States and China left it to their teams of experts to keep studying the application of a technology that could revolutionize warfare in the air, at sea and on land.

Western experts argue that Beijing is investing massively in AI, to the point that it may soon change the balance of power in the Asia-Pacific region, and perhaps beyond.

And that would mean profound changes in the world order long dominated by the United States.

Robots, drones, torpedoes and other devices: thanks to technologies ranging from computer vision to sophisticated field sensors, all kinds of weapons could be transformed into autonomous systems controlled by AI algorithms.

But autonomy doesn’t mean a weapon can “wake up in the morning and decide to start a war,” said Stuart Russell, a computer science professor at the University of California, Berkeley.

“It means they have the ability to locate, select and attack human targets, or targets carrying human beings, without human intervention,” he explained.

The killer robots of countless science fiction stories and films are the obvious, much-discussed example, though Russell considers that perhaps “that is the least useful.”

Many of these weapons are still in the prototype phase, but the war that has unfolded since the Russian invasion of Ukraine offers a glimpse of the technology’s potential.

Remotely piloted drones are not new, but they are gaining more autonomy and are used by both sides, forcing troops to take shelter deeper underground.

That could be one of the biggest immediate changes, according to Russell. “A likely consequence of having autonomous weapons is that basically being visible anywhere on the battlefield would be a death sentence,” he said.

Autonomous weapons offer potential military advantages: they can be more efficient, they are cheaper, and they lack the human emotions, such as fear or rage, that surface in combat. But all of these advantages raise ethical problems.

For example, if they are cheap to manufacture, there is virtually no limit on the offensive power of an aggressor, according to Russell.

“I can just throw a million of them at once and if I want I can decimate an entire city or an entire ethnic group,” he said.

Autonomously operated submarines, ships and aircraft could also represent a major advance for surveillance or logistical support in remote or dangerous areas.

This is the goal of the “Replicator” program, launched by the Pentagon to counter China’s overwhelming numerical superiority in forces.

The goal is to be able to deploy numerous cheap, easily replaceable systems quickly across different scenarios, said US Deputy Secretary of Defense Kathleen Hicks.

She explained that if numerous devices are “launched into space at the same time… it becomes impossible to remove or degrade all of them.”

Many companies are developing and testing autonomous vehicles, among them California-based Anduril, whose autonomous submarine is “optimized for various defense and commercial missions” such as long-range oceanographic sensing, underwater battlespace reconnaissance, mine countermeasures, seabed mapping and anti-submarine warfare.

Driven by AI and capable of processing the vast amounts of data collected by satellites, radars, sensors and intelligence services, tactical software can give humans a genuine advance in military planning.

“Everyone [in the Department of Defense] needs to understand that information is really the ammunition in an AI war,” Alexandr Wang, head of the AI company Scale AI, stressed during a US congressional hearing this year.

“We have the largest military fleet in the world. That fleet generates 22 terabytes of data a day. So if we can properly configure and instrument the information being generated into sets of AI-ready data, we can create an information advantage that is practically unsurpassable when it comes to the military use of artificial intelligence,” he explained.

Scale AI has a contract to develop a language model on an intelligence network for a major US Army unit.

Its chatbot, named ‘Donovan’, should allow commanders to “plan and act in a matter of minutes, instead of several weeks,” the company says.

However, the United States’ top diplomat, Antony Blinken, has already made clear that there are limits, as in the case of decisions to use nuclear weapons.