Researchers used artificial intelligence to design 40,000 toxic molecules in just a few hours. Their aim was to draw attention to the technology's potential dangers. Experts stress, however, that turning such designs into a deadly weapon takes far more.

They call it a "wake-up call": Researchers at a US pharmaceutical company repurposed their machine-learning drug-discovery software to design toxic molecules. In less than six hours it generated 40,000 of them, though only in the model. Several further steps would be needed before toxic substances could actually be produced and used as weapons, for example in gas attacks. But the scientists warn in the journal Nature Machine Intelligence: "The reality is that this is not science fiction."

Collaborations Pharmaceuticals is just one small company among hundreds that use artificial intelligence (AI) for drug discovery and similar tasks. The authors ask how many of them have ever considered that their capabilities could be repurposed or even misused. Do technological advances such as AI increase the danger of new weapons that make people sick, paralyze entire bodily functions, or even kill?

There is no reason to panic, says political scientist Frank Sauer from the Bundeswehr University in Munich. "But the risk is there." The so-called dual-use problem, the fact that technologies or goods can serve both civilian and military purposes, is particularly pronounced in this field.

The natural sciences are particularly good at collecting data, says Alexander Schug from the Steinbuch Centre for Computing at the Karlsruhe Institute of Technology. There are, for example, well-prepared databases of drugs and protein structures, many of them publicly accessible. "That makes them predestined for AI."

But even a skilled programmer would first have to become familiar with the structure of those data sets, says Schug. "It's all on a very theoretical level at the moment." Besides sufficient training, sufficient computing capacity is also required. "You can do great calculations with graphics cards, but not really large-scale ones."

Schug concedes: "It could be a new type of hazard if new toxins and new synthetic routes are developed on the computer." But the big question is always whether the substances can be synthesized, that is, actually manufactured. Some compounds may not be stable; in other cases the necessary raw materials are lacking.

The possible mode of use would also depend on this: AI-designed molecules as a toxic gas that attacks cells via the respiratory tract? As an invisible substance absorbed through the skin when you touch something? Or manipulated pathogens spread via drinking-water systems?

International disarmament adviser Ralf Trapp speaks of a "whole chain of things" that are necessary before something can be used militarily. Technical advances shorten the time it takes for something to reach the market, as the development of the coronavirus vaccines showed. But implementing it as a weapon requires far more resources. "What happens in the laboratory is one thing. What can become of it is something completely different," says Trapp. "That doesn't mean we don't have to worry; it's always a race against time."

According to Una Jakob of the Peace Research Institute Frankfurt in Hesse, awareness of the potential danger is growing. In her view, researchers and companies could be better informed about the risks: "Not every scientist is aware that research can also be misused." This should be pointed out during training. The more research is done, the greater the danger that someone with less than good intentions will end up working in a laboratory.

The possibility of such misappropriation makes bioweapons programs difficult to detect, says Jakob. Many of the substances and materials involved also have medical or pharmaceutical uses. That makes the assessment difficult, unless inspectors are allowed to look behind the scenes of the arms industry. Sauer of the Bundeswehr University explains that this dual use, in both a positive and a negative sense, is also why it is not easy to ban individual substances.

Nevertheless, Jakob rates the risk of new, technologically sophisticated bioweapons as low: "I think it is unlikely that someone planning an attack would conduct complicated genetic research. They could just as well order ricin on the internet." That would be less time-consuming and less expensive.

And if the worst comes to the worst, technological progress could have an upside, as Trapp says: the ability to react more quickly. Ideally, the same scientific capacities could help find an antidote faster, even if the trigger was developed with AI.

(This article was first published on Wednesday, June 01, 2022.)