The British Government will host the first international conference next autumn to assess “the most significant risks” of Artificial Intelligence (AI) and to create a body charged with ensuring the safety of the technology on a global scale. After the war in Ukraine, artificial intelligence was the priority issue for Prime Minister Rishi Sunak at his meeting at the White House with President Joe Biden.

“Artificial Intelligence has incredible potential to transform our lives for the better, but we need to ensure it is developed and used in a safe way,” Sunak said, announcing the UK’s willingness to lead the global effort.

“Our objective will be to bring together countries, researchers and technology companies to agree on a series of measures that allow us to monitor the risks,” the Prime Minister stressed, pointing to his country’s advantageous position, with more than 50,000 jobs linked to the AI sector, valued at around €4.5 billion.

The announcement comes days after a letter signed by 350 experts in the field – including the CEO of OpenAI, the developer of ChatGPT – warned that the technology could lead to the extinction of humanity. According to Professor Nick Bostrom, head of the Future of Humanity Institute at the University of Oxford, artificial intelligence may become the “biggest existential risk”, ahead of climate change and other threats.

Matt Clifford, who heads the British government’s AI task force, raised the alarm this week in an interview on TalkTV that has resonated widely in British society: “In the industry we talk about short-term and long-term threats, which are really quite scary… We’re talking about the ability of AI to create new biological weapons or launch large-scale cyberattacks, really bad things.”

“We could face very dangerous threats to humanity just from the models that could be ready within two years,” Clifford added. “It sounds like a movie script, but it’s a real threat. If we don’t know how to control this, we’re going to create the potential for all kinds of risks.”

Reactions to the announcement of the AI summit were swift. Yasmin Afina, of the Chatham House Digital Society Initiative, warned that the meeting promoted by the British Government could clash with reality and prove “too ambitious”, starting with the fact that “none of the leading AI companies are based in the UK”.

“Today there are very marked differences between the European Union and the United States in areas such as governance and regulation of new technologies,” Afina said. “The UK can try to reconcile the different perspectives, but there are already initiatives, like the UN Global Digital Compact, that start from a broader base.”

For its part, the European Union is drafting its own Artificial Intelligence Act, but EU authorities have acknowledged that, even in the best case, it would still take two years to enter into force. The EU’s digital chief, Margrethe Vestager, said last month that Brussels is already working on a “voluntary code” that could also be signed by the AI sector in the United States.

China has taken the first steps towards introducing regulations, including a proposal that would make it mandatory to notify consumers when a company uses an AI algorithm. The British government in fact plans to invite China to the global summit, given the Asian country’s weight in the sector. The move has ruffled feathers among supporters of a ‘hard line’ towards Beijing, such as the former Conservative Party leader Iain Duncan Smith, who used the occasion to denounce China’s record of “flouting its international trade commitments” and using sophisticated technology to spy on its own citizens.

According to a British government spokesman, the countries invited to the summit will be all those who “share the recognition that AI offers great opportunities, but who admit the need to put guardrails around its development.” Russia will not be invited, both because of the war in Ukraine and because it is not considered a power in the sector.

The initiative has been received with suspicion and skepticism in some quarters. The American billionaire Alex Karp, chief executive of Palantir, told the BBC that it is the technology companies that have not yet commercialized AI products that want to press the “pause” button: “The race is on, and the only question is this: do we continue to lead or do we cede leadership?”
