This week the European Union fired what can be considered the starting gun for a definitive regulatory framework around the new artificial intelligence tools. After a month of deliberation, the European Parliament approved by majority the start of a legislative process that will now involve the different members of the Union, but which begins from defined guidelines based on a proposal from the European Commission, whose first draft was presented two years ago.

Parliament maintains all the elements of that proposal, but has added several modifications that also seek to regulate generative engines such as ChatGPT or Bard, as well as general-purpose artificial intelligence systems, that is, artificial intelligences designed to perform a wide range of tasks in a way comparable to human intelligence. These tools have become enormously popular in recent months and, unsurprisingly, were not in the Commission's original proposal.

Parliament’s intention is to cover several fronts with the new legislation, which could start to take shape later this year.

True to the spirit of the Commission's initiative, the legislation will prohibit, for example, the use of artificial intelligence in real-time biometric identification systems, and will limit its use on previously recorded footage to security forces with judicial authorization. It will also prevent video recorded by security cameras from being used to extract content for training facial recognition systems.

These measures, according to European Commissioner Margrethe Vestager, are aimed at preventing the racial biases that this type of tool has been shown to exhibit.

Another important point of the future legislation is that it adds the algorithms social networks use to recommend content to the list of "high risk" artificial intelligence tools. Once on this list, these algorithms are expected to face greater supervision by the various governments and to be audited frequently.

Finally, the approved text also specifies that generative systems such as ChatGPT or Bard will have to comply with certain transparency requirements. Their output will have to be easily identifiable as content generated by artificial intelligence, using watermarks or similar technologies, and the systems must include safeguards against the generation of illegal content.

The companies responsible for these models will also have to disclose which sources were used during initial training and ensure that the training does not infringe copyright law.

"We want to see the positive potential of AI for creativity and productivity harnessed, but we will also fight to protect our position and counter the dangers to our democracies and freedoms," explains Brando Benifei, an Italian member of the European Parliament.
