The Center for Data Innovation, a think tank working at the intersection of data, technology, and public policy, has submitted a response to the EU AI Office's (AIO) call for submissions on the AI Act's Code of Practice. The Center's main aim is to develop public policies that maximize the benefits of data-driven innovation in both the public and private sectors. Through education and advocacy, it seeks to inform policymakers and the public about the opportunities and challenges associated with data, artificial intelligence, and emerging technology trends.
The AI Act, published in the Official Journal of the European Union on 12 July 2024, tasks the AIO with ensuring compliance with the new regulations. A key component of the Act is the Code of Practice, which sets out rules for providers of general-purpose AI (GPAI) models, including those posing systemic risk. The consultation on the Code covers transparency, risk identification and assessment, risk mitigation, and internal risk management for GPAI providers. The Center has put forward six recommendations to support AI innovation and adoption within this new framework.
On transparency and copyright-related provisions, the Center suggests that the AIO establish levels of disclosure tailored to different stakeholders across the AI value chain. This approach would help operationalize transparency while avoiding overwhelming recipients with unnecessary information. The Center also recommends that the AIO take account of existing sector-specific legislation to prevent redundancy and streamline compliance.
When it comes to risk identification and assessment, the Center suggests that the AIO should align its risk taxonomy and assessment measures with international standards to promote standardization and facilitate compliance for stakeholders. By leveraging established international frameworks, the AIO can enhance the effectiveness of its regulatory efforts and attract global talent and businesses to Europe.
Regarding risk mitigation measures for systemic risks, the Center emphasizes the need to distinguish actual from speculative risks so that resources are focused on tangible threats. By prioritizing observed risks, the Code can serve as a practical mechanism for enhancing AI safety and building trust among stakeholders.
In terms of internal risk management and governance for GPAI providers, the Center recommends the development of policies and procedures to operationalize risk management within organizations. This includes documenting and reporting incidents and implementing corrective measures to enhance overall governance and compliance.
In conclusion, the Center argues that promoting AI innovation and adoption should be the core focus of the Code. By developing the Code iteratively and grounding it in technical feasibility, the AIO can create a regulatory environment that encourages innovation without compromising safety or compliance. This approach would yield a more adaptable and responsive framework that keeps pace with the evolving landscape of AI technologies and their applications, ultimately fostering a culture of responsibility and trust within the AI community.