G7 Competition Authorities Address AI Competition Challenges in Joint Statement – The Washington Report


On October 4, the US FTC and DOJ joined the other G7 competition authorities and policymakers in releasing a joint Digital Competition Communique on competition concerns in the AI ecosystem at the conclusion of a summit on the same topic. The communique identifies “several significant competition concerns” in the AI ecosystem and presents guiding principles for promoting and safeguarding AI competition.
Although the communique does not put forth concrete policy proposals or enforcement actions, its release underscores that the G7 member countries share concerns about competition, or the lack thereof, in the AI ecosystem and intend to cooperate on this subject. With the EU’s AI legislation becoming effective and the United States Congress not moving forward on legislation of its own, the EU’s approach may become the tip of the enforcement spear around the world.
International Eyes on AI
This is not the first time that international authorities have jointly addressed AI. In July, as we covered, the top competition authorities in the US, the UK, and the EU released a Joint Statement on Competition in Generative AI Foundation Models and AI Products. That joint statement highlighted risks to competition in the AI ecosystem and presented shared principles for fostering innovation in the ecosystem. Considering that AI competition concerns transcend international borders, the joint statement underscored the competition authorities’ willingness to collaborate with other jurisdictions to promote and safeguard AI competition.
The G7 Communique
In the communique, the G7 member countries’ enforcement authorities recognize “that AI holds a transformative promise for our society and economies,” potentially unleashing a flurry of new innovations and technological developments. It is therefore essential, according to the countries, “that we maintain open, fair, and contestable markets in order to ensure that our economies benefit from” innovations stemming from AI.
The communique also acknowledges that several factors unique to the AI landscape heighten AI competition risks. Many factors, including network and platform effects, high costs, economies of scale and scope, the accumulation of proprietary data, data feedback loops, and the availability of essential inputs for AI development, may “make entry difficult and may reward first movers,” exacerbating the tendency toward concentration.
Five Concerns about AI Competition Risks
The communique identifies “several significant competition concerns” in the AI ecosystem, including:

• The concentrated control of crucial AI inputs may raise competition concerns, potentially putting a small number of firms in key market positions enabling them to reduce competition, restrict market access, or exploit bottlenecks.
• Dominant tech platforms may exploit their market power to “[limit] consumer choice and [raise] barriers to entry for smaller firms and startups.”
• Firms with significant market power in certain digital markets may also potentially “entrench or extend that power into adjacent AI markets.” “Through network effects, data feedback loops, and cross-ecosystem integration,” these firms’ activity may stifle competition.
• Partnerships between large digital market incumbents and AI firms raise “concerns that these incumbents could suppress competition in AI-related markets.” Through alliances and strategic talent acquisition, firms may cement their dominance while avoiding merger scrutiny.
• AI algorithms may also power collusion between firms, “making it easier for them to coordinate prices or wages, share competitively sensitive information, and undermine competition.” Even where algorithms are the facilitating mechanism, collusion is still unlawful.
Spillover Effects of AI Competition
AI competition dynamics may have spillover effects in other sectors of society. The communique highlights three main areas that may be affected:

• Human Innovation and Copyright. The communique acknowledges that AI systems “heavily rely on human creations – knowledge, art, writing, and ideas” – for inputs and training. As a result, AI systems could potentially harm innovators and content creators, “leaving them undercompensated for their work and stifling human creativity and innovation.” In the absence of sufficient competition, these potential harms may be increased, with dominant AI firms “exercising monopsony power over creators with respect to the use of their works and preventing smaller AI firms from accessing the same works.” A competitive market for copyrighted input data, in which competition and consent are better safeguarded, would incentivize further investment in and the creation of more content for training AI models.
• Consumer Protection. AI models and outputs can also potentially affect and harm consumers, misleading them, preventing them from making informed choices, and influencing their preferences. “Ensuring that AI systems do not distort consumer decision-making processes through false or misleading information” is necessary to promote consumer trust and foster a healthy competitive environment.
• Privacy and Data Protection. AI models are often trained and built off “the collection, aggregation, processing, and use of vast amounts of personal data.” The countries “affirm that such data must be handled in full compliance with existing privacy rules and laws.”
The risks that AI poses to human innovation and copyright, consumer protection, and privacy and data protection “can significantly affect the diversity of voices, the range and quality of choices available to consumers and businesses, and the quality and reliability of information available to the public.”
Six Guiding Principles for Safeguarding AI Competition
To respond to the concerns around AI competition, the communique outlines six guiding principles that “aim to enable contestability and foster innovation”:
• Fair Competition. The AI ecosystem should remain competitive and “free from distortions caused by competitively harmful behaviors of incumbent companies.” The countries’ competition authorities “aim to take steps to prevent incumbent digital and technology companies from leveraging their dominant positions to foreclose competition, exploiting existing and emerging bottlenecks across the AI stack, engaging in unfair dealing, and impeding innovation that would benefit competition.”
• Fair Access and Opportunity. Barriers to market entry can impact and stall innovation and growth within the AI ecosystem. “Fair access to key inputs is necessary for the development of AI systems” throughout the AI stack, from applications to foundation models and AI chips.
• Choice. Consumers and businesses alike benefit from having choices among a range of products. In AI markets, diverse business models, including “public, freely accessible foundation models, licensing models, and proprietary systems,” help provide choice. The competition authorities vow to “remain vigilant in identifying and addressing any threats to consumers and businesses being able to meaningfully make choices among a variety of options.”
• Interoperability. Interoperability and open technical standards can promote innovation, “mitigating the concentration of market power and preventing consumers and businesses from being locked into closed ecosystems.” The competition authorities will “closely scrutinize any claims that interoperability requires sacrifices to privacy and security of AI models and systems.”
• Innovation. Recognizing that “innovation lies at the heart of economic growth,” the competition authorities are committed to fostering innovation within the AI ecosystem.
• Transparency and Accountability. Transparency fosters trust in AI systems. Users of such systems should be made aware of “the types and sources of data used to train AI models,” as well as the models’ limitations in terms of reliability and accuracy.
Five Commitments to Safeguarding Competition
Reaffirming their “shared goal of using available enforcement and regulatory tools to protect competition in AI markets,” the competition authorities outlined five shared commitments to promoting and safeguarding competition in the AI ecosystem.
• Vigorous Antitrust Enforcement. The competition authorities are “committed to using our respective powers and legal frameworks” to promote fair competition in AI markets.
• Digital and AI-specific Regulation. The authorities also recognize that technological advancements and the evolving nature of AI models necessitate “adaptive and forward-looking policies” in AI markets.
• Strengthening Digital Capacity. They also pledge to deepen their understanding of AI models and enhance their digital capabilities, tools, and skillsets to “better identify competitive issues early and to carry out effective enforcement.”
• Enhanced International Cooperation. The authorities reaffirm their “commitment to dialogue and knowledge sharing among G7 competition agencies and policymakers.”
• Multidisciplinary Approach. AI-related competition issues intersect with broader policy dimensions, necessitating a multidisciplinary approach to confronting these issues.
Conclusion: Potential Increased Scrutiny
Sometimes the fact of a joint document can be as important as its substance. While the communique does not announce concrete enforcement actions or policies, its release highlights a consensus among the G7 competition authorities on the importance of promoting and safeguarding a competitive and robust AI ecosystem. Interested stakeholders in all of the G7 countries should pay attention to future activity from their competition authorities. In these Washington-oriented reports, we will continue to closely monitor and analyze activity by the DOJ and FTC on AI competition issues.
