The agency’s comments largely focused on two areas: potential threats to competition from AI, and copyright.
Competition: The FTC cautioned that “the rapid development and deployment of AI also poses potential risks to competition” for several reasons:
- “The rising importance of AI to the economy may further lock in the market dominance of large incumbent technology firms. These powerful, vertically integrated incumbents control many of the inputs necessary for the effective development and deployment of AI tools, including cloud-based or local computing power and access to large stores of training data. These dominant technology companies may have the incentive to use their control over these inputs to unlawfully entrench their market positions in AI and related markets, including digital content markets.”
- “AI tools can be used to facilitate collusive behavior that unfairly inflates prices, precisely target price discrimination, or otherwise manipulate outputs.”
- “Many large technology firms possess vast financial resources that enable them to indemnify the users of their generative AI tools or obtain exclusive licenses to copyrighted (or otherwise proprietary) training data, potentially further entrenching the market power of these dominant firms.”
Copyright: The agency noted its interest in potential unfair practices involving AI:
- “consumers may be deceived when authorship does not align with consumer expectations, such as when a consumer thinks a work has been created by a particular musician or other artist, but it has been generated by someone else using an AI tool.”
- “questions surrounding liability issues arising from the development or deployment of generative AI . . . For instance, under certain circumstances, the use of pirated or misuse of copyrighted materials could be an unfair practice or unfair method of competition under Section 5 of the FTC Act”
The agency noted that the evolution of the fair use doctrine “could influence the competitive dynamics of the markets for AI tools and for products with which the outputs of those tools may compete.” For example, “conduct that may violate the copyright laws—such as training an AI tool on protected expression without the creator’s consent or selling output generated from such an AI tool, including by mimicking the creator’s writing style, vocal or instrumental performance, or likeness—may also constitute an unfair method of competition.” Likewise, “conduct that may be consistent with the copyright laws nevertheless may violate Section 5.”
The FTC’s comments highlight how AI intersects with many complex policy and legal issues beyond copyright law. The agency states that it has already been examining risks associated with AI, including consumer privacy, automated discrimination and bias, deceptive practices, and imposter schemes.
What does this mean for companies using generative AI tools? Potential FTC scrutiny of AI usage could come in multiple forms:
- Rulemakings establishing standards for appropriate consent, transparency, and data usage
- Investigations into potentially misleading or deceptive AI outputs
- Privacy and data-security investigations into the mishandling of data used to train AI systems
- Unfair-competition investigations
If the FTC uses its authority to regulate AI-related conduct that could be deemed unfair, deceptive, or anti-competitive (including by applying its expanded interpretation of its Section 5 power to pursue “unfair methods of competition” more broadly), it could profoundly shape the AI landscape. For example, in its pending antitrust complaint against Amazon, the FTC alleged that Amazon’s pricing algorithm “is an unfair method of competition in violation of Section 5” because it allegedly “raised prices by manipulating other online stores’ pricing algorithms into matching Amazon’s increases in the prices offered to shoppers.”
Given the aggressive stance of FTC Chair Lina Khan, companies should take a proactive approach to their use of generative AI. Steptoe’s artificial intelligence and antitrust groups can assist with an audit of internal AI practices (including consent, transparency, data handling, and competition issues).