Steptoe has been tracking the fast-moving developments in artificial intelligence both in the United States and internationally. Below is an update on recent legal and policy developments related to AI, with a focus on intellectual property and high-profile policy issues. 

AI Copyright Update:

  • Several authors brought a proposed class action lawsuit against OpenAI for using their works to train its ChatGPT software: “OpenAI incorporated Plaintiffs’ and Class members’ copyrighted works in datasets used to train its GPT models powering its ChatGPT product. Indeed, when ChatGPT is prompted, it generates not only summaries, but in-depth analyses of the themes present in Plaintiffs’ copyrighted works, which is only possible if the underlying GPT model was trained using Plaintiffs’ works.” The complaint notes that some of ChatGPT’s training data was allegedly derived from “infamous shadow library websites . . . which host massive collections of pirated books, research papers, and other text-based materials.” The case is Chabon et al. v. OpenAI, Inc. et al., 3:23-cv-04625 (N.D. Cal.) (Sept. 8, 2023). 
  • The US Copyright Office Review Board has rejected copyright protection for an AI-generated artwork that won a Colorado State Fair art contest last year. The Board found that the work “contains more than a de minimis amount of content generated by artificial intelligence,” which cannot be copyrightable because “to be copyrightable, a work must qualify as an ‘original work of authorship,’ which excludes works produced by non-humans.”
  • Microsoft announced a policy regarding its Copilot AI system to defend customers sued for copyright infringement for content generated by Copilot: “Specifically, if a third party sues a commercial customer for copyright infringement for using Microsoft’s Copilots or the output they generate, we will defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit, as long as the customer used the guardrails and content filters we have built into our products.”

AI Policy Update:

  • On September 12, the Senate held two concurrent hearings regarding AI governance, offering insight into how Congress may approach future AI legislation:
    • The Senate Judiciary Subcommittee on Privacy, Technology, and the Law held a hearing entitled “Oversight of A.I.: Legislating on Artificial Intelligence.” Senators Hawley (R-Mo.) and Blumenthal (D-Conn.) released a bipartisan framework for regulating AI. Among other things, it would require companies developing AI models to register with an independent oversight body, clarify that Section 230 does not apply to AI, and require transparency about the limits and uses of AI models.
    • The Senate Commerce Subcommittee on Consumer Protection, Product Safety and Data Security held a hearing entitled “The Need for Transparency in Artificial Intelligence.” Members and witnesses discussed potential transparency and risk-mitigation principles that could be incorporated into a potential AI framework. Witnesses included the CEO of The Software Alliance, the Dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, the Executive Director of WITNESS, and the Executive Vice President for Policy at the Information Technology Industry Council.
  • Senate Majority Leader Chuck Schumer (D-N.Y.) also convened an AI forum with national lawmakers and tech leaders, including Elon Musk, Bill Gates, Sam Altman of OpenAI, Satya Nadella of Microsoft, and Jensen Huang of Nvidia.
  • California Governor Newsom signed an executive order relating to AI, which authorizes a report on the benefits and potential harms of the technology.
  • California state Sen. Scott Wiener (D) proposed a plan to regulate AI, noting that: “The introduction of ChatGPT in November 2022 demonstrated that ‘generative AI’ had progressed to the point that companies could release products that produce output that is often indistinguishable from what a human might produce.”
  • Officials from G7 nations have agreed to create “voluntary guidelines” for generative AI. The code of conduct “is expected to include commitments from companies to take steps to stop potential societal harm created by their AI systems; to invest in tough cybersecurity controls over how the technology is developed; and to create risk management systems to curb the potential misuse of the technology.” The proposal will be presented to G7 leaders as early as November.
  • Some EU member states are pushing back against a draft Artificial Intelligence Act as proposed by the European Parliament. The Act would require AI companies to disclose any copyrighted material used to develop their systems. There is concern that the rule could be too onerous because separating copyright-protected data from non-protected data may prove difficult in practice.
  • Several experts testified before a UK parliamentary committee regarding intellectual property issues surrounding generative AI, noting that there is currently a lack of clarity in the law.