Steptoe has been tracking the fast-moving developments in artificial intelligence both in the United States and internationally. Below is this week’s update on legal and policy developments related to AI, with a focus on intellectual property and high-profile policy issues.
AI Intellectual Property Update:
- Following on the heels of a similar announcement from Microsoft, Google has announced that “if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.” Google is offering two protections: (1) Google’s use of training data: “our training data indemnity covers any allegations that Google’s use of training data to create any of our generative models utilized by a generative AI service, infringes a third party’s intellectual property right”; and (2) AI output: “The generated output indemnity . . . now also applies to allegations that generated output infringes a third party’s intellectual property rights.” Note that the first prong (training data) covers “any of our generative models utilized by a generative AI service,” but the second prong (output) covers only specific listed services, including Duet AI in Google Workspace and some Google Cloud services such as Vertex AI; it does not include Google Bard. Regarding the output indemnity, Google adds an important proviso: “this indemnity only applies if you didn’t try to intentionally create or use generated output to infringe the rights of others, and similarly, are using existing and emerging tools, for example to cite sources to help use generated output responsibly.”
- The Canadian government has launched a public consultation (similar to a notice of inquiry) on the “implications of generative artificial intelligence for copyright.” Specifically, the consultation will examine: (1) the use of copyright-protected works in the training of AI systems; (2) authorship and ownership rights related to AI-generated content; and (3) liability for AI-generated works. Comments are due by December 4, 2023.
- Google now gives website publishers a way to opt out of having their content used to train Google’s Bard and Vertex AI models. A website can update its “robots.txt” file so that it continues to be indexed by Google Search but is excluded from future AI training.
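Google’s announcement implements this through a new “Google-Extended” product token that publishers reference in robots.txt. A minimal sketch of what such a file looks like, and how it parses, using Python’s standard-library robots.txt parser (the site and path are hypothetical):

```python
from urllib import robotparser

# Example robots.txt: disallow the "Google-Extended" token (which governs
# use of content for Bard / Vertex AI training) while still allowing
# Google's search crawler to index the site.
ROBOTS_TXT = """\
User-agent: Google-Extended
Disallow: /

User-agent: Googlebot
Allow: /
"""

parser = robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# Search indexing remains permitted; AI-training use is refused.
print(parser.can_fetch("Googlebot", "https://example.com/article"))        # True
print(parser.can_fetch("Google-Extended", "https://example.com/article"))  # False
```

Per Google’s documentation, Google-Extended is a control token rather than a separate crawler: pages are still fetched by Googlebot as usual, but content under disallowed paths is excluded from AI training.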
- Adobe has introduced a new generative AI model. The “Firefly Image 2 Model . . . generates higher quality outputs with better model architecture and metadata, training algorithm improvements, and better image generation capabilities.”
- The new “Generative Match” feature lets users “generate images based on the look and feel of an existing image and create images with a consistent style.”
- Adobe notes that: “We’ve trained our Firefly generative models responsibly on licensed content and public domain content for which copyright has expired . . . Adobe is also offering enterprise customers the opportunity to obtain IP indemnification for Firefly-generated content.” The company also states that “we require users to confirm they have the right to use any work that they upload to Generative Match as a reference image.”
- A startup company called Spawning is launching a new tool to help websites block AI from scraping content. The tool, called “Kudurru,” “is a way to indicate that you do not wish your media to be scraped. Unlike an opt-out list, Kudurru doesn’t give scrapers a choice about whether or not to respect your wishes. Kudurru identifies and blocks scraping automatically.”
- More than 40 percent of the Forbes Global 2000 companies do not have control over their “.AI” internet domain names, potentially exposing them to fraud and brand infringement, according to a new study from domain registrar CSC. The .AI domains have become increasingly popular amid the rise in artificial intelligence, but third parties are scooping up the domains and misusing them.
AI Litigation Update:
- Google has filed a motion to dismiss in one of the several pending class action lawsuits challenging the use of scraped data for generative AI. In its motion, Google argued the complaint would “take a sledgehammer not just to Google’s services but to the very idea of Generative AI.”
- A recruitment platform that uses AI to publish job postings on Google sued a competitor, accusing it of illegally scraping its proprietary database. The case is Jobiak LLC v. Aspen Technology Labs Inc., 2:23-cv-08728, in the U.S. District Court for the Central District of California.
AI Policy Update:
- A bipartisan group of senators is circulating draft legislation to protect artists against unauthorized digital replicas created by AI technology. The so-called No Fakes Act would prohibit the production or distribution of AI-generated replicas of an individual in audiovisual works or sound recordings without that person’s consent.
- President Joe Biden should incorporate the administration’s AI Bill of Rights into his forthcoming executive order, 16 House and Senate Democrats wrote in a letter last week. The bill of rights and “detailed best practices” should be “binding when federal agencies develop, purchase, fund, deploy, or regulate the use of automated systems,” they wrote.
- The Biden administration said it would tighten rules against exporting advanced microchips to China, escalating an effort to slow Beijing’s development of artificial intelligence and other technologies that could assist its military.
- Stanford University researchers issued a report measuring the transparency of artificial intelligence foundation models from companies like OpenAI and Google, and the authors urged the companies to reveal more information such as the data and human labor used to train models.
- France’s Society of Authors, Composers and Publishers of Music (Sacem) has announced that it is opting out of making its members’ work freely available for use in the development of artificial intelligence tools. “From now on, data mining of works in Sacem’s repertoire by entities developing artificial intelligence tools will require prior authorisation from Sacem, in order to ensure fair remuneration for the authors, composers and music publishers it represents,” the group said in a statement.
- The agenda for UK Prime Minister Rishi Sunak’s AI Safety Summit has been published. The UK will use the summit to showcase its “Frontier AI Taskforce,” an advisory panel reporting directly to the prime minister that is in talks with major AI companies like OpenAI, Anthropic and Google to gain access to their models and evaluate risks.