Below is this week’s tracker of the latest legal and regulatory developments in the United States and in the EU. Sign up here to ensure you do not miss an update.

AI Intellectual Property Update:

  • The SAG-AFTRA strike has ended, with union members set to vote on a proposed contract. The tentative agreement’s AI provisions state that if a producer plans to create a computer-generated character whose main facial features clearly resemble those of a real actor (using the actor’s name and face to prompt the AI), the producer must first obtain the actor’s permission. The agreement also requires that performers be compensated for the creation and use of any digital replicas of themselves.
  • Adobe is working on a new AI-powered audio tool designed to break apart different layers of sound within a single recording. Called “Project Sound Lift,” the tool can automatically detect each sound and spit out separate files containing the background noise and the track users want to prioritize, such as someone’s voice or the sound of an instrument.
  • YouTube plans to adopt new disclosure requirements and content labels for content created by generative AI. Starting next year, the video platform will “require creators to disclose when they’ve created altered or synthetic content that is realistic . . . For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.” Penalties for not labeling AI-generated content could include takedowns and demonetization.
  • Some of Bing’s search results now have AI-generated descriptions, according to a blog post from Microsoft. The company will use GPT-4 to garner “the most pertinent insights” from webpages and write summaries beneath Bing search results, and users can check which search result summaries are AI-generated.

Continue Reading AI Legal & Regulatory News Update—Week of 11/19/23

AI Intellectual Property Update:

  • Joining other large technology companies, OpenAI has announced a “copyright shield” to indemnify OpenAI users against copyright infringement claims over works created using its tools. Under the program, “we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement.” The feature applies to ChatGPT Enterprise and OpenAI’s developer platform.

General AI Update:

  • OpenAI also made several other major announcements at its developer conference, including:
    • Custom versions of ChatGPT called “GPTs” that can be used for specific purposes such as “a natural language-based data analysis app, a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, [or] a smart visual canvas.” The assistants can be augmented with data outside the core model, “such as proprietary domain data, product information or documents provided by your users.” OpenAI states that such data will not be used to train future versions of its model.
    • The launch of a “GPT Store” for people to sell their customized models.
    • GPT-4 Turbo, which has knowledge of world events up to April 2023 and can accept the equivalent of up to 300 pages of text in a single prompt.
  • AI remains a major point of contention in the ongoing strike negotiations between SAG-AFTRA and the Alliance of Motion Picture and Television Producers (AMPTP), with SAG-AFTRA pushing back on an AI clause included in the latest offer from the studios and streamers. The AMPTP’s proposed contractual clauses on AI would reportedly require studios and streamers to pay to scan the likenesses of certain performers, and would allow studios and streamers to secure the right to use scans of deceased performers without the consent of their estates or SAG-AFTRA.

Continue Reading AI Legal & Regulatory News Update—Week of 11/5/23

General AI Update:

  • The News Media Alliance, a trade group that represents more than 2,200 publishers, has released a white paper that it says shows that AI developers “significantly overweight publisher content by a factor ranging from over 5 to almost 100 as compared to the generic collection of content that the well-known entity Common Crawl has scraped from the web.”  
  • OpenAI is rolling out a new beta tool that lets ChatGPT users upload files, meaning the chatbot can summarize data, answer questions, or generate data visualizations from the uploaded file based on prompts. While this feature has been available to ChatGPT Enterprise subscribers, this is the first time it has been offered to individual users.
  • Forbes has launched a beta generative AI search platform called Adelaide to provide personalized searches for readers. Built with Google Cloud, Adelaide was trained on the past 12 months of Forbes news coverage and allows users to ask specific questions or input general topic areas, receiving recommended articles along with a summarized answer to their question.
  • LinkedIn is introducing AI features for paid users. Subscribers to LinkedIn Premium will now have access to AI tools that can assess whether they are a good candidate for a given job based on the information in their profile, and recommend profile changes to make them more competitive.

Continue Reading AI Legal & Regulatory News Update—Week of 10/29/23

On October 30, 2023, President Biden issued a landmark Executive Order (EO) addressing artificial intelligence. The EO builds upon the White House’s Blueprint for an AI Bill of Rights released last year and includes requirements for cabinet secretaries, the corporate sector, and various White House offices, as well as proposed steps for independent federal agencies.

Key Definitions.

The EO sets forth several key definitions, including the definitions of AI, AI model, AI system, generative AI, and machine learning, among others.

  • “Artificial intelligence” or “AI”: A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
  • “AI model”: A component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.
  • “AI system”: Any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.
  • “Generative AI”: The class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.
  • “Machine learning”: A set of techniques that can be used to train AI algorithms to improve performance at a task based on data.

Continue Reading Biden Administration Issues Executive Order on Artificial Intelligence

Below is this week’s tracker of the latest legal and regulatory developments in the United States and internationally:

AI Intellectual Property Update:

  • Former Arkansas Governor Mike Huckabee is a plaintiff in a new class action brought against Meta, Microsoft, EleutherAI, and Bloomberg. The complaint alleges that these companies used the plaintiffs’ works without permission to train their generative AI large language models (LLMs). The lawsuit focuses on the “Books3” repository, “a dataset of information scraped from a large collection of approximately 183,000 pirated ebooks, most of which were published in the past 20 years.” The case is Huckabee et al. v. Meta Platforms, Inc. et al., 1:23-cv-09152 (S.D.N.Y.).
  • A group of music publishers, including Universal Music Group, has sued AI company Anthropic for alleged infringement of their copyrighted song lyrics through its AI tool Claude. The music publishers allege that Claude’s outputs use phrases extremely similar to existing lyrics “even when the models are not specifically asked to do so.” The case is Concord Music Group Inc. et al. v. Anthropic PBC, 3:23-cv-01092 (M.D. Tenn.).
  • A Reuters article discusses Google’s new Search Generative Experience tool, which uses AI to create summaries in response to search queries. For instance, “[s]earching for ‘Who is Jon Fosse’ – the recent Nobel Prize in Literature winner – [] generates three paragraphs on the writer and his work. Drop-down buttons provide links to Fosse content on Wikipedia, NPR, The New York Times and other websites; additional links appear to the right of the summary.”
  • YouTube is in the process of developing an AI-powered tool that allows users to replicate the voice of famous musicians while recording audio, and has reportedly approached music companies to obtain the rights to train its new AI tool on songs from their music catalogs.
  • Universal Music Group announced that it has partnered with digital music firm BandLab Technologies to help protect the rights of artists and songwriters amid the growing use of artificial intelligence. The “expansive, industry-first strategic relationship” will “pioneer market-led solutions with pro-creator standards to ensure new technologies serve the creator community effectively and ethically.”

Continue Reading AI Legal & Regulatory News Update—Week of 10/22/23

Steptoe has been tracking the fast-moving developments in artificial intelligence both in the United States and internationally. Below is an update on recent legal and policy developments related to AI, with a focus on intellectual property and high-profile policy issues. 

AI Intellectual Property Update:

  • Microsoft CEO Satya Nadella testified in the DOJ’s antitrust suit against Google on Monday, stating that he believes AI could further entrench Google’s dominance:
    • The tech giants are competing over the “vast troves” of content needed to train AI systems, and Nadella testified to the concern that publishers and platforms may sign exclusive deals allowing only Google to use their data in that manner. In addition to training its models on search queries, Google has been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. “When I am meeting with publishers now, they say Google’s going to write this check and it’s exclusive and you have to match it,” he said.
    • Nadella also warned of a “nightmare” future for AI if Google’s search dominance continues: the enormous amount of search data provided to Google through its default agreements can help Google train its AI models to be better than anyone else’s, threatening to give Google an unassailable advantage in generative AI that would further entrench its power. “This is going to become even harder to compete in the AI age with someone who has that core… advantage,” Nadella testified, adding that the same data advantage could create “even more of a nightmare” as large language models compete on the basis of the data they are trained on.
  • Kevin Scott, Microsoft’s CTO and EVP of AI, told The Verge that:
    • “expert contributions that you can make to the model’s training data, particularly in this step called reinforcement learning from human feedback, can really substantially improve the quality of the model in that domain of expertise…through selection of training data — you can get a model to be very high performing in a particular domain.”
    • Regarding AI copyright issues, Scott stated that while “everybody thinks that all of the training that is being done right now is covered by fair use” there are outstanding issues that will have to be decided by judges or lawmakers. Scott also emphasized that “someone who pours their heart and soul into writing a piece of fiction… need[s] to be compensated for it[s use in a training dataset]” but did not discuss what a potential remuneration scheme might look like.

Continue Reading AI Legal & Regulatory News Update—Week of 10/1/23

A federal court ruled that works created entirely by artificial intelligence (“AI”) systems cannot receive copyright protection under United States law. Contrary to how this decision has been reported in news summaries, the case was decided on a relatively narrow issue and leaves the door open for future decisions to expand on this new area of the law.

The plaintiff used an AI system called the “Creativity Machine” to generate a piece of visual art called “A Recent Entrance to Paradise.”

The plaintiff claimed that the work had been “autonomously created by a computer algorithm running on a machine,” and attempted to register the work with the Copyright Office. The Copyright Office denied the application on the ground that copyright law only extends to works created by human beings.

Continue Reading 100% AI-Generated Works Cannot Receive A Copyright (But Works Created Jointly By Humans And AI May Be Copyrightable)

The EU has recently proposed a draft regulation that would address future challenges of artificial intelligence. The EU is attempting a balancing act between promoting the development of AI and avoiding over-regulation of new technologies. The regulation categorizes AI systems based on their perceived risk level, regulating higher-risk systems more strictly.

The draft regulation adopts a broad definition of AI, encompassing software developed with a variety of machine learning techniques. These include supervised, unsupervised, and reinforcement learning, as well as methods such as deep learning; logic- and knowledge-based approaches; statistical approaches, including Bayesian estimation; and search and optimization methods. Relevant outputs include content, predictions, recommendations, or decisions influencing the environments with which the AI interacts.

Below are highlights of the main aspects of the draft regulation.

Continue Reading The End of Skynet? The EU Takes on AI