Below is this week’s tracker of the latest legal and regulatory developments in the United States and in the EU. Sign up here to ensure you do not miss an update.

AI Intellectual Property Update:

  • The SAG-AFTRA strike has ended, with union members set to vote on a proposed contract. The tentative agreement’s provisions on AI state that if a producer plans to create a computer-generated character with a main facial feature that clearly resembles a real actor, and uses the actor’s name and face to prompt the AI, the producer must first obtain the actor’s permission. The agreement also requires that performers be compensated for the creation and use of any digital replicas of the performer.
  • Adobe is working on a new AI-powered audio tool designed to break apart different layers of sound within a single recording. Called “Project Sound Lift,” the tool can automatically detect each sound and spit out separate files containing the background noise and the track users want to prioritize, such as someone’s voice or the sound of an instrument.
  • YouTube plans to adopt new disclosure requirements and content labels for content created by generative AI. Starting next year, the video platform will “require creators to disclose when they’ve created altered or synthetic content that is realistic . . . For example, this could be an AI-generated video that realistically depicts an event that never happened, or content showing someone saying or doing something they didn’t actually do.” Penalties for not labeling AI-generated content could include takedowns and demonetization.
  • Some of Bing’s search results now have AI-generated descriptions, according to a blog post from Microsoft. The company will use GPT-4 to garner “the most pertinent insights” from webpages and write summaries beneath Bing search results, and users can check which search result summaries are AI-generated.

Continue Reading AI Legal & Regulatory News Update—Week of 11/19/23

We’re pleased to announce the launch of our new blog StepTechToe: AI, Data & Digital!

StepTechToe provides fresh and seasoned insights into the regulatory world of digital services. It covers a wide range of topics, including data protection, privacy, cybersecurity, artificial intelligence, virtual reality, augmented reality, and digital governance issues. We deliver a global perspective.

AI Intellectual Property Update:

  • Joining other large technology companies, OpenAI has announced a “copyright shield” to indemnify OpenAI users against copyright infringement claims over works created using its tools. Under the program, “we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement.” The feature applies to ChatGPT Enterprise and OpenAI’s developer platform.

General AI Update:

  • OpenAI also made several other major announcements at its developer conference, including:
    • Custom versions of ChatGPT called “GPTs” that can be used for specific purposes such as “a natural language-based data analysis app, a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, [or] a smart visual canvas.” The assistants can be augmented with data outside the core model, “such as proprietary domain data, product information or documents provided by your users.” OpenAI states that such data will not be used to train future versions of its model.
    • The launch of a “GPT Store” for people to sell their customized models.
    • GPT-4 Turbo, which has knowledge of the world up to April 2023 and the ability to have up to 300 pages of text in a single prompt.
  • AI remains a major point of contention in the ongoing strike negotiations between SAG-AFTRA and the Alliance of Motion Picture and Television Producers, with SAG-AFTRA pushing back on an AI clause included in the latest offer from the studios and streamers. The AMPTP’s suggested contractual clauses on AI would reportedly require studios and streamers to pay to scan the likeness of certain performers, and would allow studios and streamers to secure the right to use scans of deceased performers without the consent of their estate or SAG-AFTRA.

Continue Reading AI Legal & Regulatory News Update—Week of 11/5/23

General AI Update:

  • The News Media Alliance, a trade group that represents more than 2,200 publishers, has released a white paper that it says shows that AI developers “significantly overweight publisher content by a factor ranging from over 5 to almost 100 as compared to the generic collection of content that the well-known entity Common Crawl has scraped from the web.”  
  • OpenAI is rolling out a new beta tool that gives ChatGPT users the ability to upload files—meaning the chatbot is able to summarize data, answer questions, or generate data visualizations from the uploaded file based on prompts. While this feature has been available to ChatGPT Enterprise subscribers, this is the first time it’s been offered to individual users.
  • Forbes has launched a beta generative AI search platform called Adelaide to provide personalized searches for readers. Built with Google Cloud, Adelaide was trained on the past 12 months of Forbes news coverage and allows users to ask specific questions or input general topic areas and get recommended articles about their query, along with a summarized answer to question prompts.
  • LinkedIn is introducing AI features for paid users. Subscribers to LinkedIn premium will now have access to AI tools that can tell them whether they’re a good candidate based on the information in their profile, and recommend profile changes to make the user more competitive for a job.

Continue Reading AI Legal & Regulatory News Update—Week of 10/29/23

On October 30, 2023, President Biden issued a landmark Executive Order (EO) addressing artificial intelligence. The EO builds upon the White House’s Blueprint for an AI Bill of Rights released last year and includes requirements for cabinet secretaries, the corporate sector, and various White House offices, as well as proposed steps for independent federal agencies.

Key Definitions.

The EO sets forth several key definitions, including the definitions of AI, AI model, AI system, generative AI, and machine learning, among others.

  • “Artificial intelligence” or “AI”: A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
  • “AI model”: A component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.
  • “AI system”: Any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.
  • “Generative AI”: The class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.
  • “Machine learning”: A set of techniques that can be used to train AI algorithms to improve performance at a task based on data.

Continue Reading Biden Administration Issues Executive Order on Artificial Intelligence

Below is this week’s tracker of the latest legal and regulatory developments in the United States and internationally:

AI Intellectual Property Update:

  • Former Arkansas Governor Mike Huckabee is a plaintiff in a new class action brought against Meta, Microsoft, EleutherAI, and Bloomberg. The complaint alleges that LLMs developed by these companies used their work without permission to train generative AI models. The lawsuit focuses on the “Books3” repository, “a dataset of information scraped from a large collection of approximately 183,000 pirated ebooks, most of which were published in the past 20 years.” The case is Huckabee et al. v. Meta Platforms, Inc. et al., 1:23-cv-09152 (S.D.N.Y.).
  • A group of music publishers, including Universal Music Group, has sued AI company Anthropic for alleged infringement of their copyrighted song lyrics through its AI tool Claude. The music publishers allege that Claude’s results use phrases extremely similar to existing lyrics “even when the models are not specifically asked to do so.” The case is Concord Music Group Inc. et al. v. Anthropic PBC, 3:23-cv-01092 (M.D. Tenn.).
  • A Reuters article discusses Google’s new Search Generative Experience tool, which uses AI to create summaries in response to search queries.  For instance, “[s]earching for ‘Who is Jon Fosse’ – the recent Nobel Prize in Literature winner – [] generates three paragraphs on the writer and his work. Drop-down buttons provide links to Fosse content on Wikipedia, NPR, The New York Times and other websites; additional links appear to the right of the summary.”
  • YouTube is in the process of developing an AI-powered tool that allows users to replicate the voice of famous musicians while recording audio, and has reportedly approached music companies to obtain the rights to train its new AI tool on songs from their music catalogs.
  • Universal Music Group announced that it has partnered with digital music firm BandLab Technologies to help protect the rights of artists and songwriters amid the growing use of artificial intelligence. The “expansive, industry-first strategic relationship” will “pioneer market-led solutions with pro-creator standards to ensure new technologies serve the creator community effectively and ethically.”

Continue Reading AI Legal & Regulatory News Update—Week of 10/22/23

Steptoe has been tracking the fast-moving developments in artificial intelligence both in the United States and internationally. Below is this week’s update on legal and policy developments related to AI, with a focus on intellectual property and high-profile policy issues. 

AI Intellectual Property Update:

  • Following on the heels of a similar announcement from Microsoft, Google has announced that “if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved.” Google is offering two protections: (1) Google’s use of training data: “our training data indemnity covers any allegations that Google’s use of training data to create any of our generative models utilized by a generative AI service, infringes a third party’s intellectual property right”; and (2) AI output: “The generated output indemnity . . . now also apply to allegations that generated output infringes a third party’s intellectual property rights.” Note that the first prong (training data) covers “any of our generative models utilized by a generative AI service” but the second prong (output) only covers specific listed services including Duet AI in Google Workspace and some Google cloud services such as Vertex AI, but does not include Google Bard. Regarding the output indemnity, Google adds an important proviso: “this indemnity only applies if you didn’t try to intentionally create or use generated output to infringe the rights of others, and similarly, are using existing and emerging tools, for example to cite sources to help use generated output responsibly.”
  • The Canadian government has launched a public consultation (similar to a notice of inquiry) on the “implications of generative artificial intelligence for copyright.” Specifically, the consultation will examine: (1) the use of copyright-protected works in the training of AI systems; (2) authorship and ownership rights related to AI-generated content; and (3) liability for AI-generated works. Comments are due by December 4, 2023.
  • Google is now offering website publishers a way to opt out of having their content used to train Google’s Bard and Vertex AI models. A website can update its “robots.txt” file so that it continues to be indexed by Google search but is excluded from future AI training.
  • Adobe has introduced a new generative AI model. The “Firefly Image 2 Model . . . generates higher quality outputs with better model architecture and metadata, training algorithm improvements, and better image generation capabilities.”
    • The “Generative Match” feature lets users “generate images based on the look and feel of an existing image and create images with a consistent style.”
    • Adobe notes that: “We’ve trained our Firefly generative models responsibly on licensed content and public domain content for which copyright has expired . . . Adobe is also offering enterprise customers the opportunity to obtain IP indemnification for Firefly-generated content.” The company also states that “we require users to confirm they have the right to use any work that they upload to Generative Match as a reference image.”
  • A startup company called Spawning is launching a new tool to help websites block AI from scraping content. The tool, called “Kudurru,” “is a way to indicate that you do not wish your media to be scraped. Unlike an opt-out list, Kudurru doesn’t give scrapers a choice about whether or not to respect your wishes. Kudurru identifies and blocks scraping automatically.”
  • More than 40 percent of the Forbes Global 2000 companies do not have control over their “.AI” internet domain names, potentially exposing them to fraud and brand infringement, according to a new study from domain registrar CSC. The .AI domains have become increasingly popular amid the rise in artificial intelligence, but third parties are scooping up the domains and misusing them.
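As a practical illustration of the robots.txt opt-out described above, the directive might look like the following. This is a sketch based on Google’s published “Google-Extended” user-agent token, which Google announced as the control for Bard and Vertex AI training data; site owners should verify current token names against Google’s crawler documentation before relying on it.

```
# Sketch of an AI-training opt-out, assuming Google's "Google-Extended" token.
# Blocks use of the site's content for Bard/Vertex AI training while leaving
# ordinary Googlebot search indexing governed by its own (default: allowed) rules.
User-agent: Google-Extended
Disallow: /
```

Because “Google-Extended” is a product token rather than a separate crawler, adding this entry does not by itself change how the site appears in Google search results.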

Continue Reading AI Legal & Regulatory News Update—Week of 10/15/23

Comments responding to an FCC notice of inquiry that seeks insight into how to obtain more sophisticated real-time knowledge of non-Federal spectrum usage have highlighted the importance and potential of AI and machine learning systems.

The Satellite Industry Association stated that, “sensing the physical surroundings together with AI will further enhance situational awareness. Sensing supports various innovative applications such as high precision positioning and localization of devices and objects, high resolution and real-time 3D-mapping for automated and safe driving/transport, digital twins, and industrial automation.”

SpectrumX recognized the “substantial opportunity to apply [AI] tools to spectrum data analysis and spectrum management,” while Lockheed Martin agreed that “artificial intelligence [] and machine learning [] offer promise in evaluating big datasets and providing . . . insights into spectrum use over time, spectral band, and geography.” The NCTA stated that “Chairwoman Rosenworcel’s optimism on the transformative capabilities of AI and ML tools is well-founded,” noting that “AI and ML can be powerful tools to mine unstructured data and digest it into a more user-friendly format.”

Continue Reading Commenters Discuss the Role of Artificial Intelligence and Machine Learning in How the FCC Should Manage Spectrum

Steptoe has been tracking the fast-moving developments in artificial intelligence both in the United States and internationally. Below is an update on recent legal and policy developments related to AI, with a focus on intellectual property and high-profile policy issues. 

AI Intellectual Property Update:

  • Microsoft CEO Satya Nadella testified in the DOJ’s antitrust suit against Google on Monday, stating that he believes AI could further entrench Google’s dominance:
    • The tech giants are competing over the “vast troves” of content needed to train AI systems, and Nadella testified to the concern that publishers and platforms may sign exclusive deals to allow only Google to use their data in that manner. In addition to training its models on search queries, Google has also been moving to secure agreements with content publishers to ensure that it has exclusive access to their material for AI training purposes, according to the Microsoft CEO. “When I am meeting with publishers now, they say Google’s going to write this check and it’s exclusive and you have to match it,” he said.
    • Nadella also warned of a “nightmare” future for AI if Google’s search dominance continues: the enormous amount of search data provided to Google through its default agreements can help Google train its AI models to be better than anyone else’s, threatening to give Google an unassailable advantage in generative AI that would further entrench its power. “This is going to become even harder to compete in the AI age with someone who has that core… advantage,” Nadella testified, adding that the same data advantage could create “even more of a nightmare” as large language models compete on the basis of the data they are trained on.
  • Kevin Scott, Microsoft’s CTO and EVP of AI, told the Verge that:
    • “expert contributions that you can make to the model’s training data, particularly in this step called reinforcement learning from human feedback, can really substantially improve the quality of the model in that domain of expertise…through selection of training data — you can get a model to be very high performing in a particular domain.”
    • Regarding AI copyright issues, Scott stated that while “everybody thinks that all of the training that is being done right now is covered by fair use” there are outstanding issues that will have to be decided by judges or lawmakers. Scott also emphasized that “someone who pours their heart and soul into writing a piece of fiction… need[s] to be compensated for it[s use in a training dataset]” but did not discuss what a potential remuneration scheme might look like.

Continue Reading AI Legal & Regulatory News Update—Week of 10/1/23