General AI Update:

  • The News Media Alliance, a trade group that represents more than 2,200 publishers, has released a white paper that it says shows that AI developers “significantly overweight publisher content by a factor ranging from over 5 to almost 100 as compared to the generic collection of content that the well-known entity Common Crawl has scraped from the web.”  
  • OpenAI is rolling out a new beta tool that lets ChatGPT users upload files, so the chatbot can summarize data, answer questions, or generate data visualizations from the uploaded file based on prompts. While this feature has been available to ChatGPT Enterprise subscribers, this is the first time it's been offered to individual users.
  • Forbes has launched Adelaide, a beta generative AI search platform that provides personalized search for readers. Built with Google Cloud, Adelaide was trained on the past 12 months of Forbes news coverage and lets users ask specific questions or enter general topic areas to receive recommended articles, along with a summarized answer to question prompts.
  • LinkedIn is introducing AI features for paid users. LinkedIn Premium subscribers will now have access to AI tools that can tell them whether they are a good candidate for a given job based on the information in their profile, and recommend profile changes to make them more competitive.

AI Litigation Update:

  • A federal judge in California has dismissed some of the claims alleged in the proposed class action suit Andersen v. Stability AI (N.D. Cal. No. 3:23-cv-00201-WHO). Of the three named plaintiffs, District Court Judge William Orrick dismissed the claims of two entirely, with prejudice, because the works they claimed were infringed were not registered with the Copyright Office. The claims of the third artist, Andersen, are now limited to the works she has registered with the Copyright Office.
    • The judge denied the defendants’ argument that Andersen cannot proceed with her copyright infringement allegations unless she specifically identifies each of her registered works that she alleges was used to train Stability AI’s products. The judge found that “review of the output pages [] confirm[ing] that some of [Andersen’s] registered work was used as Training Images” was sufficient to allow her copyright claims to proceed, “particularly in light of the nature of this case,” which involves billions of images.
    • The judge also dismissed two defendants from the suit: DeviantArt, the website that hosted the artists’ work, and Midjourney, another AI developer. Plaintiffs may still amend their complaint with regard to copyright infringement claims against these parties. The judge noted, however, that the defendants “make a strong case that [he] should dismiss the [copyright infringement claims based on a] derivative work theory without leave to amend because plaintiffs cannot plausibly allege the Output Images are substantially similar or re-present protected aspects of copyrighted Training Images[.]”
    • The judge dismissed plaintiffs’ right of publicity claims, breach of contract claims, and DMCA claims with leave to amend the complaint, and dismissed their claims under California’s Unfair Competition Law without leave to amend.

AI Policy Update—Federal:

  • President Biden has signed a wide-ranging executive order on artificial intelligence, directing agencies to develop safety guidelines, and requiring developers of AI systems that pose risks to U.S. national security, the economy, public health or safety to share the results of safety tests with the U.S. government, among other directives. Please see Steptoe’s full summary of the order here. In relevant part for readers of this blog:
    • Copyright: within 270 days of the order, or 180 days after the Copyright Office publishes its forthcoming AI study (whichever is later), the Under Secretary of Commerce for Intellectual Property and Director of the USPTO shall consult with the Copyright Office and issue recommendations on potential executive actions relating to copyright and AI. “The recommendations shall address any copyright and related issues discussed in the United States Copyright Office’s study, including the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.”
    • Communications: the order encourages the FCC to examine the potential for AI to improve spectrum management, increase the efficiency of non-Federal spectrum usage, and expand opportunities for the sharing of non-Federal spectrum, including by supporting efforts to improve network security, resiliency, and interoperability using next-generation technologies that incorporate AI (such as self-healing networks, 6G, and Open RAN), and by deploying AI technologies that better serve consumers by blocking unwanted robocalls and robotexts.
    • Competition: the order encourages the FTC to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.
  • Vice President Harris emphasized the “existential threats of AI” in a speech at the US Embassy in London, highlighting consumer protection and how AI tools could be used to exacerbate existing inequalities. The Vice President referred to the “full spectrum” of risks that have already emerged from AI, such as bias, discrimination, and the proliferation of misinformation, and argued that AI safety should “be based on the public interest.”
  • A group of House Democrats has reintroduced legislation to curtail law enforcement’s use of facial recognition software, citing concerns about how unrestricted use of the technology could erode Americans’ constitutional rights. The Facial Recognition Act would limit and, in some cases, prohibit law enforcement agencies at the federal, state, and local levels from using the surveillance tools, while also requiring more transparency around the deployment and use of the technology.

AI Policy Update—International:

  • OpenAI has received a new questionnaire from the Data Protection Officer of the German state of Rhineland-Palatinate, concerning a data protection review of ChatGPT. The local data protection watchdog is leading a coordinated action of German data regulators against ChatGPT, and said that the questions aim to assess the lawful processing of personal data. The questionnaire focuses on special data categories, rights of data subjects, and data recognition and protection during ChatGPT’s training and usage. The Rhineland-Palatinate watchdog sent an earlier request for information in April.
  • G7 leaders have officially adopted both International Guiding Principles on Artificial Intelligence and a voluntary Code of Conduct for AI developers. The Code of Conduct is intended to “complement, at international level, the legally binding rules that the EU co-legislators are currently finalising under the EU AI Act.” The Guiding Principles include commitments to mitigate risks and misuse and identify vulnerabilities; to encourage responsible information sharing, incident reporting, and investment in cybersecurity; and to adopt a labelling system that enables users to identify AI-generated content.
  • More than 25 countries, including the US and China, along with the EU, have signed the “Bletchley Declaration” as part of the UK’s AI Safety Summit taking place this week in London. The Declaration states that countries need to work together to establish a common approach to oversight of artificial intelligence, and sets out a two-pronged approach focused on identifying risks of shared concern while developing cross-country policies to mitigate them. China, for its part, announced at the Summit that the country is “willing to enhance our dialogue and communication in AI safety with all sides, contributing to an international mechanism with global participation in governance framework.”
  • The United Nations has announced the creation of an advisory body to address issues in the international governance of artificial intelligence. Made up of 39 members, including tech company executives, government officials, and academics from around the world, the advisory body will issue preliminary recommendations on the use and governance of artificial intelligence by the end of this year and final recommendations by the summer of 2024.