AI Intellectual Property Update:

  • Joining other large technology companies, OpenAI has announced a “Copyright Shield” program to indemnify OpenAI users against copyright infringement claims over works created using its tools. Under the program, “we will now step in and defend our customers, and pay the costs incurred, if you face legal claims around copyright infringement.” The program applies to ChatGPT Enterprise and OpenAI’s developer platform.

General AI Update:

  • OpenAI also made several other major announcements at its developer conference, including:
    • Custom versions of ChatGPT called “GPTs” that can be used for specific purposes such as “a natural language-based data analysis app, a coding assistant, an AI-powered vacation planner, a voice-controlled DJ, [or] a smart visual canvas.” The assistants can be augmented with data outside the core model, “such as proprietary domain data, product information or documents provided by your users.” OpenAI states that such data will not be used to train future versions of its model.
    • The launch of a “GPT Store,” where users will be able to share and monetize their custom GPTs.
    • GPT-4 Turbo, which has knowledge of world events up to April 2023 and a context window large enough to fit the equivalent of more than 300 pages of text in a single prompt.
  • AI remains a major point of contention in the ongoing strike negotiations between SAG-AFTRA and the Alliance of Motion Picture and Television Producers, with SAG-AFTRA pushing back on an AI clause included in the latest offer from the studios and streamers. The AMPTP’s proposed contractual clauses on AI would reportedly require studios and streamers to pay to scan the likeness of certain performers, and would allow studios and streamers to secure the right to use scans of deceased performers without the consent of their estate or SAG-AFTRA.
  • Elon Musk has unveiled the first AI bot from his company X (formerly known as Twitter). The AI bot, called Grok, “has real-time access to info via the X platform, which is a massive advantage over other models.”
  • Meta has banned political campaigns and advertisers in other regulated industries from using the company’s generative AI ad tools. “As we continue to test new Generative AI ads creation tools in Ads Manager, advertisers running campaigns that qualify as ads for Housing, Employment or Credit or Social Issues, Elections, or Politics, or related to Health, Pharmaceuticals or Financial Services aren’t currently permitted to use these Generative AI features,” the company said.
  • Amazon is reportedly dedicating a team to train its new AI model, codenamed “Olympus,” in an effort to compete with companies like OpenAI and Google.
  • YouTube will begin experimenting with new generative AI features, including a conversational tool that uses AI to answer questions about YouTube’s content and make recommendations, as well as a new feature that summarizes topics in the comments of a video.

AI Policy Update—Federal:

  • Senators Mark Warner and Jerry Moran have introduced the bipartisan Federal Artificial Intelligence Risk Management Act, which would require federal agencies to adopt the National Institute of Standards and Technology’s AI risk management framework with respect to their AI-related operations. While President Biden’s AI Order references the NIST AI framework, it stops short of mandating that all agencies adopt it.
  • NIST has announced that it will establish the Artificial Intelligence Safety Institute Consortium to “equip and empower the collaborative establishment of a new measurement science that will enable the identification of proven, scalable, and interoperable techniques and metrics to promote development and responsible use of safe and trustworthy AI, particularly for the most advanced AI systems[.]” The agency is calling for letters of interest to join in the consortium, and participation is open to “all interested organizations that can contribute their expertise, products, data, and/or models to the activities of the consortium[.]”

AI Policy Update—European Union:

  • The President of the European Commission made remarks regarding “AI safety priorities for 2024 and beyond” at the UK AI Safety Summit. President von der Leyen stressed the importance of the independence of the scientific community, the establishment of AI safety safeguards and procedures that are acceptable worldwide, the development of a culture of cybersecurity, and the application of an international system of alerts fed by trusted flaggers. President von der Leyen also mentioned that the establishment of a European AI Office, which could contribute to the future design of AI governance, is under discussion.
  • The EU AI Act is in the final phase of the EU legislative process, the so-called trilogue negotiations, in which the European Parliament, the Council of the European Union and the European Commission negotiate the final text of the EU AI law. In this context, Euractiv reports that the European Parliament may be very close to agreeing to some narrow conditions for the use of remote biometric identification technologies in real time, in contrast to its previous position favoring a complete ban on the real-time use of such technologies. The Council of the European Union is likely to follow a similar approach.
  • The German Ministry of Education and Research presented a new AI Action Plan that aims to boost the development of AI in Germany and the EU and outlines 11 key areas for action, including strengthening the AI value chain at both the national and EU levels. To implement the AI Action Plan, the ministry aims to invest over €1.6 billion in AI during the current government’s term.

AI Policy Update—International:

  • At the conclusion of UK Prime Minister Rishi Sunak’s AI Safety Summit, companies including Meta, Google DeepMind and OpenAI agreed to allow regulators to test their latest AI products before releasing them to the public. The Prime Minister announced that the United States, the EU and other “like-minded” countries had reached a “landmark agreement” with select companies working at AI’s cutting edge on the principle that models should be rigorously assessed both before and after they are deployed.