On October 30, 2023, President Biden issued a landmark Executive Order (EO) addressing artificial intelligence. The EO builds upon the White House’s Blueprint for an AI Bill of Rights released last year and includes requirements for cabinet secretaries, the corporate sector, and various White House offices, as well as proposed steps for independent federal agencies.
The EO sets forth several key definitions, including those of AI, AI model, AI system, generative AI, and machine learning, among others.
- “Artificial intelligence” or “AI”: A machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. Artificial intelligence systems use machine- and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
- “AI model”: A component of an information system that implements AI technology and uses computational, statistical, or machine-learning techniques to produce outputs from a given set of inputs.
- “AI system”: Any data system, software, hardware, application, tool, or utility that operates in whole or in part using AI.
- “Generative AI”: The class of AI models that emulate the structure and characteristics of input data in order to generate derived synthetic content. This can include images, videos, audio, text, and other digital content.
- “Machine learning”: A set of techniques that can be used to train AI algorithms to improve performance at a task based on data.
Requirements for Private Companies.
While the EO is primarily directed toward federal agencies, it also includes important requirements for private companies.
- Dual-Use Foundation Models – Requires companies that develop potential dual-use foundation models to submit reports to the U.S. Department of Commerce (Commerce) outlining their training and testing procedures and plans to protect their technology.
- Cloud Computing Reporting – Requires entities that have a large-scale computing cluster to report to Commerce certain information, including the location of these clusters and the amount of total computing power per cluster.
- Foreign Use of Cloud Computing for Training AI Models – Directs the Secretary of Commerce, within 90 days, to propose regulations that require U.S. Infrastructure as a Service (IaaS) providers to submit a report to Commerce when a foreign person transacts with that U.S. provider to “train a large AI model with potential capabilities that could be used in a malicious cyber-enabled activity.”
- Identification of Foreign Actors – Directs the Secretary of Commerce, within 180 days, to propose regulations that require U.S. IaaS providers to ensure that foreign resellers of United States IaaS products verify the identity of any foreign person that obtains an IaaS account.
Other Key Requirements for Agencies. The EO also includes additional directives for cabinet-level agencies, as well as suggested actions for independent agencies:
- Critical Infrastructure – Requires the head of each agency with regulatory authority over critical infrastructure and the heads of Sector Risk Management Agencies, within 90 days, to provide to the Secretary of Homeland Security an assessment of potential risks related to the use of AI in their critical infrastructure sectors.
- Financial Services – Requires the Secretary of the Treasury, within 150 days, to issue a public report on best practices for financial institutions to manage AI-specific cybersecurity risks.
- AI Content Authentication – Requires the Secretary of Commerce, within 240 days, to submit a report to the Office of Management and Budget (OMB) and to the White House identifying standards and practices for authenticating content and labeling synthetic content, among other things. Within 180 days of the report, the Secretary of Commerce must develop guidance for digital content authentication and synthetic content detection measures.
- Potential Dual-Use Regulation – Requires the Secretary of Commerce, within 270 days, to solicit input on potential risks and benefits and policy and regulatory approaches related to dual-use foundation models and model weights.
- Patent Inventorship – Requires the Under Secretary of Commerce for Intellectual Property and the Director of the U.S. Patent and Trademark Office (USPTO), within 120 days, to publish guidance on inventorship and the use of AI, including generative AI, and then, within 270 days, issue guidance to address other considerations at the intersection of AI and IP.
- Copyright Creation and Training Data – Requires the U.S. Copyright Office, within 270 days or 180 days after it publishes its forthcoming AI study, to issue recommendations on potential executive actions relating to copyright and AI. The recommendations must address the scope of protection for works produced using AI and the treatment of copyrighted works in AI training.
- Mitigate IP Risks – Requires the Secretary of Homeland Security, within 180 days, to develop a training program to mitigate AI-related IP risks, including IP theft, and to promote stakeholder sharing of information with law enforcement.
- Small Business – Requires the Secretary of Commerce, when implementing the Creating Helpful Incentives to Produce Semiconductors (CHIPS) Act of 2022 (Pub. L. 117-167), to promote competition by increasing the availability of resources to small businesses.
- Employee Well-Being – Requires the Secretary of Labor, within 180 days, to develop and publish principles and best practices for employers that could be used to mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.
- AI Discrimination – Requires the Attorney General to coordinate with and support agencies in their implementation and enforcement of existing Federal laws to address civil rights and civil liberties violations and discrimination related to AI.
- Federal Contractors Hiring Non-Discrimination – Directs the Secretary of Labor, within 365 days, to publish guidance for Federal contractors regarding nondiscrimination in hiring involving AI.
- Housing and Credit Underwriting Discrimination – Encourages the Director of the Federal Housing Finance Agency and the Director of the Consumer Financial Protection Bureau to prevent housing and credit underwriting discrimination, including by focusing on automated collateral-valuation and appraisal processes.
- Tenant Screening – Requires the Secretary of Housing and Urban Development and encourages the Director of the Consumer Financial Protection Bureau, within 180 days, to issue additional guidance on tenant screening systems that may violate Federal housing and credit laws, including guidance addressing advertising and real estate transactions conducted through digital platforms.
- Fraud, Discrimination and Privacy – Encourages regulatory agencies to consider using their full range of authorities to protect American consumers from fraud, discrimination, and threats to privacy and to address other risks arising from AI, including risks to financial stability.
- Health Care – Directs the Secretary of Health & Human Services (HHS) to establish an AI Task Force that will develop a strategic plan on responsible use of AI in research and discovery, drug and device safety, healthcare delivery, financing, and public health.
- Health Care Quality – Requires the Secretary of HHS, within 90 days, to develop a strategy to determine whether AI-enabled technologies in the health and human services sector maintain appropriate levels of quality.
- Health Care Non-Discrimination – Requires the Secretary of HHS, within 180 days, to consider appropriate actions to advance compliance with Federal nondiscrimination laws as those laws relate to AI.
- Drug Development – Directs the Secretary of HHS, within 365 days, to develop a strategy for regulating the use of AI or AI-enabled tools in drug-development processes.
- Education – Directs the Secretary of Education, within 365 days, to develop resources, policies, and guidance regarding AI.
- Communications – Encourages the Federal Communications Commission to consider actions related to how AI will affect communications networks and consumers.
- Competition – Encourages the Federal Trade Commission to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms that may be enabled by the use of AI.
Steptoe’s artificial intelligence group continues to monitor these and other legislative and regulatory developments in AI.