AI in Washington, D.C. - White House Executive Order
On October 30, 2023, the White House released an Executive Order on the "Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence".
The White House acknowledged: (1) Irresponsible use of AI could result in "fraud, discrimination, bias, and disinformation; displace and disempower workers; stifle competition; and pose risks to national security"; (2) AI is driven by the underlying data along with those who build and use the technology; and (3) Responsible use of AI may "solve urgent challenges while making our work more prosperous, productive, innovative, and secure".
Eight guiding principles and priorities were listed to advance and govern the development and use of AI:
1. AI must be safe and secure. This includes ethical development and disclosure of AI-generated content to users.
2. Investment in AI-related education, training, development, research, and capacity along with intellectual property (IP) rights.
3. A commitment to supporting American workers.
4. AI policies must protect equity and civil rights. This requires that those who develop and deploy AI systems to be held accountable to standards that protect against unlawful discrimination and abuse.
5. Consumers must be protected. This includes mistakes in healthcare or harm caused to small businesses.
6. Privacy and civil liberties must be protected. AI makes it easy to extract, re-identify, link, infer, and act on sensitive information. The Federal Government will ensure that the collection, use, and retention of data is lawful, secure, and mitigates such risks.
7. The Federal Government itself must manage risks of it's own use of AI while utilizing AI to deliver better results for Americans.
8. The United States should lead the way to global societal, economic, and technological progress. The Federal Government will work with international allies and partners to develop a framework to manage AI risk, unlock potential for good, and promote common approaches to shared challenges.
The definition of AI is defined in 15 U.S.C. 9401(3) as a machine-based system that can make predictions, recommendations, or decisions influencing real or virtual environments. AI systems use machine and human-based inputs to perceive real and virtual environments; abstract such perceptions into models through analysis in an automated manner; and use model inference to formulate options for information or action.
The term "AI red-teaming" was defined as an adversarial method to identify flaws and vulnerabilities, such as harmful or discriminatory outputs from an AI system, unforeseen or undesirable system behaviors, limitations, or potential risks associated with the misuse of the system.
Crime forecasting was discussed as machine-generated predictions that use algorithms to analyze large volumes of data.
The concept of "dual-use foundation models" was discussed in regard to the risks associated with (a) substantially lowering the barrier of entry for non-experts to design, synthesize, acquire, or use chemical, biological, radiological, or nuclear weapons; and (b) enabling powerful offensive cyber operations.
Highlights of the Executive Order
Reporting guidelines that pertain to the Secretary of Commerce, in consultation with the Secretary of State, the Secretary of Defense, the Secretary of Energy, and Director of National Intelligence for: (1) any model that was trained using a quantity of computing power greater than 10^26 integer or floating-point operations, or using primarily biological sequence data using a quantity of computing power greater than 10^23 integer or floating-point operations; and (2) any computing cluster that has a set of machines physically co-located in a single datacenter, transitively connected by data center networking of over 100 Gbits/s, and having a theoretical maximum computing capacity of 10^20 integer of floating-point operations for second for training AI.
Guideline for U.S. IaaS (infrastructure as a service) providers to submit a report to the Secretary of Commerce when a foreign person attempts to train a large AI model with potential capabilities for malicious cyber-enabled activity.
The Secretary of Commerce shall identify the existing standards, tools, methods, and practices, as well as the potential development of further science-backed standards and techniques for (i) authenticating content and tracking its provenance; (ii) labeling synthetic content, such as watermarks; (iii) detecting synthetic content; (iv) preventing generative AI from producing child sexual abuse or non-consensual intimate imagery; (v) testing such software; and (vi) auditing and maintaining synthetic content.
Guidance to the USPTO (U.S. Patent and Trademark Office) for patent examiners and applicants that address inventors and the use of AI in the inventive process. This also includes patent eligibility.
The U.S. Copyright Office should address copyright issues associated with AI, such as works produced by AI and treatment of copyrighted works in AI training.
The Secretary of Homeland Security, Director of the National Intellectual Property Rights Coordination Center, and Attorney General shall develop a training, analysis, and evaluation program to mitigate AI-related IP risks, such as investigating AI-related IP theft.
Best practices for AI developers and law enforcement personnel to evaluate AI systems for IP law violations along with mitigation strategies.
Health care-related AI issues: (1) AI-enabled tools that develop personalized immune-response profiles for patients, (2) clinical care, real-world evidence programs, population health, public health, and related research; and (3) improve quality of veterans' healthcare.
Climate-change provisions: (1) strengthening America's resilience against climate change impacts and building an equitable clean energy economy; (2) potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure; (3) provision of clean, affordable, reliable, resilient, and secure electric power to all Americans; and (4) AI tools to mitigate climate change risks
Potential role of AI in research to solve social and global challenges. This includes practices to ensure that AI is used responsibly.
The Federal Trade Commission was encouraged to ensure fair competition in the AI marketplace and to ensure that consumers and workers are protected from harms attributed to AI. Innovation in the semiconductor industry was identified as critical to AI competition.
Various provisions addressed hiring practices by the federal government and evaluation of immigration laws with regard to work visas for job candidates with AI skills. Other provisions discussed underserved communities, mentoring, funding, education, and small businesses.
The Secretary of Labor should analyze the ability to support workers displaced by the adoption of AI and other technological advancements. This included principles and best practices for employers to mitigate AI's potential harms to employee well-being. One issue was AI-related collection and use of data about workers.
Strengthening AI and civil rights in the criminal justice system by identifying AI and algorithmic discrimination associated with automated systems.
Use of AI in criminal justice system included: sentencing, parole, supervised release, probation, bail, pretrial release/detention, risk assessment, police surveillance, crime forecasting and predictive policing, prison-management, and forensic analysis.
Unlawful discrimination from AI risks: hiring, housing markets (i.e. tenant screening), and consumer financial markets. An example was used that people with disabilities may receive unequal treatment from use of biometric data, such as gaze direction, eye tracking, gait analysis, and hand motions.
Incorporation of safety, privacy, and security standards into software development lifecycle for protection of personally identifiable information, such as AI-enhanced cybersecurity threats in health and human services.
Combat unwanted robocalls and robotexts facilitated or exacerbated by AI and to deploy AI technology to block unwanted communication.
A variety of ways for federal agencies to identify AI advantages and disadvantages. For example, the Secretary of Defense and Secretary of Homeland Security shall develop plans for, conduct, and complete an operational pilot project to identify, develop, test, evaluate, and deploy AI capabilities to aid in the discovery and remediation of vulnerabilities in critical U.S. government software, systems, and networks.