Michael E. Byczek
Attorney at Law


AI in Washington, D.C. - Blueprint for AI Bill of Rights

In October 2022, the White House Office of Science and Technology Policy (OSTP) released a white paper titled "Blueprint for an AI Bill of Rights: Making Automated Systems Work for the American People". It is intended to support the development of policies and practices that protect civil rights and promote democratic values in the building, deployment, and governance of automated systems. The OSTP advises the President and others within the Executive Office of the President on the scientific, engineering, and technological aspects of the economy, national security, health, foreign relations, the environment, and the technological recovery and use of resources, among other topics.

The paper defined an "automated system" as "any system, software, or process that uses computation as whole or part of a system to determine outcomes, make or aid decisions, inform policy implementation, collect data or observations, or otherwise interact with individuals and/or communities".

Five principles and associated practices to protect civil rights, civil liberties, and privacy

1. Safe and Effective Systems: The concerns, risks, and potential impacts of the system should be identified. Systems should undergo pre-deployment testing, risk identification and mitigation, and ongoing monitoring. Automated systems should not endanger safety; they should be designed to proactively protect against harm.

2. Algorithmic Discrimination Protections: Algorithms should not treat anyone differently based on race, color, ethnicity, sex (pregnancy, childbirth, gender identity, intersex status, and sexual orientation), religion, age, national origin, disability, veteran status, genetic information, or any other classification protected by law.

3. Data Privacy: Designers, developers, and deployers of automated systems should seek permission and respect decisions about collection, use, access, transfer, and deletion of data.

4. Notice and Explanation: Plain language should be used to describe the overall function of the system and the role automation plays in it. The user of an automated system should know how and why an outcome impacting the user was determined.

5. Human Alternatives, Consideration, and Fallback: Users should be allowed to opt out and engage with a human to remedy problems.

Applying the Blueprint

There is a relationship to existing law and policy. For example, medical devices have regulatory safety requirements. Implementing the Blueprint would require new laws to be enacted, and exceptions might be needed to comply with existing laws. Courts may need to extend statutory protections to new and emerging technologies.

Automated systems should include safeguards to protect the public and avoid the use of inappropriate or irrelevant data. The public should be consulted in the design, implementation, deployment, acquisition, and maintenance phases of software development, although certain applications, such as law enforcement, may require confidentiality. Stakeholders should include subject-matter, sector-specific, context-specific, privacy, civil rights, and civil liberties experts. Extensive pre-deployment testing should be implemented, potential risks should be proactively and continuously identified and mitigated prior to deployment, and there should be ongoing monitoring procedures.

Derived data is potentially high-risk input that could lead to inaccurate results. Data reuse may spread or scale harm.

Examples were used to show how federal agencies have developed specific frameworks for ethical use of AI, such as the Department of Energy's AI Advancement Council, the Department of Defense Artificial Intelligence Ethical Principles, and the U.S. Intelligence Community's Principles of Artificial Intelligence Ethics.

Idaho enacted procedures for pretrial risk assessment. An assessment must be "shown to be free of bias against any class of individuals protected from discrimination by state or federal law", and "all documents, records, and information used to build or validate the risk assessment shall be open to public inspection". Assertion of trade secrets cannot be used "to quash discovery in a criminal matter by a party to a criminal case".

Designers, developers, and deployers of automated systems should take proactive and continuous measures to protect individuals and communities from algorithmic discrimination and to use and design systems in an equitable way.

Data used in an algorithmic discrimination assessment should be representative of local communities and should be reviewed for bias based on the historical and societal context of the data. There should be both disparity assessment and mitigation.
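The paper does not prescribe a particular disparity metric. As one illustration only, the following Python sketch applies the "four-fifths rule" heuristic familiar from employment-discrimination practice; the data, group labels, and 0.8 threshold are assumptions for the example, not requirements drawn from the Blueprint.

    # A minimal sketch of a disparity assessment, assuming hiring-style
    # outcomes grouped by a protected characteristic. The data, group
    # labels, and the 0.8 threshold are illustrative assumptions.
    from collections import defaultdict

    def selection_rates(outcomes):
        """outcomes: iterable of (group, selected) pairs -> rate per group."""
        totals = defaultdict(int)
        selected = defaultdict(int)
        for group, was_selected in outcomes:
            totals[group] += 1
            selected[group] += int(was_selected)
        return {g: selected[g] / totals[g] for g in totals}

    def disparate_impact_ratios(rates):
        """Each group's rate divided by the highest group's rate; a ratio
        below 0.8 flags potential adverse impact under the four-fifths rule."""
        best = max(rates.values())
        return {g: r / best for g, r in rates.items()}

    # Hypothetical outcomes: (group, was_selected)
    data = [("A", True), ("A", True), ("A", False),
            ("B", True), ("B", False), ("B", False)]
    rates = selection_rates(data)
    for group, ratio in disparate_impact_ratios(rates).items():
        flag = "review" if ratio < 0.8 else "ok"
        print(f"group {group}: rate {rates[group]:.2f}, ratio {ratio:.2f} ({flag})")

The sketch covers only the measurement step; mitigation is context-specific and would follow from what such an assessment reveals.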

Individuals and communities should be free from unchecked surveillance and such technology should be subject to heightened oversight. Continuous surveillance and monitoring should not be used in education, work, housing, or other contexts that limit rights, opportunities, or access.

The paper acknowledged that federal law does not address the expanding scale of private data collection and that the country lacks a comprehensive statutory or regulatory framework for personal data. Americans are often unable to access their personal data or make decisions about its collection or use. Data brokers do not seek permission from consumers when compiling data. Inaccurate data may be used to make life-impacting decisions, such as qualifying for a loan or getting a job.

Privacy protection should be part of software design by default. Data collection should be minimized and clearly communicated to the people whose data is being collected.

Proposed privacy-preserving safeguards include cryptography and other privacy-enhancing technologies, as well as fine-grained permissions and access control.
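As a rough illustration of fine-grained access control, the Python sketch below ties access to both a role and a stated purpose. The roles, purposes, and field names are hypothetical; a production system would add authentication, audit logging, and encryption.

    # A minimal sketch of purpose-based, field-level access control.
    # Roles, purposes, and field names here are hypothetical examples.
    from dataclasses import dataclass

    # Policy: (role, purpose) -> the only fields that may be read.
    POLICY = {
        ("billing_clerk", "invoicing"): {"name", "address"},
        ("physician", "treatment"): {"name", "medical_history"},
    }

    @dataclass
    class AccessRequest:
        role: str
        purpose: str
        fields: set

    def authorize(request):
        """Return only the requested fields that the policy permits."""
        allowed = POLICY.get((request.role, request.purpose), set())
        return request.fields & allowed

    # A billing clerk asking for medical history receives only the
    # name and address fields; everything else is denied.
    req = AccessRequest("billing_clerk", "invoicing",
                        {"name", "address", "medical_history"})
    print(authorize(req))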

Individuals should be allowed to access their data and metadata, know who has access to the data, and have a way to correct the data. The entity responsible for the development of an automated system should quickly provide a report on the data.
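As a sketch of what such a report might contain, the following Python example summarizes what data is held, where it came from, and who has access. The record layout and field names are illustrative assumptions, not drawn from the paper.

    # A minimal sketch of an individual data report: what is held,
    # where it came from, and who currently has access.
    import json
    from datetime import date

    def data_report(subject_id, records, access_grants):
        """Summarize a person's stored data, sources, and access list."""
        return {
            "subject": subject_id,
            "generated": date.today().isoformat(),
            "data": [
                {"field": r["field"], "value": r["value"],
                 "source": r["source"], "collected": r["collected"]}
                for r in records
            ],
            "who_has_access": sorted(access_grants),
        }

    records = [{"field": "address", "value": "123 Main St",
                "source": "account signup form", "collected": "2023-01-15"}]
    print(json.dumps(data_report("user-42", records,
                                 {"billing dept", "support dept"}), indent=2))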

Sensitive domains, such as health, employment, education, criminal justice, and personal finance, should meet additional expectations.

An automated system should provide clear, timely, understandable, and accessible notice of use, and explanations as to how and why a decision was made or action taken.

The American public needs to know if an automated system is being used. The impact of automated systems used to determine outcomes is not always visible, such as whether a job application was rejected by a real person or by an algorithm, or whether a criminal defendant was denied bail by a judge relying on software that falsely labeled the defendant as high-risk.

The Illinois Biometric Information Privacy Act provides that private entities may not "collect, capture, purchase, receive through trade, or otherwise obtain" biometric data and identifiers unless the individual first receives written notice and provides written consent.

Examples used in the paper to highlight real-world scenarios

- Social media data collection has been used to threaten opportunities, undermine privacy, or track activity

- The use of technology for patient care has been unsafe, ineffective, or biased

- Hiring and credit decisions use algorithms that are inequitable, biased, or discriminatory

- An algorithm deploys police to neighborhoods they regularly visit, regardless of whether those neighborhoods have high crime rates

- A device to help people find their lost items is used to track the users themselves

- Images of real people are created or altered without their consent

- AI-powered cameras in delivery vans penalize drivers for events beyond their control

- Facial recognition can contribute to wrongful and discriminatory arrests

- Discriminatory hiring algorithms, such as scanning for words in a resume

- Nontraditional factors, such as education and employment history, used in loan underwriting and pricing models impose higher loan prices on particular demographics

- Statements posted online are flagged by an automated sentiment analyzer that is biased against certain groups

- Remote proctoring AI systems flag test takers with disabilities as suspicious due to their disability-specific access needs

- Healthcare clinical algorithms lead to race-based health inequities

- Social media used by insurance companies to determine rates

- Data brokers have their systems breached resulting in the risk of identity theft

- Public housing using facial recognition systems that send video of all people near the buildings to law enforcement whenever a police report is filed

- Companies track employee discussions about union activity

- A patient's insurance company receives data generated by medical devices and subsequently denies coverage

- Retail stores gather consumer data about shoppers and send households targeted marketing that reveals one member's secrets to other family members

- Employers send employee data to third-party services, where it is later used by future employers, banks, or landlords

- Predictive policing used to place individuals on a watch list without any explanation

- Individuals are denied benefits due to data entry errors

- An unemployment benefits system that required applicants to verify their identity through a smartphone denied access to benefits without offering an alternative human option

- Unemployment insurance fraud detection incorrectly flagged entries over slight discrepancies, with the result that people had their wages withheld without any chance to explain themselves to a real person

- Medical doctors who are unwilling to override a hospital's software system that confused a patient's prescription history with that of other family members, including pets

- Employees are fired based on automated performance evaluations without any possibility of human review

Source

The White House / OSTP. Blueprint for an AI Bill of Rights - https://www.whitehouse.gov/ostp/ai-bill-of-rights/ [accessed 12/10/2023]

