Letter to Secretary Lloyd Austin, Department of Defense - Brown, Booker, Warren, Ten Other House Members Call For Additional Oversight Of DOD's Use Of AI

Letter

Date: April 7, 2022
Location: Washington, DC

We write to address concerns regarding the use of potentially discriminatory artificial intelligence or other automated systems by the United States within armed conflict. These technologies present the potential to reduce civilian casualties through more precise and responsive targeting while ensuring that a greater number of the men and women who serve our nation can return home from the frontlines. However, they also pose the danger of causing harm if not developed, maintained, and operated in full accordance with our values, domestic law, and international law.

The Department of Defense's investment in and deployment of artificial intelligence and other automated systems has accelerated in recent years. While increased automation has long been utilized to reduce civilian casualties and to improve the effectiveness of military operations, we are approaching an inflection point where engagement decisions require an exceedingly limited role for the human operator or commander. No longer does automation simply ensure that a weapon is guided to its intended target; it now enables fully automated missions from launch to return to base. Soldiers may soon have facial recognition technology in helmet-mounted displays, performing real-time checks of people encountered against a "digital most wanted card deck" and recommending engagement by our forces. Automated decisions continue to expand throughout the engagement chain into Joint All Domain Command and Control.

These technologies are often developed through commercial innovation, where training sets based on market demographics are overwhelmingly white and young, with a bias towards male users. While advances in artificial intelligence over the last decade have reduced bias in these systems, applications such as facial recognition remain more likely to misidentify women and individuals of color. Simultaneously, the next generation of soldiers has been raised in a digital environment - trained to trust the heads-up displays of first-person shooter video games without question or to rely on their synthetic wingman without any thought towards the morals contained within its digital shell. These converging trends have elevated the potential that technology deployed to the battlefield may result in an adverse impact on individuals of color, women, or other demographics.

The principles of non-discrimination are central to international humanitarian law and the laws of armed conflict. In current conflicts, we witness on a daily basis the damage to civilian populations through the use of indiscriminate weapons, and we are reminded of how warfare unfairly impacts those whom we stand to protect. We must be cognizant of how advanced technologies intersect with our values and with international law, and how they may cause harm in new and unique ways to civilian populations where our armed forces and our allies are deployed. Beginning with the Third Geneva Convention and continuing through Protocol II, the international community has consistently found, affirmed, and extended protections to civilians and non-combatants from any harm that arises from adverse distinction on the basis of race, gender, and other categories.

The recent domestic use of the military in response to civil unrest and other national security events demonstrates that these concerns of bias extend to the civil rights of every American. The National Guard and active duty troops have been repeatedly deployed in recent years following the murder of George Floyd, the attack on the United States Capitol, and surges of migrants at the southwestern border. In each of these cases, servicemembers were deployed while equipped with military equipment and weapon systems. As adoption of automation continues to expand throughout the Department, technology embedded in helmet-mounted displays may soon be available for use in policing or crowd control missions. Further, we risk that military equipment with bias may be transferred to state and local governments through surplus programs and be used by law enforcement throughout the country. If these systems contain bias against certain demographics, their use, whether by the military or after transfer, would be a violation of the civil rights and liberties provided to all Americans by the Constitution, the Civil Rights Act, and other domestic law.

Department of Defense Instruction 3000.9, Autonomy in Weapon Systems, is the most relevant set of policies established by the Department to address these issues. The instruction states that the Department's use of autonomy should be consistent "with all applicable domestic and international law and, in particular, the law of war". However, the instruction's definitions of autonomous or semi-autonomous weapons do not appear to include the full spectrum of potential uses of automation in the engagement chain, whether that is command and control systems, tactical decision aids, or other systems that are embedded with the warfighter. These systems have a direct and clear involvement in the use or deployment of a weapon system and must be addressed by Department policies and procedures.

The Department has indicated in its AI Ethical Principles, specifically the second principle of "equitable", its intent only to "take deliberate steps to minimize unintended bias in AI capabilities." Such a standard of "minimizing" or reducing is insufficient to meet our obligations under international law and adhere to our nation's values. The Department of Defense's FY2021 Annual Report on Operational Test and Evaluation found that artificial intelligence's "[n]on-linear, time-varying, and emergent behaviors reduce confidence in fully assessing effectiveness across a range of scenarios/environments." It further found that "[e]thical concerns related to lethal decisions may preclude warfighting capability if testing does not confirm exceptionally high confidence in its behavior" and that "[t]esting for compliance with ethical constraints on behaviors is an open research issue."

As the indispensable nation, it is incumbent on us to see further into the future and to ensure that the use of automated systems within the Department is in accordance with our values, the Constitution, and domestic and international law. We must ensure that any system deployed under the laws of armed conflict is used without adverse distinction founded on race, colour, sex, language, religion or belief, political or other opinion, national or social origin, wealth, birth or other status, or on any other similar criteria. The Department must also ensure that any automation technology used by the Department for a law enforcement or other domestic mission is used without discrimination on the basis of race, color, national origin, sex, and religion. In accordance with the above, we request the following information regarding the Department's policies and procedures as they relate to automated systems, including artificial intelligence and machine learning:

What is the Department's process for determining which systems are included under DODI 3000.9?

Is the Department including tactical decision aids, such as heads-up displays, and command and control systems in reviews conducted against DODI 3000.9?

How is the Department applying DODI 3000.9 to automated software or applications utilized by the Department that are indirectly connected to the use of lethal or other force?

What systems have been reviewed under DODI 3000.9 to date and what were the results of such reviews?

How is the Department determining consistency with domestic law, to include the Constitution and the Civil Rights Act of 1964?

What is the Department's status in implementing the National Security Commission on Artificial Intelligence's recommendations, particularly those in Chapter 8 of its Final Report: Upholding Democratic Values: Privacy, Civil Liberties and Civil Rights in the Uses of AI for National Security?

How is the Department reviewing and restricting transfer of military equipment with automation under its surplus equipment programs, such as the Section 1033 program?

How is the Department determining consistency with international law, to include the Geneva Conventions and the portions of the Additional Protocols which are consistent with customary international law?

What standards have the Department established for identifying bias in automation?

What standard is the Department applying to determine adverse distinction or disparate impact, as applicable?

How is the Department performing verification and validation against such standards?

How is the Department continuously monitoring and evaluating verification and validation of automated systems against the Department's bias standards, domestic law, and international law, including as systems are upgraded or new training data is incorporated?

How is the Department conducting verification and validation on emergent behaviors that arise through operational use of automated technologies?

How is the Department including requirements in its contracts to address bias in automated systems during development, test, procurement, and operation?

How is the Department ensuring that any datasets collected through commercial means conform to the standards of the Department, domestic law, and international law?

Which contracts for which systems have included explicit requirements regarding compliance with domestic law and international law and bias as it relates to automated systems?

What is the Department's plan to update, expand, or supplement DODI 3000.9 to address changing technology as it relates to artificial intelligence, machine learning, and automation?

We thank you for your consideration of this request and your commitment to our values as a nation and under domestic and international law. We know that you share our belief that the United States has a crucial role in leading by a standard to which all nations should aspire. We look forward to your response to these matters and to ensuring that, as automation decreases the harm from the brutal nature of warfare, we do so in a manner that ensures equality for all.

