Reimagining the Law of War: The Artificial Intelligence Revolution and U.S. National Security

Blog Post | 112 KY. L. J. ONLINE | April 16, 2024

By: Preiss Terry, Staff Editor, Vol. 112 

In an opinion piece for the New York Times, the CEO of Palantir Technologies analogized the advancement of Artificial Intelligence (AI) to the creation of nuclear weapons, stating: “We have now arrived at a similar crossroad in the science of computing, a crossroad that connects engineering and ethics, where we will again have to choose whether to proceed with the development of a technology whose power and potential we do not yet fully apprehend.”[1]

Integrating AI into weapons systems raises serious ethical dilemmas, and experts in the defense field urge care as we begin to realize this new technology’s capacity to change the landscape of war and current national security policy.[2]

While caution is necessary, AI has the power to improve the accuracy of military missions and sharpen analysis when identifying threats.[3] Recently, the U.S. Department of Defense signaled that its AI adoption strategy couples caution with speed.[4] Speed is imperative to maintain the United States’ competitive advantage in this field, as an arms race is arguably in sight,[5] while caution is necessary to ensure the safety of new technology.[6] Deputy Defense Secretary Hicks explained that the U.S. will continue to be a leader in the ethical use of AI while harnessing its power and remaining mindful of its dangers.[7]

In order to prepare for the broad national security implications of the AI revolution, the law in this area must evolve as quickly as the emerging technologies. There are few safety nets in place to deal with the consequences of an impending arms race. It is imperative to understand what happens if AI is unable to properly calculate targets, or what happens when these technologies fall into the hands of rogue actors, such as terrorist organizations, or of states that will not be as cautious as the U.S.[8] The race among big technology companies to create generative AI has caused safety to take a back seat and raised concerns that autonomous systems will be able to improve themselves without human intervention.[9]

Federal legislation could aid in this area, and several bills have been proposed to regulate private companies’ development of AI technology.[10] However, when it comes to developing AI to keep national security risks at bay, the private sector is not the only important actor.[11] It remains clear that the state that is able to harness the power of AI in its defense systems will solidify itself as a global leader for the near future. The United States and China are already vying to develop AI weapons systems in what could resemble the arms race of the 20th century.[12] An arms race over this technology could lead states to create unsafe systems that policymakers do not yet fully understand. Further, even a “perceived” arms race could cause companies and governments to cut corners on necessary safety measures. “For AI — a technology whose safety relies upon slow, steady, regulated, and collaborative development — an arms race may be catastrophically dangerous.”[13] In other words, this technology will take years of research to ensure its safety and efficacy, but the pressure of a potential arms race could undermine these important steps.

There are also claims that AI proliferation in defense could strengthen deterrence among powerful countries.[14] The U.S. urges caution in applying this technology to some of the most sensitive areas of defense, such as nuclear command and control, and insists that human intervention and decision-making remain paramount.[15] However, questions remain as to whether other countries will maintain human discretion in the use of this technology and what level of safety and research they will employ when faced with a potential arms race.[16]

The law must keep pace with this developing technology. Some have suggested that if more is not done to regulate this area, it could mean the end of the human race, while others have criticized this view as “fear mongering.”[17] Federal legislation and international law can assist here. As with past transformative technologies, such as nuclear weaponry and social media, governments and international organizations must step in to ensure the safety of citizens. While there is cause for concern as to how this technology will aid the defense of the U.S., Congress has the power to ensure that the government and private companies do not let the desire for autonomous technology outpace the need for safety and regulation.

Further, the international community can respond as it did to the advent of nuclear weapons. It could impose limits on the capabilities of AI defense technology, similar to nonproliferation treaties. Bilateral and multilateral treaties could be extremely effective in this area in preventing an arms race on the scale of the one seen with nuclear weapons.[18]

The regulation of AI in the fields of national security and defense comes down to whether policymakers feel comfortable placing such an important task in the hands of a rapidly emerging and advancing technology that we have yet to fully understand. It is imperative that laws keep pace with this technology and its use in the private and public sectors. Federal legislation and international law are necessary to regulate the extent of AI’s defense power.

[1] Alexander C. Karp, Our Oppenheimer Moment: The Creation of AI Weapons, N.Y. Times (July 25, 2023), https://www.nytimes.com/2023/07/25/opinion/karp-palantir-artificial-intelligence.html.

[2] Id.

[3] Charles Cohen, AI in Defense: Navigating Concerns, Seizing Opportunities, Nat’l Def. Mag. (July 25, 2023), https://www.nationaldefensemagazine.org/articles/2023/7/25/defense-department-needs-a-data-centric-digital-security-organization.

[4] Clark, infra note 6.

[5] Michael Hirsh, How AI Will Revolutionize Warfare, Foreign Policy (Apr. 11, 2023, 10:09 AM), https://foreignpolicy.com/2023/04/11/ai-arms-race-artificial-intelligence-chatgpt-military-technology/.

[6] Joseph Clark, DOD Releases AI Adoption Strategy, U.S. Dep’t of Def. (Nov. 2, 2023), https://www.defense.gov/News/News-Stories/Article/Article/3578219/dod-releases-ai-adoption-strategy/.

[7] Id.

[8] Cohen, supra note 3.

[9] Andrew Chow & Billy Perrigo, The AI Arms Race Is Changing Everything, Time (Feb. 17, 2023, 1:47 PM), https://time.com/6255952/ai-impact-chatgpt-microsoft-google/.

[10] Artificial Intelligence Research, Innovation, and Accountability Act of 2023, S. 3312, 118th Cong. (2023).

[11] Sam Meacham, A Race to Extinction: How Great Power Competition Is Making Artificial Intelligence Existentially Dangerous, Harv. Int’l Rev. (Sept. 8, 2023), https://hir.harvard.edu/a-race-to-extinction-how-great-power-competition-is-making-artificial-intelligence-existentially-dangerous/.

[12] Id.

[13] Id.

[14] Hirsh, supra note 5.

[15] Id.

[16] Id.

[17] Id.

[18] U.N. GAOR, 78th Sess., 20th & 21st mtg., U.N. Doc. GA/DIS/3725 (Oct. 24, 2023).