Consequences of International Responsibility of States Regarding the Military Use of Artificial Intelligence in Armed Conflicts (Scientific Article, Ministry of Science)
Scientific rank: Scientific journal (Ministry of Science)
Abstract
With significant developments in technology, artificial intelligence is gradually being used in place of humans in armed conflicts. The use of AI weapons systems in armed conflicts has challenged not only our traditional concepts of responsibility but also the attribution of international responsibility to states, because the traditional concepts of responsibility and the principles of state responsibility are based on human conduct, and their adaptation to AI-based conduct requires fresh examination. Using the descriptive-analytical method, this article therefore examines the consequences of the international responsibility of states for the use of military AI in armed conflicts and reparation for the damage resulting from its performance. In view of the need to monitor the performance of AI technologies, the main question of this essay is: in what situations are states internationally responsible for violations of international law when using military artificial intelligence? By examining the Draft Articles on Responsibility of States for Internationally Wrongful Acts, the author concludes that neither the contemporary applications of AI nor truly autonomous visions of its future create a conceptual obstacle within the framework of the law of responsibility; not only can the unlawful conduct of AI-based systems on the battlefield be attributed to states, but the responsibility of states can also be established before artificial intelligence is used (in the study, development, acquisition, or adoption of a new weapon).
Keywords: AI, State Responsibility, Attribution of Conduct, Internationally Wrongful Act, IHL, Armed Conflicts
1. Introduction
Artificial intelligence has become increasingly embedded in our daily lives, significantly impacting a wide range of sectors within the international community. One of these is the realm of armed conflicts. In fact, artificial intelligence and autonomous weapon systems represent an important step in the evolution of military conflicts. The advancement of AI technologies presents unprecedented opportunities for the execution of new forms of military operations in armed conflicts. By employing AI in the production of weapons and armaments, human presence on the battlefield has been minimized (Wood, 2023, p. 16). AI is used as a suppressive weapon to gain military advantage, capable of autonomously selecting and engaging targets in hostilities without human intervention (Lee, 2022, p. 177). Most states regard autonomous weapons systems as pivotal technologies in the struggle for global dominance. In 2017, Russian President Vladimir Putin described AI as the future not only of Russia but of all humanity. He also foresaw its threats, stating: "Whoever becomes the leader in this sphere will become the ruler of the world."
In the absence of meaningful regulations concerning AI, international law faces new challenges. In this context, identifying and understanding the international legal concepts arising from the rapid emergence and deployment of AI, as well as analyzing existing norms of international law from the perspective of this new phenomenon, is a matter of pressing importance. Despite the various benefits and constructive applications of AI in human daily life, the world has witnessed its adverse effects when employed in armed conflicts, an issue that may entail violations of fundamental human rights. Sooner or later, such weapons, like all others, will malfunction due to systemic flaws (Schmitt, 2013, p. 7), causing harm to civilians and damage to civilian objects, thereby raising the question: who is responsible?
AI-based weapons operate in truly unpredictable ways and are inherently volatile and dangerous. Their use will have unintended and detrimental consequences for global stability, stemming from either the use or misuse of such systems. Lethal AI weapons are sparking a new arms race that endangers everyone, on a far broader scale than the nuclear arms race, as they are cheaper and easier to develop independently. Advanced military AI dehumanizes warfare: its capacity to strike targets thousands of kilometers away and to select human targets autonomously complicates the attribution of responsibility, a crucial element in holding war criminals accountable, and, given the evolving nature of this technology, makes accountability even more complex.
On one hand, the inherent complexity of AI systems, particularly their autonomy and unpredictability, along with the fact that (like humans) they will never be entirely flawless, means that violations of international law are inevitable. Breaches of IHL through the use of AI entail both criminal and non-criminal liability. While individual and state responsibility are complementary and concurrent, the focus of this discussion is on state responsibility and the concept of "effective control" over the conduct of AI-based weapons. On the other hand, the near-endless list of potentially liable parties, including software developers, military personnel or commanders, weapons users, manufacturers, and political leaders, creates difficulties in assigning responsibility.
The global political landscape shows that a comprehensive ban on military AI technology is unlikely to be adopted in the near future. Moreover, given the remarkable technological advancements of recent years, the continued integration of AI into military weaponry is inevitable. While it is widely accepted that IHL fully applies to the use of AI-based technologies, the issue of state responsibility for IHL violations stemming from such technologies remains highly contentious. Thus, the central question is: how does the international law of state responsibility apply to violations of international law arising from AI military technologies? Can international responsibility even be established for such incidents? Even with humans acting as accountable agents during the production and deployment stages, the very assumption of responsibility is strained by the autonomous nature of AI. On what basis can a wrongful act occurring during the use of AI weapons in armed conflicts be attributed to a state?
To answer these questions, this paper first examines state obligations regarding the use of AI-based weapons in armed conflicts. It then analyzes the general rules of state responsibility de lege lata (the ILC's 2001 Draft Articles) as they apply to military AI. Next, it explores the possibility of attributing wrongful acts of military AI to states de lege ferenda. Finally, it specifically addresses the dimensions of direct and indirect state responsibility in the development, acquisition, and use of military AI. The article also pays particular attention to the liability regime for compensation concerning acts not prohibited under international law.
As noted above, military AI technologies create new challenges across a variety of fields of international law, and the use of AI weapons systems in armed conflicts has challenged both our traditional concepts of responsibility and the attribution of international responsibility to states, since those concepts and principles were formulated with human conduct in mind and their adaptation to AI-based conduct requires fresh examination. The goal of this analysis is therefore to examine the consequences of the international responsibility of states for the use of military AI in armed conflicts and reparation for the damage resulting from its performance. In view of the need to monitor the performance of AI technologies, the main questions of this essay are: who is legally responsible for the effects of weapons equipped with artificial intelligence, and in what situations are states responsible for violations of international law when using military artificial intelligence?
Regarding the research background and the innovative aspect of this paper, it must be noted that the systematic study of AI-based weapons in armed conflicts is relatively new in domestic scholarship. Recent Persian-language literature, whether original or translated, has addressed the use of AI-based weapons in armed conflicts from the perspectives of IHL and international criminal law (ICL), with key works cited in this article. However, the author has not encountered any specific study examining the application of AI-based weapons in armed conflicts within the framework of state responsibility under international law.
At the international level, given the significance of the issue, research has been conducted on the accountability gap concerning AI-based weapons and their use in armed conflicts. For instance, Gabriel Wood (2023) wrote an article titled "Autonomous Weapon Systems and Responsibility Gaps: A Taxonomy," which, contrary to its title, focused only on the challenges of such weapons under the law of armed conflict rather than state responsibility. Bérénice Boutin (2022), in an article titled "State Responsibility in Relation to Military Applications of AI," discussed AI in military structures, state responsibility in the production and sale of such technologies, and their wrongful use. Damian Bielicki (2021), in "Regulating AI in Industry," edited a collection of essays by international law scholars, providing a relatively comprehensive overview of AI applications and their legal challenges. Additionally, Magdalena Pacholska (2020) examined "Military Artificial Intelligence and the Principle of Distinction: A State Responsibility Perspective," though her focus remained on IHL and individual responsibility rather than state responsibility, as she argued that no unique issues arise in this context. Thus, while the aforementioned works touch on the subject, none systematically addresses, defends, or refutes the hypothesis of this paper.
The primary hypothesis of the article is that existing international law provides a suitable legal framework to deal with the effects of AI-equipped weapons, but more clarity is needed on how to apply this existing legal framework to new technologies.
2. Methodology
This research is based on the descriptive-analytical method. The necessary data were collected through library research from various sources, including books, articles, theses, and research reports. Relevant material from legal doctrine, academic commentaries, and international jurisprudence was then gathered and analyzed from various perspectives, drawing on scientific evidence, in order to answer the research question. Finally, the findings are discussed.
3. Results and Discussion
This article shows that although the use of AI technologies in armed conflicts is undeniable, the applicable international treaties do not explicitly address AI systems and their use in armed conflicts, nor does any other international regulation expressly reference the application of AI in hostilities. The absence of international norms governing AI creates complex potential problems concerning the applicable law in resolving inter-state disputes over responsibility for the use of AI in armed conflicts.
The inherent complexity of AI-based systems and the multitude of actors involved in their construction, development, and deployment in military operations (software programmers, military personnel or commanders, users, manufacturers, weapons inspectors, and political leaders) lead to ambiguities in attributing responsibility for violations of international obligations. Nevertheless, the author is of the opinion that neither the contemporary employment of AI systems nor their future "truly autonomous" incarnations create any major conceptual hurdles under the law of state responsibility. There is no doubt that states can be held responsible for the wrongful application of AI systems in armed conflicts, for negligent procurement, and for failures to ensure respect for international law by other states and private actors developing or applying artificial intelligence.
At the deployment stage, owing to the characteristics of AI, the direct acts or omissions of human operators do not always provide sufficient grounds for attribution. Attribution of conduct involving AI may instead rest on the behavior and decision-making of other human actors (e.g., developers and political and military decision-makers). At the development stage, existing obligations impose a duty to ensure compliance with international rules, specifically the obligation to embed applicable norms into AI design. This obligation also comes into play at the procurement and supply stage, where states must verify compliance. Thus, states must subject military AI to continuous oversight by their institutions so that violations of international obligations are forestalled by preventive measures.
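To give a concrete sense of what "embedding applicable norms into AI design" could look like at the development stage, the sketch below offers a minimal, purely illustrative example in Python. All names (`Target`, `Classification`, `distinction_gate`) are hypothetical and invented for this illustration; nothing here describes any actual weapon system or the author's own proposal. The idea is simply that a distinction rule can be wired into a targeting pipeline as a hard gate, so that engagement is refused, and the decision logged for institutional oversight, whenever an object is not positively classified as a military objective.

```python
from dataclasses import dataclass
from enum import Enum
import logging

# Hypothetical, purely illustrative types: real systems and their
# classification logic are vastly more complex and are not public.
class Classification(Enum):
    MILITARY_OBJECTIVE = "military_objective"
    CIVILIAN = "civilian"
    UNKNOWN = "unknown"

@dataclass
class Target:
    identifier: str
    classification: Classification
    confidence: float  # classifier confidence in [0.0, 1.0]

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("engagement_audit")

def distinction_gate(target: Target, min_confidence: float = 0.99) -> bool:
    """Refuse engagement unless the target is positively identified as a
    military objective; doubt is resolved in favor of protection (cf. the
    presumptions of civilian status in Additional Protocol I, Arts. 50(1)
    and 52(3)). Every decision is logged for institutional review."""
    permitted = (
        target.classification is Classification.MILITARY_OBJECTIVE
        and target.confidence >= min_confidence
    )
    log.info(
        "target=%s class=%s confidence=%.2f engagement_permitted=%s",
        target.identifier, target.classification.value,
        target.confidence, permitted,
    )
    return permitted

# Usage: an ambiguous contact is never engaged, and the refusal is auditable.
contact = Target("contact-042", Classification.UNKNOWN, 0.87)
assert distinction_gate(contact) is False
```

The point of logging every decision, permitted or refused, is that it makes the continuous oversight described above practicable: an audit trail created at the design stage is precisely what a reviewing state institution would need after deployment.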
4. Conclusions and Future Research
By examining the Draft Articles on Responsibility of States for Internationally Wrongful Acts and the rules of IHL, the author concludes that neither the contemporary applications of AI nor truly autonomous visions of its future create a conceptual obstacle within the framework of the law of responsibility; not only can the unlawful conduct of AI-based systems on the battlefield be attributed to states, but the responsibility of states can also be established before artificial intelligence is used (in the study, development, acquisition, or adoption of a new weapon). However, better application of the secondary rules to the problems of military AI requires the adoption of a binding international treaty or the formation of customary rules covering these weapons.
The arguments presented in this article are intended solely to initiate a discussion on the necessity of state responsibility under international law for their tools and instruments, analogous to their responsibility for state organs. This does not mean that machines can never be held accountable. The scope of responsibility in the use of AI is only discernible if unlawful acts can be attributed to an identifiable person. However, the lack of human agency in autonomous weapon systems does not ipso facto negate responsibility.
The existing framework of international law (de lege lata) applicable to military AI reflects the fact that AI-based weapon systems, regardless of their autonomy, are ultimately the product of human behavior, social institutions, and decisions. Therefore, even with technological advancements and the increasing use of military AI, the essential causal link between AI-related malfunctions and state responsibility remains intact. While the framework of state responsibility plays a useful role in regulating AI and addressing the accountability challenges specific to AI-based technologies, the mens rea (mental element) of violations of the law of armed conflict hinges on the perpetrator's intent, which is exceedingly difficult to prove in AI-driven systems. Consequently, the law of state responsibility cannot address all the major challenges posed by such weaponry.
To fill accountability gaps regarding military AI, state responsibility under international law serves a complementary role alongside other liability frameworks, collectively ensuring comprehensive accountability at all levels. Efforts to hold states accountable for AI underscore their unique position and primary role in regulating military technologies and supervising non-state and private actors. Specifically, state responsibility demonstrates that a viable approach exists to ensure military AI development aligns with applicable international norms.
Any miscalculation by AI-based systems in armed conflicts may result in significant civilian casualties or severe damage to civilian property. Since blaming artificial or mechanical tools is futile, it follows that only humans are subjects of legal rules. Human conduct remains a critical factor in applying state responsibility for violations of jus ad bellum, IHL, and human rights law. The final decision on whether AI-based weapons are actually used in a specific operation rests with humans, particularly the military authorities responsible for operational planning. In this sense, when weighing potential collateral damage against operational advantages, the unpredictability inherent in autonomous systems must be factored in.
Ultimately, to meet the demands of international law, states should adopt a binding international treaty establishing design standards for AI weapons, the degree and form of human control, permissible targets, and the scope of their use.