Artificial Intelligence and Problematization of National Security Topics (Ministry of Science scientific article)
Abstract
The main purpose of this research is to present a broad framework through which the role of artificial intelligence, and the unethical use of its functions in security matters, can be subjected to serious theoretical and practical evaluation. The central question of the research is: in light of the teachings of the era of AI dominance and hegemony, how can the topics of national security be problematized in terms of terrorism, war, conflict, and defense? In response, the hypothesis advanced here is that "national security", on the basis of the epistemic-practical indicators presented, has the capacity to explain and make sense of this problem, because none of the existing theories in national security studies, whether mainstream or critical, can comprehensively problematize these topics of national security in the age of artificial intelligence and big data, nor can they answer the questions this field raises in the practical conduct of national security relations. Laying the groundwork for the practical application of national security through a grounded-theory approach can only be accomplished with a problem-oriented view of terrorism, war, and conflict, and one strategic instance of this paradigm is "algorithmic national security", which has demonstrated the potential to create and develop a new security order.
Introduction
Some governments and organizations are preparing to exploit artificial intelligence (AI) in order to destabilize the world and benefit from numerous cyber-attacks. The rapid advancement of AI enables cybercriminals to amplify their destructive impact worldwide, as AI has the potential to reshape and disrupt global conditions in the coming years. The primary objective of this research was to establish a comprehensive framework for critically evaluating the role of AI in facilitating unethical practices within the realm of security, both in theory and in practice. Laying the groundwork for the practical implementation of national security measures based on grounded theory requires adopting a problem-oriented perspective on terrorism, warfare, and conflict. A strategic instance of this approach is the concept of algorithmic national security, which has the potential to create and develop a new security order. The article also aimed to contribute to the existing body of scientific literature, as there is currently a dearth of research in this field, thereby paving the way for future investigations. Against this background, the main research question is: How can national security topics be problematized in light of AI hegemony and within the framework of terrorism, war, conflict, and defense?
Materials and Methods
Adopting a descriptive-analytical approach, the present research relied on library research and the documentary method to collect data from various printed and electronic sources, including websites and magazines. Note-taking was used as the data-collection tool. In this study, AI and national security were treated as the independent and dependent variables, respectively.
Results and Discussion
The debate surrounding the use of AI and its autonomy on future battlefields has predominantly centered on the ethical implications of granting complete authority to independent, autonomous weapons, often referred to as killer robots, that are capable of making life-or-death decisions. Is it truly feasible for these systems to operate without any human intervention, or does their deployment potentially violate the principles of warfare and international humanitarian law? Avoiding such a predicament requires that those involved in warfare differentiate between combatants and civilians on the battlefield, prioritizing the preservation of civilian lives and minimizing harm to them to the greatest extent possible. Proponents of this emerging technology argue that machines will eventually become intelligent enough to make this distinction on their own, whereas opponents maintain that machines will never be capable of making such a fundamental distinction: they lack the capacity to make split-second decisions in the heat of war or to show timely empathy. In response to these concerns, several human rights and humanitarian organizations have launched the Campaign to Stop Killer Robots, aiming to establish an international ban on the development and deployment of fully automated and autonomous weapon systems.
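To make the notion of human intervention at the center of this debate more concrete, the short Python sketch below illustrates a human-in-the-loop authorization gate. It is a purely hypothetical illustration, not a description of any existing or proposed system; the Detection class, the confidence threshold, and the request_engagement function are assumed names introduced only for this sketch.

```python
# Hypothetical sketch of a human-in-the-loop authorization gate.
# It illustrates the policy distinction between full autonomy and
# "meaningful human control"; it is not based on any real system.

from dataclasses import dataclass


@dataclass
class Detection:
    track_id: str
    label: str          # e.g. "combatant" or "civilian", as classified by a model
    confidence: float   # model confidence in the range [0.0, 1.0]


# Assumed threshold below which no action is even proposed.
CONFIDENCE_FLOOR = 0.90


def request_engagement(detection: Detection, human_approval: bool) -> str:
    """Return the action taken for a single detection.

    A fully autonomous system would act on the model output alone;
    this gate instead requires an explicit human decision in every case.
    """
    if detection.label != "combatant" or detection.confidence < CONFIDENCE_FLOOR:
        return "hold: target not positively identified"
    if not human_approval:
        return "hold: awaiting human authorization"
    return "engage: authorized by human operator"


if __name__ == "__main__":
    d = Detection(track_id="T-017", label="combatant", confidence=0.95)
    print(request_engagement(d, human_approval=False))  # hold: awaiting human authorization
    print(request_engagement(d, human_approval=True))   # engage: authorized by human operator
```

The point of the sketch is that, under "meaningful human control", the model's classification alone is never sufficient for action; a separate human decision is always required.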
In the meantime, a highly contentious debate is unfolding within the military sphere regarding the use of AI in the command and control systems through which senior officers convey essential orders to their subordinate soldiers. Throughout history, generals and admirals have consistently sought to enhance the reliability of command and control systems to ensure the fullest realization of their strategic objectives. Today, these systems are heavily relied upon to secure the radio and satellite communication links that connect headquarters to the front lines. Strategists are concerned, however, that in a future hyper-warfare environment these systems could be disrupted by jamming, so that the speed of military operations would exceed commanders' limited ability to receive battlefield reports, process data, and issue timely orders. Beyond these concerns lies the practical reality of the uncertain fog of war, which is further complicated by the multiplying effect of AI and the potential for failure. Many military officers see a solution to this dilemma in handing control of these systems over to machines. As stated in a report by the Congressional Research Service, AI algorithms can offer more reliable tools for real-time analysis of the battlefield and enable faster decision-making, allowing commanders to keep pace with events.
Conclusion
We are currently witnessing a turning point in technology. The pace of advancement in AI is surpassing even expert predictions. These breakthroughs offer significant advantages to humanity, enabling AI systems to tackle complex issues in medicine, the environment, and other fields. However, progress is accompanied by risks. The implications of AI for national security are becoming more profound with each passing year. In this article, the aim was to assess the extent of these consequences in the years ahead. The findings indicate that AI is likely to exhibit several, if not all, of the most challenging aspects of transformative military technologies. It therefore becomes increasingly crucial to address its implications when examining how policymakers in the realm of national security respond to this technology. Unfortunately, AI carries risks comparable to those posed by previous technologies, and in some cases its impact could be even more devastating, owing to the rapid pace of technological advancement and the intricate relationship between government and industry in the present era. While we appreciate the increasing number of high-quality AI reports published in recent years, we acknowledge that a certain degree of conservatism has somewhat impeded comprehensive analysis. In this article, the objective was to describe the AI revolution honestly, as truly revolutionary rather than merely different. To address this challenge effectively, governments must approach the issue with ambition, emphasizing both research and development and careful consideration of its ramifications. The advancement of AI technology in the military, information technology, cybersecurity, and economic sectors over the next decade will lead to profound transformations worldwide. These changes are occurring faster than anticipated and will undoubtedly present their own challenges, with implications extending to many areas, including national security.
AI introduces a new level of complexity into the interactions between states, industries, and individuals, necessitating skilled experts who can respond quickly and effectively to the evolving landscape shaped by this phenomenon.