Abstract

With the expanding use of artificial intelligence algorithms in organizations and the public sector, concerns about the social responsibility of deploying intelligent agents, such as transparency, accountability, and fairness, have been raised in governmental and academic circles. Accordingly, the aim of this research is to develop a structural-interpretive model of the design requirements for explainable artificial intelligence systems in decision making based on human-AI collaboration. To this end, a mixed exploratory method combining action design research, the fuzzy Delphi method, and interpretive structural modeling is used to develop and evaluate the design principles of an explainable artificial intelligence system. The research context is the General Department of Revision of Laws and Regulations in the Legal Department of the Judiciary. The participants are 15 practitioners from the General Department of Revision of Laws and Regulations and the Information Technology Center who, together with the researchers, form the research team. Based on the findings, the model comprises six characteristics: understanding capability, governance capability, persuasion capability, predictive (descriptive) accuracy, transparency, and usefulness. These capabilities were classified into two dimensions: the affordance dimension, comprising understanding capability, governance capability, and persuasion capability; and the actualization dimension, comprising predictive accuracy, transparency, and usefulness. In addition, the model can explain the mechanism of intelligence reinforcement in human-AI interaction.

Identifying the design requirements of explainable artificial intelligence systems

Introduction
The use of artificial intelligence algorithms in the public sector is expanding rapidly. At the same time, concerns remain in governmental and academic communities about the social responsibility of using intelligent agents, such as transparency, accountability, and fairness. Accordingly, the purpose of this research is to develop a theoretical model of the requirements for designing explainable artificial intelligence systems in human-artificial intelligence interaction for decision making. The central issue is whether users can understand how the system reaches a decision: just as human decision makers are expected to explain their decisions, artificial intelligence systems can also be asked to explain the solutions they propose. In turn, this topic provides prescriptive knowledge for system designers, who can create new insights for users by linking user information with various sources through the use of artifacts. The scope of this research is therefore the enhancement of intelligence, in which artificial intelligence models provide recommendations to human users. Accordingly, the research goal is to identify the descriptive and prescriptive knowledge required for designing a class of recommender systems based on human-artificial intelligence interaction. To achieve this goal, we borrow from affordance theory to specify the relationships between the extracted categories. Finally, we discuss how these requirements are used in design cycles.

Methodology
A mixed-methods research design combining action design research, the fuzzy Delphi method, and interpretive structural modeling was used to develop and evaluate the design principles of such a system. We follow the design research approach developed by Mlarki et al.
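The fuzzy Delphi screening step named above can be sketched as follows. This is a minimal illustration, not the study's instrument: the linguistic scale, the triangular fuzzy numbers, the 0.7 consensus threshold, and the example requirements are all assumptions chosen to show the common aggregate-then-defuzzify pattern.

```python
# Fuzzy Delphi screening sketch: experts rate each candidate requirement on a
# linguistic scale; ratings are aggregated as triangular fuzzy numbers and a
# requirement survives if its defuzzified consensus passes a threshold.
# All concrete values below are illustrative assumptions.

# Linguistic scale -> triangular fuzzy number (lower, middle, upper).
SCALE = {
    "very low":  (0.00, 0.00, 0.25),
    "low":       (0.00, 0.25, 0.50),
    "medium":    (0.25, 0.50, 0.75),
    "high":      (0.50, 0.75, 1.00),
    "very high": (0.75, 1.00, 1.00),
}

def aggregate(ratings):
    """Aggregate expert ratings: min of lowers, mean of middles, max of uppers."""
    tfns = [SCALE[r] for r in ratings]
    lower = min(t[0] for t in tfns)
    middle = sum(t[1] for t in tfns) / len(tfns)
    upper = max(t[2] for t in tfns)
    return (lower, middle, upper)

def defuzzify(tfn):
    """Centroid defuzzification of a triangular fuzzy number."""
    return sum(tfn) / 3.0

def screen(panel, threshold=0.7):
    """Keep only requirements whose defuzzified consensus meets the threshold."""
    return {req: defuzzify(aggregate(ratings))
            for req, ratings in panel.items()
            if defuzzify(aggregate(ratings)) >= threshold}

# Hypothetical panel: one strong candidate and one weak one.
panel = {
    "understanding capability": ["high", "very high", "high"],
    "interface colour scheme":  ["low", "medium", "low"],
}
# 'understanding capability' passes the 0.7 threshold; the other is screened out.
print(screen(panel))
```

In practice the panel would hold one row per expert for each candidate design requirement, and surviving requirements feed the next Delphi round or the structural modeling step.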
The General Department of Revision of Laws and Regulations in the Legal Department of the Judiciary was selected as the research context. The participants are professionals from the General Department of Revision of Laws and Regulations and the Information Technology Center who, together with the researchers (supervisors and students), constitute the research team of 15 people in total. Using triangulation, data were collected from different sources. Data analysis was carried out in two steps: in the first step, by continuously refining the concepts, the extracted components were aggregated into theoretical dimensions, and a data structure was created by combining concepts, components, and dimensions. In the second step, the research lens was used to develop the theory.

Results and Discussion
The research develops a framework that conceptualizes the characteristics of explainable artificial intelligence systems: understanding capability, governance capability, persuasion capability, predictive (descriptive) accuracy, transparency, and usefulness. These characteristics were classified into two dimensions. The affordance dimension includes understanding capability, governance capability, and persuasion capability. The actualization dimension includes predictive accuracy, transparency, and usefulness. In addition, the model can explain the mechanism of enhancing intelligence in human-artificial intelligence interaction.
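The interpretive structural modeling step that produces such a layered model can be sketched as follows. The influence links in the adjacency matrix below are hypothetical, chosen only to illustrate how ISM computes a reachability matrix (transitive closure) and then partitions factors into hierarchy levels; they are not the relationships reported by this study.

```python
# ISM sketch: from pairwise influence judgments to a level hierarchy.
# adjacency[i] = set of factors that factor i directly influences
# (hypothetical links for illustration only).
adjacency = {
    "accuracy":      {"understanding", "usefulness"},
    "transparency":  {"understanding", "persuasion"},
    "understanding": {"governance"},
    "persuasion":    {"governance"},
    "governance":    {"usefulness"},
    "usefulness":    set(),
}

def reachability(adj):
    """Warshall's transitive closure; each factor reaches itself."""
    reach = {f: {f} | adj[f] for f in adj}
    for k in adj:
        for i in adj:
            if k in reach[i]:
                reach[i] |= reach[k]
    return reach

def level_partition(reach):
    """Peel off ISM levels: a factor is top-level within the remaining set
    when its reachability set is contained in its antecedent set."""
    remaining = set(reach)
    levels = []
    while remaining:
        level = set()
        for f in remaining:
            reach_set = reach[f] & remaining
            antecedents = {g for g in remaining if f in reach[g]}
            if reach_set <= antecedents:
                level.add(f)
        levels.append(sorted(level))
        remaining -= level
    return levels

# Level 1 is the most dependent factor; the last level holds the drivers.
levels = level_partition(reachability(adjacency))
print(levels)
```

With these illustrative links, "usefulness" emerges as the most dependent outcome and "accuracy" and "transparency" as the driving factors, mirroring how an actualization layer can underpin an affordance layer in the reported model.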
We therefore propose the following propositions for designing explainable artificial intelligence systems for human-artificial intelligence interaction:
1) the understanding capability of the intelligent agent in human-artificial intelligence interaction leads to intelligence reinforcement;
2) the governance capability of the intelligent agent in human-artificial intelligence interaction leads to intelligence reinforcement;
3) the persuasion capability of the intelligent agent in human-artificial intelligence interaction leads to intelligence reinforcement;
4) the transparency of the intelligent agent in human-artificial intelligence interaction leads to intelligence reinforcement; and
5) the predictive accuracy of the intelligent agent in human-artificial intelligence interaction leads to intelligence reinforcement.
Finally, with respect to indiscernibility, the findings emphasize explainable algorithmic activities and increasing users' understanding and persuasion capabilities through algorithmic transparency. Indiscernibility is difficult to assess: an algorithm-based decision-making process is understandable to some users and not to others. The design requirements of this research are therefore a practical guide for clarifying algorithmic activity in policy making, according to user understanding and the purpose for which artificial intelligence is used.

Conclusion
We argued that users and artifacts create affordances in each other that lead to learning. Accordingly, the following can be considered the theoretical contributions of this research. First, by developing a theoretical model, we established the mechanism of designing systems based on human-artificial intelligence interaction.
In addition, from a human-oriented perspective, we identified the characteristics users need for enhancing intelligence: understanding capability, governance capability, and persuasion capability. Second, the model provides knowledge about the solution space; in other words, the intelligent agent actualizes the user's affordances through predictive accuracy and transparency. Third, we provide sets of requirements for implementing human-artificial intelligence systems. Assembled in a theoretical model, these requirements constitute a guide for the principles of designing artificial intelligence systems in organizations, which has been a main concern of prior research. Finally, we showed that the design of human-artificial intelligence interaction systems for decision making is not purely technical: technical, social, and organizational elements are intertwined in different cycles, corresponding to three related and interdependent aspects of AI management, namely automation, learning, and indiscernibility. In terms of automation, the findings showed that policy-making functions cannot be fully coded; as a result, AI automation is constraining, and AI should be a tool for augmentation. On the other hand, augmentation can itself be automated, so augmentation can become automatic over time. Accordingly, artificial intelligence tools can automate policy-making processes through the transparency and predictive accuracy needed for policy makers' affordances, i.e., the capabilities of understanding, governance, and persuasion. In terms of learning, the findings emphasize the capacity of machine learning algorithms for semantic search to increase the accuracy of artificial intelligence predictions.
