
Journal of Information Processing and Management (formerly Information Sciences and Technology)
Volume 40, Winter 1403 (2024–25), English Special Issue 3 (Serial No. 122)
Articles
Data is considered the most crucial element in open banking processes and services, so attention to the various aspects of its quality is necessary in order to provide customers with appropriate and expected services. In this research, the dimensions representing different aspects of data quality were investigated in the field of open banking. The research was conducted in two main steps: the Delphi method and the pairwise comparisons method. In the first step, the dimensions of data quality in open banking were extracted using the Delphi method. In the next step, the importance of each dimension relative to the others was assessed using the pairwise comparisons method, and the most crucial dimensions were determined. Based on the results of these two methods, the significance of eleven dimensions of data quality in this field was determined. The highest overall weighted averages belonged to accuracy, accessibility, relevancy, timeliness, consistency, security, interpretability, reputation, believability, ease of understanding, and value-added, in that order. Banks and fintech companies offering open banking services can consider these dimensions when evaluating the quality of their data in order to deliver superior services.
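The abstract does not reproduce the comparison data, but the pairwise-comparisons step can be illustrated in a few lines. The Python sketch below derives priority weights from a hypothetical reciprocal comparison matrix using the geometric-mean method; the matrix values and the four dimension names are illustrative assumptions, not the study's data.

```python
import numpy as np

# Hypothetical 4x4 reciprocal pairwise-comparison matrix for four of the
# eleven data-quality dimensions (illustrative values, not the study's data).
dimensions = ["accuracy", "accessibility", "relevancy", "timeliness"]
A = np.array([
    [1.0, 2.0, 3.0, 4.0],
    [1/2, 1.0, 2.0, 3.0],
    [1/3, 1/2, 1.0, 2.0],
    [1/4, 1/3, 1/2, 1.0],
])

# Geometric-mean method: the weight of each dimension is the geometric mean
# of its row in the comparison matrix, normalized so the weights sum to 1.
row_gm = A.prod(axis=1) ** (1.0 / A.shape[1])
weights = row_gm / row_gm.sum()

for name, w in sorted(zip(dimensions, weights), key=lambda t: -t[1]):
    print(f"{name:15s} {w:.3f}")
```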
Improving the Quality of Business Process Event Logs Using an Unsupervised Method (Ministry of Science research article)
In the contemporary dynamic business environment, the dependability of process mining algorithms is closely tied to the quality of event logs, which are often marred by data challenges stemming from human involvement in business processes. This study introduces an approach that combines insights from prior works with unsupervised techniques, specifically Principal Component Analysis (PCA), to improve the precision and reliability of event log representations. Implemented in Python with the pm4py library, the methodology is applied to real event logs. The adoption of Petri nets for process representation aligns with systematic approaches advocated by earlier studies, enhancing transparency and interpretability. Results demonstrate the method's efficacy through improved metrics such as fitness, precision, and F-measure, accompanied by visualizations that identify the optimal number of principal components. The study offers a comprehensive and practical solution that bridges gaps in existing methodologies, and its integration of multiple strategies, particularly PCA, shows versatility in optimizing process mining analyses. The consistent improvements observed underscore the method's potential across diverse business contexts, making it accessible and pertinent for practitioners working with real-world business processes. Overall, this research contributes an innovative approach to improving event log quality, advancing the field of process mining with practical implications for organizational decision-making and process optimization.
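The abstract names the ingredients (pm4py, PCA, Petri nets, fitness/precision/F-measure) but not the exact pipeline. The sketch below is one plausible reading, not the authors' implementation: each trace is encoded as an activity-count vector, PCA reconstruction error flags outlier traces, and a Petri net is rediscovered from the cleaned log. The file name, component count, and percentile cutoff are assumptions.

```python
import numpy as np
import pm4py
from pm4py.objects.log.obj import EventLog
from sklearn.decomposition import PCA

# Load an event log (file name is a placeholder) as a trace-based log object
# so each trace can be encoded as a vector.
log = pm4py.convert_to_event_log(pm4py.read_xes("event_log.xes"))

# Encode every trace as a vector of activity counts (one column per activity).
activities = sorted({e["concept:name"] for trace in log for e in trace})
col = {a: i for i, a in enumerate(activities)}
X = np.zeros((len(log), len(activities)))
for row, trace in enumerate(log):
    for event in trace:
        X[row, col[event["concept:name"]]] += 1

# Flag traces whose PCA reconstruction error is unusually high; the component
# count and the 95th-percentile cutoff are assumptions, not the paper's values.
pca = PCA(n_components=min(5, X.shape[0], X.shape[1]))
err = np.linalg.norm(X - pca.inverse_transform(pca.fit_transform(X)), axis=1)
clean = EventLog([t for t, e in zip(log, err) if e <= np.percentile(err, 95)])

# Rediscover a Petri net from the cleaned log and recompute the metrics.
net, im, fm = pm4py.discover_petri_net_inductive(clean)
fit = pm4py.fitness_token_based_replay(clean, net, im, fm)["log_fitness"]
prec = pm4py.precision_token_based_replay(clean, net, im, fm)
print(f"fitness={fit:.3f} precision={prec:.3f} f-measure={2*fit*prec/(fit+prec):.3f}")
```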
Decoding DQM for Experimental Insights on Data Quality Metadata’s Impact on Decision-Making Process Efficacy (Ministry of Science research article)
Decision-making processes are significantly influenced by a myriad of factors, with data quality emerging as a crucial determinant. Despite widespread awareness of the detrimental effects of poor-quality data on decisions, organizations struggle with persistent challenges because of the sheer volume of data within their systems. Existing literature advocates providing Data Quality Metadata (DQM) to communicate data quality levels to decision-makers. However, concerns that DQM may induce cognitive overload, hinder decision-makers, and negatively impact outcomes remain. To address this concern, we conducted an experimental exploration of the impact of DQM on decision outcomes. Our study aimed to identify the specific groups of decision-makers who benefit from DQM and to uncover the factors influencing its usage. Statistical analyses revealed that decision-makers with a heightened awareness of data quality made better use of DQM, leading to increased decision accuracy. Nevertheless, a trade-off was observed: the efficiency of decision-makers suffered when employing DQM. We propose that the positive impact of incorporating DQM on decision outcomes is contingent on characteristics such as a high level of knowledge about data quality, while acknowledging that the mechanism behind this positive impact needs to be explained more transparently and thoroughly. Our findings caution against a blanket inclusion of DQM in data warehouses, emphasizing the need for tailored investigations into its utility and impact within specific organizational settings.
Performance Evaluation and Accuracy Improvement in Individual Record Linking Problems Using the Decision Tree Algorithm in Machine Learning (Ministry of Science research article)
Record linkage is vital for consolidating data from different sources, particularly for Persian records, where diverse data structures and formats present challenges. To tackle these complexities, an expert system with decision tree algorithms is crucial for ensuring precise record linkage and data aggregation. By incorporating decision trees into an expert system framework, adaptation operations are created from predefined rules, simplifying the aggregation of disparate data sources. This method surpasses traditional approaches such as hand-written IF-THEN rules in effectiveness and ease of use, and its intuitive nature improves accessibility for non-technical users. Integrating probabilistic record linkage results into the decision tree model within the expert system automates the linkage process and allows users to customize string metrics and thresholds for optimal outcomes. The model's accuracy of over 95% on test data highlights its effectiveness in predicting and adjusting to data variations, confirming its reliability across record linkage scenarios. Combining machine learning decision trees with probabilistic record linkage in an expert system represents a significant advancement in the field, providing a robust solution for data aggregation in intricate environments and large-scale projects involving Persian records. This approach not only streamlines the consolidation of diverse data sources but also enhances the accuracy and efficiency of record linkage operations. By leveraging machine learning techniques and automated decision-making, organizations can achieve significant improvements in data quality and consistency, paving the way for more reliable and insightful analytical results when implementing statistical registers.
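The abstract does not list the features or rules, so the sketch below only illustrates the general pattern it describes: string-similarity features computed for candidate record pairs feed a decision tree classifier that predicts match or non-match. The transliterated names, fields, and training pairs are invented for illustration, and Python's standard difflib stands in for whatever configurable string metrics the system actually exposes.

```python
from difflib import SequenceMatcher
from sklearn.tree import DecisionTreeClassifier

def sim(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]; a stand-in for the
    customizable string metrics mentioned in the abstract."""
    return SequenceMatcher(None, a, b).ratio()

def features(rec1, rec2):
    # One similarity score per compared field (name, father's name, birth year).
    return [sim(rec1[0], rec2[0]), sim(rec1[1], rec2[1]), sim(rec1[2], rec2[2])]

# Tiny illustrative training set of labeled record pairs (1 = same person).
pairs = [
    (("Ali Rezaei", "Hassan", "1360"), ("Ali Rezaee", "Hasan", "1360"), 1),
    (("Ali Rezaei", "Hassan", "1360"), ("Reza Karimi", "Mahdi", "1355"), 0),
    (("Sara Ahmadi", "Majid", "1372"), ("Sara Ahmady", "Majid", "1372"), 1),
    (("Sara Ahmadi", "Majid", "1372"), ("Sara Moradi", "Akbar", "1370"), 0),
]
X = [features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]

# A shallow tree keeps the learned linkage rules inspectable, which is the
# advantage over hand-written IF-THEN rules noted in the abstract.
clf = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

candidate = (("Ali Rezaei", "Hassan", "1360"), ("Ali Rezaie", "Hassan", "1360"))
print("match" if clf.predict([features(*candidate)])[0] else "non-match")
```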
Information Quality and SMEs’ Innovative Performance: The Role of Knowledge Sharing and Business Process Management (Ministry of Science research article)
In the current business environment and the post-corona era, quality information builds the foundation of knowledge sharing and facilitates business process management, and both play a fundamental role in the innovative performance of small and medium enterprises (SMEs). The purpose of this research is therefore to investigate the effect of information quality on the innovative performance of SMEs, considering the mediating roles of knowledge sharing and business process management. The research is applied in purpose and descriptive-survey in method. The statistical population consists of 460 SMEs in Kerman, from which 210 companies were selected as the sample based on Morgan's table, using random sampling. The research instrument is a questionnaire whose validity and reliability were confirmed; in total, 420 questionnaires were distributed among managers and vice presidents of the sampled companies. The data were analyzed with SPSS 26 and SmartPLS 3. The results indicate that knowledge sharing and business process management mediate the effect of information quality on the innovative performance of SMEs; that information quality affects knowledge sharing, business process management, and innovative performance; and that knowledge sharing and business process management affect innovative performance. SMEs should therefore continuously improve the quality of company information and, through knowledge sharing and business process management, use this information to improve innovative performance.
Quality Metrics for Business Process Event Logs Based on High-Frequency Traces (Ministry of Science research article)
In today’s data-centric business landscape, characterized by the omnipresence of advanced Business Intelligence and Data Science technologies, Process Mining takes center stage in Business Process Management. This study addresses the critical challenge of ensuring the quality of event logs, the foundational data source for Process Mining. Event logs, derived from interactions among process participants and information systems, offer profound insights into the authentic behavior of business processes, reflecting organizational rules, procedures, norms, and culture. However, the quality of these logs is often compromised by interactions among various actors and systems. In response, our research introduces a systematic approach that leverages Python and the pm4py library for data analysis; we employ trace filtering techniques and utilize Petri nets for process model representation. The proposed methodology demonstrates a significant improvement in the quality metrics of extracted subprocesses through trace filtering: comparative analyses between the original and filtered logs show enhancements in fitness, precision, generalization, and simplicity, underlining the practical importance of trace filtering in refining complex process models. These findings offer practical insights for practitioners and researchers involved in process mining and modeling, highlighting the significance of data quality in obtaining precise and dependable business process insights.
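As a rough sketch of the kind of pipeline the abstract describes (not the authors' code), the snippet below keeps only the most frequent trace variants with pm4py and recomputes the four quality metrics before and after filtering; the file name and the top-k cutoff are assumptions.

```python
import pm4py
from pm4py.algo.evaluation.generalization import algorithm as generalization
from pm4py.algo.evaluation.simplicity import algorithm as simplicity

# Load a log (file name is a placeholder) and keep only its most frequent
# trace variants; k = 10 is an assumed cutoff for "high-frequency traces".
log = pm4py.convert_to_event_log(pm4py.read_xes("event_log.xes"))
filtered = pm4py.filter_variants_top_k(log, 10)

def evaluate(event_log):
    """Discover a Petri net and report the four metrics from the paper."""
    net, im, fm = pm4py.discover_petri_net_inductive(event_log)
    return {
        "fitness": pm4py.fitness_token_based_replay(event_log, net, im, fm)["log_fitness"],
        "precision": pm4py.precision_token_based_replay(event_log, net, im, fm),
        "generalization": generalization.apply(event_log, net, im, fm),
        "simplicity": simplicity.apply(net),
    }

print("original:", evaluate(log))
print("filtered:", evaluate(filtered))
```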
Comparative Analysis of the Evaluation of Business Process Management Regarding Digital Transformation Using the CoCoSo Method (Ministry of Science research article)
Nowadays, business process management (BPM) contributes to the success of companies by ensuring that their processes are both effective and efficient. A comprehensive description of a business process can serve as a foundation for designing IT systems, ensuring data quality, establishing performance metrics, and implementing processes using Business Process Management Systems (BPMS), among other applications. Many Iranian companies are currently interested in evaluating their BPM practices. In recent decades, advances in the digital realm have become increasingly vital for companies, making it essential to use these developments effectively to shape business processes. Consequently, the current research ranks BPM measurement methods within the context of digital transformation, employing the CoCoSo multi-criteria decision-making technique, for companies in Semnan Industrial Town. BPM measurement methods, together with measurement criteria reflecting digital transformation and data quality, are derived from a review of the research literature; an appropriate BPM evaluation method is then identified using a multi-criteria decision-making approach. The results of this ranking indicate that BPM measurement models grounded in total quality management are the most appropriate, and a sensitivity analysis has been conducted to validate these findings.
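The paper's decision matrix and weights are not given in the abstract; the sketch below implements the standard CoCoSo (Combined Compromise Solution) aggregation steps on an illustrative matrix: min-max normalization, weighted-sum and weighted-power measures, three appraisal scores, and a final compromise score. The alternatives, criteria values, and weights are all assumptions.

```python
import numpy as np

# Illustrative decision matrix: rows = candidate BPM measurement models,
# columns = evaluation criteria (values and weights are assumptions).
X = np.array([
    [7.0, 6.0, 8.0],
    [8.0, 5.0, 6.0],
    [6.0, 8.0, 7.0],
])
weights = np.array([0.5, 0.3, 0.2])
benefit = np.array([True, True, True])  # all criteria treated as benefit-type

# 1) Min-max normalization (direction flipped for cost criteria).
lo, hi = X.min(axis=0), X.max(axis=0)
R = np.where(benefit, (X - lo) / (hi - lo), (hi - X) / (hi - lo))

# 2) Weighted-sum (S) and weighted-power (P) comparability sequences.
S = (R * weights).sum(axis=1)
P = (R ** weights).sum(axis=1)

# 3) Three appraisal scores and the final compromise score (lambda = 0.5).
lam = 0.5
ka = (S + P) / (S + P).sum()
kb = S / S.min() + P / P.min()
kc = (lam * S + (1 - lam) * P) / (lam * S.max() + (1 - lam) * P.max())
k = (ka * kb * kc) ** (1 / 3) + (ka + kb + kc) / 3

for rank, i in enumerate(np.argsort(-k), start=1):
    print(f"rank {rank}: alternative {i + 1} (k = {k[i]:.3f})")
```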
Revolutionizing Molten Gold Ownership Transfer: Unlocking the Power of Blockchain Technology (Ministry of Science research article)
Molten gold ownership transfer holds both historical and economic significance, serving as a crucial aspect of financial and wealth management practices. Traditional systems for transferring ownership of molten gold are often inefficient, susceptible to fraud, and lack transparency. In contrast, blockchain technology, with its decentralized, immutable, and transparent characteristics, presents a promising solution to these challenges. This paper explores the transformative potential of blockchain technology in revolutionizing the transfer of molten gold ownership. Utilizing blockchain for this purpose provides a secure and transparent method to track and verify ownership of gold assets. The proposed model facilitates the creation of digital tokens that represent physical gold, which can then be exchanged on a blockchain platform. By highlighting the transformative potential of blockchain in molten gold ownership transfer, this paper contributes to the ongoing discourse at the intersection of blockchain technology and asset management, paving the way for a more efficient, secure, transparent, and decentralized gold market.
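The paper proposes a full blockchain platform; the toy Python sketch below only illustrates the core idea it relies on, an append-only, hash-linked ledger of token transfers, and deliberately omits consensus, digital signatures, and smart contracts. The token IDs, parties, and quantities are invented for illustration.

```python
import hashlib
import json
import time

class GoldLedger:
    """Toy append-only ledger: each block records one ownership transfer of
    a token standing for a fixed quantity of molten gold."""

    def __init__(self):
        self.chain = [self._block({"token": "GENESIS", "to": "mint"}, "0" * 64)]

    def _block(self, transfer, prev_hash):
        block = {"time": time.time(), "transfer": transfer, "prev": prev_hash}
        block["hash"] = hashlib.sha256(
            json.dumps(block, sort_keys=True).encode()
        ).hexdigest()
        return block

    def transfer(self, token_id, sender, receiver, grams):
        tx = {"token": token_id, "from": sender, "to": receiver, "grams": grams}
        self.chain.append(self._block(tx, self.chain[-1]["hash"]))

    def verify(self):
        # Recompute every hash; editing any historical transfer breaks the chain.
        for prev, block in zip(self.chain, self.chain[1:]):
            body = {k: block[k] for k in ("time", "transfer", "prev")}
            expected = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if block["prev"] != prev["hash"] or block["hash"] != expected:
                return False
        return True

ledger = GoldLedger()
ledger.transfer("AU-001", "refinery", "dealer", 100.0)
ledger.transfer("AU-001", "dealer", "investor", 100.0)
print("chain valid:", ledger.verify())
```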
A Review of QoS-Driven Task Scheduling Algorithms and Their Impact on Data Quality in Process Management (Ministry of Science research article)
Cloud computing (CC) has been extensively studied and widely utilized by major corporations since its inception, and various research topics within it have been explored, including resource management, cloud security, and energy efficiency. This paper examines the intersection of data quality and business process management in the cloud context; specifically, it considers how Quality of Service (QoS)-driven task scheduling algorithms in cloud environments can enhance data quality and optimize business processes. Cloud computing still faces the significant challenge of determining the most effective way to schedule tasks and manage available resources, and the scale and dynamic resource provisioning of modern data centers demand effective scheduling strategies. This work provides an overview of the task scheduling methods that have been applied in cloud computing to date: current methods are categorized, issues are investigated, and the important open challenges in this area are identified. Our data reveal that 34% of researchers focus on makespan as the QoS metric, 17% on cost, 15% on load balancing, 10% on deadline, and 9% on energy usage; the remaining QoS criteria contribute far less. The scheduling algorithms most commonly used are the genetic algorithm (from bio-inspired computing) and particle swarm optimization (from swarm intelligence), which together appear in 80% of the studies, and 70% of the studies use CloudSim as their simulation tool. Ongoing work emphasizes refining scheduling strategies to improve resource management in dynamic data center environments, providing crucial insights for future QoS-driven scheduling algorithms in cloud computing.
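Since makespan dominates the surveyed QoS metrics, a concrete baseline helps fix the vocabulary. The sketch below implements a simple greedy earliest-completion-time heuristic for assigning independent tasks to VMs, the kind of baseline the GA and PSO schedulers in the surveyed papers aim to improve on; the task lengths and VM speeds are illustrative.

```python
# Greedy earliest-completion-time assignment of independent tasks to VMs,
# minimizing makespan (the QoS metric that 34% of the surveyed papers target).
tasks = [400, 250, 900, 120, 600, 300]   # task lengths (MI), illustrative
vm_mips = [100, 200, 150]                 # VM speeds (MIPS), illustrative

finish = [0.0] * len(vm_mips)  # current ready time of each VM
plan = []
for length in sorted(tasks, reverse=True):  # place the longest tasks first
    # Pick the VM on which this task would complete earliest.
    vm = min(range(len(vm_mips)), key=lambda v: finish[v] + length / vm_mips[v])
    finish[vm] += length / vm_mips[vm]
    plan.append((length, vm))

print("assignment (task length, vm):", plan)
print(f"makespan = {max(finish):.2f} s")
```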
The Quality of Decision-Making for Vocational School Principals: An Information System and Decision Support System Technology (Ministry of Science research article)
Principals sometimes make decisions that cannot solve the problem at hand because they do not adequately study the factors that influence and hinder implementation before deciding. Decision quality is low because of the lack of available data or current information related to planning parameters. The unavailability of a decision support system, as part of an education management information system that turns data into information for school principals, is a further obstacle to decision-making quality. The main objective of this research is to propose a decision-making model for school principals. This is a quantitative study with a questionnaire as the research instrument; the sample consisted of heads of vocational schools in the Special Region of Yogyakarta and Central Java. Data were analyzed using Structural Equation Modeling (SEM) with SmartPLS software. The results show that the information system and the decision support system must both be in place to support practical decision-making quality, and that the two systems increase the ability to select among alternatives and to analyze problem-solving. Future research may develop an Android-based principal's decision support system.
Comparative Assessment of Data Quality Dimensions in the Scientific Multimedia Indexing Process (Ministry of Science research article)
Organizing a large volume of scientific multimedia data requires appropriate indexing methods as one of the processes of information organization. Appropriate methods and algorithms are those that improve the various aspects of quality in organizing and retrieving information. For this reason, the purpose of this research is to identify the most important dimensions of data quality in scientific multimedia indexing. To achieve this goal, the different dimensions of data quality were compared on several criteria, and the most important dimensions were identified using the Shannon entropy weighting approach and the TOPSIS group ranking method. In addition, a correlation matrix was used to evaluate the strength and direction of the relationships between the different dimensions of data quality. Based on the results of the first part of the research, the best ranks (priorities) belonged to the data quality dimensions of recall, precision, completeness, appropriate amount of data, accuracy, relevancy, concise 1, consistency, concise 2, interpretability, value-added, and accessibility, respectively. The results of the second part showed that the dimensions of interpretability and relevancy had the highest correlation with the most important dimensions, i.e., recall and precision. As one implication of this research, indexing methods for scientific multimedia data can be measured and evaluated based on the different aspects of data quality and their importance.
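The abstract's two computational steps, Shannon entropy weighting and TOPSIS ranking, can be sketched compactly. The snippet below applies both to an illustrative decision matrix (rows as indexing methods, columns as quality dimensions); the values are assumptions, and this plain TOPSIS omits the group-aggregation step the study mentions.

```python
import numpy as np

# Illustrative decision matrix: rows = indexing methods, columns = data
# quality dimensions (e.g. recall, precision, completeness); values assumed.
X = np.array([
    [0.90, 0.80, 0.70],
    [0.75, 0.85, 0.90],
    [0.80, 0.70, 0.60],
])

# --- Shannon entropy weighting ---
P = X / X.sum(axis=0)                               # column-wise proportions
E = -(P * np.log(P)).sum(axis=0) / np.log(len(X))   # entropy per criterion
w = (1 - E) / (1 - E).sum()                         # higher divergence -> larger weight

# --- TOPSIS ranking ---
V = w * X / np.linalg.norm(X, axis=0)       # weighted normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)  # all criteria benefit-type here
d_pos = np.linalg.norm(V - ideal, axis=1)   # distance to the ideal solution
d_neg = np.linalg.norm(V - anti, axis=1)    # distance to the anti-ideal solution
closeness = d_neg / (d_pos + d_neg)

for rank, i in enumerate(np.argsort(-closeness), start=1):
    print(f"rank {rank}: method {i + 1} (closeness = {closeness[i]:.3f})")
```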