1.
Over the past few years, patent applications have increased significantly, placing a heavier workload on examination offices during the examination and prosecution of these inventions. To perform this legal process adequately, examiners must analyze patents thoroughly, manually identifying semantic information such as problem descriptions and solutions. Manual annotation is both tedious and time-consuming. To address this issue, we introduce a deep ensemble model for paragraph-level classification based on the semantic content of patents. Specifically, the proposed model classifies paragraphs into semantic categories to facilitate the annotation process. It employs stacked generalization as an ensemble method for combining several deep models: Long Short-Term Memory (LSTM), bidirectional LSTM (BiLSTM), Convolutional Neural Network (CNN), and Gated Recurrent Unit (GRU) networks, as well as the pre-trained BERT model. We compared the proposed model with several baseline and state-of-the-art deep models on the PaSA dataset, which contains 150,000 USPTO patents labeled with three classes: 'technical advantages', 'technical problems', and 'other boilerplate text'. The results of extensive experiments show that the proposed model significantly outperforms both traditional and state-of-the-art deep models.
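The stacked-generalization idea above can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: lightweight scikit-learn classifiers stand in for the deep base learners (LSTM, BiLSTM, CNN, GRU, BERT), and the toy paragraphs and three labels are invented for the example.

```python
# Minimal sketch of stacked generalization for 3-class paragraph
# classification. Simple TF-IDF models stand in for the deep learners.
from sklearn.ensemble import StackingClassifier
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

paragraphs = [
    "the invention reduces power consumption significantly",
    "prior art devices suffer from overheating problems",
    "fig 1 shows a schematic view of the device",
    "improved durability is achieved by the coating",
    "existing methods fail under high load conditions",
    "the drawings are described in detail below",
] * 5
labels = ["advantage", "problem", "other"] * 10

# Each base learner scores the paragraphs; a logistic-regression
# meta-learner combines their outputs (stacked generalization).
base_learners = [
    ("svm", make_pipeline(TfidfVectorizer(), LinearSVC())),
    ("nb", make_pipeline(TfidfVectorizer(), MultinomialNB())),
]
stack = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(),
    cv=3,  # out-of-fold predictions train the meta-learner
)
stack.fit(paragraphs, labels)
preds = stack.predict(paragraphs)
```

The key design point is that the meta-learner is trained on out-of-fold predictions (`cv=3`), so it learns how to weight base models without seeing their in-sample overfitting.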
2.
Cloud computing has emerged as a pivotal technology for managing and processing data, with the primary objective of offering efficient resource access while minimizing expenses. Resource allocation is a critical aspect that can significantly reduce costs; it requires continuously assessing the current status of each resource so that allocation algorithms can optimize assignments and enhance overall system performance. Numerous algorithms have been developed for resource allocation, yet many fail to satisfy the time-efficiency and load-balancing requirements of cloud computing environments. This paper introduces a novel approach that classifies tasks according to their resource demands, employs a modified particle swarm optimization (PSO) algorithm, and incorporates load-balancing strategies. The proposed method first clusters tasks based on their resource requirements, then uses the PSO algorithm to determine the best task-to-resource assignments, and finally applies a load-balancing algorithm to reduce costs through balanced load distribution. The proposed method is evaluated through simulation with the CloudSim toolkit. The simulation results indicate that it achieves lower average response time, waiting time, and energy consumption than existing baseline methods.
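The PSO step can be illustrated with a compact sketch. This is a generic discrete PSO for task-to-VM assignment, not the paper's modified variant: the task lengths, VM speeds, and the makespan fitness (whose minimization implicitly balances load) are all hypothetical choices for the example.

```python
# Minimal sketch: PSO assigns tasks to VMs, minimizing makespan.
import random

task_lengths = [400, 250, 310, 120, 500, 330, 220, 180]  # hypothetical MI
vm_speeds = [1000, 1500, 2000]                            # hypothetical MIPS

def decode(position):
    # Map each continuous coordinate to a VM index.
    return [int(x) % len(vm_speeds) for x in position]

def makespan(assignment):
    # Completion time of the most loaded VM (lower = better balanced).
    loads = [0.0] * len(vm_speeds)
    for length, vm in zip(task_lengths, assignment):
        loads[vm] += length / vm_speeds[vm]
    return max(loads)

def pso(n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    random.seed(0)
    dim, n_vm = len(task_lengths), len(vm_speeds)
    pos = [[random.uniform(0, n_vm) for _ in range(dim)]
           for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_fit = [makespan(decode(p)) for p in pos]
    g = pbest_fit.index(min(pbest_fit))
    gbest, gbest_fit = pbest[g][:], pbest_fit[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Standard velocity update: inertia + cognitive + social.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fit = makespan(decode(pos[i]))
            if fit < pbest_fit[i]:
                pbest[i], pbest_fit[i] = pos[i][:], fit
                if fit < gbest_fit:
                    gbest, gbest_fit = pos[i][:], fit
    return decode(gbest), gbest_fit

best_assignment, best_makespan = pso()
```

The continuous-position-plus-decoder trick keeps the classic PSO update rule intact while still producing a discrete assignment.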
3.
A social network consists of individuals and the relationships between them, through which individuals often influence one another. This influence can propagate behaviors or ideas through the network, a phenomenon known as influence propagation, which is crucial in applications such as advertising, marketing, and public health. The influence maximization (IM) problem aims to identify key individuals in a social network who, when influenced, maximize the spread of a behavior or idea. Given the NP-hard nature of IM, non-exact algorithms, especially metaheuristics, are commonly used. However, traditional metaheuristics such as variable neighborhood search (VNS) struggle with large networks because of their vast solution spaces. This paper introduces DQVNS (Deep Q-learning Variable Neighborhood Search), which integrates VNS with deep reinforcement learning (DRL) to improve the selection of neighborhood structures in VNS. DQVNS aims to match the performance of population-based algorithms while exploiting the information generated step by step during the algorithm's execution. This adaptive approach helps the VNS algorithm choose the most suitable neighborhood structure in each situation and find better solutions for the IM problem. Across various datasets, DQVNS achieves a 63% improvement over population-based algorithms. Experiments on real-world social networks of varying sizes demonstrate its superiority over existing metaheuristics, IM-specific algorithms, and network-specific measures.
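The interaction between the learner and VNS can be sketched as follows. This is a heavily simplified illustration of the idea, not DQVNS itself: tabular Q-learning stands in for the deep Q-network, one-hop neighborhood coverage stands in for full influence spread, and the toy graph, the swap-based neighborhood structures, and all parameter values are invented for the example.

```python
# Sketch: Q-learning picks which neighborhood structure N_k the
# VNS-style search applies next for influence maximization.
import random

# Toy undirected graph as adjacency lists.
graph = {0: [1, 2, 3], 1: [0, 2], 2: [0, 1, 4], 3: [0, 5],
         4: [2, 5, 6], 5: [3, 4], 6: [4]}
K = 2  # seed-set size

def coverage(seeds):
    # One-hop coverage as a cheap proxy for influence spread.
    covered = set(seeds)
    for s in seeds:
        covered.update(graph[s])
    return len(covered)

def swap_neighbor(seeds, k):
    # Neighborhood structure N_k: swap k seeds for k outside nodes.
    seeds = list(seeds)
    outside = random.sample([v for v in graph if v not in seeds], k)
    for i, v in zip(random.sample(range(len(seeds)), k), outside):
        seeds[i] = v
    return seeds

def dqvns_sketch(iters=200, alpha=0.3, gamma=0.9, eps=0.2):
    random.seed(1)
    # State: did the last move improve? Action: which N_k to use.
    Q = {(s, a): 0.0 for s in (0, 1) for a in (1, 2)}
    best = random.sample(list(graph), K)
    best_cov, state = coverage(best), 0
    for _ in range(iters):
        # Epsilon-greedy choice of neighborhood structure.
        if random.random() < eps:
            a = random.choice((1, 2))
        else:
            a = max((1, 2), key=lambda x: Q[(state, x)])
        cand = swap_neighbor(best, a)
        gain = coverage(cand) - best_cov
        nxt = 1 if gain > 0 else 0
        # Q-learning update with the spread gain as reward.
        Q[(state, a)] += alpha * (gain
                                  + gamma * max(Q[(nxt, b)] for b in (1, 2))
                                  - Q[(state, a)])
        if gain > 0:
            best, best_cov = cand, best_cov + gain
        state = nxt
    return best, best_cov

seeds, spread = dqvns_sketch()
```

The point of the sketch is the feedback loop: each shake's reward updates the value of the neighborhood structure that produced it, so the search adaptively favors structures that have been paying off.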
4.
Diabetes, a metabolic disorder, poses significant annual risks due to various factors and requires effective management strategies to prevent life-threatening complications. Classified into Type 1, Type 2, and gestational diabetes, its impact spans diverse demographics, with Type 2 diabetes being particularly concerning due to impaired cellular response to insulin. Early prediction is crucial for intervention and complication prevention. While machine learning and artificial intelligence show promise in predictive modeling for diabetes, difficulty interpreting the models hinders widespread adoption among physicians and patients: their complexity often raises doubts about their reliability and practical utility in clinical settings. Addressing these interpretability challenges is crucial to fully harnessing predictive analytics in diabetes management, leading to improved patient outcomes and reduced healthcare burdens. Previous research has applied various algorithms, such as Naïve Bayes, Support Vector Machines (SVM), K-Nearest Neighbors (KNN), and decision trees, to patient classification. In this study on the Pima dataset, we applied a preprocessing step that retained the most important features identified by the Random Forest algorithm, and we built the model as an ensemble combining SVM and Naïve Bayes. The first section of the proposed method describes the dataset, the second details all preprocessing steps applied to it, and the third evaluates the model built with the selected algorithms. After these stages, the proposed model achieved an accuracy of 81.82%, a precision of 82.34%, an AUC of 88.19%, and a recall of 70.68%. Compared with similar studies, the 3.99% improvement in accuracy represents a significant advancement and highlights the benefits of traditional methods in disease prediction.
These findings suggest the potential of web-based applications to support both physicians and patients in diabetes prediction efforts.
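The pipeline described above can be sketched in scikit-learn. This is an illustrative outline, not the study's code: synthetic data stands in for the Pima dataset (matching its eight features), and the particular selection threshold and voting scheme are assumptions.

```python
# Sketch: Random Forest ranks features, the retained ones feed a
# soft-voting ensemble of SVM and Naive Bayes.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic binary-outcome data with 8 features, like Pima.
X, y = make_classification(n_samples=400, n_features=8,
                           n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = make_pipeline(
    # Keep features whose Random Forest importance is above average.
    SelectFromModel(RandomForestClassifier(n_estimators=100,
                                           random_state=0)),
    StandardScaler(),
    # Soft voting averages the two models' class probabilities.
    VotingClassifier(
        estimators=[("svm", SVC(probability=True, random_state=0)),
                    ("nb", GaussianNB())],
        voting="soft"),
)
model.fit(X_tr, y_tr)
acc = model.score(X_te, y_te)
```

Soft voting requires `probability=True` on the SVC so both members contribute calibrated-style probability estimates rather than hard labels.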
5.
This study investigates the potential of blockchain technology to support compliance with the General Data Protection Regulation (GDPR). By integrating blockchain's core features, such as transparency, immutability, and data encryption, with GDPR principles such as data minimization and accuracy, the research develops a comprehensive applicability model. This model serves as a reference framework for evaluating the alignment of blockchain systems with GDPR requirements. The study employs a meta-synthesis method and qualitative content analysis of 67 articles, culminating in a detailed examination of 31 selected articles. The findings reveal that blockchain technology can significantly enhance GDPR compliance, particularly in securing personal data and ensuring transparency. Importantly, the research introduces a novel model, validated by a panel of 13 experts, that identifies and prioritizes the key areas where blockchain can effectively support GDPR requirements. This model provides valuable insights for policymakers, industry leaders, and technology developers, emphasizing blockchain's strategic role in enhancing data protection under the GDPR.
6.
Thanks to the popularity of mobile phones and the internet, as well as the development of electronic devices and software, numerous benefits and facilities have emerged. However, misuse of these tools can lead to serious problems such as deliberate harm to others and cyber victimization. Drawing on Routine Activities Theory and using questionnaire techniques, this study analyzes the risk factors for and drivers of cyber victimization among citizens of the city of Kashan. According to the findings, there is a significant relationship between the level of cyber victimization and independent variables such as online protection, online proximity to motivated offenders, risky offline activities, attractive online targets, and a deviant lifestyle. Among these, online proximity to motivated offenders correlates with cyber victimization more strongly than any other independent variable (r = 0.505, p < 0.001). Path analysis further indicates that a linear combination of the model's independent variables explains 37% of the variance in cyber victimization.