Content related to the keyword

Explainable AI


1.

Artificial intelligence in credit risk assessment

Keywords: credit risk assessment, Artificial Intelligence, Machine Learning, Explainable AI, model interpretability, Financial Technology

Views: 42 | Downloads: 38
This study presents a structured literature review on the application of AI in credit risk assessment, synthesizing empirical and conceptual research published between 2016 and 2022. It critically examines a range of AI models, including artificial neural networks (ANN), support vector machines (SVM), fuzzy logic systems, and hybrid architectures, with an emphasis on their predictive accuracy, robustness, and operational applicability. The review highlights that AI-based models consistently outperform traditional statistical techniques in handling nonlinear patterns, imbalanced datasets, and complex borrower profiles. Furthermore, AI enhances the inclusivity of credit evaluation by integrating alternative data sources and adapting to dynamic financial environments. However, the study also identifies ongoing challenges related to model interpretability, fairness, and regulatory compliance. By evaluating model performance metrics and methodological innovations across multiple contexts—including emerging markets, peer-to-peer platforms, and digital banking—the study offers a nuanced understanding of AI's strengths and limitations. The paper concludes with a call for balanced integration of explainable AI tools and ethical governance to ensure responsible deployment in financial institutions.
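The review's central claim, that AI models outperform traditional statistical techniques on nonlinear borrower patterns, can be illustrated with a minimal sketch. Everything below (the synthetic features, the threshold-based default rule, the model choices) is a hypothetical illustration, not data or code from the reviewed studies:

```python
# Sketch: a traditional linear scorer vs. a nonlinear ensemble on
# synthetic "borrower" data where default follows a nonlinear rule.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
income = rng.normal(50, 15, n)       # hypothetical borrower features
debt_ratio = rng.uniform(0, 1, n)
# Nonlinear rule: default risk spikes only when debt is high AND income is low
default = ((debt_ratio > 0.6) & (income < 45)).astype(int)
X = np.column_stack([income, debt_ratio])

X_tr, X_te, y_tr, y_te = train_test_split(X, default, random_state=0)

linear = LogisticRegression().fit(X_tr, y_tr)
boosted = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)

acc_linear = accuracy_score(y_te, linear.predict(X_te))
acc_boosted = accuracy_score(y_te, boosted.predict(X_te))
print(f"logistic: {acc_linear:.3f}  boosting: {acc_boosted:.3f}")
```

The linear model cannot represent the AND-shaped risk region with a single boundary, while the tree ensemble recovers it almost exactly, which mirrors the pattern the review reports at scale.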
2.

Risk-Aware Suicide Detection in Social Media: A Domain-Guided Framework with Explainable LLMs (scientific research article, Ministry of Science)

Keywords: Suicide Risk Detection, Large Language Models, Social Media Analysis, Mental Health Monitoring, Explainable AI

Views: 28 | Downloads: 21
Nowadays, the close connection between people's lives and social media means that psychological and emotional states surface in social media posts. This digital footprint creates a rich, novel entry point for early detection of suicide risk. Accurately detecting suicidal ideation remains a significant challenge due to high false-negative rates and sensitivity to subtle linguistic features. Current AI-based suicide detection systems fail to capture these linguistic subtleties: they do not consider domain-specific indicators and they ignore the dynamic interaction of language, behaviour, and mental health. Identifying lexical and syntactic markers can serve as a powerful lens for detecting psychological distress. To address these issues, we propose a new domain-guided framework that integrates a specialized frequent-rare suicide vocabulary (FR-SL) into the fine-tuning process of large language models (LLMs). This vocabulary-aware strategy directs the model's attention to common and rare suicide-related phrases and enhances its ability to detect subtle signs of distress. Beyond improving performance across various metrics, the proposed framework adds interpretability, allowing the models' decisions to be understood and trusted while creating transparency. It also enables a structure that generalizes across linguistic and mental health domains. The proposed approach offers clear improvements over baseline methods, particularly in reducing false negatives and in providing interpretability through transparent attribution.
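The abstract does not specify how the FR-SL vocabulary is constructed, but the frequent-rare idea can be sketched as contrasting term frequencies between an at-risk corpus and a background corpus. The function name, toy corpora, smoothing constant, and thresholds below are all hypothetical illustrations of that idea, not the authors' method:

```python
# Hypothetical sketch of a frequent-rare domain lexicon in the spirit
# of FR-SL: keep terms over-represented in the at-risk corpus relative
# to a background corpus, then split them into a "frequent" tier and a
# "rare" tier by raw count.
from collections import Counter

def build_fr_lexicon(target_docs, background_docs, ratio=2.0, rare_max=2):
    tgt = Counter(w for d in target_docs for w in d.lower().split())
    bg = Counter(w for d in background_docs for w in d.lower().split())
    # +0.5 smoothing so terms unseen in the background are not divided by zero
    salient = {w for w, c in tgt.items() if c / (bg[w] + 0.5) >= ratio}
    frequent = {w for w in salient if tgt[w] > rare_max}
    rare = salient - frequent
    return frequent, rare

target = ["feel hopeless tonight", "no way out feel hopeless",
          "feel trapped and hopeless"]
background = ["great game tonight", "new recipe tonight"]

freq, rare = build_fr_lexicon(target, background)
print(freq, rare)
```

In the framework described above, such a lexicon would then steer the LLM's fine-tuning attention toward both the common and the rare distress-related phrases.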
3.

Explainable Diabetes Prediction via Hybrid Data Preprocessing and Ensemble Learning (scientific research article, Ministry of Science)

Keywords: Diabetes Prediction, Explainable AI, Ensemble Learning, LIME, SHAP, E-Health

Views: 26 | Downloads: 16
Accurate and early prediction of diabetes is crucial for initiating prompt treatment and minimizing the risk of long-term health issues. This study introduces a comprehensive machine learning model aimed at improving diabetes prediction by leveraging two clinical datasets: the PIMA Indians Diabetes Dataset and the Early-Stage Diabetes Dataset. The pipeline tackles common challenges in medical data, such as missing values, class imbalance, and feature relevance, through a series of advanced preprocessing steps, including class-specific imputation, engineered feature construction, and SMOTETomek resampling. To identify the most informative predictors, a hybrid feature selection strategy is employed, integrating recursive elimination, Random Forest-based importance, and gradient boosting. Model training uses Random Forest and Gradient Boosting classifiers, which are fine-tuned and combined through weighted ensemble averaging to boost predictive performance. The resulting model achieves 93.33% accuracy on the PIMA dataset and 98.44% accuracy on the Early-Stage dataset, outperforming previously reported approaches. To enhance transparency and clinical applicability, both local (LIME) and global (SHAP) explainability methods are applied, highlighting clinically relevant features. Furthermore, probability calibration is performed to ensure that predicted risk scores align with true outcome frequencies, increasing trust in the model’s use for clinical decision support. Overall, the proposed model offers a robust, interpretable, and clinically reliable solution for early-stage diabetes prediction.
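The weighted ensemble-averaging step described above can be sketched minimally, assuming the preprocessing stages (imputation, SMOTETomek resampling, hybrid feature selection) have already produced a clean feature matrix. Synthetic data stands in for the PIMA features, and the 0.6/0.4 weights are illustrative, not the paper's tuned values:

```python
# Sketch of a weighted soft-vote of Random Forest and Gradient Boosting:
# class probabilities from each model are averaged with fixed weights.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=600, n_features=8, n_informative=5,
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

rf = RandomForestClassifier(n_estimators=200, random_state=42).fit(X_tr, y_tr)
gb = GradientBoostingClassifier(random_state=42).fit(X_tr, y_tr)

# Weighted average of predicted class probabilities (soft voting)
w_rf, w_gb = 0.6, 0.4
proba = w_rf * rf.predict_proba(X_te) + w_gb * gb.predict_proba(X_te)
pred = proba.argmax(axis=1)
print("ensemble accuracy:", round(accuracy_score(y_te, pred), 3))
```

Because the weights sum to one, the averaged probabilities remain a valid distribution, which also matters for the probability-calibration step the study applies afterwards.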