Articles related to the keyword:

Convolutional neural networks


1.

A Deep Learning Based Analysis of the Big Five Personality Traits from Handwriting Samples Using Image Processing (scientific article, Ministry of Science)

Keywords: computer vision, Convolutional neural networks, Artificial Neural Networks, Machine Learning, Big Five Personality Traits, Handwriting, Graphology

Views: 390 | Downloads: 194
Handwriting analysis has long been used to assess an individual’s suitability for a job, and it has recently been gaining popularity as a valid means of evaluating a person. Extensive research has been done on determining personality traits from handwriting. We analyze an individual’s personality by breaking it down into the Big Five personality traits using their handwriting samples. We present a dataset that links personality traits to handwriting features. We then propose our algorithm, consisting of an ANN-based model and PersonaNet, a CNN-based model. The paper evaluates our algorithm’s performance against baseline machine learning models on our dataset. Testing our architecture on this dataset, we compare the approaches on various metrics and show that our algorithm outperforms the baseline machine learning models.
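The abstract does not publish PersonaNet's layer configuration, so as a purely illustrative sketch, the core CNN pipeline it describes (convolve a handwriting image, pool, then map flattened features to five trait scores) might look like the following minimal numpy forward pass; all shapes, the kernel, and the sigmoid output head are assumptions, not the paper's architecture:

```python
import numpy as np

def conv2d(x, k):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def max_pool(x, s=2):
    """Non-overlapping s*s max pooling."""
    H2, W2 = x.shape[0] // s, x.shape[1] // s
    return x[:H2 * s, :W2 * s].reshape(H2, s, W2, s).max(axis=(1, 3))

def trait_scores(img, kernel, w, b):
    """conv -> ReLU -> pool -> flatten -> sigmoid over 5 trait logits."""
    f = max_pool(np.maximum(conv2d(img, kernel), 0.0))
    z = f.ravel() @ w + b
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
img = rng.random((16, 16))                 # toy stand-in for a handwriting patch
kernel = rng.standard_normal((3, 3))
feat_dim = ((16 - 3 + 1) // 2) ** 2        # 14x14 conv map pooled to 7x7 = 49
w = rng.standard_normal((feat_dim, 5)) * 0.1
b = np.zeros(5)
scores = trait_scores(img, kernel, w, b)   # one score in (0, 1) per Big Five trait
print(scores.shape)
```

In a real system the kernel and dense weights would be learned; here they are random, so the sketch only demonstrates the shape of the computation, not a trained predictor.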
2.

VG-CGARN: Video Generation Using Convolutional Generative Adversarial and Recurrent Networks (scientific article, Ministry of Science)

Views: 10 | Downloads: 8
Generating dynamic videos from static images and accurately modeling object motion within scenes are fundamental challenges in computer vision, with broad applications in video enhancement, photo animation, and visual scene understanding. This paper proposes a novel hybrid framework that combines convolutional neural networks (CNNs), recurrent neural networks (RNNs) with long short-term memory (LSTM) units, and generative adversarial networks (GANs) to synthesize temporally consistent and spatially realistic video sequences from still images. The architecture incorporates splicing techniques, the Lucas-Kanade motion estimation algorithm, and a loop feedback mechanism to address key limitations of existing approaches, including motion instability, temporal noise, and degraded video quality over time. CNNs extract spatial features, LSTMs model temporal dynamics, and GANs enhance visual realism through adversarial training. Experimental results on the KTH dataset, comprising 600 videos of fundamental human actions, demonstrate that the proposed method achieves substantial improvements over baseline models, reaching a peak PSNR of 35.8 and an SSIM of 0.96, a 20% performance gain. The model generates high-quality, 10-second videos at a resolution of 720×1280 pixels with significantly reduced noise, confirming the effectiveness of the integrated splicing and feedback strategy for stable and coherent video generation.
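Of the components named in the abstract, the Lucas-Kanade step is the most self-contained: it linearizes brightness constancy, Ix·dx + Iy·dy + It = 0, and solves for motion by least squares. A minimal sketch, assuming a single global translation over the whole frame rather than the paper's full per-pixel/windowed variant, with synthetic Gaussian frames standing in for video data:

```python
import numpy as np

def lucas_kanade_translation(f1, f2):
    """Estimate one global (dx, dy) translation between two frames by
    least-squares on the brightness-constancy constraint
    Ix*dx + Iy*dy + It = 0, stacked over every pixel."""
    Iy, Ix = np.gradient(f1)          # np.gradient: d/drow (y) first, d/dcol (x) second
    It = f2 - f1                      # temporal derivative
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = -It.ravel()
    (dx, dy), *_ = np.linalg.lstsq(A, b, rcond=None)
    return dx, dy

# Smooth synthetic frame and a copy of it translated by (0.3, 0.2) pixels.
ys, xs = np.mgrid[0:64, 0:64]
def frame(shift_x, shift_y):
    return np.exp(-((xs - 32 - shift_x) ** 2 + (ys - 32 - shift_y) ** 2) / 100.0)

dx, dy = lucas_kanade_translation(frame(0, 0), frame(0.3, 0.2))
print(dx, dy)   # close to 0.3 and 0.2
```

Because the linearization only holds for small, smooth displacements, practical pipelines (presumably including this paper's) run such estimates per window or per pyramid level rather than once per frame.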