Content related to the keyword

Sparse representation


1.

Malware Detection and Identification using Multi-View Learning based on Sparse Representation (Ministry of Science accredited article)

Views: 336 | Downloads: 106
With the widespread use of the Internet across devices and services, many home and workplace applications have been developed to ward off attacks. Connecting a system or device to an insecure network creates the possibility of infection by unwanted files, and detecting such files is a vital task in any system. Employing machine learning (ML) is among the most efficient ways to detect these intrusions. Malware authors, on the other hand, try to design malicious files that are hard to detect. A file may evade detection in a single feature view, but concealing itself in all views is far more difficult. In this paper, inspired by Multi-View Learning (MVL), we propose incorporating several feature types, such as Opcodes, Bytecodes, and System-calls, to obtain complementary information for identifying a file. To this end, we develop a modified version of the Sparse Representation based Classifier (SRC) that aggregates the effect of all modalities in a unified classifier. To demonstrate the efficiency of the proposed method, we use several real datasets. Experimental results show the high performance of the proposed approach and its ability to cope with imbalanced conditions.
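The abstract does not include an implementation, but the multi-view SRC idea it describes can be illustrated with a short sketch. The Python code below classifies a test sample by sparse-coding it over per-view dictionaries of training samples and summing the per-class reconstruction residuals across views. The use of Lasso for the ℓ1 sparse-coding step, the residual-sum fusion rule, and all function and variable names are illustrative assumptions, not the authors' exact formulation.

```python
# Minimal multi-view SRC sketch (assumptions: Lasso-based l1 coding,
# residual-sum fusion across views; not the paper's exact method).
import numpy as np
from sklearn.linear_model import Lasso


def src_residuals(D, labels, y, alpha=0.01):
    """Sparse-code test vector y over dictionary D (columns = training samples)
    and return the reconstruction residual for each class."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)          # solves y ~ D @ x with an l1 penalty on x
    x = coder.coef_
    residuals = {}
    for c in np.unique(labels):
        mask = (labels == c)
        # Reconstruct y using only the coefficients of class c's training samples.
        y_hat = D[:, mask] @ x[mask]
        residuals[c] = np.linalg.norm(y - y_hat)
    return residuals


def multiview_src_predict(views_train, labels, views_test):
    """views_train: list of (d_v, n) dictionaries, one per feature view.
    views_test: list of d_v-dimensional test vectors, one per view.
    Returns the class with the smallest residual summed over all views."""
    classes = np.unique(labels)
    total = {c: 0.0 for c in classes}
    for D, y in zip(views_train, views_test):
        r = src_residuals(D, labels, y)
        for c in classes:
            total[c] += r[c]
    return min(total, key=total.get)
```

In this sketch each view contributes a dictionary whose columns are training samples of that view (for example Opcode, Bytecode, and System-call feature vectors), and a test file is assigned to the class whose training samples reconstruct it with the smallest total residual across all views.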
2.

Speech Enhancement using Greedy Dictionary Learning and Sparse Recovery (Ministry of Science accredited article)

Keywords: Sparse representation, Greedy Dictionary Learning, Singular Value Decomposition, Orthogonal Matching Pursuit, Quantization

Views: 133 | Downloads: 105
Most real-world speech signals are frequently corrupted by noise such as traffic, babble, and other background sounds. The goal of speech denoising is to recover the clean speech signal from these distorted components as far as possible. Many researchers have applied sparse representation and dictionary learning algorithms to speech denoising. These algorithms, however, have several drawbacks: they rely on overcomplete dictionaries, are computationally expensive, are subject to orthogonality restrictions, and depend on double-precision arithmetic. We propose a greedy dictionary learning technique with sparse representation to overcome these concerns. In this technique, the singular value decomposition of the input signal is used to exploit orthogonality, and the ℓ1-ℓ2 norm is employed to obtain sparsity while learning the dictionary. The method improves dictionary learning by relaxing the orthogonality constraint, limiting the number of iterations with a three-sigma rule, and avoiding an overcomplete dictionary, which yields better performance at lower computational complexity. With Q7 fixed-point arithmetic, the approach can also run on resource-constrained embedded systems, where its performance remains considerably better than that of other algorithms. The greedy approach outperforms the other two methods in terms of SNR, Short-Time Objective Intelligibility, and computing time.
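As a rough illustration of the ingredients named in the abstract (SVD, dictionary learning, and sparse recovery with Orthogonal Matching Pursuit), the Python sketch below denoises framed speech with an SVD-initialised dictionary and OMP coding. The frame layout, the plain least-squares dictionary update, and all parameter values are assumptions for illustration; the paper's greedy ℓ1-ℓ2 formulation, three-sigma iteration rule, and Q7 fixed-point port are not reproduced here.

```python
# Sketch of SVD-initialised dictionary learning with OMP sparse recovery for
# speech denoising (assumptions: noisy speech already cut into fixed-length
# frames; least-squares dictionary update instead of the paper's greedy rule).
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit


def denoise_frames(noisy_frames, n_atoms=64, sparsity=8, n_iter=10):
    """noisy_frames: (frame_len, n_frames) matrix whose columns are noisy
    speech frames; assumes frame_len and n_frames are both >= n_atoms."""
    # Initialise the dictionary from the left singular vectors of the data,
    # using the orthogonality of the SVD as a starting point.
    U, _, _ = np.linalg.svd(noisy_frames, full_matrices=False)
    D = U[:, :n_atoms]

    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
    for _ in range(n_iter):
        # Sparse coding step: represent every frame with at most `sparsity` atoms.
        omp.fit(D, noisy_frames)
        X = omp.coef_.T                           # (n_atoms, n_frames)
        # Dictionary update by least squares, then renormalise the atoms.
        D = noisy_frames @ np.linalg.pinv(X)
        D /= np.maximum(np.linalg.norm(D, axis=0), 1e-12)

    # Final sparse approximation of each frame is the denoised estimate.
    omp.fit(D, noisy_frames)
    return D @ omp.coef_.T
```

Framing the waveform and reconstructing it afterwards (for example by overlap-add) are assumed to happen outside this sketch; the sketch only shows how an SVD-seeded dictionary plus OMP coding produces a low-dimensional, sparse approximation of each noisy frame.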