Free English paper on improving efficiency in convolutional neural networks with multilinear filters – Elsevier 2018

 

Article details
Publication year: 2018
Number of pages (English paper): 33
The English paper can be downloaded free of charge.
Published in: an Elsevier journal
Article type: Research article
Paper category: ISI
English title: Improving Efficiency in Convolutional Neural Networks with Multilinear Filters
Translated title: Improving efficiency in convolutional neural networks with multilinear filters
Format of the English paper: PDF
Related disciplines: Information Technology and Computer Engineering
Related specializations: Computer Networks and Artificial Intelligence
Journal: Neural Networks
Affiliation: Laboratory of Signal Processing, Tampere University of Technology, Finland
Keywords: Convolutional Neural Networks, Multilinear Projection, Network Compression
DOI: https://doi.org/10.1016/j.neunet.2018.05.017
Product code: E8626
Translation status: A prepared translation of this paper is not available; you can order one via the button below.
Free download: Download the free English paper
Order a translation of this paper

 

Excerpt from the paper:
1. Introduction

In recent years, deep neural network architectures have excelled in several application domains, ranging from machine vision [1, 2, 3] and natural language processing [4, 5] to biomedical [6, 7] and financial data analysis [8, 9]. Among these developments, the Convolutional Neural Network (CNN) has evolved into the main workhorse for solving computer vision tasks. The architecture was originally developed in the 1990s for handwritten character recognition using only two convolutional layers [10]. Over the years, with the development of Graphics Processing Units (GPUs) and efficient implementations of the convolution operation, the depth of CNNs has been increased to tackle more complicated problems. Nowadays, prominent architectures such as the Residual Network (ResNet) [11] or Google Inception [12], with hundreds of layers, have become saturated, and researchers have started to wonder whether millions of parameters are essential to achieve such performance.

In order to extend the benefit of such deep nets to embedded devices with limited computational power and memory, recent works have focused on reducing the memory footprint and computational cost of a pre-trained network, i.e., they apply network compression in the post-training stage. In fact, recent works have shown that traditional network architectures such as AlexNet, VGG, or Inception are highly redundant structures [13, 14, 15, 16, 17, 18, 19, 20, 21, 22]. For example, in [13] a simple heuristic based on the magnitude of the weights was employed to eliminate connections in a pre-trained network, which achieved a considerable amount of compression without hurting the performance much. Additionally, representing network parameters with low-bitwidth numbers, as in [23, 24, 25], has shown that the performance of a 32-bit network can be closely retained with only 4-bit representations. It should be noted that the two approaches are complementary. In fact, a compression pipeline called “Deep Compression” [13], which consists of three compression procedures, i.e., weight pruning, weight quantization, and Huffman-based weight encoding, achieved excellent compression performance on the AlexNet and VGG-16 architectures. Alongside pruning and quantization, low-rank approximation of both convolutional layers and fully connected layers has also been employed to achieve computational speed-ups [26, 27, 28]. Viewed as high-order tensors, convolutional layers were decomposed using traditional tensor decomposition methods, such as CP decomposition [21, 20, 29] or Tucker decomposition [30], and the convolution operation was approximated by applying consecutive 1D convolutions.

Overall, efforts to remove redundancy in already trained neural networks have shown promising results by yielding networks with a much simpler structure. These results naturally pose the following question: why should we compress an already trained network rather than seek a compact network representation that can be trained from scratch? Subsequently, one could of course exploit the above-mentioned compression techniques to further decrease the cost.
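As an aside on the magnitude-based pruning heuristic mentioned above, a minimal NumPy sketch is given below; the sparsity level and the global quantile threshold are illustrative assumptions, not details taken from [13].

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float = 0.9) -> np.ndarray:
    """Zero out the fraction `sparsity` of weights with the smallest magnitude."""
    # Threshold chosen so that roughly `sparsity` of the entries fall below it.
    threshold = np.quantile(np.abs(weights), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask

# Illustrative usage on a random 3x3 kernel with 64 input and 64 output channels.
kernel = np.random.randn(3, 3, 64, 64).astype(np.float32)
pruned = magnitude_prune(kernel, sparsity=0.9)
print("fraction of non-zero weights:", np.count_nonzero(pruned) / pruned.size)
```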
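Similarly, the low-bitwidth representation discussed in [23, 24, 25] can be illustrated with a naive uniform quantizer; actual schemes typically use per-layer or per-channel scales and fine-tuning, so this is only a sketch of the idea.

```python
import numpy as np

def uniform_quantize(w: np.ndarray, bits: int = 4) -> np.ndarray:
    """Quantize weights to `bits`-bit integer codes and map them back to floats."""
    levels = 2 ** bits - 1
    w_min, w_max = float(w.min()), float(w.max())
    scale = (w_max - w_min) / levels if w_max > w_min else 1.0
    codes = np.round((w - w_min) / scale)   # integer codes in [0, levels]
    return codes * scale + w_min            # dequantized approximation of w

w = np.random.randn(256, 256).astype(np.float32)
w4 = uniform_quantize(w, bits=4)
print("mean absolute quantization error:", np.abs(w - w4).mean())
```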
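Finally, the idea of approximating a convolution by consecutive 1D convolutions can be seen in the single-channel case: a rank-1 (separable) approximation of a 2D kernel, obtained here with an SVD, replaces one k×k convolution with a k×1 followed by a 1×k convolution. This is only a toy illustration of the low-rank idea, not the CP or Tucker schemes of [20, 21, 29, 30], which factorize the full 4D kernel tensor.

```python
import numpy as np
from scipy.signal import convolve2d

def rank1_factors(kernel2d: np.ndarray):
    """Best rank-1 (separable) approximation of a 2D kernel via SVD: K ~ u v^T."""
    U, S, Vt = np.linalg.svd(kernel2d)
    u = U[:, 0] * np.sqrt(S[0])   # vertical (column) filter
    v = Vt[0, :] * np.sqrt(S[0])  # horizontal (row) filter
    return u, v

# A 5x5 binomial (Gaussian-like) kernel, which happens to be exactly separable.
g = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0
u, v = rank1_factors(g)

image = np.random.rand(32, 32)
full = convolve2d(image, g, mode="same")                        # one 5x5 convolution
separable = convolve2d(convolve2d(image, u[:, None], mode="same"),
                       v[None, :], mode="same")                 # 5x1 then 1x5
print("max abs difference:", np.abs(full - separable).max())
```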
