Article Details
Article Title | Multipath-DenseNet: A Supervised ensemble architecture of densely connected convolutional networks
Publication Year | 2019
English Article Pages | 10
Cost | The English article is free to download
Database | Elsevier
Article Type | Research Article
Base Article | No
Index | Scopus – Master Journals List – JCR
Article Category | ISI
English Article Format |
Impact Factor (IF) | 6.774 (2018)
H-index | 154 (2019)
SJR | 1.620 (2018)
ISSN | 0020-0255
Quartile | Q1 (2018)
Conceptual Model | None
Questionnaire | None
Variables | None
References | Included
Related Fields | Computer Engineering
Related Specializations | Computer Systems Architecture, Software Engineering, Artificial Intelligence
Presentation Type | Journal
Journal | Information Sciences
Affiliation | Department of Computer Science and Engineering, Korea University, Seoul, South Korea
Keywords | Image classification, Neural network, Deep learning
DOI | https://doi.org/10.1016/j.ins.2019.01.012
Product Code | E11559
Translation Status | No ready translation of this article is available; a translation can be ordered.
Article Contents:
Abstract
1- Introduction
2- Related work
3- Methodology
4- Experiments
5- Results and discussion
6- Conclusion
References
Excerpt from the article:
Abstract

Deep networks with skip-connections, such as ResNets, have achieved great results in recent years. DenseNet builds on the ResNet skip-connection by connecting each layer of a convolutional neural network to all preceding layers, and achieves state-of-the-art accuracy. It is well known that deeper networks are more efficient and easier to train than shallow or wider networks. Despite their high performance, very deep networks are limited by vanishing gradients, diminishing forward flow, and slower training. In this paper, we propose to combine the benefits of network depth and width. We train supervised, independent shallow networks on the same input in a block fashion, using the state-of-the-art DenseNet block to increase the number of paths for gradient flow. Our proposed architecture, which we call Multipath-DenseNet, has several advantages over other deeper networks, including DenseNet: it is both deeper and wider, reduces training time, and uses fewer parameters. We evaluate the proposed architecture on four object-recognition datasets: CIFAR-10, CIFAR-100, SVHN, and ImageNet. The results show that Multipath-DenseNet achieves a significant improvement in performance over DenseNet on these benchmark datasets.

Introduction

Convolutional networks have been employed in research for many years. They have been successfully applied to image processing [1], natural language processing (NLP) [2], and recommender systems [3]. Research on convolutional neural networks has produced outstanding architectures such as LeNet [4], AlexNet [5], VGGNet [6], ResNet [7], and GoogLeNet [8]. Highway networks [9] and ResNet [7] are considered the pioneering networks proposed to extract features from more than 100 layers. Training very deep neural networks is challenging because of the vanishing-gradient problem; ResNet introduced skip-connections between layers to make very deep networks trainable. The stochastic depth algorithm proposed in [10] showed that depth is not the only parameter behind the success of residual networks (ResNets): the authors shorten the network depth by randomly skipping layers of residual networks during training. Wide residual networks [11] follow the similar hypothesis that depth is not the only important parameter; they also reduce network depth but increase the number of feature maps at each layer, which yields a wider neural network.
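To make the block-fashion multipath idea concrete, below is a minimal PyTorch sketch of several shallow DenseNet-style blocks running in parallel on the same input and merged before the classifier. This is an illustration under assumed settings, not the authors' implementation: the class names (SimpleDenseBlock, MultipathDenseNet), growth rate, layer and path counts are our own choices, and the per-path supervision the abstract mentions is omitted.

import torch
import torch.nn as nn

class SimpleDenseBlock(nn.Module):
    """DenseNet-style block: each layer receives the concatenation of
    all preceding feature maps, which multiplies gradient-flow paths."""
    def __init__(self, in_channels, growth_rate, num_layers):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
                nn.Conv2d(channels, growth_rate, kernel_size=3,
                          padding=1, bias=False),
            ))
            channels += growth_rate
        self.out_channels = channels

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            # Each layer sees all earlier feature maps, concatenated.
            features.append(layer(torch.cat(features, dim=1)))
        return torch.cat(features, dim=1)

class MultipathDenseNet(nn.Module):
    """Several shallow dense blocks (paths) run in parallel on the same
    input; their outputs are merged, trading extreme depth for width."""
    def __init__(self, in_channels=3, growth_rate=12, num_layers=4,
                 num_paths=3, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(in_channels, 16, kernel_size=3,
                              padding=1, bias=False)
        self.paths = nn.ModuleList(
            [SimpleDenseBlock(16, growth_rate, num_layers)
             for _ in range(num_paths)]
        )
        merged = num_paths * self.paths[0].out_channels
        self.head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(merged, num_classes),
        )

    def forward(self, x):
        x = self.stem(x)
        # Every path receives the same input, in a block fashion.
        outs = [path(x) for path in self.paths]
        return self.head(torch.cat(outs, dim=1))

# Example with a CIFAR-10-sized input.
model = MultipathDenseNet()
logits = model(torch.randn(2, 3, 32, 32))
print(logits.shape)  # torch.Size([2, 10])

The final torch.cat over paths mirrors how dense connectivity concatenates features within a block; widening the network through parallel shallow paths, rather than stacking more layers, is the depth-for-width trade-off the abstract describes.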