Free English Article: Evolution of Abstraction Across Layers in Deep Learning Neural Networks – Elsevier 2018

 

Article Details
Title: Evolution of Abstraction Across Layers in Deep Learning Neural Networks
Publication year: 2018
Length: 11 pages (English)
Download: free
Publisher database: Elsevier
Article type: Research Article (ISI)
Base article: no
Format: PDF
Impact factor (IF): 1.013 (2017)
H-index: 34 (2019)
SJR: 0.258 (2017)
ISSN: 1877-0509
Related fields: computer engineering, information technology engineering
Related specializations: networked systems, computer networks, artificial intelligence, software engineering
Presentation venue: conference proceedings – Procedia Computer Science
Affiliation: Department of Computer Science, University of Massachusetts, Amherst, MA 01003, USA
Keywords: Deep Learning, Convolutional Neural Networks, Abstraction Level, Image Processing, Knowledge
DOI: https://doi.org/10.1016/j.procs.2018.10.520
Product code: E11182

 

Table of Contents:
Abstract
1. Introduction
2. Methodology for Abstraction Evaluation
3. Results and Discussions
4. Conclusions
References

Excerpt from the article:

Abstract

Deep learning neural networks produce excellent results in various pattern recognition tasks. It is of great practical importance to answer some open questions regarding model design and parameterization, and to understand how input data are converted into meaningful knowledge at the output. The layer-by-layer evolution of the abstraction level has been proposed previously as a quantitative measure to describe the emergence of knowledge in the network. In this work we systematically evaluate the abstraction level for a variety of image datasets. We observe that there is a general tendency of increasing abstraction from input to output, with the exception of a drop in abstraction at some ReLU and pooling layers. The abstraction level is relatively low and does not change significantly in the first few layers following the input, while it fluctuates around some high saturation value at the layers preceding the output. Finally, the layer-by-layer change in abstraction is not normally distributed, but rather approximates an exponential distribution. These results point to salient local features of deep layers impacting overall (global) classification performance. We compare the results extracted from deep learning neural networks performing image processing tasks with the results obtained by analyzing brain imaging data. Our conclusions may be helpful in future designs of more efficient, compact deep learning neural networks.
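
To make the measurement procedure concrete, here is a minimal sketch of how a layer-by-layer analysis of this kind could be set up in PyTorch. The paper's Q-matrix definition is not reproduced in this excerpt, so the function abstraction_proxy below (mean pairwise cosine similarity of per-sample activations) is a hypothetical stand-in used only to illustrate the hook-based measurement loop over the layers of a toy CNN; the toy network and the random batch are likewise assumptions, not the authors' setup.

import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy CNN in the conv/ReLU/pool style analyzed in the paper (assumed, not the authors' model).
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(), nn.Linear(32 * 8 * 8, 10),
)

def abstraction_proxy(act: torch.Tensor) -> float:
    """Hypothetical stand-in for the paper's Q-matrix abstraction level:
    mean pairwise cosine similarity of flattened per-sample activations."""
    flat = F.normalize(act.flatten(1), dim=1)   # (batch, features), unit norm
    sim = flat @ flat.T                         # pairwise cosine similarities
    n = sim.shape[0]
    return ((sim.sum() - n) / (n * (n - 1))).item()  # average, excluding self-similarity

# Record the output of every layer with forward hooks.
activations = []
hooks = [layer.register_forward_hook(lambda m, i, o: activations.append(o.detach()))
         for layer in model]

x = torch.randn(64, 3, 32, 32)                  # stand-in for a CIFAR-like image batch
with torch.no_grad():
    model(x)
for h in hooks:
    h.remove()

# Layer-by-layer trajectory, analogous to the input-to-output curves discussed above.
for idx, act in enumerate(activations):
    print(f"layer {idx:2d} ({type(model[idx]).__name__:9s}): "
          f"abstraction proxy = {abstraction_proxy(act):.4f}")

With the paper's actual Q-matrix score substituted for the proxy, the printed trajectory is exactly the kind of curve on which the reported input-to-output tendency and the ReLU/pooling drops would be read off.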

Introduction

Cutting-edge machine learning tools produce excellent results in automatic pattern recognition [1-4]. Over the past decade, deep learning (DL) has dominated the field of AI and machine learning due to its outstanding performance on many, if not most, benchmark problems [5-9]. Well-established backpropagation learning is the backbone of most DL tools [10]. The reason for these spectacular successes, however, is not fully understood in terms of model design and parameterization [11]. There are no known general principles for designing a deep learning network with optimal performance for a specific task, and the design is often accomplished on a trial-and-error basis. To arrive at a more systematic and robust design approach, we need to answer the crucial open question concerning the relationship between information and meaning in intelligent computational systems. We pose the question of how input data (e.g., images) are converted into meaningful knowledge (i.e., assigned class labels) at the output of DLNNs. In previous studies, we defined abstraction level as a quantitative measure to describe the emergence of knowledge in the deep network from layer to layer [12, 13]. This work expands on previous results by testing the robustness of this measure for different image datasets, under various experimental conditions and with different multi-layer structures to improve the learning process. Our results confirm that the so-called Q matrix is suitable for monitoring the abstraction level in various image classification tasks.

The analysis of the layer-by-layer change of abstraction in Deep Learning Neural Networks (DLNNs) points to the following key observations: (1) a general tendency of increasing abstraction from input to output; (2) exceptions to rule (1) at some convolutional layers with high abstraction, which drops in the consecutive ReLU and pooling layers; (3) abstraction is relatively low and does not change significantly in the first few layers following the input; (4) there is a significant increase of abstraction in some intermediate layers, which reaches saturation at a high level in the layers preceding the output; (5) the layer-by-layer increments of abstraction show a statistically significant deviation from a normal distribution and are consistent with an exponential distribution. These results point to salient local features of DLNNs impacting overall (global) classification performance. These conclusions should be helpful in future designs of more efficient, compact DL architectures without notable sacrifice in overall performance characteristics.
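
Observation (5) above is a statistical claim, so here is a minimal sketch, under stated assumptions, of how such a check could be run: the abstraction values are synthetic placeholders shaped like the trajectory the paper describes (flat early layers, growth in the middle, saturation late), not the authors' data, and the two tests (Shapiro-Wilk against normality, Kolmogorov-Smirnov against a fitted exponential) are standard choices, not necessarily the ones used in the paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic per-layer abstraction values mimicking the reported shape (assumed, not real data).
abstraction = np.concatenate([
    0.05 + 0.01 * rng.random(5),           # first few layers: low, nearly constant
    np.sort(0.1 + 0.8 * rng.random(10)),   # intermediate layers: rising
    0.9 + 0.02 * rng.random(5),            # final layers: fluctuating near saturation
])
increments = np.diff(abstraction)          # layer-by-layer increments of abstraction
positive = increments[increments > 0]

# Shapiro-Wilk: a small p-value rejects normality of the increments.
w_stat, p_normal = stats.shapiro(increments)
print(f"Shapiro-Wilk vs normal: W={w_stat:.3f}, p={p_normal:.4f}")

# Kolmogorov-Smirnov against an exponential fitted to the positive increments.
# Note: fitting and testing on the same data makes this a rough consistency
# check, not a rigorous goodness-of-fit test.
loc, scale = stats.expon.fit(positive, floc=0.0)
ks_stat, p_expon = stats.kstest(positive, "expon", args=(loc, scale))
print(f"KS vs fitted exponential: D={ks_stat:.3f}, p={p_expon:.4f}")

A pattern of small p_normal together with large p_expon is what "statistically significant deviation from a normal distribution, consistent with an exponential distribution" would look like under this kind of test.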
