Article details
Article title | Deep eigen-filters for face recognition: Feature representation via unsupervised multi-structure filter learning
Publication year | 2020
Length of English article | 38 pages
Cost | The English article is free to download.
Database | Elsevier
Article type | Research Article
Base article | Yes, this is a base article.
Index | Scopus – Master Journals List – JCR
Category | ISI
Impact factor (IF) | 7.346 in 2019
H-index | 180 in 2020
SJR | 1.363 in 2019
ISSN | 0031-3203
Quartile | Q1 in 2019
Conceptual model | Yes
Questionnaire | No
Variables | No
References | Yes
Related fields | Computer science
Related specializations | Algorithm and computation engineering, artificial intelligence, computer systems architecture
Presentation type | Journal
Journal | Pattern Recognition
University | Department of Electrical Engineering, City University of Hong Kong, Hong Kong, China
Keywords | Deep eigen-filters, convolution kernels, face recognition, convolutional neural networks, feature representation
DOI | https://doi.org/10.1016/j.patcog.2019.107176
Product code | E14724
Table of contents:
Abstract
1- Introduction
2- Related work
3- Deep eigen-filters and DEFNet for feature representation
4- Experiments and results
5- Analysis on the strategy of proposed deep eigen-filters approach
6- Conclusion
References
Excerpt from the article:
Abstract
Training deep convolutional neural networks (CNNs) often requires high computational cost and a large number of learnable parameters. One way to overcome this limitation is to compute predefined convolution kernels from the training data. In this paper, we propose a novel three-stage filter-learning approach as an alternative. It learns filters in multiple structures, including standard filters, channel-wise filters, and point-wise filters, inspired by variants of the convolution operations used in CNNs. By analyzing the linear combination between learned filters and the original convolution kernels in pre-trained CNNs, the reconstruction error is minimized to determine the most representative filters from the filter bank. These filters are used to build a network, followed by HOG-based feature extraction for feature representation. The proposed approach shows competitive performance on color face recognition compared with other deep CNN-based methods. In addition, it offers a perspective for interpreting CNNs by introducing the concepts of advanced convolutional layers into unsupervised filter learning.
Introduction
With the development of deep learning in recent years, deep neural networks, especially deep convolutional neural networks (CNNs), have achieved state-of-the-art performance in many image-based applications [1], e.g., image classification [2, 3], face recognition [4, 5], fine-grained image categorization [6, 7], and depth estimation [8, 9]. Compared with traditional visual recognition methods, CNNs have the advantage of learning both low-level and high-level feature representations automatically rather than relying on hand-crafted feature descriptors [10, 11]. Owing to these powerful features, CNNs have revolutionized the computer vision community and become one of the most popular tools in many visual recognition tasks [7, 12, 13]. Generally, CNNs are made up of three types of layers, i.e., convolutional layers, pooling layers, and fully-connected layers.
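The abstract's core idea of computing predefined convolution kernels from training data, rather than learning them by backpropagation, can be illustrated by taking the top principal components of sampled image patches as filters. The sketch below is a minimal, assumed illustration of this eigen-filter idea (function names, patch sizes, and the toy data are hypothetical), not the paper's exact three-stage algorithm:

```python
import numpy as np

def learn_eigen_filters(images, patch_size=5, num_filters=8,
                        num_patches=2000, seed=0):
    """Learn convolution filters as the leading eigenvectors (principal
    components) of the covariance of randomly sampled image patches."""
    rng = np.random.default_rng(seed)
    h, w = images[0].shape
    patches = []
    for _ in range(num_patches):
        img = images[rng.integers(len(images))]
        y = rng.integers(h - patch_size + 1)
        x = rng.integers(w - patch_size + 1)
        patch = img[y:y + patch_size, x:x + patch_size].ravel()
        patches.append(patch - patch.mean())        # remove the patch mean
    X = np.stack(patches)                           # (num_patches, patch_size**2)
    cov = X.T @ X / len(X)                          # patch covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)          # ascending eigenvalues
    # Keep the eigenvectors with the largest eigenvalues as filters
    top = eigvecs[:, np.argsort(eigvals)[::-1][:num_filters]]
    return top.T.reshape(num_filters, patch_size, patch_size)

# Toy usage: random arrays stand in for grayscale face crops
imgs = [np.random.default_rng(i).random((32, 32)) for i in range(10)]
filters = learn_eigen_filters(imgs)
print(filters.shape)  # (8, 5, 5)
```

Because the filters come from an eigen-decomposition, they are mutually orthonormal, which is what makes a small bank of them representative of the patch statistics.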
The features are extracted by stacking many convolutional layers on top of each other, and backpropagation proceeds from the loss function back to the input in order to learn the weights and biases contained in the layers. However, how this mechanism works on images remains an open question that has yet to be fully explored. Moreover, learning powerful feature representations requires a large amount of labeled training data, otherwise performance may deteriorate [14, 15], whereas training data in practical applications are often not readily available. To address these problems, some researchers have proposed alternative ways of constructing convolutional layers that are independent of training data. In [16], ScatNet was proposed, using wavelet transforms to represent convolutional filters. These predefined wavelet transforms are cascaded with nonlinear and pooling operations to build a multilayer convolutional network, so no learning is needed to compute the image representation. Different from ScatNet, the researchers in [17] introduced a structured receptive field network that combines the flexible learning of CNNs with fixed basis filters.
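The abstract's three filter structures mirror the standard, channel-wise (depthwise), and point-wise (1×1) convolution variants found in modern CNNs. A quick parameter count (with illustrative channel and kernel sizes, not values from the paper) shows why the latter two structures are so much cheaper than a standard convolutional layer:

```python
# Parameter counts for one convolutional layer mapping
# c_in input channels to c_out output channels with k x k kernels.
def conv_params(c_in, c_out, k):
    return {
        "standard": c_in * c_out * k * k,          # full 3-D kernels
        "channel-wise (depthwise)": c_in * k * k,  # one k x k filter per input channel
        "point-wise (1x1)": c_in * c_out,          # 1 x 1 kernels mixing channels
    }

counts = conv_params(c_in=64, c_out=128, k=3)
for name, n in counts.items():
    print(f"{name}: {n} parameters")

# A depthwise filter followed by a point-wise filter (a separable
# convolution) uses far fewer parameters than one standard layer:
assert (counts["channel-wise (depthwise)"]
        + counts["point-wise (1x1)"]) < counts["standard"]
```

This cost gap is one reason factorized convolutions are attractive when filters must be computed or selected rather than trained end to end.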