Article details | |
Publication | 2018 article |
English article length | 10 pages |
Cost | The English article is free to download. |
Published in | Elsevier |
Article type | ISI |
English title | Deep neural networks for texture classification—A theoretical analysis |
Translated title | Deep neural networks for texture classification |
English article format | |
Related fields | Computer Engineering |
Related subfields | Artificial Intelligence |
Journal | Neural Networks |
University | Louisiana State University – Baton Rouge – USA |
Keywords | Deep neural network, texture classification, VC dimension |
Excerpt from the article: |
1. Introduction
Texture is a key ingredient in various object recognition tasks involving texture-based imagery data such as Brodatz (WWW1, 0000), VisTex (WWW2, 0000), Drexel (Oxholm, Bariya, & Nishino, 2012), KTH (WWW3, 0000), UIUCTex (Lazebnik, Schmid, & Ponce, 2005), as well as forest species datasets (de Paula Filho, Oliveira, & Britto Jr, 2009). Texture characterization has also proven useful in other object categorization problems: the Brazilian Forensic Letter Database (BFL) (Freitas, Oliveira, Sabourin, & Bortolozzi, 2008) was later converted into a textural representation in Hanusiak, Oliveira, Justino, and Sabourin (2012), and in Costa, Oliveira, Koerich, and Gouyon (2013) a similar approach was used to find a textural representation of the Latin Music Dataset (Silla Jr., Koerich, & Kaestner, 2008).

Over the last decade, Deep Neural Networks have gained popularity due to their ability to learn data representations in both supervised and unsupervised settings and to generalize to unseen samples using hierarchical representations. A notable contribution in Deep Learning is the Deep Belief Network (DBN), formed by stacking Restricted Boltzmann Machines (Hinton, Osindero, & Teh, 2006). Another closely related approach, which has gained much traction over the last decade, is the Convolutional Neural Network (CNN) (Lecun, Bottou, Bengio, & Haffner, 1998). CNNs have been shown to outperform DBNs on classical object recognition tasks like MNIST (WWW4, 0000) and CIFAR (Krizhevsky, 2009). Despite these advances in the field of Deep Learning, there has been limited success in learning textural features using Deep Neural Networks. Does this mean that there is some inherent limitation in existing Neural Network architectures and learning algorithms? In this paper, following Basu et al. (2016), we try to answer this question by investigating the use of Deep Neural Networks for the classification of texture datasets.
First, we derive the size of the feature space for some standard textural features extracted from the input dataset. We then use the theory of Vapnik–Chervonenkis (VC) dimension to show that hand-crafted feature extraction creates low-dimensional representations, which help reduce the overall excess error rate. As a corollary to this analysis, we derive, for the first time, upper bounds on the VC dimension of Convolutional Neural Networks as well as Dropout and Dropconnect networks, and the relation between the excess error rates of Dropout and Dropconnect networks.
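The link between VC dimension and excess error rate that the excerpt invokes can be illustrated by the standard VC generalization bound; this is the textbook form, stated here only as background, and the paper's own bounds may differ in constants and derivation:

```latex
% Standard VC generalization bound (textbook form, not taken from the
% paper). For a hypothesis class H with VC dimension d, an i.i.d.
% sample of size n, true risk R(h), and empirical risk \hat{R}_n(h),
% with probability at least 1 - \delta, for every h in H:
R(h) \;\le\; \hat{R}_n(h)
  \;+\; \sqrt{\frac{d\left(\ln\frac{2n}{d} + 1\right) + \ln\frac{4}{\delta}}{n}}
```

Since the excess-error term grows with the VC dimension d, mapping inputs into a low-dimensional hand-crafted feature space (small d) tightens the bound for a fixed sample size n, which is the intuition behind the paper's argument.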