Article details
Publication | 2018 article
Number of pages (English article) | 9 pages
Cost | The English article is free to download.
Published in | Emerald
Article type | ISI
English article title | Multi-modal haptic image recognition based on deep learning
Translated article title | Multi-modal haptic image recognition based on deep learning
English article format |
Related disciplines | Computer engineering
Related specializations | Artificial intelligence
Journal | Sensor Review
University | Nanjing University of Aeronautics and Astronautics – China
Keywords | Deep learning, Data fusion, Haptic perception, Multi-modal haptic images
Product code | E6426
Excerpt from the article:
1. Introduction
Haptic recognition is one of the major issues in robotics manipulation (Chitta et al., 2011; Dargahi and Najarian, 2004), medical diagnostics (Arian et al., 2014), prosthetics (Cu et al., 2016) and haptic display (Tian et al., 2016, 2017). Although there have been many studies on vision-based recognition in recent years (Abdulnabi et al., 2015; Chen et al., 2013), it is difficult to infer many properties of objects from vision alone in some special scenarios (Zhang et al., 2017). Therefore, this paper focuses on developing a novel haptic recognition method.

A wide variety of technologies have been presented for haptic recognition recently. Song et al. (2014) designed a novel fabric surface texture sensor using polyvinylidene fluoride film. Orii et al. (2017) extracted tactile textures from the time-series data of a pressure sensor and a six-axis acceleration sensor using Convolutional Neural Networks (CNNs). Gorges et al. (2010) developed a planar tactile sensor matrix for perceiving object shapes. Zhang et al. (2017) described a Monte-Carlo-Tree-Search-based algorithm for actively selecting a sequence of end-effector poses to recognize objects. These methods mainly focus on individual sensing modalities rather than the multi-modal combination of sensory capabilities found in human skin.

BioTac, a multi-modal biomimetic sensor developed by SynTouch LLC, can measure force, micro-vibration and thermal flux simultaneously, which provides a better solution to multi-modal haptic perception (Han et al., 2016; Fishel and Loeb, 2012a; Lin et al., 2009). Fishel and Loeb (2012b) proposed a Bayesian exploration method that can adaptively select the optimal exploratory movement to accurately discriminate textures with BioTac. Wettels and Loeb (2011) designed an artificial neural network to extract multiple haptic features from the BioTac's output. Chu et al. (2013, 2015) used Hidden Markov Models to recognize 25 binary haptic adjectives using the BioTac's data from several exploratory procedures (EPs). Building on Chu's research, Gao et al. (2016) proposed a multi-modal CNN model that fuses haptic and visual inputs for recognizing haptic adjectives. However, the studies of Chu and Gao essentially address a binary classification task, which outputs only a "yes" or "no" answer and cannot provide richer features.

Therefore, this paper combines the multi-modal haptic signals from BioTac into haptic images and builds a new multi-class, multi-label haptic image recognition model based on CNN. This recognition model is able to extract four haptic features (hardness, thermal conductivity, roughness and texture) and recognize objects (Figure 1). Compared with the methods by Chu et al. (2013, 2015) and Gao et al. (2016), the main innovation of the proposed model is that it can simultaneously output multiple haptic features and multiple classes for each feature instead of only a single binary feature, which enables more diverse and richer haptic perception.
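The excerpt does not include the authors' implementation. The following is a minimal sketch, in PyTorch, of the two ideas described above under stated assumptions: the BioTac's force, micro-vibration and thermal signals are stacked as channels of a "haptic image" (the 64x64 resolution and the number of classes per haptic feature are hypothetical choices, not taken from the paper), and a small shared CNN backbone feeds four classification heads, one per haptic feature, giving a multi-class, multi-label output.

```python
import torch
import torch.nn as nn

class MultiLabelHapticCNN(nn.Module):
    """Sketch of a multi-class, multi-label haptic image classifier."""

    def __init__(self, num_classes_per_feature=(3, 3, 3, 5)):
        # Hypothetical class counts for hardness, thermal conductivity,
        # roughness and texture; the paper does not specify these here.
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # One classification head per haptic feature.
        self.heads = nn.ModuleList(
            [nn.Linear(32, n) for n in num_classes_per_feature]
        )

    def forward(self, haptic_image):
        shared = self.backbone(haptic_image)
        # Each head returns its own logit vector and would be trained with
        # its own cross-entropy loss, yielding multiple labels per sample.
        return [head(shared) for head in self.heads]

# Example: a batch of 8 haptic images built by stacking force, vibration and
# thermal maps (assumed 64x64) along the channel dimension.
force = torch.rand(8, 1, 64, 64)
vibration = torch.rand(8, 1, 64, 64)
thermal = torch.rand(8, 1, 64, 64)
haptic_images = torch.cat([force, vibration, thermal], dim=1)

model = MultiLabelHapticCNN()
hardness, thermal_cond, roughness, texture = model(haptic_images)
print(hardness.shape, texture.shape)  # torch.Size([8, 3]) torch.Size([8, 5])
```

The shared backbone with separate output heads is one common way to realize the "multiple features, multiple classes per feature" behaviour the paragraph describes; the paper's actual architecture and signal-to-image mapping may differ.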