Article details | |
Publication year | 2018 |
English article length | 9 pages |
Cost | The English article is free to download. |
Publisher | Springer |
Writing type | Research Article |
Base article | This is not a base article |
Article category | ISI |
English title | Deep Artificial Neural Networks as a Tool for the Analysis of Seismic Data |
Index | Master Journals List |
English article format | |
ISSN | 1934-7871 |
Related disciplines | Civil engineering, computer engineering, information technology engineering |
Related specializations | Earthquake engineering, artificial intelligence, algorithms and computing, computer networks |
Presentation type | Journal |
Journal | Seismic Instruments |
Institution | Institute of Earthquake Prediction Theory and Mathematical Geophysics – Russia |
Keywords | deep neural networks, deep learning, greedy algorithm, seismic data, multitask learning |
DOI | https://doi.org/10.3103/S0747923918010073 |
Product code | E6176 |
Translation status | A prepared translation of this article is not available. |
Excerpt from the article:
INTRODUCTION
Artificial neural networks (NNs) are widely used in the processing of seismic data (Böse et al., 2008; Gravirov et al., 2012; Lin et al., 2012; Kislov and Gravirov, 2017). Given a training sample in which the correct answer is known for each example (supervised learning), an NN can be trained to solve such complex tasks as pattern recognition, signal detection, nonlinear modeling, classification, and regression. However, the spread of neural-network technologies is constrained by the large number of heuristic rules involved in network design and training. Above all, one never knows whether the architecture of the constructed network is optimal or whether it has been trained in the best possible way, i.e., whether the global minimum of the error function has been found.

Although an NN with one hidden layer can approximate any function to any accuracy, it can be regarded as a lookup table for the training sample, with more or less correct interpolation of intermediate values and extrapolation at the edges (Cybenko, 1989). The main limitations on the applicability of NNs are also associated with overfitting, the stability-plasticity tradeoff, and the curse of dimensionality (Friedman, 1994). Various methods have been developed to sidestep these difficulties, but they are mostly heuristic.

A trained NN works fast, but the training process takes an indeterminate amount of time. In addition, preparing the training sample is usually a time-consuming process in itself (Gravirov and Kislov, 2015). Some solutions to this problem have also been found; for example, some NNs can cluster examples for which the answer is unknown (unsupervised learning), which reduces the preparatory work (Köhler et al., 2010). Preparation of the training sample (study, evaluation, analysis, and preprocessing) is laborious, but it largely determines the efficiency of the network. It should be added that this is almost an art, and the result depends heavily on the experience of the researcher.
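The single-hidden-layer point above can be made concrete with a minimal NumPy sketch, not taken from the paper: all names, sizes, and the toy target function are illustrative. A small tanh network is fit by plain gradient descent to samples of sin(πx) on [-1, 1]; inside that range it interpolates well, while far outside it the saturated tanh units flatten out, so the "lookup table with edge extrapolation" behaviour shows directly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy supervised training sample: inputs x in [-1, 1], targets y = sin(pi * x).
x = np.linspace(-1.0, 1.0, 200).reshape(-1, 1)
y = np.sin(np.pi * x)

# One hidden layer of tanh units, linear output layer (sizes are illustrative).
n_hidden = 30
W1 = rng.normal(scale=1.0, size=(1, n_hidden))
b1 = np.zeros(n_hidden)
W2 = rng.normal(scale=0.1, size=(n_hidden, 1))
b2 = np.zeros(1)

def forward(inp):
    h = np.tanh(inp @ W1 + b1)
    return h, h @ W2 + b2

def mse(pred, target):
    return float(np.mean((pred - target) ** 2))

_, pred0 = forward(x)
initial_mse = mse(pred0, y)

# Plain full-batch gradient descent with hand-written backpropagation.
lr = 0.05
for _ in range(10_000):
    h, pred = forward(x)
    g_out = (pred - y) / len(x)            # gradient of (MSE / 2) w.r.t. pred
    gW2 = h.T @ g_out
    gb2 = g_out.sum(axis=0)
    g_h = (g_out @ W2.T) * (1.0 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ g_h
    gb1 = g_h.sum(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

_, pred_in = forward(x)
final_mse = mse(pred_in, y)  # small: good interpolation inside [-1, 1]

# Far outside the training range the tanh units saturate, so the output
# goes flat instead of following sin(pi * x): extrapolation fails.
_, pred_out = forward(np.array([[3.0]]))
print(final_mse, float(pred_out))
```

Note that nothing here tells us whether this architecture or this local minimum is optimal, which is exactly the heuristic-design problem the excerpt describes.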