Free English-language article on evolving spiking neural networks over drifting data streams – Elsevier 2018


Article details
Title: Evolving Spiking Neural Networks for online learning over drifting data streams
Publication year: 2018
Length of the English article: 47 pages
Download cost: the English article is free to download
Publisher database: Elsevier
Article type: original research article
Base article: this is not a base article
Indexing: MEDLINE, Scopus, Master Journal List, JCR
Article category: ISI
English article format: PDF
Impact factor (IF): 8.446 (2017)
H-index: 121 (2019)
SJR: 2.359 (2017)
ISSN: 0893-6080
Quartile: Q1 (2017)
Related fields: computer engineering, information technology
Related specializations: artificial intelligence, computer networks
Presentation type: journal
Journal: Neural Networks
Affiliation: Tecnalia. División ICT. Parque Tecnológico de Bizkaia – c/ Geldo – Spain
Keywords: spiking neural networks, data reduction, online learning, concept drift
DOI: https://doi.org/10.1016/j.neunet.2018.07.014
Product code: E10731
Translation status: a ready-made translation of this article is not available; a translation can be ordered
Free download: the English article can be downloaded free of charge

 

Table of contents:
Abstract
1- Introduction
2- Evolving Spiking Neural Network (eSNN)
3- Data reduction techniques
4- Proposed approach: Online evolving Spiking Neural Network (OeSNN)
5- Computer experiments
6- Results
7- Discussion
8- Conclusions and future research lines
References

Excerpt from the article:

Abstract

Nowadays huge volumes of data are produced in the form of fast streams, which are further affected by non-stationary phenomena. The resulting lack of stationarity in the distribution of the produced data calls for efficient and scalable algorithms for online analysis capable of adapting to such changes (concept drift). The online learning field has lately turned its focus on this challenging scenario, by designing incremental learning algorithms that avoid becoming obsolete after a concept drift occurs. Despite the noted activity in the literature, a need for new efficient and scalable algorithms that adapt to the drift still prevails as a research topic deserving further effort. Surprisingly, Spiking Neural Networks, one of the major exponents of the third generation of artificial neural networks, have not been thoroughly studied as an online learning approach, even though they are naturally suited to easily and quickly adapting to changing environments. This work covers this research gap by adapting Spiking Neural Networks to meet the processing requirements that online learning scenarios impose. In particular the work focuses on limiting the size of the neuron repository and making the most of this limited size by resorting to data reduction techniques. Experiments with synthetic and real data sets are discussed, leading to the empirically validated assertion that, by virtue of a tailored exploitation of the neuron repository, Spiking Neural Networks adapt better to drifts, obtaining higher accuracy scores than naive versions of Spiking Neural Networks for online learning environments.
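To make the repository-bounding idea described in the abstract more concrete, below is a minimal, illustrative Python sketch of an eSNN-style online classifier with a hard cap on its neuron repository. The class name, parameters (max_neurons, sim_threshold, mod) and the simple merge/evict rules are assumptions made for illustration only; they are not the OeSNN algorithm or data reduction techniques proposed in the paper.

```python
# Illustrative sketch only: a heavily simplified eSNN-style online classifier
# with a bounded neuron repository. All names and rules here are assumptions
# for illustration, not the method proposed in the paper.
import numpy as np

class OnlineESNNSketch:
    def __init__(self, n_features, max_neurons=50, sim_threshold=0.1, mod=0.9):
        self.n_features = n_features
        self.max_neurons = max_neurons      # hard cap on repository size
        self.sim_threshold = sim_threshold  # merge neurons closer than this
        self.mod = mod                      # rank-order modulation factor
        self.weights = []                   # neuron weight vectors
        self.labels = []                    # class label of each neuron
        self.counts = []                    # samples absorbed by each neuron

    def _encode(self, x):
        # Rank-order style encoding: larger (earlier-firing) inputs receive
        # exponentially larger weights, a stand-in for receptive-field coding.
        order = np.argsort(-np.asarray(x, dtype=float))
        w = np.zeros(self.n_features)
        w[order] = self.mod ** np.arange(self.n_features)
        return w

    def predict(self, x):
        if not self.weights:
            return None
        w = self._encode(x)
        # Predict with the nearest neuron in weight space.
        dists = [np.linalg.norm(w - wj) for wj in self.weights]
        return self.labels[int(np.argmin(dists))]

    def partial_fit(self, x, y):
        w = self._encode(x)
        # Try to merge with the closest neuron of the same class.
        same = [i for i, lab in enumerate(self.labels) if lab == y]
        if same:
            dists = [np.linalg.norm(w - self.weights[i]) for i in same]
            j = same[int(np.argmin(dists))]
            if min(dists) < self.sim_threshold:
                c = self.counts[j]
                self.weights[j] = (self.weights[j] * c + w) / (c + 1)
                self.counts[j] += 1
                return
        # Otherwise insert; if the repository is full, evict the oldest neuron
        # (FIFO), which also acts as a crude forgetting mechanism under drift.
        if len(self.weights) >= self.max_neurons:
            self.weights.pop(0); self.labels.pop(0); self.counts.pop(0)
        self.weights.append(w); self.labels.append(y); self.counts.append(1)
```

In a test-then-train loop one would call predict(x) on each arriving sample before calling partial_fit(x, y), so that accuracy is measured on unseen data as the stream evolves.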

Introduction

With the increasing number of applications based on fast-arriving information flows and applied to real scenarios (Zhou, Chawla, Jin & Williams, 2014; Alippi, 2014), concept drift has become a paramount issue for online learning environments. The distribution modeling data captured by sensor networks, mobile phones, intelligent user interfaces, industrial machinery and others alike is usually assumed to be stationary along time. However, in many real cases such an assumption does not hold, as the data source itself is subject to dynamic externalities that affect the stationarity of its produced data stream(s), e.g. seasonality, periodicity or sensor errors, among many others. As a result, possible patterns behind the produced data may change over time, either in the feature domain (new features are captured, part of the existing predictors disappear, or their value range evolves), or in the class domain (new classes emerge from the data streams, or some of the existing ones fade along time). This paradigm is what the literature has coined as concept drift, where the term concept refers to a stationary distribution relating a group of features to a set of classes. When the goal is to infer the aforementioned class patterns from data (online classification), incremental models trained over drifting streams become obsolete when transitioning from one concept to another. Consequently, they do not adapt appropriately to the newly emerged data distribution, unless they are modified to handle this unwanted effect efficiently. In order to minimize the impact of concept drift on the performance of predictive models, recent studies (Ditzler, Roveri, Alippi & Polikar, 2015; Webb, Hyde, Cao, Nguyen & Petitjean, 2016; Khamassi, Sayed-Mouchaweh, Hammami & Ghedira, 2018) have focused on the development of efficient techniques for continuous adaptation in evolving environments or, alternatively, on the incorporation of drift detectors and concept forgetting mechanisms (Žliobaitė, Pechenizkiy & Gama, 2016).
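As a concrete picture of the setting described above, the toy Python generator below produces a binary stream whose decision boundary changes abruptly halfway through, i.e. an abrupt drift in the class domain. The function name, boundary choices and drift point are illustrative assumptions only and do not correspond to the synthetic data sets used in the paper's experiments.

```python
# Illustrative only: a toy binary stream with one abrupt concept drift.
import numpy as np

def drifting_stream(n_samples=10000, drift_at=5000, seed=0):
    rng = np.random.default_rng(seed)
    for t in range(n_samples):
        x = rng.uniform(-1.0, 1.0, size=2)
        if t < drift_at:
            y = int(x[0] + x[1] > 0.0)   # concept A: boundary x1 + x2 = 0
        else:
            y = int(x[0] - x[1] > 0.0)   # concept B: boundary x1 - x2 = 0
        yield x, y

# An incremental learner consumes this stream one sample at a time
# (test-then-train); its accuracy drops at t = drift_at unless it adapts.
```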
