Article details | |
Publication | 2017 article |
English article length | 2 pages |
Cost | The English article is free to download. |
Published in | IEEE |
Article type | ISI |
English title | A Software Reliability Prediction Model Using Improved Long Short Term Memory Network |
Related disciplines | Computer engineering |
Related specializations | Software engineering, computer programming |
Venue | International Conference on Software Quality, Reliability and Security |
University | School of Reliability and Systems Engineering, Beihang University, China |
Keywords | software reliability prediction; long short term memory network; vanishing and exploding sensitivity |
DOI | https://doi.org/10.1109/QRS-C.2017.115 |
Product code | E8058 |
Excerpt from the article: |
I. INTRODUCTION
Software failure data is the basis of software reliability estimation, assessment, and prediction. To evaluate software reliability, researchers usually collect failure data either as failure counts or as times between failures. This paper presents an improved long short term memory network model, a kind of recurrent neural network (RNN). Karunanidhi and Darrell were the first to use an RNN model in the field of reliability prediction [1], and also the first research team to introduce neural networks into software reliability prediction; since then, scholars have invented many variants. But the bottleneck for further performance improvement is still the vanishing gradient problem.

II. IMPROVED LONG SHORT TERM MEMORY NETWORK

Training deep networks is usually based on back-propagation with stochastic gradient descent. As the number of layers increases, the gradients in the lower layers decay exponentially, so those layers cannot receive an effective training signal: the neural network is limited by the problem of unstable gradients. If the network uses the sigmoid activation function, the gradients of the earlier layers vanish exponentially.

The long short term memory network, referred to as LSTM, is a special kind of RNN with the ability to learn long-term dependencies. LSTM was proposed by Hochreiter & Schmidhuber [2], and many researchers have carried out a series of improvements that made it flourish. Thanks to the special design of its architecture, LSTM can avoid the problem of gradient instability. LSTM can add or remove information from the cell state, controlled by structures called gates; a gate is a way of selectively passing information.

The forget gate takes h_{t-1} and x_t as inputs and outputs a number between 0 and 1 for each element of the cell state C_{t-1}. A value of 1 means completely retain the information; a value of 0 means completely discard it:

f_t = σ(W_f · [h_{t-1}, x_t] + b_f)

where f_t is the output of the forget gate, σ is the sigmoid function, and W_f and b_f are the weights and biases to be learned. The next step is to decide which new information to store in the cell state: i_t is the update degree we decide at each state, and C̃_t is the vector of new candidate values that may be added to the cell state:

i_t = σ(W_i · [h_{t-1}, x_t] + b_i)
C̃_t = tanh(W_C · [h_{t-1}, x_t] + b_C)

The term i_t ∗ C̃_t is the new candidate value C̃_t multiplied by the update degree i_t, giving the new cell state:

C_t = f_t ∗ C_{t-1} + i_t ∗ C̃_t

Finally, o_t is the information we output at each state:

o_t = σ(W_o · [h_{t-1}, x_t] + b_o)
h_t = o_t ∗ tanh(C_t)

Back-propagation through time (BPTT) is too sensitive to recent distractions. In a feedforward neural network, exponential vanishing means that weight changes in the early layers are much smaller than those in later layers. Williams et al. proposed truncated back-propagation [3], which overcomes a series of problems that full BPTT brings when training the model. Inspired by batch normalization, Lei Ba et al. proposed an RNN performance optimization method, layer normalization [4], which can reduce RNN training time and achieve better overall performance.
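As an illustration of the gate equations above, here is a minimal single-step LSTM forward pass in NumPy. This is a generic sketch of the standard LSTM cell, not the authors' code; the weight names, shapes, and function signature are assumptions made for the example:

```python
import numpy as np

def sigmoid(z):
    # Logistic function used by the forget, input, and output gates
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W_f, W_i, W_c, W_o, b_f, b_i, b_c, b_o):
    """One LSTM forward step; each W maps the concatenated [h_prev, x_t]
    to the hidden size."""
    z = np.concatenate([h_prev, x_t])    # [h_{t-1}, x_t]
    f_t = sigmoid(W_f @ z + b_f)         # forget gate: 1 keeps, 0 discards
    i_t = sigmoid(W_i @ z + b_i)         # input gate: update degree
    c_tilde = np.tanh(W_c @ z + b_c)     # candidate cell values
    c_t = f_t * c_prev + i_t * c_tilde   # new cell state
    o_t = sigmoid(W_o @ z + b_o)         # output gate
    h_t = o_t * np.tanh(c_t)             # new hidden state
    return h_t, c_t
```

Because o_t lies in (0, 1) and tanh is bounded, every element of h_t stays in (-1, 1), and the additive update of c_t (rather than repeated matrix multiplication) is what lets gradients flow over long spans.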
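The layer normalization of Lei Ba et al. [4] mentioned above can be sketched in a few lines. This is a generic illustration of the technique, not code from the paper; in an LSTM it would typically be applied to each gate's pre-activation W · [h_{t-1}, x_t] before the nonlinearity:

```python
import numpy as np

def layer_norm(a, gain, bias, eps=1e-5):
    """Normalize the summed inputs of one layer across its hidden units,
    then rescale with a learned gain and shift with a learned bias."""
    mu = a.mean()       # mean over this layer's hidden units
    sigma = a.std()     # standard deviation over the same units
    return gain * (a - mu) / (sigma + eps) + bias
```

Unlike batch normalization, the statistics are computed per example over a single layer's units, so the method is independent of batch size and applies naturally at every recurrent time step.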