Free English article: Hierarchical Sequence-to-Sequence Model – IEEE 2019

 

Article details
Article title: Hierarchical Sequence-to-Sequence Model for Multi-Label Text Classification
Publication year: 2019
Length of English article: 9 pages
Download cost: free
Publisher database: IEEE
Article type: Research Article
Base article: not a base article
Indexing: Scopus – Master Journals List – JCR
Category: ISI
English article format: PDF
Impact factor (IF): 4.641 (2018)
H-index: 56 (2019)
SJR: 0.609 (2018)
ISSN: 2169-3536
Quartile: Q2 (2018)
Conceptual model: none
Questionnaire: none
Variables: none
References: included
Related fields: Computer Engineering, Information Technology Engineering
Related specializations: Computer Networks
Publication type: Journal
Journal: IEEE Access
Affiliation: School of Computer Science and Technology, Qilu University of Technology (ShanDong Academy of Sciences), Jinan 250353, China
Keywords: Sequence-to-sequence, multi-label text classification, self-attention, hierarchical decoder, attention mechanism
DOI: https://doi.org/10.1109/ACCESS.2019.2948855
Product code: E13895

 

Table of contents:
Abstract
I. Introduction
II. Related Work
III. Method
IV. Experimental Design
V. Results
Authors
Figures
References

 

Excerpt from the article:
Abstract

We propose a novel sequence-to-sequence model for multi-label text classification based on a ‘‘parallel encoding, serial decoding’’ strategy. The model combines a convolutional neural network and self-attention in parallel as the encoder to extract fine-grained local neighborhood information and global interaction information from the source text. We design a hierarchical decoder to decode and generate the label sequence. Our method not only gives full consideration to the interpretable fine-grained information in the source text but also effectively utilizes that information to generate the label sequence. We conducted extensive comparative experiments on three datasets. The results show that the proposed model has significant advantages over state-of-the-art baseline models. In addition, our analysis demonstrates that our model is competitive with RNN-based Seq2Seq models and is more robust when handling datasets with a high label/sample ratio.
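The ‘‘parallel encoding’’ idea described above can be sketched in a few lines of NumPy: one branch applies a 1-D convolution over a local window (fine-grained neighborhood features), the other applies scaled dot-product self-attention (global interaction features), and the two views are concatenated. This is a minimal illustration, not the paper's implementation; all dimensions and weights here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
T, d = 6, 8                             # sequence length, embedding size (hypothetical)
x = rng.standard_normal((T, d))         # token embeddings of the source text

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

# Branch 1: 1-D convolution over a width-k window -> local neighborhood features
k = 3                                   # kernel width (hypothetical)
W_conv = rng.standard_normal((k * d, d))
pad = np.pad(x, ((k // 2, k // 2), (0, 0)))          # same-length output
conv_out = np.stack([pad[t:t + k].reshape(-1) @ W_conv for t in range(T)])

# Branch 2: scaled dot-product self-attention -> global interaction features
W_q, W_k, W_v = (rng.standard_normal((d, d)) for _ in range(3))
Q, K, V = x @ W_q, x @ W_k, x @ W_v
attn_out = softmax(Q @ K.T / np.sqrt(d)) @ V

# Parallel combination: concatenate the two views for the decoder to attend over
encoded = np.concatenate([conv_out, attn_out], axis=-1)
print(encoded.shape)                    # (6, 16)
```

Running the two branches in parallel (rather than stacking them serially) lets each branch see the raw embeddings directly, which is the point of the strategy named in the abstract.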

Introduction

Multi-label text classification [1], [2] is an important and challenging task in natural language processing (NLP) that is more complicated than single-label classification because labels often exhibit complex dependencies. A typical real-life example is that labels such as ‘‘politics’’, ‘‘economics’’ and ‘‘culture’’ often appear together on the front pages of news websites; the goal is to help users select the information they want without being presented with irrelevant material. Many methods have been proposed for this task and have gradually achieved satisfactory performance. Binary relevance (BR) [3] is one of the earliest: it models the task as multiple independent single-label classification problems, deliberately ignoring label dependencies, yet still achieves a reasonable level of performance. To capture label dependencies, a classifier chain (CC) [4] converts the task into a series of linked binary classification problems. Conditional random fields (CRF) [5] and conditional Bernoulli mixtures (CBM) [6] have also been used to model label dependencies. However, these methods are suitable only for small or medium-scale datasets and are difficult to apply at large scale. With the development of neural networks, neural models have been applied to this task and have achieved further improvements.
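The binary relevance decomposition mentioned above can be sketched in plain Python: one independent binary classifier per label, each trained in isolation, with predictions simply unioned at the end. The toy corpus and the keyword-count ‘‘classifier’’ below are illustrative assumptions, not anything from the paper.

```python
from collections import Counter

# Toy corpus: each document is (token list, set of gold labels)
train = [
    (["election", "vote", "market"], {"politics", "economics"}),
    (["stocks", "market", "trade"],  {"economics"}),
    (["film", "festival", "art"],    {"culture"}),
    (["vote", "parliament"],         {"politics"}),
]
labels = {"politics", "economics", "culture"}

def fit_binary(label):
    """Train one independent binary 'classifier' for a single label:
    keep tokens that occur more often in positive documents than negative."""
    pos, neg = Counter(), Counter()
    for tokens, gold in train:
        (pos if label in gold else neg).update(tokens)
    return {t for t in pos if pos[t] > neg[t]}

models = {lab: fit_binary(lab) for lab in labels}

def predict(tokens):
    # Binary relevance: every per-label classifier fires independently;
    # dependencies between labels are deliberately ignored.
    return {lab for lab, vocab in models.items() if vocab & set(tokens)}

print(predict(["market", "vote"]))
```

Because each classifier never sees the other labels, BR cannot learn that, say, ‘‘politics’’ and ‘‘economics’’ tend to co-occur — exactly the limitation that classifier chains and the Seq2Seq formulation in this paper are designed to address.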

