Free English article on entropy-based pruning for learning Bayesian networks – Elsevier 2018

 

Article details
Translated article title: Entropy-based pruning for learning Bayesian networks using BIC
English article title: Entropy-based pruning for learning Bayesian networks using BIC
Year of publication: 2018
Number of pages (English article): 14 pages
Download cost: the English article is free to download.
Publisher database: Elsevier
Type of paper: Short communication
Base article: this is not a base (thesis) article.
Indexing: Scopus – Master Journals – JCR
Article category: ISI
English article format: PDF
Impact factor (IF): 3.034 in 2017
H-index: 129 in 2018
SJR: 0.88 in 2018
Related disciplines: computer engineering, information technology
Related specializations: computer networks
Presentation type: journal
Journal / conference: Artificial Intelligence
University: Utrecht University – The Netherlands
Keywords: structure learning; Bayesian networks; BIC; parent set pruning
English keywords: Structure learning; Bayesian networks; BIC; Parent set pruning
Digital object identifier (DOI): https://doi.org/10.1016/j.artint.2018.04.002
Product code: E10167
Translation status: a ready translation of this article is not available; a translation can be ordered.

 

Contents of the article:
Abstract
Keywords
1 Introduction
2 Structure learning of Bayesian networks
3 Pruning rules
4 Novel pruning rules
5 Experiments
6 Conclusions
Acknowledgement
References

Excerpt from the article:
Abstract

For decomposable score-based structure learning of Bayesian networks, existing approaches first compute a collection of candidate parent sets for each variable and then optimize over this collection by choosing one parent set for each variable without creating directed cycles while maximizing the total score. We target the task of constructing the collection of candidate parent sets when the score of choice is the Bayesian Information Criterion (BIC). We provide new non-trivial results that can be used to prune the search space of candidate parent sets of each node. We analyze how these new results relate to previous ideas in the literature both theoretically and empirically. We show in experiments with UCI data sets that gains can be significant. Since the new pruning rules are easy to implement and have low computational costs, they can be promptly integrated into all state-of-the-art methods for structure learning of Bayesian networks.
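For reference, the BIC score mentioned in the abstract decomposes over the variables; the following is the standard textbook definition for categorical data (standard notation, not copied from the paper):

$$\mathrm{BIC}(\mathcal{G}) \;=\; \sum_{i=1}^{n}\left[\,\sum_{j=1}^{q_i}\sum_{k=1}^{r_i} N_{ijk}\,\log\frac{N_{ijk}}{N_{ij}} \;-\; \frac{\log N}{2}\,q_i\,(r_i-1)\right]$$

where $r_i$ is the number of states of variable $X_i$, $q_i$ the number of joint configurations of its parent set, $N_{ijk}$ the number of records in which $X_i$ takes its $k$-th value with the parents in their $j$-th configuration, and $N_{ij} = \sum_k N_{ijk}$. Decomposability is what allows candidate parent sets to be scored and pruned independently for each variable.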

Introduction

A Bayesian network [1] is a well-known probabilistic graphical model with applications in a variety of fields. It is composed of (i) a directed acyclic graph (DAG) where each node is associated with a random variable and arcs represent dependencies between the variables entailing the Markov condition: every variable is conditionally independent of its non-descendant variables given its parents; and (ii) a set of conditional probability mass functions, one for each variable given its parents in the graph. Their graphical nature makes Bayesian networks excellent models for representing the complex probabilistic relationships that arise in many real problems, ranging from bioinformatics to law, from image processing to economic risk analysis.

Learning the structure (that is, the graph) of a Bayesian network from complete data is an NP-hard task [2]. We are interested in score-based learning, namely finding the structure that maximizes a score which depends on the data [3]. A typical first step of methods for this purpose is to build a list of suitable candidate parent sets for each of the n variables of the domain. An optimization is then run to pick one element from each list in a way that maximizes the total score and does not create directed cycles. This work concerns pruning ideas for building those lists. The problem is unlikely to admit a polynomial-time (in n) algorithm, since it is proven to be LOGSNP-hard [4]. Because of that, one usually imposes a maximum in-degree (number of parents per node) k and then simply computes the score of all parent sets containing up to k parents. A noteworthy exception is the greedy search of the K2 algorithm [5]. A high in-degree implies a large search space for the optimization and thus increases the chance of finding better structures; on the other hand, it requires more computation time, since there are Θ(n^k) candidate parent sets for a bound of k if an exhaustive search is performed.

Our contribution is to provide new rules for pruning sub-optimal parent sets when dealing with the Bayesian Information Criterion (BIC) score [6], one of the most widely used score functions in the literature. We devise new theoretical bounds that can be used in conjunction with currently published ones [7]. The new results provide tighter bounds on the maximum number of parents of each variable in the optimal graph, as well as new pruning techniques that can be used to skip large portions of the search space without any loss of optimality. Moreover, the bounds can be computed efficiently and are easy to implement, so they can be promptly integrated into existing software for learning Bayesian networks and yield immediate computational gains.

The paper is organized as follows. Section 2 presents the problem, some background and notation. Section 3 describes the existing results in the literature, and Section 4 contains the theoretical developments for the new bounds and pruning rules. Section 5 shows empirical results comparing the new results against previous ones, and finally some conclusions are given in Section 6.
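To make the candidate-list construction concrete, here is a minimal Python sketch, not code from the paper: it scores all parent sets of a single variable under BIC and then applies the classic subset-pruning rule for decomposable scores (a candidate parent set can be safely discarded when one of its proper subsets scores at least as well). The paper's new entropy-based rules tighten this further; only the well-known rule is shown here, and the function names and toy data are illustrative assumptions.

```python
# Minimal sketch, NOT code from the paper: BIC scoring of candidate parent
# sets for one variable, plus the classic subset-pruning rule for
# decomposable scores. The paper's new entropy-based rules are not
# implemented here; all names and the toy data are illustrative.
from collections import Counter
from itertools import combinations
from math import log

def bic_score(data, x, parents):
    """BIC(x | parents) = conditional log-likelihood - (log N / 2) * #free parameters."""
    n = len(data)
    # joint counts N_{x,pa} and parent-configuration counts N_{pa}
    counts_xp = Counter((row[x], tuple(row[p] for p in parents)) for row in data)
    counts_p = Counter(tuple(row[p] for p in parents) for row in data)
    ll = sum(c * log(c / counts_p[pa]) for (_, pa), c in counts_xp.items())
    r_x = len({row[x] for row in data})        # number of states of x
    q = 1
    for p in parents:                          # number of parent configurations
        q *= len({row[p] for row in data})
    return ll - 0.5 * log(n) * (r_x - 1) * q

def candidate_parent_sets(data, x, variables, max_indegree):
    """Score all parent sets up to max_indegree, then drop every set that is
    dominated by one of its own proper subsets (safe for decomposable scores)."""
    others = [v for v in variables if v != x]
    scored = {}
    for k in range(max_indegree + 1):
        for ps in combinations(others, k):
            scored[ps] = bic_score(data, x, ps)
    return {ps: s for ps, s in scored.items()
            if not any(scored[sub] >= s
                       for r in range(len(ps))
                       for sub in combinations(ps, r))}

# Toy usage: C = A XOR B, so the parent set {A, B} should survive pruning for C.
rows = [{"A": a, "B": b, "C": a ^ b} for a in (0, 1) for b in (0, 1)] * 25
print(candidate_parent_sets(rows, "C", ["A", "B", "C"], max_indegree=2))
```

Because BIC is decomposable, each variable's list can be built independently; the sketch above still scores every set before discarding, whereas the paper's new bounds restrict which supersets need to be scored at all, which is where the reported gains come from.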
