Free English article on the complexity of classes of discrete Markov and Bayesian networks – Elsevier 2018

Article details
Title: Complexity of concept classes induced by discrete Markov networks and Bayesian networks
Year of publication: 2018
Length (English PDF): 7 pages
Download cost: free
Published by: Elsevier
Article type: Research article
Indexing: ISI
Format: PDF
Related disciplines: Computer Engineering, Information Technology
Related specializations: Artificial Intelligence, Computer Networks
Journal: Pattern Recognition
Affiliation: School of Mathematics and Statistics, Xidian University, PR China
Keywords: Bayesian networks, Classification, Markov networks, Toric ideal, Vapnik–Chervonenkis dimension
DOI: https://doi.org/10.1016/j.patcog.2018.04.026
Product code: E8918
Translation status: No ready-made translation of this article is available; a translation can be ordered via the link below.
Free download: Download the free English article
Order translation: Order a translation of this article

 

Excerpt from the article:
1. Introduction

Markov networks and Bayesian networks, also known as undirected and directed acyclic graphical models respectively, are widely used in many fields, such as machine learning and bioinformatics [8,12,22,24,25]. A statistical model is a family of probability distributions; from the viewpoint of algebraic geometry, it is (part of) a real algebraic variety [26]. In particular, for discrete data, a statistical model is the set of all solutions of a system of polynomial equations within the probability simplex [15,27]. There are two ways to describe Markov networks and Bayesian networks: by conditional independence statements, or by a factorization of probability distributions. This corresponds to the principle in computational algebraic geometry that some varieties can be described either by defining equations or by parametric equations [see §3.3 of 9]. These two representations are inequivalent for Markov networks without positivity assumptions on the probability distributions [see Proposition 3.8 and Example 3.10 in 22], whereas they are equivalent for Bayesian networks [see Theorem 3.27 in 22].

Classification is one of the main problems in machine learning. To improve classification, there has been a growing body of work on Markov networks in the computer vision community [21], in social and affiliation networks [39], and in classifier learning [30]. Ghofrani et al. proposed a new probabilistic classifier based on decomposable models and applied it to two real-world internet traffic data sets [16]. Bayesian networks are another powerful tool for classification due to their simplicity and accuracy [4,6,11,13]. Naive Bayes classifiers, a special case of Bayesian network classifiers, are a popular classification tool for processing discrete data [4,34,35]. For more about discrete Bayesian network classifiers, see the recent comprehensive survey [3]. Several research groups have combined kernel methods and probabilistic models, for instance Ben-David et al. [2], Altun et al. [1], Taskar et al. [31], and Chechik et al. [5]. A major advantage of this technique is that one can construct a wide variety of more flexible classifiers.

The generalization performance of a learning method is extremely important in practice, because it provides a measure of the quality of the ultimately chosen model. One can use the Vapnik–Chervonenkis (VC) dimension to construct an estimate of the generalization error [32]; the VC dimension measures the complexity of a class of functions by assessing how wiggly its members can be [19]. Another measure of the complexity of a class of functions, called the Euclidean dimension, is the minimum dimension of a Euclidean space, equipped with the standard dot product, into which these functions can be embedded. Given a Markov network (Bayesian network) N, one obtains a concept class C_N induced by it. VC dimension and Euclidean dimension are thus two important indices for assessing the classification ability of a Markov network (Bayesian network), and three natural questions arise: (1) Can the two dimension values be obtained from the dimension of the model as a (smooth) submanifold of Euclidean space? (2) Are the two positive integers always equal? (3) How can these dimension values be calculated?
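To make the duality between defining equations and parametric equations described above concrete, here is a standard worked example, not taken from the paper's excerpt: for two binary random variables, the independence model is the portion of a toric variety lying in the probability simplex, and it admits both an implicit and a parametric description.

```latex
\[
\mathcal{M}_{X \perp Y}
  \;=\; \{\, p \in \Delta_{3} \;:\; p_{00}\,p_{11} - p_{01}\,p_{10} = 0 \,\}
  \;=\; \{\, p \;:\; p_{ij} = a_{i} b_{j},\ \ a, b \in \Delta_{1} \,\},
\]
where $p_{ij} = \Pr(X = i,\, Y = j)$. The single binomial
$p_{00}p_{11} - p_{01}p_{10}$ generates the toric ideal of this model.
```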
In this paper, we study the classification capability induced by a Markov network (Bayesian network) without considering the training data, the learning algorithm, or the construction of the network graph. We aim to answer the three questions above.
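As an illustration of question (3) in the simplest possible setting, the VC dimension of any finite concept class over a finite domain can be computed by brute-force enumeration of shattered subsets. The sketch below is not the paper's method; the threshold concept class and the four-point domain are hypothetical examples chosen only to make the definition of shattering executable.

```python
from itertools import combinations

def shatters(concepts, points):
    """True if every one of the 2^|points| labelings of `points`
    is realized by some concept in the class."""
    achieved = {tuple(c(x) for x in points) for c in concepts}
    return len(achieved) == 2 ** len(points)

def vc_dimension(concepts, domain):
    """Brute-force VC dimension: the size of the largest subset of
    `domain` shattered by `concepts` (the empty set is always shattered)."""
    vc = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(concepts, subset) for subset in combinations(domain, k)):
            vc = k
        else:
            # Every subset of a shattered set is itself shattered, so once
            # no k-subset is shattered, no larger subset can be either.
            break
    return vc

# Hypothetical example: threshold classifiers x >= t on a line shatter
# any single point but never a pair, so their VC dimension is 1.
domain = [0, 1, 2, 3]
thresholds = [lambda x, t=t: int(x >= t) for t in range(5)]
print(vc_dimension(thresholds, domain))  # prints 1
```

For the concept classes C_N induced by Markov or Bayesian networks the domain grows exponentially with the number of variables, so enumeration of this kind is only a conceptual aid; obtaining these dimension values by other means is exactly what question (3) asks.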
