Free English article on a query-based approach for referrer field analysis of log data – Springer 2018


 

Article details
Article title: Query based approach for referrer field analysis of log data using web mining techniques for ontology improvement
Publication year: 2018
Length of the English article: 12 pages
Download cost: the English article is free to download
Journal database: Springer
Article type: Research article
Base article: this article is not a base article
English article format: PDF
Related disciplines: Computer Engineering
Related specializations: Algorithms and Computation
Presentation type: Journal
Journal / conference: International Journal of Information Technology
University: Department of Computer Engineering – Punjabi University – India
Keywords: Web mining, Web usage mining, Ontology, Log file, User sessions, Clustering, Knowledge discovery
DOI: https://doi.org/10.1007/s41870-017-0063-2
Product code: E9312
Translation status: a ready-made translation of this article is not available; it can be ordered via the button below.
Free article download: download the free English article
Order a translation: order a translation of this article

 

Article table of contents:
Abstract
1 Introduction
2 Proposed methodology
3 Ontology construction
4 Pre-processing
5 Feature extraction
6 Experiments and results of the log analysis
7 Ontology improvement
8 Contributions
9 Conclusion and future work
References

 

Excerpt from the article:

Introduction

Web usage mining (WUM), also known as web log mining, is the application of data mining techniques to web data in order to extract relevant data and discover useful patterns [1], with the aim of improving the usefulness of various web-based applications. The process of web usage mining can be broadly divided into four phases: sourcing or collection of data, pre-processing or removal of 'noise', discovery of interesting patterns, and finally analysis of the discovered patterns [2]. The first phase is simply the sourcing of the data or information to be processed from various resources, which in our case will predominantly be the log files obtained from web servers, web proxy servers and client browsers [3]. In the second phase, the quality of the source log file is improved by removing the extraneous, immaterial data termed 'noise', making the log file ready for further processing such as user and session identification [4–6]. In the third phase, statistical techniques such as association, classification and clustering are applied to the pre-processed data to discover interesting arrangements or patterns [7–9]. In the last phase, the identified patterns are subjected to various analytical tools and mechanisms [1, 10] to finally extract the 'essence', or knowledge, which has applications in a wide variety of fields such as commerce, improvement of web-based applications, identification of criminals and international security, attracting and retaining customers, increasing website visits, and so on.

The present age is rightly referred to as the age of information and knowledge, as having useful information or knowledge at the right time gives one a huge advantage over others, leading to appropriate and efficient decision making and plan execution. In the last two decades, however, with the advent of internet technology and open-source resources, there has been a humongous surge in the amount of information, leading to 'information overload'. It becomes a challenging and time-consuming task to sift through this huge volume of data to extract relevant information on a topic. To overcome this issue, certain techniques have been developed which help us retrieve relevant results efficiently and accurately from the web. The plethora of data mining techniques, methodologies and algorithms applied to web data and web logs to extract relevant data and discover useful patterns, with the aim of improving the usefulness of various web-based applications, is known as web usage mining.

An ontology is an explicit formal specification of the terms in a domain and the relations among them [11]. Commonly it is defined to consist of abstract concepts and relationships only; in some rare cases, ontologies are defined to also include instances of concepts and relationships [11]. This paper shows how web log data can help to continuously update and improve the knowledge base of an existing ontology. Web mining techniques can be applied to web log files to find suggestions for improving the existing ontology, and some research has already been done in this field [11–16]. In this work the researcher has used Protégé 5.0 (for ontology construction) and the Weka tool (for data mining algorithms). The thrust of this paper is facilitating information retrieval through a novel ontology management approach based on web log data.
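As the paper's title indicates, the analysis focuses on the referrer field of server log data. The short Python sketch below is not the authors' implementation; it only illustrates, under stated assumptions, the first two phases described above: parsing Apache Combined Log Format lines, discarding malformed or referrer-less entries as 'noise', and extracting search-query terms from referrer URLs. The sample log lines and the assumption that the search query arrives in a "q" parameter are purely illustrative.

# Minimal sketch (illustrative only): referrer-field extraction and query-term counting.
import re
from collections import Counter
from urllib.parse import urlparse, parse_qs

# Apache Combined Log Format:
# host ident user [time] "request" status bytes "referrer" "agent"
LOG_PATTERN = re.compile(
    r'(?P<host>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) \S+ '
    r'"(?P<referrer>[^"]*)" "(?P<agent>[^"]*)"'
)

def referrer_query_terms(lines):
    """Yield lowercase query terms found in the referrer field of each log line."""
    for line in lines:
        match = LOG_PATTERN.match(line)
        if not match:
            continue                      # noise: malformed line, drop it
        referrer = match.group("referrer")
        if referrer in ("", "-"):
            continue                      # direct visit, no referrer to analyse
        query = parse_qs(urlparse(referrer).query)
        for term in query.get("q", []):   # assumption: the search query sits in "q"
            yield from term.lower().split()

if __name__ == "__main__":
    sample = [
        '203.0.113.7 - - [10/Oct/2017:13:55:36 +0530] "GET /courses/ai HTTP/1.1" '
        '200 2326 "https://www.google.com/search?q=machine+learning+course" "Mozilla/5.0"',
        '203.0.113.9 - - [10/Oct/2017:13:57:01 +0530] "GET /favicon.ico HTTP/1.1" '
        '404 209 "-" "Mozilla/5.0"',
    ]
    print(Counter(referrer_query_terms(sample)).most_common())
    # -> [('machine', 1), ('learning', 1), ('course', 1)]

Term frequencies gathered in this way could then feed the later phases the paper describes, for example clustering with the Weka tool and suggesting candidate concepts with which to enrich the Protégé ontology.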
