Ontology

From WikiPapers

Ontology is included as a keyword or extra keyword in 0 datasets, 0 tools and 18 publications.

Datasets

There are no datasets for this keyword.

Tools

There are no tools for this keyword.


Publications

Title | Author(s) | Published in | Language | Date | Abstract | R | C
The people's encyclopedia under the gaze of the sages: a systematic review of scholarly research on Wikipedia Chitu Okoli
Mohamad Mehdi
Mostafa Mesgari
Finn Årup Nielsen
Arto Lanamäki
English 2012 Wikipedia has become one of the ten most visited sites on the Web, and the world’s leading source of Web reference information. Its rapid success has inspired hundreds of scholars from various disciplines to study its content, communication and community dynamics from various perspectives. This article presents a systematic review of scholarly research on Wikipedia. We describe our detailed, rigorous methodology for identifying over 450 scholarly studies of Wikipedia. We present the WikiLit website (http://wikilit.referata.com), where most of the papers reviewed here are described in detail. In the major section of this article, we then categorize and summarize the studies. An appendix features an extensive list of resources useful for Wikipedia researchers. 15 0
Building ontological models from Arabic Wikipedia: a proposed hybrid approach Nora I. Al-Rajebah
Hend S. Al-Khalifa
AbdulMalik S. Al-Salman
IiWAS English 2010 0 0
Frequent itemset based hierarchical document clustering using Wikipedia as external knowledge G. V. R. Kiran
Ravi Shankar
Vikram Pudi
KES English 2010 0 0
Ontology-driven generation of wiki content and interfaces Angelo Di Iorio
Alberto Musetti
Silvio Peroni
Fabio Vitali
New Rev. Hypermedia Multimedia English 2010 0 0
Semantic Web for E-Governance Using Wiki Technology Vidya Gavekar
Manisha A. Kumbhar
Anil D. Kumbhar
ICETET English 2010 0 0
Timely YAGO: harvesting, querying, and visualizing temporal knowledge from Wikipedia Yafang Wang
Mingjie Zhu
Lizhen Qu
Marc Spaniol
Gerhard Weikum
EDBT English 2010 0 0
"All You Can Eat" Ontology-Building: Feeding Wikipedia to Cyc Samuel Sarjant
Catherine Legg
Michael Robinson
Olena Medelyan
WI-IAT English 2009 In order to achieve genuine web intelligence, building some kind of large general machine-readable conceptual scheme (i.e. ontology) seems inescapable. Yet the past 20 years have shown that manual ontology-building is not practicable. The recent explosion of free user-supplied knowledge on the Web has led to great strides in automatic ontology-building, but quality-control is still a major issue. Ideally one should automatically build onto an already intelligent base. We suggest that the long-running Cyc project is able to assist here. We describe methods used to add 35K new concepts mined from Wikipedia to collections in ResearchCyc entirely automatically. Evaluation with 22 human subjects shows high precision both for the new concepts’ categorization, and their assignment as individuals or collections. Most importantly we show how Cyc itself can be leveraged for ontological quality control by ‘feeding’ it assertions one by one, enabling it to reject those that contradict its other knowledge. 0 0
Identifying document topics using the Wikipedia category network Peter Schönhofen Web Intelli. and Agent Sys. English 2009 In the last few years the size and coverage of Wikipedia, a community edited, freely available on-line encyclopedia has reached the point where it can be effectively used to identify topics discussed in a document, similarly to an ontology or taxonomy. In this paper we will show that even a fairly simple algorithm that exploits only the titles and categories of Wikipedia articles can characterize documents by Wikipedia categories surprisingly well. We test the reliability of our method by predicting categories of Wikipedia articles themselves based on their bodies, and also by performing classification and clustering on 20 Newsgroups and RCV1, representing documents by their Wikipedia categories instead of (or in addition to) their texts. 0 1
Mining meaning from Wikipedia Olena Medelyan
David N. Milne
Catherine Legg
Ian H. Witten
Int. J. Hum.-Comput. Stud.
English 2009 Wikipedia is a goldmine of information; not just for its many readers, but also for the growing community of researchers who recognize it as a resource of exceptional scale and utility. It represents a vast investment of manual effort and judgment: a huge, constantly evolving tapestry of concepts and relations that is being applied to a host of tasks. This article provides a comprehensive description of this work. It focuses on research that extracts and makes use of the concepts, relations, facts and descriptions found in Wikipedia, and organizes the work into four broad categories: applying Wikipedia to natural language processing; using it to facilitate information retrieval and information extraction; and as a resource for ontology building. The article addresses how Wikipedia is being used as is, how it is being improved and adapted, and how it is being combined with other structures to create entirely new resources. We identify the research groups and individuals involved, and how their work has developed in the last few years. We provide a comprehensive list of the open-source software they have produced. 0 4
Organización de información en herramientas wiki: aplicación de ontologías en wikis semánticos [Information organization in wiki tools: applying ontologies in semantic wikis] Jesús Tramullas
Piedad Garrido-Picazo
IX Congreso ISKO España, Nuevas perspectivas para la difusión y organización del conocimiento, Univ. Politécnica de Valencia Spanish 2009 This work reviews the methods and techniques applied in the software tools and platforms known as semantic wikis. It surveys the available literature on the subject and highlights the main features offered by the various tools. The analysis confirms the variety of semantic methods and techniques in use, and shows that the available products, in their current state, are not yet ready for use in management information systems. 0 0
WikiOnto: A System for Semi-automatic Extraction and Modeling of Ontologies Using Wikipedia XML Corpus Lalindra Niranjan De Silva
Lakshman Jayaratne
ICSC English 2009 0 0
Automatically Refining the Wikipedia Infobox Ontology Fei Wu
Daniel S. Weld
17th International World Wide Web Conference (WWW-08) 2008 The combined efforts of human volunteers have recently extracted numerous facts from Wikipedia, storing them as machine-harvestable object-attribute-value triples in Wikipedia infoboxes. Machine learning systems, such as Kylin, use these infoboxes as training data, accurately extracting even more semantic knowledge from natural language text. But in order to realize the full power of this information, it must be situated in a cleanly-structured ontology. This paper introduces KOG, an autonomous system for refining Wikipedia’s infobox-class ontology towards this end. We cast the problem of ontology refinement as a machine learning problem and solve it using both SVMs and a more powerful joint-inference approach expressed in Markov Logic Networks. We present experiments demonstrating the superiority of the joint-inference approach and evaluating other aspects of our system. Using these techniques, we build a rich ontology, integrating Wikipedia’s infobox-class schemata with WordNet. We demonstrate how the resulting ontology may be used to enhance Wikipedia with improved query processing and other features. 0 0
Ontology enhanced web image retrieval: aided by wikipedia & spreading activation theory Huan Wang
Xing Jiang
Liang-Tien Chia
Ah-Hwee Tan
MIR English 2008 0 0
Wikipedia as an Ontology for Describing Documents Zareen Syed
Tim Finin
Anupam Joshi
Proceedings of the Second International Conference on Weblogs and Social Media, AAAI, March 31, 2008 2008 Identifying topics and concepts associated with a set of documents is a task common to many applications. It can help in the annotation and categorization of documents and be used to model a person's current interests for improving search results, business intelligence or selecting appropriate advertisements. One approach is to associate a document with a set of topics selected from a fixed ontology or vocabulary of terms. We have investigated using Wikipedia's articles and associated pages as a topic ontology for this purpose. The benefits are that the ontology terms are developed through a social process, maintained and kept current by the Wikipedia community, represent a consensus view, and have meaning that can be understood simply by reading the associated Wikipedia page. We use Wikipedia articles and the category and article link graphs to predict concepts common to a set of documents. We describe several algorithms to aggregate and refine results, including the use of spreading activation to select the most appropriate terms. While the Wikipedia category graph can be used to predict generalized concepts, the article links graph helps by predicting more specific concepts and concepts not in the category hierarchy. Our experiments demonstrate the feasibility of extending the category system with new concepts identified as a union of pages from the page link graph. 0 0
Wikipedia in Action: Ontological Knowledge in Text Categorization Maciej Janik
Krys J. Kochut
ICSC English 2008 0 0
YAGO: A Large Ontology from Wikipedia and WordNet F. Suchanek
G. Kasneci
G. Weikum
Web Semantics: Science, Services and Agents on the World Wide Web English 2008 This article presents YAGO, a large ontology with high coverage and precision. YAGO has been automatically derived from Wikipedia and WordNet. It comprises entities and relations, and currently contains more than 1.7 million entities and 15 million facts. These include the taxonomic Is-A hierarchy as well as semantic relations between entities. The facts for YAGO have been extracted from the category system and the infoboxes of Wikipedia and have been combined with taxonomic relations from WordNet. Type checking techniques help us keep YAGO’s precision at 95%—as proven by an extensive evaluation study. YAGO is based on a clean logical model with a decidable consistency. Furthermore, it allows representing n-ary relations in a natural way while maintaining compatibility with RDFS. A powerful query model facilitates access to YAGO’s data. 0 1
OntoWiki: Community-driven Ontology Engineering and Ontology Usage based on Wikis Martin Hepp
Daniel Bachlechner
Katharina Siorpaes
WikiSym English 2006 0 1
Wiki Communities in the Context of Work Processes Frank Fuchs-Kittowski
Andre Köhler
WikiSym English 2005 In this article we examine the integration of communities of practice supported by a wiki into work processes. Linear structures are often inappropriate for the execution of knowledge-intensive tasks and work processes. The latter are characterized by non-linear sequences and dynamic social interaction. Communities of practice, however, often lack the "guiding light" needed to structure their work. We discuss the primary requirements for the integration of formally described knowledge-intensive processes into the dynamic social processes of knowledge generation in communities of practice and use the wiki approach for their support. We present our approach for an appropriate interface to integrate wiki communities into process structures and an information retrieval algorithm based on it to connect the process-oriented structures with community-oriented wiki structures. We show the prototypical realization of the concept by a brief example. 0 1
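Several of the publications above, notably Schönhofen's work on identifying document topics via the Wikipedia category network, rest on the same simple idea: match article titles occurring in a document, then score the Wikipedia categories those articles belong to. A minimal sketch of that idea follows; the tiny title-to-category index and the unigram matching are illustrative assumptions, not the actual method or data of any of the papers listed.

```python
# Illustrative sketch: characterize a document by Wikipedia categories,
# following the title-and-category matching idea described in the
# Schönhofen abstract above. The TITLE_CATEGORIES index here is a
# hypothetical three-entry stand-in for a real title -> categories map.

from collections import Counter

TITLE_CATEGORIES = {
    "ontology": ["Knowledge representation", "Semantic Web"],
    "wikipedia": ["Online encyclopedias", "Wikis"],
    "wordnet": ["Lexical databases", "Knowledge representation"],
}

def document_categories(text, top_n=2):
    """Score each category by how many known article titles in the
    document map to it, and return the top_n highest-scoring ones."""
    scores = Counter()
    for token in text.lower().split():
        for category in TITLE_CATEGORIES.get(token, []):
            scores[category] += 1
    return [cat for cat, _ in scores.most_common(top_n)]

# "wordnet" and "ontology" both vote for "Knowledge representation",
# so it outranks the categories that receive only one vote.
print(document_categories("YAGO combines Wikipedia and WordNet into one ontology"))
```

A production version would match multi-word titles (the real system uses full article titles, not unigrams) and weight categories by depth in the category graph, but the voting core stays the same.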