RDF is included as a keyword or extra keyword in 0 datasets, 0 tools and 22 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|An initial analysis of semantic wikis||Gil Y.||International Conference on Intelligent User Interfaces, Proceedings IUI||English||2013||Semantic wikis augment wikis with semantic properties that can be used to aggregate and query data through reasoning. Semantic wikis are used by many communities, for widely varying purposes such as organizing genomic knowledge, coding software, and tracking environmental data. Although wikis have been analyzed extensively, there has been no published analysis of the use of semantic wikis. We carried out an initial analysis of twenty semantic wikis selected for their diverse characteristics and content. Based on the number of property edits per contributor, we identified several patterns to characterize community behaviors that are common to groups of wikis.||0||0|
|DBpedia and the live extraction of structured data from Wikipedia||Morsey M.||Program||English||2012||Purpose: DBpedia extracts structured information from Wikipedia, interlinks it with other knowledge bases and freely publishes the results on the web using Linked Data and SPARQL. However, the DBpedia release process is heavyweight and releases are sometimes based on several months old data. DBpedia-Live solves this problem by providing a live synchronization method based on the update stream of Wikipedia. This paper seeks to address these issues. Design/methodology/approach: Wikipedia provides DBpedia with a continuous stream of updates, i.e. a stream of articles which were recently updated. DBpedia-Live processes that stream on the fly to obtain RDF data and stores the extracted data back to DBpedia. DBpedia-Live publishes the newly added/deleted triples in files, in order to enable synchronization between the DBpedia endpoint and other DBpedia mirrors. Findings: During the realization of DBpedia-Live the authors learned that it is crucial to process Wikipedia updates in a priority queue. Recently-updated Wikipedia articles should have the highest priority, over mapping-changes and unmodified pages. An overall finding is that there are plenty of opportunities arising from the emerging Web of Data for librarians. Practical implications: DBpedia had and has a great effect on the Web of Data and became a crystallization point for it. Many companies and researchers use DBpedia and its public services to improve their applications and research approaches. The DBpedia-Live framework improves DBpedia further by timely synchronizing it with Wikipedia, which is relevant for many use cases requiring up-to-date information. Originality/value: The new DBpedia-Live framework adds new features to the old DBpedia-Live framework, e.g. abstract extraction, ontology changes, and changesets publication.||0||0|
|Design and Evaluation of an IR-Benchmark for SPARQL Queries with Fulltext Conditions||Mishra A.||International Conference on Information and Knowledge Management, Proceedings||English||2012||In this paper, we describe our goals in introducing a new, annotated benchmark collection, with which we aim to bridge the gap between the fundamentally different aspects that are involved in querying both structured and unstructured data. This semantically rich collection, captured in a unified XML format, combines components (unstructured text, semistructured infoboxes, and category structure) from 3.1 Million Wikipedia articles with highly structured RDF properties from both DBpedia and YAGO2. The new collection serves as the basis of the INEX 2012 Ad-hoc, Faceted Search, and Jeopardy retrieval tasks. With a focus on the new Jeopardy task, we particularly motivate the usage of the collection for question-answering (QA) style retrieval settings, which we also exemplify by introducing a set of 90 QA-style benchmark queries which come shipped in a SPARQL-based query format that has been extended by fulltext filter conditions.||0||0|
|Models for efficient semantic data storage demonstrated on concrete example of DBpedia||Lasek I.||CEUR Workshop Proceedings||English||2012||In this paper, we introduce a benchmark to test efficiency of the RDF data model for data storage and querying in relation to a concrete dataset. We created Czech DBpedia - a freely available dataset composed of data extracted from Czech Wikipedia. But during creation and querying of this dataset, we faced problems caused by a lack of performance of the used RDF storage. We designed metrics to measure the efficiency of data storage approaches. Our metric quantifies the impact of data decomposition in RDF triples. Results of our benchmark applied to the dataset of Czech DBpedia are presented.||0||0|
|Publishing statistical data on the web||Salas P.E.R.||Proceedings - IEEE 6th International Conference on Semantic Computing, ICSC 2012||English||2012||Statistical data is one of the most important sources of information, relevant for large numbers of stakeholders in the governmental, scientific and business domains alike. In this article, we overview how statistical data can be managed on the Web. With OLAP2 Data Cube and CSV2 Data Cube we present two complementary approaches on how to extract and publish statistical data. We also discuss the linking, repair and the visualization of statistical data. As a comprehensive use case, we report on the extraction and publishing on the Web of statistical data describing 10 years of life in Brazil.||0||0|
|ViDaX: An interactive semantic data visualisation and exploration tool||Dumas B.||Proceedings of the Workshop on Advanced Visual Interfaces AVI||English||2012||We present the Visual Data Explorer (ViDaX), a tool for visualising and exploring large RDF data sets. ViDaX enables the extraction of information from RDF data sources and offers functionality for the analysis of various data characteristics as well as the exploration of the corresponding ontology graph structure. In addition to some basic data mining features, our interactive semantic data visualisation and exploration tool offers various types of visualisations based on the type of data. In contrast to existing semantic data visualisation solutions, ViDaX also offers non-expert users the possibility to explore semantic data based on powerful automatic visualisation and interaction techniques without the need for any low-level programming. To illustrate some of ViDaX's functionality, we present a use case based on semantic data retrieved from DBpedia, a semantic version of the well-known Wikipedia online encyclopedia, which forms a major component of the emerging linked data initiative.||0||0|
|PoolParty: SKOS thesaurus management utilizing linked data||Schandl T.||Lecture Notes in Computer Science||English||2010||Building and maintaining thesauri are complex and laborious tasks. PoolParty is a Thesaurus Management Tool (TMT) for the Semantic Web, which aims to support the creation and maintenance of thesauri by utilizing Linked Open Data (LOD), text-analysis and easy-to-use GUIs, so thesauri can be managed and utilized by domain experts without needing knowledge about the semantic web. Some aspects of thesaurus management, like the editing of labels, can be done via a wiki-style interface, allowing for lowest possible access barriers to contribution. PoolParty can analyse documents in order to glean new concepts for a thesaurus. Additionally, a thesaurus can be enriched by retrieving relevant information from Linked Data sources; thesauri can be imported and updated via LOD URIs from external systems and can also be published as new linked data sources on the semantic web.||0||0|
|Semantically enriched tools for the knowledge society: Case of project management and presentation||Talas J.||Communications in Computer and Information Science||English||2010||Working with semantically rich data is one of the stepping stones to the knowledge society. In recent years, gathering, processing, and using semantic data have made great progress, particularly in the academic environment. However, the advantages of the semantic description remain commonly undiscovered by the "common user", including people from academia and the IT industry who could otherwise profit from the capabilities of contemporary semantic systems in the areas of project management and/or technology-enhanced learning. Mostly, the root cause lies in the complexity and non-transparency of the mainstream semantic applications. The semantic tool for project management and presentation consists mainly of a module for the semantic annotation of wiki pages integrated into the project management system Trac. It combines the dynamic, ease-of-use nature and applicability of a wiki for project management with the advantages of a semantically rich and accurate approach. The system is released as open-source (OS) and is used for management of students' and research projects at the research lab of the authors.||0||0|
|Visual Semantic Client a visualization tool for semantic content||Wahl H.||ICETC 2010 - 2010 2nd International Conference on Education Technology and Computer||English||2010||The University of Applied Sciences Technikum Wien is a fast-growing education organization that currently offers a set of 12 bachelor and 14 master degree programs. Coordination of lectures, and therefore quality management, has become more and more difficult. Knowledge management in terms of lecture contents and professional skills of lecturers seems to be an unsolvable task. As a matter of fact, nobody is able to overlook all teaching details of the whole university. Although information is available in several databases and documents, even getting an overview of the detailed content of a single degree program turns out to be impossible. To overcome this problem, the Technikum Wien started a project to extract selected information from documents and store it in a Semantic Wiki by automatically setting up entities and their relations. To improve the usage of semantic content, a software tool to browse the information categories and their relations has been developed. The "Visual Semantic Client" visualizes entities with their attributes and allows following or searching their relations. The paper presents the underlying concepts, the system architecture and the current state of development.||0||0|
|DBpedia – A Crystallization Point for the Web of Data||Christian Bizer||Journal of Web Semantics: Science, Services and Agents on the World Wide Web||English||2009||The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.||0||0|
|Language-model-based ranking for queries on RDF-graphs||Elbassuoni S.||International Conference on Information and Knowledge Management, Proceedings||English||2009||The success of knowledge-sharing communities like Wikipedia and the advances in automatic information extraction from textual and Web sources have made it possible to build large "knowledge repositories" such as DBpedia, Freebase, and YAGO. These collections can be viewed as graphs of entities and relationships (ER graphs) and can be represented as a set of subject-property-object (SPO) triples in the Semantic-Web data model RDF. Queries can be expressed in the W3C-endorsed SPARQL language or by similarly designed graph-pattern search. However, exact-match query semantics often fall short of satisfying the users' needs by returning too many or too few results. Therefore, IR-style ranking models are crucially needed. In this paper, we propose a language-model-based approach to ranking the results of exact, relaxed and keyword-augmented graph pattern queries over RDF graphs such as ER graphs. Our method estimates a query model and a set of result-graph models and ranks results based on their Kullback-Leibler divergence with respect to the query model. We demonstrate the effectiveness of our ranking model by a comprehensive user study. Copyright 2009 ACM.||0||0|
|Semantic Wiki aided business process specification||Toufeeq Hussain||WWW'09 - Proceedings of the 18th International World Wide Web Conference||English||2009||This paper formulates a collaborative system for modeling business applications. The system uses a Semantic Wiki to enable collaboration between the various stakeholders involved in the design of the system and translates the captured intelligence into business models which are used for designing a business system. Copyright is held by the author/owner(s).||0||0|
|Next-generation wikis: What users expect; How RDF helps||Rauschmayer A.||CEUR Workshop Proceedings||English||2008||Even though wikis helped start the web 2.0 phenomenon, they currently run the risk of becoming outdated. In order to find out what aspects of wikis will survive and how wikis might need to evolve, the author held a survey among wiki users. This paper argues that adding RDF integration to wikis helps meet the requirements implicitly contained in the answers of that survey. Technical details are given by looking at the semantic wiki Hyena.||0||0|
|Chapter 7 Achieving a Holistic Web in the Chemistry Curriculum||Rzepa H.S.||Annual Reports in Computational Chemistry||English||2007||[No abstract available]||0||0|
|Medical Librarian 2.0||Connor E.||Medical Reference Services Quarterly||English||2007||Web 2.0 refers to an emerging social environment that uses various tools to create, aggregate, and share dynamic content in ways that are more creative and interactive than transactions previously conducted on the Internet. The extension of this social environment to libraries, sometimes called Library 2.0, has profound implications for how librarians will work, collaborate, and deliver content. Medical librarians can connect with present and future generations of users by learning more about the social dynamics of Web 2.0's vast ecosystem, and incorporating some of its interactive tools and technologies (tagging, peer production, and syndication) into routine library practice. © 2007 by The Haworth Press, Inc. All rights reserved.||0||0|
|A wiki as an extensible RDF presentation engine||Rauschmayer A.||CEUR Workshop Proceedings||English||2006||Semantic wikis establish the role of wikis as integrators of structured and semi-structured data. In this paper, we present Wikked, which is a semantic wiki turned inside out: it is a wiki engine that is embedded in the generic RDF editor Hyena. That is, Hyena edits (structured) RDF and leaves it to Wikked to display (semi-structured) wiki pages stored in RDF nodes. Wiki text has a clearly defined core syntax, while traditional wiki syntax is regarded as syntactic sugar. It is thus easy to convert Wikked pages to various output formats such as HTML and LaTeX. Wikked's built-in functions for presenting RDF data and for invoking Hyena functionality endow it with the ability to define simple custom user interfaces to RDF data.||0||0|
|Harvesting Wiki Consensus - Using Wikipedia Entries as Ontology Elements||Martin Hepp||CEUR Workshop Proceedings||English||2006||One major obstacle towards adding machine-readable annotation to existing Web content is the lack of domain ontologies. While FOAF and Dublin Core are popular means for expressing relationships between Web resources and between Web resources and literal values, we widely lack unique identifiers for common concepts and instances. Also, most available ontologies have a very weak community grounding in the sense that they are designed by single individuals or small groups of individuals, while the majority of potential users is not involved in the process of proposing new ontology elements or achieving consensus. This is in sharp contrast to natural language, where the evolution of the vocabulary is under the control of the user community. At the same time, we can observe that, within Wiki communities, especially Wikipedia, a large number of users is able to create comprehensive domain representations in the sense of unique, machine-feasible, identifiers and concept definitions which are sufficient for humans to grasp the intension of the concepts. The English version of Wikipedia now contains more than one million entries and thus the same amount of URIs plus a human-readable description. While this collection is on the lower end of ontology expressiveness, it is likely the largest living ontology that is available today. In this paper, we (1) show that standard Wiki technology can be easily used as an ontology development environment for named classes, reducing entry barriers for the participation of users in the creation and maintenance of lightweight ontologies, (2) prove that the URIs of Wikipedia entries are surprisingly reliable identifiers for ontology concepts, and (3) demonstrate the applicability of our approach in a use case.||0||0|
|Manufacturing feature library as a manufacturing information management system for process planning||Hendry Muljadi||36th International Conference on Computers and Industrial Engineering, ICC and IE 2006||English||2006||A manufacturing feature can be defined simply as a geometric shape and its manufacturing information to create the shape. For the generation of process plans in a manufacturing feature-based process planning system, it is necessary to develop a manufacturing feature library that consists of predefined manufacturing features and the manufacturing information to create the shape of the features. In other words, the manufacturing feature library plays an important role for the extraction of manufacturing features with their proper manufacturing information. However, to manage the manufacturing information flexibly, it is important to build a manufacturing feature library that is easy to manage. In this paper, the implementation of a Semantic Wiki for the development of the manufacturing feature library as the manufacturing information management system is proposed.||0||0|
|OntoWiki: Community-driven Ontology Engineering and Ontology Usage based on Wikis||Martin Hepp||WikiSym||English||2006||Ontologies are consensual representations of a domain of discourse and the backbone of the future Semantic Web. Currently, however, only a fraction of Web users can take part in the process of building ontologies. In this paper, we show that standard Wiki technology can be used as an ontology development platform, reducing entry barriers for the participation of users in the creation and maintenance of ontologies, and describe our first OntoWiki prototype.||0||1|
|Semantic Wiki as a lightweight knowledge management system||Hendry Muljadi||Lecture Notes in Computer Science||English||2006||Since its birth in 1995, the Wiki has become more and more popular. This paper presents a Semantic Wiki, a Wiki extended to include the ideas of the Semantic Web. The proposed Semantic Wiki uses a simple Wiki syntax to write labeled links which represent RDF triples. By enabling the writing of labeled links, Semantic Wiki may provide an easy-to-use and flexible environment for an integrated management of content and metadata, so that Semantic Wiki may be used as a lightweight knowledge management system.||0||0|
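Several of the publications above describe RDF data as subject-property-object (SPO) triples queried through graph patterns (e.g. the language-model ranking and Semantic Wiki entries). A minimal sketch in plain Python illustrates the idea; the triples and prefixed names below are made-up examples in DBpedia style, not data from any of the listed systems:

```python
# RDF-style subject-property-object (SPO) triples, modeled as plain tuples.
# All resource names below are illustrative examples.
triples = {
    ("dbpedia:Berlin", "rdf:type", "dbpedia-owl:City"),
    ("dbpedia:Berlin", "dbpedia-owl:country", "dbpedia:Germany"),
    ("dbpedia:Hamburg", "rdf:type", "dbpedia-owl:City"),
    ("dbpedia:Hamburg", "dbpedia-owl:country", "dbpedia:Germany"),
}

def match(pattern, store):
    """Return all triples matching a pattern; None acts as a wildcard,
    playing the role of a variable in a SPARQL graph pattern."""
    s, p, o = pattern
    return [t for t in store
            if (s is None or t[0] == s)
            and (p is None or t[1] == p)
            and (o is None or t[2] == o)]

# "Which resources are cities?" -- roughly analogous to the SPARQL query
#   SELECT ?s WHERE { ?s rdf:type dbpedia-owl:City }
cities = sorted(t[0] for t in match((None, "rdf:type", "dbpedia-owl:City"), triples))
print(cities)  # → ['dbpedia:Berlin', 'dbpedia:Hamburg']
```

Real systems such as the DBpedia SPARQL endpoint evaluate far richer patterns (joins over several triple patterns, filters, ranking), but the wildcard-matching step shown here is the basic building block they share.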