Sören Auer

From WikiPapers

Sören Auer is an author.

Publications

Only those publications related to wikis are shown here.
Entries list the title, keywords (where given), venue, language, year, abstract, and the wiki's R and C counters.
Crowd-sourced open courseware authoring with slidewiki.org
Keywords: Crowd-sourcing, OpenCourseWare, Wiki
Published in: International Journal of Emerging Technologies in Learning (English, 2013); R: 0, C: 0
Abstract: While many Learning Content Management Systems are available, the collaborative, community-based creation of rich e-learning content is still not sufficiently well supported. Few attempts have been made to apply crowd-sourcing and wiki approaches to the creation of e-learning content. In this article, we showcase SlideWiki, an Open Courseware Authoring platform supporting the crowd-sourced creation of richly structured learning content.

DBpedia and the live extraction of structured data from Wikipedia
Keywords: Data management, Databases, Knowledge Extraction, Knowledge management, RDF, Triplestore, Websites, Wikipedia
Published in: Program (English, 2012); R: 0, C: 0
Abstract: Purpose: DBpedia extracts structured information from Wikipedia, interlinks it with other knowledge bases and freely publishes the results on the web using Linked Data and SPARQL. However, the DBpedia release process is heavyweight and releases are sometimes based on data that is several months old. DBpedia-Live solves this problem by providing a live synchronization method based on the update stream of Wikipedia. This paper seeks to address these issues. Design/methodology/approach: Wikipedia provides DBpedia with a continuous stream of updates, i.e. a stream of recently updated articles. DBpedia-Live processes that stream on the fly to obtain RDF data and stores the extracted data back to DBpedia. DBpedia-Live publishes the newly added/deleted triples in files in order to enable synchronization between the DBpedia endpoint and other DBpedia mirrors. Findings: During the realization of DBpedia-Live the authors learned that it is crucial to process Wikipedia updates in a priority queue. Recently updated Wikipedia articles should have the highest priority, over mapping changes and unmodified pages. An overall finding is that there are plenty of opportunities arising from the emerging Web of Data for librarians. Practical implications: DBpedia had and has a great effect on the Web of Data and became a crystallization point for it. Many companies and researchers use DBpedia and its public services to improve their applications and research approaches. The DBpedia-Live framework improves DBpedia further by timely synchronizing it with Wikipedia, which is relevant for many use cases requiring up-to-date information. Originality/value: The new DBpedia-Live framework adds new features to the old DBpedia-Live framework, e.g. abstract extraction, ontology changes, and changeset publication.

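The priority-queue processing described in the abstract can be sketched in a few lines. This is a minimal illustration, not the DBpedia-Live implementation; the update-kind names and numeric priorities are assumptions.

```python
import heapq

# Priority levels suggested by the abstract: recently edited articles first,
# then mapping changes, then unmodified pages re-extracted in the background.
# The category names and numeric values here are illustrative assumptions.
PRIORITY = {"article-edit": 0, "mapping-change": 1, "unmodified-page": 2}

class UpdateQueue:
    """Minimal sketch of a prioritised queue over the Wikipedia update stream."""

    def __init__(self):
        self._heap = []
        self._counter = 0  # tie-breaker keeps FIFO order within a priority

    def push(self, kind, page_title):
        heapq.heappush(self._heap, (PRIORITY[kind], self._counter, page_title))
        self._counter += 1

    def pop(self):
        _, _, page_title = heapq.heappop(self._heap)
        return page_title

queue = UpdateQueue()
queue.push("unmodified-page", "Leipzig")
queue.push("mapping-change", "Infobox_city")
queue.push("article-edit", "Berlin")

# A recently edited article is processed before mapping changes and
# background re-extraction, regardless of arrival order.
order = [queue.pop(), queue.pop(), queue.pop()]
```

The tie-breaking counter ensures updates of equal priority are consumed in arrival order, which matters when replaying a change stream.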
Publishing statistical data on the web
Keywords: CubeViz, Data cube, Linked data, OLAP, OntoWiki, RDF, Statistical data, Visualisation
Published in: Proceedings of the IEEE 6th International Conference on Semantic Computing (ICSC 2012) (English, 2012); R: 0, C: 0
Abstract: Statistical data is one of the most important sources of information, relevant to large numbers of stakeholders in the governmental, scientific and business domains alike. In this article, we give an overview of how statistical data can be managed on the Web. With OLAP2DataCube and CSV2DataCube we present two complementary approaches for extracting and publishing statistical data. We also discuss the linking, repair and visualization of statistical data. As a comprehensive use case, we report on the extraction and publishing on the Web of statistical data describing 10 years of life in Brazil.

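The CSV-to-Data-Cube direction can be illustrated with a hand-rolled sketch that emits one observation per CSV row as Turtle-style text. The dataset URI and the `ex:` properties are invented for the example and are not the vocabulary CSV2DataCube actually uses; only `qb:Observation`/`qb:dataSet` follow the RDF Data Cube vocabulary.

```python
# Hand-rolled sketch (no rdflib): emit one qb:Observation per CSV row as
# Turtle-like text. Dataset URI and ex: properties are made-up examples.
import csv, io

CSV_DATA = "year,region,population\n2000,Bahia,13070250\n2010,Bahia,14016906\n"

def rows_to_observations(csv_text, dataset_uri="http://example.org/ds/pop"):
    triples = []
    for i, row in enumerate(csv.DictReader(io.StringIO(csv_text))):
        obs = f"<{dataset_uri}/obs{i}>"
        triples.append(f"{obs} a qb:Observation ;")
        triples.append(f"  qb:dataSet <{dataset_uri}> ;")
        triples.append(f"  ex:year \"{row['year']}\" ;")
        triples.append(f"  ex:region \"{row['region']}\" ;")
        triples.append(f"  ex:population {row['population']} .")
    return triples

turtle = rows_to_observations(CSV_DATA)
```

In a real pipeline the dimensions (year, region) and the measure (population) would additionally be declared in a qb:DataStructureDefinition; the sketch only covers the observations themselves.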
Managing Web content using Linked Data principles: Combining semantic structure with dynamic content syndication
Keywords: Content management, Linked Data, Semantic web, Semantic wiki
Published in: Proceedings of the International Computer Software and Applications Conference (English, 2011); R: 0, C: 0
Abstract: Despite the success of the emerging Linked Data Web, offering content in a machine-processable way and, at the same time, as a traditional Web site is still not a trivial task. In this paper, we present OntoWiki-CMS, an extension of the collaborative knowledge engineering toolkit OntoWiki for managing semantically enriched Web content. OntoWiki-CMS combines OntoWiki for the collaborative authoring of semantically enriched Web content, vocabularies and taxonomies for the semantic structuring of that content, and the OntoWiki Site Extension, a template and dynamic syndication system for representing the semantically enriched content as a Web site and dynamically integrating supplementary content. OntoWiki-CMS integrates existing content-specific content management strategies (such as blogs, bibliographic repositories or social networks). It helps to balance the creation of rich, stable semantic structures with the participatory involvement of a potentially large editor and contributor community. The resulting semantic structuring of the Web content facilitates better search, browsing and exploration, as we demonstrate with a use case.

Managing multimodal and multilingual semantic content
Keywords: Knowledge management, Multimodality, Semantic web, Semantic wiki
Published in: WEBIST 2011, Proceedings of the 7th International Conference on Web Information Systems and Technologies (English, 2011); R: 0, C: 0
Abstract: With the advent and increasing popularity of Semantic Wikis and Linked Data, the management of semantically represented knowledge has become mainstream. However, certain categories of semantically enriched content, such as multimodal documents and multilingual textual resources, are still difficult to handle. In this paper, we present a comprehensive strategy for managing the life-cycle of both multimodal and multilingual semantically enriched content. The strategy is based on extending a number of semantic knowledge management techniques, such as authoring, versioning, evolution, access and exploration, to semantically enriched multimodal and multilingual content. We showcase an implementation and user interface based on the semantic wiki paradigm and present a use case from the e-tourism domain.

Towards a Korean DBpedia and an approach for complementing the Korean Wikipedia based on DBpedia
Keywords: DBpedia, Multi-lingual, Synchronization, Wikipedia
Published in: CEUR Workshop Proceedings (English, 2010); R: 0, C: 0
Abstract: In the first part of this paper we report on experiences with applying the DBpedia extraction framework to the Korean Wikipedia. We improved the extraction of non-Latin characters and extended the framework with pluggable internationalization components in order to facilitate the extraction of localized information. With these improvements we almost doubled the amount of extracted triples. We also present the results of the extraction for Korean. In the second part, we present a conceptual study aimed at understanding the impact of international resource synchronization in DBpedia. In the absence of any information synchronization, each country would construct its own datasets and maintain them through its own users; moreover, cooperation across the various countries would be adversely affected.

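The non-Latin-character issue mentioned above boils down to making titles survive the decode/encode round trip when minting localized identifiers. A minimal sketch, assuming a ko.dbpedia.org-style IRI scheme (the exact scheme is an assumption here):

```python
from urllib.parse import quote, unquote

# Build a localized DBpedia-style IRI for a Korean article title.
# The base IRI pattern is an illustrative assumption; the point is that a
# non-Latin title must survive the percent-decode/encode round trip intact.
def local_iri(encoded_title, base="http://ko.dbpedia.org/resource/"):
    title = unquote(encoded_title)       # percent-encoding -> Hangul string
    return base + quote(title, safe="")  # re-encode for a clean IRI

iri = local_iri("%EB%8C%80%EC%A0%84")    # the title "대전" (Daejeon)
```

Treating the decoded Unicode string, not the raw byte sequence, as the canonical title is what allows the same article to be matched across language editions.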
Weaving a social data web with semantic pingback
Published in: Lecture Notes in Computer Science (English, 2010); R: 0, C: 0
Abstract: In this paper we tackle some pressing obstacles of the emerging Linked Data Web, namely the quality, timeliness and coherence of data, which are prerequisites for providing direct end-user benefits. We present an approach for complementing the Linked Data Web with a social dimension by extending the well-known Pingback mechanism, a technological cornerstone of the blogosphere, towards a Semantic Pingback. It is based on advertising an RPC service for propagating typed RDF links between Data Web resources. Semantic Pingback is downwards compatible with conventional Pingback implementations, thus allowing resources on the Social Web to be connected and interlinked with resources on the Data Web. We demonstrate its usefulness by showcasing use cases of Semantic Pingback implementations in the semantic wiki OntoWiki and in Triplify, the Linked Data interface for database-backed Web applications.

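The downward compatibility claimed in the abstract rests on keeping the conventional Pingback wire format, an XML-RPC call `pingback.ping(source, target)`. The sketch below only constructs such a payload (the URLs are invented examples); it does not send it.

```python
import xmlrpc.client

# Conventional Pingback is an XML-RPC call pingback.ping(source, target).
# Semantic Pingback keeps this wire format, so a plain Pingback server can
# still accept the request. Both URLs below are illustrative.
source = "http://blog.example.org/posts/ontowiki-review"
target = "http://dbpedia.org/resource/OntoWiki"

# Serialize the method call to its XML-RPC request body.
payload = xmlrpc.client.dumps((source, target), methodname="pingback.ping")
```

The semantic part, deriving a typed RDF link from the source document, happens server-side after this call is received, so legacy clients need no changes.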
DBpedia live extraction
Published in: Lecture Notes in Computer Science (English, 2009); R: 0, C: 0
Abstract: The DBpedia project extracts information from Wikipedia, interlinks it with other knowledge bases, and makes this data available as RDF. So far the DBpedia project has succeeded in creating one of the largest knowledge bases on the Data Web, which is used in many applications and research prototypes. However, the heavy-weight extraction process has been a drawback. It requires manual effort to produce a new release, and the extracted information is not up to date. We extended DBpedia with a live extraction framework, which is capable of processing tens of thousands of changes per day in order to consume the constant stream of Wikipedia updates. This allows direct modifications of the knowledge base and closer interaction of users with DBpedia. We also show how the Wikipedia community itself is now able to take part in the DBpedia ontology engineering process, and that an interactive round-trip engineering between Wikipedia and DBpedia is made possible.

DBpedia – A Crystallization Point for the Web of Data
Keywords: Web of data, Linked data, Knowledge Extraction, Wikipedia, RDF
Published in: Journal of Web Semantics: Science, Services and Agents on the World Wide Web (English, 2009); R: 0, C: 0
Abstract: The DBpedia project is a community effort to extract structured information from Wikipedia and to make this information accessible on the Web. The resulting DBpedia knowledge base currently describes over 2.6 million entities. For each of these entities, DBpedia defines a globally unique identifier that can be dereferenced over the Web into a rich RDF description of the entity, including human-readable definitions in 30 languages, relationships to other resources, classifications in four concept hierarchies, various facts as well as data-level links to other Web data sources describing the entity. Over the last year, an increasing number of data publishers have begun to set data-level links to DBpedia resources, making DBpedia a central interlinking hub for the emerging Web of data. Currently, the Web of interlinked data sources around DBpedia provides approximately 4.7 billion pieces of information and covers domains such as geographic information, people, companies, films, music, genes, drugs, books, and scientific publications. This article describes the extraction of the DBpedia knowledge base, the current status of interlinking DBpedia with other data sources on the Web, and gives an overview of applications that facilitate the Web of Data around DBpedia.

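"Dereferenced over the Web into a rich RDF description" means a client asks the entity's identifier for an RDF serialisation via HTTP content negotiation. The sketch only constructs such a request; it does not send it, and the chosen Turtle media type is one of several a server might offer.

```python
from urllib.request import Request

# A DBpedia entity identifier is a dereferenceable URI: requesting it with an
# RDF media type in the Accept header yields machine-readable data, while a
# browser asking for HTML gets a human-readable page. Request is built only.
uri = "http://dbpedia.org/resource/Leipzig"
req = Request(uri, headers={"Accept": "text/turtle"})
```

Sending the same URI with `Accept: text/html` would instead redirect a browser to the human-readable page, which is the essence of serving both audiences from one identifier.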
DBpedia: A nucleus for a Web of open data
Published in: Lecture Notes in Computer Science (ISWC, English, 2007); R: 0, C: 2
Abstract: DBpedia is a community effort to extract structured information from Wikipedia and to make this information available on the Web. DBpedia allows you to ask sophisticated queries against datasets derived from Wikipedia and to link other datasets on the Web to Wikipedia data. We describe the extraction of the DBpedia datasets, and how the resulting information is published on the Web for human and machine consumption. We describe some emerging applications from the DBpedia community and show how website authors can facilitate DBpedia content within their sites. Finally, we present the current status of interlinking DBpedia with other open datasets on the Web and outline how DBpedia could serve as a nucleus for an emerging Web of open data.

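A "sophisticated query against datasets derived from Wikipedia" is typically a SPARQL query sent to the public endpoint. The sketch below only builds the request URL; the property names follow the DBpedia ontology but should be treated as illustrative, as should the endpoint address.

```python
from urllib.parse import urlencode

# An example of the kind of query meant above: German cities with more than
# half a million inhabitants. dbo: property names and the endpoint URL are
# assumptions for illustration; the request is constructed, not sent.
query = """
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?city ?population WHERE {
  ?city a dbo:City ;
        dbo:country <http://dbpedia.org/resource/Germany> ;
        dbo:populationTotal ?population .
  FILTER (?population > 500000)
}
"""

request_url = "http://dbpedia.org/sparql?" + urlencode(
    {"query": query, "format": "application/sparql-results+json"})
```

Per the SPARQL protocol, the same query could equally be sent as a POST body; GET with a `query` parameter is simply the most common form for short queries.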
Discovering unknown connections: The DBpedia relationship finder
Published in: The Social Semantic Web 2007, Proceedings of the 1st Conference on Social Semantic Web (CSSW 2007) (English, 2007); R: 0, C: 0
Abstract: The Relationship Finder is a tool for exploring connections between objects in a Semantic Web knowledge base. It offers a new way to get insights about elements in an ontology, in particular for large amounts of instance data. For this reason, we applied the idea to the DBpedia data set, which contains an enormous amount of knowledge extracted from Wikipedia. We describe the workings of the Relationship Finder algorithm and present some interesting statistical discoveries about DBpedia and Wikipedia.

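The core of such a relationship finder is shortest-path search over the graph of links between resources. A minimal breadth-first sketch over an invented toy graph (the edges below are illustrative, not DBpedia data):

```python
from collections import deque

# Toy undirected graph of links between DBpedia-like resources.
# All edges are invented for illustration.
GRAPH = {
    "Leipzig": ["Saxony", "Bach"],
    "Saxony": ["Leipzig", "Germany"],
    "Bach": ["Leipzig", "Baroque_music"],
    "Germany": ["Saxony"],
    "Baroque_music": ["Bach"],
}

def shortest_connection(start, goal):
    """Breadth-first search: the basic idea behind a relationship finder."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in GRAPH.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no connection

path = shortest_connection("Germany", "Baroque_music")
```

On a graph the size of DBpedia the same idea needs bidirectional search and precomputed connected components to stay tractable, but the returned object, a path of linked resources, is the same.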
OntoWiki: A tool for social, semantic collaboration
Keywords: Semantic collaboration, Semantic wiki, Social software
Published in: CEUR Workshop Proceedings (English, 2007); R: 0, C: 0
Abstract: We present OntoWiki, a tool providing support for agile, distributed knowledge engineering scenarios. OntoWiki facilitates the visual presentation of a knowledge base as an information map, with different views on instance data. It enables intuitive authoring of semantic content, with an inline editing mode for editing RDF content, similar to WYSIWYG for text documents. It fosters social collaboration by keeping track of changes, allowing comments and discussion on every single part of a knowledge base, enabling users to rate and measure the popularity of content, and honoring the activity of users. OntoWiki enhances browsing and retrieval by offering semantically enhanced search strategies. All these techniques are applied with the ultimate goal of decreasing the entrance barrier for projects and domain experts to collaborate using semantic technologies. In the spirit of Web 2.0, OntoWiki implements an "architecture of participation" that allows users to add value to the application as they use it. It is available as open-source software and a demonstration platform can be accessed at http://3ba.se.

What Have Innsbruck and Leipzig in Common? Extracting Semantics from Wiki Content
Published in: The Semantic Web: Research and Applications, Lecture Notes in Computer Science (English, 2007); R: 0, C: 0
Abstract: Wikis are established means for the collaborative authoring, versioning and publishing of textual articles. The Wikipedia project, for example, succeeded in creating by far the largest encyclopedia just on the basis of a wiki. Recently, several approaches have been proposed for extending wikis to allow the creation of structured and semantically enriched content. However, the means for creating semantically enriched structured content are already available and are, albeit unconsciously, even used by Wikipedia authors. In this article, we present a method for revealing this structured content by extracting information from template instances. We suggest ways to efficiently query the vast amount of extracted information (e.g. more than 8 million RDF statements for the English Wikipedia version alone), leading to astonishing query answering possibilities (such as for the title question). We analyze the quality of the extracted content and propose strategies for quality improvements with only minor modifications of the wiki systems currently in use.

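Template instances are the structured content the abstract refers to: an infobox is already a list of attribute-value pairs. A deliberately simplified sketch of extracting them (real infoboxes need far more robust parsing; the field names below are typical but invented):

```python
import re

# A fragment of MediaWiki template syntax; the field names are typical but
# invented, and this extractor is far simpler than a production one.
WIKITEXT = """{{Infobox city
| name = Innsbruck
| country = Austria
| population = 132493
}}"""

def template_fields(wikitext):
    """Turn '| key = value' lines of a template instance into a dict."""
    return dict(re.findall(r"^\|\s*(\w+)\s*=\s*(.+?)\s*$", wikitext, re.M))

def to_statements(subject, fields):
    """One (subject, property, value) statement per template field."""
    return [(subject, prop, value) for prop, value in sorted(fields.items())]

statements = to_statements("Innsbruck", template_fields(WIKITEXT))
```

Once every infobox yields such statements, the title question reduces to a join: find the properties whose values coincide for the subjects Innsbruck and Leipzig.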
Access control on RDF triple stores from a Semantic Wiki perspective
Published in: CEUR Workshop Proceedings (English, 2006); R: 0, C: 0
Abstract: RDF triple stores are used to store and query large RDF models. Semantic Web applications built on top of such triple stores require methods allowing high-performance access control not restricted to per-model directives. For the growing number of lightweight, scripted Semantic Web applications it is crucial to rely on access control methods which maintain a balance between expressiveness, simplicity and scalability. Starting from a Semantic Wiki application scenario, we collect requirements for useful access control methods provided by the triple store. We derive a basic model for triple store access according to these requirements and review existing approaches in the field of policy management with regard to the requirements. Finally, a lightweight access control framework based on rule-controlled query filters is described.

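The idea of rule-controlled query filters can be sketched as restricting every query to the graphs a user may read before it touches the store. The rule structure, graph names and users below are invented for illustration and are much simpler than the paper's framework.

```python
# Sketch of a rule-controlled query filter: before a triple pattern reaches
# the store, it is restricted to the graphs the user may read.
# Graphs, users and the rule format are invented for illustration.
TRIPLES = [
    ("g_public", "ex:Page1", "ex:author", "ex:Alice"),
    ("g_private", "ex:Page2", "ex:author", "ex:Bob"),
]

ACL = {"anonymous": {"g_public"}, "admin": {"g_public", "g_private"}}

def query(user, predicate):
    """Answer a predicate lookup, filtered to the user's readable graphs."""
    allowed = ACL.get(user, set())
    return [(s, p, o) for g, s, p, o in TRIPLES
            if g in allowed and p == predicate]

public_view = query("anonymous", "ex:author")
admin_view = query("admin", "ex:author")
```

Filtering at query time, rather than materialising per-user copies of the data, is what keeps this approach viable for lightweight scripted applications.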
Towards a semantic wiki experience: Desktop integration and interactivity in WikSAR
Published in: CEUR Workshop Proceedings (English, 2005); R: 0, C: 1
Abstract: Common wiki systems such as MediaWiki lack semantic annotations. WikSAR (Semantic Authoring and Retrieval within a Wiki), a prototype of a semantic wiki, offers effortless semantic authoring. Instant gratification of users is achieved by context-aware means of navigation, interactive graph visualisation of the emerging ontology, and semantic retrieval possibilities. Embedding queries into wiki pages creates views (as dependent collections) on the information space. Desktop integration includes accessing dates (e.g. reminders) entered in the wiki via local calendar applications, maintaining bookmarks, and collecting web quotes within the wiki. Approaches to referencing documents on the local file system are sketched out, as well as an enhancement of the wiki interface to suggest appropriate semantic annotations to the user.
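The "views as dependent collections" idea amounts to a page embedding a query over the wiki's statements and rendering whichever pages currently match. A minimal sketch over invented statement data (not WikSAR's actual query syntax):

```python
# Sketch of a WikSAR-style embedded query: a page carries a (property, value)
# query over the wiki's statements, and rendering it shows the current set of
# matching pages. The statement data below is invented for illustration.
STATEMENTS = [
    ("PageA", "type", "Meeting"), ("PageA", "date", "2005-06-01"),
    ("PageB", "type", "Meeting"), ("PageB", "date", "2005-07-15"),
    ("PageC", "type", "Recipe"),
]

def embedded_query(prop, value):
    """Return the dependent collection of pages matching (prop, value)."""
    return sorted({s for s, p, o in STATEMENTS if p == prop and o == value})

view = embedded_query("type", "Meeting")
```

Because the collection is recomputed from the statement base on each render, annotating a new page updates every view that matches it, with no manual list maintenance.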