(Redirected from Semantic annotations)
| semantic annotation|
(Alternative names for this keyword)
semantic annotation is included as a keyword or extra keyword in 0 datasets, 0 tools and 57 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|Improved text annotation with Wikipedia entities||Makris C.
|Proceedings of the ACM Symposium on Applied Computing||English||2013||Text annotation is the procedure of initially identifying, in a segment of text, a set of words that are dominant in meaning and later on attaching to them extra information (usually drawn from a concept ontology, implemented as a catalog) that expresses their conceptual content in the current context. Attaching additional semantic information and structure helps to represent, in a machine-interpretable way, the topic of the text and is a fundamental preprocessing step for many Information Retrieval tasks like indexing, clustering, classification, text summarization and cross-referencing content on web pages, posts, tweets etc. In this paper, we deal with automatic annotation of text documents with entities of Wikipedia, the largest online knowledge base; a process that is commonly known as Wikification. Similarly to previous approaches, the cross-reference of words in the text to Wikipedia articles is based on local compatibility between the text around the term and textual information embedded in the article. The main contribution of this paper is a set of disambiguation techniques that enhance previously published approaches by employing both the WordNet lexical database and the Wikipedia articles' PageRank scores in the disambiguation process. The experimental evaluation performed shows that the exploitation of these additional semantic information sources leads to more accurate text annotation. Copyright 2013 ACM.||0||0|
|Improving large-scale search engines with semantic annotations||Fuentes-Lorenzo D.
|Expert Systems with Applications||English||2013||Traditional search engines have become the most useful tools to search the World Wide Web. Even though they are good for certain search tasks, they may be less effective for others, such as satisfying ambiguous or synonym queries. In this paper, we propose an algorithm that, with the help of Wikipedia and collaborative semantic annotations, improves the quality of web search engines in the ranking of returned results. Our work is supported by (1) the logs generated after query searching, (2) semantic annotations of queries and (3) semantic annotations of web pages. The algorithm makes use of this information to elaborate an appropriate ranking. To validate our approach we have implemented a system that can apply the algorithm to a particular search engine. Evaluation results show that the number of relevant web resources obtained after executing a query with the algorithm is higher than the one obtained without it. © 2012 Elsevier Ltd. All rights reserved.||0||0|
|Semantic wiki-based knowledge management system by interleaving ontology mapping tool||Jung J.J.||International Journal of Software Engineering and Knowledge Engineering||English||2013||Many organizations have been employing Knowledge Management Systems (KMS) to improve task performance. In this paper, we propose a novel KMS by using semantic wiki framework based on a centralized Global Wiki Ontology (GWO). The main aim of this system is (i) to collect as many organizational resources as possible, and (ii) to maintain semantic consistency of the system. When enriching the KMS in a particular domain, not only linguistic resources but also conceptual structures can be efficiently captured from multiple users, and more importantly, the resources can be automatically integrated with the GWO of the KMS in real time. Once users add new organization resources, the proposed KMS can formalize and contextualize them into a set of triplets by referring to a predefined pattern-triplet mapping table and the GWO. Especially, since the ontology mapper is interleaved, the KMS can determine whether the new resources are semantically conflicted with the GWO. To evaluate the proposed methodology, we have implemented the semantic wiki-based KMS. As a case study, two user groups were invited to collect the organization resources. We found that the group with the proposed KMS has shown better performance than the other group with traditional wiki-based KMS.||0||0|
|An adaptive semantic Wiki for CoPs of teachers - Case of higher education context||Berkani L.
|International Conference on Information Society, i-Society 2012||English||2012||This paper presents an adaptive semantic wiki dedicated to CoPs made up of actors from the higher education context (faculties, lecturers, teaching assistants, lab assistants). The wiki called ASWiki-CoPs (Adaptive Semantic Wiki for CoPs) is based on the semantic web technologies in order to enhance the knowledge sharing and reuse, offering the functionalities of a wiki together with some knowledge management features. ASWiki-CoP is based on an ontology used to describe the knowledge resources through the objective annotations and to express the member's feedback through the subjective annotations. Furthermore, we describe the member's profile in order to allow an adaptive access to the semantic wiki.||0||0|
|An automatic approach for generating tables in semantic wikis||Al-Husain L.
|Journal of Theoretical and Applied Information Technology||English||2012||Wikis are well-known content management systems. Semantic wikis extend classical wikis with semantic annotations that make their contents more structured. Tabular representations of information have considerable value, especially in wikis, which are rich in content and contain large amounts of information. For this reason, we propose an approach for automatically generating tables that represent the semantic data contained in wiki articles. The proposed approach is composed of three steps: (1) extract the semantic data of Typed Links and Attributes from the wiki articles and call them Article Properties, (2) cluster the collection of wiki articles based on the properties extracted in the first step, and (3) construct the table that aggregates the properties shared between articles and present them in two dimensions. The proposed approach is based on a simple heuristic, namely the number of properties that are shared between wiki articles. © 2005 - 2012 JATIT & LLS. All rights reserved.||0||0|
|An ontology evolution-based framework for semantic information retrieval||Rodriguez-Garcia M.A.
|Lecture Notes in Computer Science||English||2012||Ontologies evolve continuously during their life cycle to adapt to new requirements and necessities. Ontology-based information retrieval systems use semantic annotations that are also regularly updated to reflect new points of view. In order to provide a general solution and to minimize the users' effort in the ontology enrichment process, a methodology for extracting terms and evolving the domain ontology from Wikipedia is proposed in this work. The framework presented here combines an ontology-based information retrieval system with an ontology evolution approach in such a way that it simplifies the tasks of updating concepts and relations in domain ontologies. This framework has been validated in a scenario where ICT-related cloud services matching the user needs are to be found.||0||0|
|Bricking Semantic Wikipedia by relation population and predicate suggestion||Haofen Wang
|Web Intelligence and Agent Systems||English||2012||Semantic Wikipedia aims to enhance Wikipedia by adding explicit semantics to links between Wikipedia entities. However, we have observed that it currently suffers the following limitations: lack of semantic annotations and lack of semantic annotators. In this paper, we resort to relation population to automatically extract relations between any entity pair to enrich semantic data, and predicate suggestion to recommend proper relation labels to facilitate semantic annotating. Both tasks leverage relation classification which tries to classify extracted relation instances into predefined relations. However, due to the lack of labeled data and the excessiveness of noise in Semantic Wikipedia, existing approaches cannot be directly applied to these tasks to obtain high-quality annotations. In this paper, to tackle the above problems brought by Semantic Wikipedia, we use a label propagation algorithm and exploit semantic features like domain and range constraints on categories as well as linguistic features such as dependency trees of context sentences in Wikipedia articles. The experimental results on 7 typical relation types show the effectiveness and efficiency of our approach in dealing with both tasks. © 2012-IOS Press and the authors. All rights reserved.||0||0|
|Coarse lexical semantic annotation with supersenses: An Arabic case study||Schneider N.
|50th Annual Meeting of the Association for Computational Linguistics, ACL 2012 - Proceedings of the Conference||English||2012||"Lightweight" semantic annotation of text calls for a simple representation, ideally without requiring a semantic lexicon to achieve good coverage in the language and domain. In this paper, we repurpose WordNet's supersense tags for annotation, developing specific guidelines for nominal expressions and applying them to Arabic Wikipedia articles in four topical domains. The resulting corpus has high coverage and was completed quickly with reasonable inter-annotator agreement.||0||0|
|Cognitive linguistics as the underlying framework for semantic annotation||Pipitone A.
|Proceedings - IEEE 6th International Conference on Semantic Computing, ICSC 2012||English||2012||In recent years many attempts have been made to design suitable sets of rules aimed at extracting the semantic meaning from plain text and achieving annotation, but very few approaches make extensive use of grammars. Current systems are mainly focused on extracting the semantic role of the entities described in the text. This approach has limitations: in such applications the semantic role is conceived merely as the meaning of the involved entities, without considering their context. As an example, current semantic annotators often specify a date entity without any annotation regarding the kind of date itself, e.g. a birth date, a book publication date, and so on. Moreover, these systems use ontologies that have been developed specifically for the system's purposes and have reduced portability. Extensive use of both linguistic resources and semantic representations of the domain is needed in this scenario: the semantic representation of the domain addresses the semantic interpretation of the context, while NLP tools can help to solve some linguistic problems related to semantic annotation, such as synonymy, ambiguity, and co-references. A novel framework inspired by Cognitive Linguistics theories is proposed in this work, aimed at addressing the problem outlined above. In particular, our work is based on Construction Grammar (CxG). CxG defines a "construction" as a form-meaning pair. We use RDF triples in the domain ontology as the "semantic seeds" to build constructions. A suitable set of rules based on linguistic typology has been designed to infer semantics and syntax from the semantic seed, while combining them as the poles of constructions.
A hierarchy of rules to infer syntactic patterns for either single words or sentences using WordNet and FrameNet has been designed to overcome the limitations of expressing the syntactic poles using solely the terms stated in the ontology. As a consequence, semantic annotation of plain text is achieved by computing all possible syntactic forms for the same meaning during the analysis of document corpora. The proposed framework has been applied to the semantic annotation of Wikipedia pages; the result is a system for automatic generation of Semantic Web wiki contents from standard Wikipedia pages, leading to a possible solution of the big challenge of making existing wiki sources semantic wikis.||0||0|
|Collaborative machine tool design environment based on semantic wiki technology||Zapp M.
|Proceedings of the European Conference on Knowledge Management, ECKM||English||2012||This paper presents a light-weight collaboration environment for the conceptual design of machine tools. For the design of specialized machine tools and their components, machine designers, customers and suppliers need to gather, retrieve and exchange heterogeneous information like customer requirements, component specifications, design drawings and life-cycle performance data. This knowledge management process can be supported by collaboration tools. Since the European machine tool industry is dominated by SMEs and machine tools are mostly manufactured in small series, light-weight and flexible solutions are required. The collaboration environment proposed in this work is built on the Semantic MediaWiki+ (SMW+) solution, which enhances a regular MediaWiki system with the capabilities of semantic annotations and semantic queries. To facilitate the semantic annotation, the design environment is equipped with ontologies, which represent relevant concepts, attributes, relations and rules in the machine tool design domain. In addition, a rich web application as an extension to SMW+ is developed, which leads the designer through the steps of a machine design project. The environment supports the retrieval and re-use of information from previous design projects, the use of lifecycle performance data of machines, the knowledge exchange among designers and the data exchange to commercial-off-the-shelf assessment and simulation tools.||0||0|
|Extracting knowledge from web search engine results||Kanavos A.
|Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI||English||2012||Nowadays, people frequently use search engines in order to find the information they need on the web. However, web search engines usually return web page references in a global ranking, making it difficult for users to browse the different topics captured in the result set and thus to find the desired web pages quickly. There is a need for special computational systems that will discover knowledge in these web search results, providing the user with the possibility to browse the different topics contained in a given result set. In this paper, we focus on the problem of determining different thematic groups in the web search engine results that existing web search engines provide. We propose a novel system that exploits a set of reformulation strategies so as to help users gain results more relevant to their desired query. It additionally tries to discover different topic groups in the result set, according to the various meanings of the provided query. The proposed method utilizes a number of semantic annotation techniques using Knowledge Bases, like WordNet and Wikipedia, in order to perceive the different senses of each query term. Finally, the method annotates the extracted topics using information derived from the clusters and presents them to the end user.||0||0|
|Semantic technologies for civil information management during complex emergencies||Caglayan A.
|2012 IEEE International Conference on Technologies for Homeland Security, HST 2012||English||2012||Data sharing in support of situational awareness during complex emergencies remains a challenge to effective response and recovery, despite the fact that significant technological advances have enabled robust mobile data collection capabilities that can operate in both connected and disconnected environments. Current solutions rely on disparate knowledge silos that make situational awareness difficult for operations requiring collaboration to facilitate information sharing and to enable performance tracking for optimal resource allocation. In our paper we discuss the benefits of applying mobile enabled semantic technologies for supporting civil information management (CIM) during complex emergencies and analytically investigate the technical challenges encountered in such efforts. Specifically, we will present research related to developing a Civil Information Management Semantic Wiki (CIM Wiki), built on the Semantic MediaWiki platform. This CIM Wiki is a knowledge portal that enables users to collect, organize, tag, search, browse, visualize, and share structured CIM knowledge.||0||0|
|A wikipedia-based framework for collaborative semantic annotation||Fernandez N.
|International Journal on Artificial Intelligence Tools||English||2011||The semantic web aims at automating web data processing tasks that nowadays only humans are able to do. To make this vision a reality, the information on web resources should be described in a computer-meaningful way, in a process known as semantic annotation. In this paper, a manual, collaborative semantic annotation framework is described. It is designed to take advantage of the benefits of manual annotation systems (like the possibility of annotating formats difficult to annotate in an automatic manner) addressing at the same time some of their limitations (reduce the burden for non-expert annotators). The framework is inspired by two principles: use Wikipedia as a facade for a formal ontology and integrate the semantic annotation task with common user actions like web search. The tools in the framework have been implemented, and empirical results obtained in experiences carried out with these tools are reported.||0||0|
|Annotating software documentation in semantic wikis||Klaas Andries de Graaf||ESAIR||English||2011||0||0|
|Automatic semantic web annotation of named entities||Charton E.
|Lecture Notes in Computer Science||English||2011||This paper describes a method to perform automated semantic annotation of named entities contained in large corpora. The semantic annotation is made in the context of the Semantic Web. The method is based on an algorithm that compares the set of words that appear before and after the named entity with the content of Wikipedia articles, and identifies the most relevant one by means of a similarity measure. It then uses the link that exists between the selected Wikipedia entry and the corresponding RDF description in the Linked Data project to establish a connection between the named entity and some URI in the Semantic Web. We present our system, discuss its architecture, and describe an algorithm dedicated to ontological disambiguation of named entities contained in large-scale corpora. We evaluate the algorithm, and present our results.||0||0|
|City model enrichment||Smart P.D.
Quinn J.A.
|ISPRS Journal of Photogrammetry and Remote Sensing||English||2011||The combination of mobile communication technology with location and orientation aware digital cameras has introduced increasing interest in the exploitation of 3D city models for applications such as augmented reality and automated image captioning. The effectiveness of such applications is, at present, severely limited by the often poor quality of semantic annotation of the 3D models. In this paper, we show how freely available sources of georeferenced Web 2.0 information can be used for automated enrichment of 3D city models. Point referenced names of prominent buildings and landmarks mined from Wikipedia articles and from the OpenStreetMaps digital map and Geonames gazetteer have been matched to the 2D ground plan geometry of a 3D city model. In order to address the ambiguities that arise in the associations between these sources and the city model, we present procedures to merge potentially related buildings and implement fuzzy matching between reference points and building polygons. An experimental evaluation demonstrates the effectiveness of the presented methods. © 2011 International Society for Photogrammetry and Remote Sensing, Inc. (ISPRS).||0||0|
|Extracting events from Wikipedia as RDF triples linked to widespread semantic web datasets||Carlo Aliprandi
|Lecture Notes in Computer Science||English||2011||Many attempts have been made to extract structured data from Web resources, exposing them as RDF triples and interlinking them with other RDF datasets: in this way it is possible to create clouds of highly integrated Semantic Web data collections. In this paper we describe an approach to enhance the extraction of semantic contents from unstructured textual documents, in particular considering Wikipedia articles and focusing on event mining. Starting from the deep parsing of a set of English Wikipedia articles, we produce a semantic annotation compliant with the Knowledge Annotation Format (KAF). We extract events from the KAF semantic annotation and then we structure each event as a set of RDF triples linked to both DBpedia and WordNet. We point out examples of automatically mined events, providing some general evaluation of how our approach may discover new events and link them to existing contents.||0||0|
|How to reason by HeaRT in a semantic knowledge-based Wiki||Adrian W.T.
|Proceedings - International Conference on Tools with Artificial Intelligence, ICTAI||English||2011||Semantic wikis constitute an increasingly popular class of systems for collaborative knowledge engineering. We developed Loki, a semantic wiki that uses a logic-based knowledge representation. It is compatible with the semantic annotations mechanism as well as with Semantic Web languages. We integrated the system with a rule engine called HeaRT that supports inference with production rules. Several modes for modularized rule bases, suitable for the distributed rule bases present in a wiki, are considered. Embedding the rule engine enables strong reasoning and allows production rules to be run over semantic knowledge bases. In the paper, we demonstrate the system's concepts and functionality using an illustrative example.||0||0|
|Introducing new features to wikipedia: Case studies for web science||Mathias Schindler
|IEEE Intelligent Systems||English||2011||Introducing new features to Wikipedia is a complex sociotechnical process. The authors compare the Web science process to the previous introduction of new features and suggest how to use it as a model for the future development of Wikipedia.||0||0|
|Loki-presentation of logic-based semantic wiki||Adrian W.T.
|CEUR Workshop Proceedings||English||2011||TOOL PRESENTATION: The paper presents a semantic wiki, called Loki, with strong logical knowledge representation using rules. The system uses a coherent logic-based representation for the semantic annotations of the content and for implementing reasoning procedures. The representation uses the logic programming paradigm and the Prolog programming language. The proposed architecture allows for rule-based reasoning in the wiki. It also provides a compatibility layer with the popular Semantic MediaWiki platform, directly parsing its annotations.||0||0|
|Query phrase expansion using Wikipedia in patent class search||Al-Shboul B.
|Lecture Notes in Computer Science||English||2011||Relevance Feedback methods generally suffer from topic drift caused by word ambiguity and synonymous uses of words. As a way to alleviate this inherent problem, we propose a novel query phrase expansion approach that utilizes semantic annotations in Wikipedia pages, trying to enrich queries with context-disambiguating phrases. Focusing on the patent domain, especially on patent search where patents are classified into a hierarchy of categories, we attempt to understand the roles of phrases and words in query expansion in determining the relevance of documents and examine their contributions to alleviating the query drift problem. Our approach is compared against the Relevance Model, a state-of-the-art method, to show its superiority in terms of MAP on all levels of the classification hierarchy.||0||0|
|A framework for automatic semantic annotation of Wikipedia articles||Pipitone A.
|SWAP 2010 - 6th Workshop on Semantic Web Applications and Perspectives||English||2010||Semantic wikis represent a novelty in the field of semantic technologies. Nowadays, there are many important "non-semantic" wiki sources, such as the Wikipedia encyclopedia. A big challenge is to turn existing wiki sources into semantic wikis. In this way, a new generation of applications can be designed to browse, search, and reuse wiki contents, while reducing loss of data. The core of this problem is the extraction of semantic sense and the annotation of text. In this paper a hierarchical framework for automatic semantic annotation of plain text is presented, finalized to the use of Wikipedia pages as the information source. The strategy is based on disambiguation of plain text using both a domain ontology and linguistic pattern-matching methods. The main steps are: TOC extraction from the original page, content annotation for each section using linguistic rules, and semantic wiki generation. The complete framework is outlined and an application scenario is presented.||0||0|
|Human computer collaboration to improve annotations in semantic wikis||Boyer A.
|WEBIST 2010 - Proceedings of the 6th International Conference on Web Information Systems and Technology||English||2010||Semantic wikis are promising tools for producing structured and unstructured data. However, they suffer from a lack of user-provided semantic annotations, resulting in a loss of efficiency despite their high potential. We propose a system that suggests automatically computed annotations to users in peer-to-peer semantic wikis. Users only have to validate, complete, modify, refuse or ignore these suggested annotations. As the annotation task becomes easier, more users will provide annotations. The system is based on collaborative filtering recommender systems; it does not exploit the content of the pages but the usage made of these pages by the users. The resulting semantic wikis contain several kinds of annotations with different statuses: human-provided, computer-provided, or human-validated computer annotations.||0||0|
|Human-machine collaboration for enriching semantic wikis using formal concept analysis||Blansche A.
|CEUR Workshop Proceedings||English||2010||Semantic wikis are a new generation of collaborative tools. They allow users to embed semantic annotations in the wiki content. These annotations make it possible to better organize and structure the wiki contents. It is then possible for users to build knowledge understandable by both humans and computers. In this way, machines can produce or update semantic wiki pages as humans do. In this paper, we propose a new smart agent based on Formal Concept Analysis. This smart agent can automatically compute category trees based on defined semantic properties. In order to reduce human-machine collaboration problems, humans just validate the changes proposed by the smart agent. A distributed version of the wiki is used to ensure consistency of the content during the validation process.||0||0|
|Overview of the INEX 2009 Ad hoc track||Shlomo Geva
|Lecture Notes in Computer Science||English||2010||This paper gives an overview of the INEX 2009 Ad Hoc Track. The main goals of the Ad Hoc Track were three-fold. The first goal was to investigate the impact of the collection scale and markup, by using a new collection that is again based on the Wikipedia but is over 4 times larger, with longer articles and additional semantic annotations. For this reason the Ad Hoc Track tasks stayed unchanged, and the Thorough Task of INEX 2002-2006 returns. The second goal was to study the impact of more verbose queries on retrieval effectiveness, by using the available markup as structural constraints - now using both the Wikipedia's layout-based markup as well as the enriched semantic markup - and by the use of phrases. The third goal was to compare different result granularities by allowing systems to retrieve XML elements, ranges of XML elements, or arbitrary passages of text. This investigates the value of the internal document structure (as provided by the XML markup) for retrieving relevant information. The INEX 2009 Ad Hoc Track featured four tasks: for the Thorough Task a ranked list of results (elements or passages) by estimated relevance was needed; for the Focused Task a ranked list of non-overlapping results (elements or passages) was needed; for the Relevant in Context Task non-overlapping results (elements or passages) were returned grouped by the article from which they came; for the Best in Context Task a single starting point (element start tag or passage start) for each article was needed. We discuss the setup of the track and the results for the four tasks.||0||0|
|Reflect: A practical approach to web semantics||O'Donoghue S.I.
|Reifying, participating and learning: Analysis of uses of reification tools by a community of practice||Daele A.||International Journal of Web-Based Learning and Teaching Technologies||English||2010||This paper presents observations and analysis of an activity of reification of professional practices within a community of practice. A case is examined of a distance community of tutors using a semantic Wiki for formalising their practices and a tool for storing and classifying documents. On the basis of the instrumental genesis theory, the author highlights the process of appropriation of the tools by the community of practice. This community participated in the development and conception of uses for the tools through a research and development project based on participatory design. This appropriation process, even if it did not occur to the expected extent, did nonetheless allow the community's members to develop their representations regarding the reification of their practices and, gradually, to elaborate broader uses of the tools.||0||0|
|Scalable semantic annotation of text using lexical and Web resources||Zavitsanos E.
|Lecture Notes in Computer Science||English||2010||In this paper we are dealing with the task of adding domain-specific semantic tags to a document, based solely on the domain ontology and generic lexical and Web resources. In this manner, we avoid the need for trained domain-specific lexical resources, which hinder the scalability of semantic annotation. More specifically, the proposed method maps the content of the document to concepts of the ontology, using the WordNet lexicon and Wikipedia. The method comprises a novel combination of measures of semantic relatedness and word sense disambiguation techniques to identify the most related ontology concepts for the document. We test the method on two case studies: (a) a set of summaries, accompanying environmental news videos, (b) a set of medical abstracts. The results in both cases show that the proposed method achieves reasonable performance, thus pointing to a promising path for scalable semantic annotation of documents.||0||0|
|Semantic MediaWiki in operation: Experiences with building a semantic portal||Herzig D.M.
|Lecture Notes in Computer Science||English||2010||Wikis allow users to collaboratively create and maintain content. Semantic wikis, which provide the additional means to annotate the content semantically and thereby structure it, are experiencing an enormous increase in popularity, because structured data is more usable and thus more valuable than unstructured data. As an illustration of leveraging the advantages of semantic wikis for semantic portals, we report on the experience of building the AIFB portal based on Semantic MediaWiki. We discuss the design, in particular how free, wiki-style semantic annotations and guided input along a predefined schema can be combined to create a flexible, extensible, and structured knowledge representation. How this structured data evolved over time and its flexibility regarding changes are subsequently discussed and illustrated by statistics based on actual operational data of the portal. Further, the features exploiting the structured data and the benefits they provide are presented. Since all benefits have their costs, we conducted a performance study of Semantic MediaWiki and compare it to MediaWiki, the non-semantic base platform. Finally we show how existing caching techniques can be applied to increase the performance.||0||0|
|Semantic need: Guiding metadata annotations by questions people #ask||Happel H.-J.||Lecture Notes in Computer Science||English||2010||At its core, the Semantic Web is about the creation, collection and interlinking of metadata on which agents can perform tasks for human users. While many tools and approaches support either the creation or usage of semantic metadata, there is neither a proper notion of metadata need, nor a related theory of guidance concerning which metadata should be created. In this paper, we propose to analyze structured queries to help identify missing metadata. We conduct a study on Semantic MediaWiki (SMW), one of the most popular Semantic Web applications to date, analyzing structured "ask" queries in public SMW instances. Based on that, we describe Semantic Need, an extension for SMW which guides contributors to provide semantic annotations, and summarize feedback from an online survey among 30 experienced SMW users.||0||0|
|Semantic wiki refactoring. A strategy to assist semantic wiki evolution||Rosenfeld M.
|CEUR Workshop Proceedings||English||2010||The content and structure of a wiki evolve as a result of the collaborative effort of the wiki users. In semantic wikis, this also results in the evolution of the ontology that is implicitly expressed through the semantic annotations. Without proper guidance, the semantic wiki can evolve in a chaotic manner, resulting in quality problems in the underlying ontology, e.g. inconsistencies. As the wiki grows in size, the detection and solution of quality problems become more difficult. We propose an approach to detect quality problems in semantic wikis and assist users in the process of solving them. Our approach is inspired by the key principles of software refactoring, namely the cataloging and automated detection of quality problems (bad smells), and the application of quality improvement transformations (refactorings). In this paper we discuss the problem of evolving semantic wikis, present the core model of our approach, and introduce an extensible catalog of semantic wiki bad smells and an extensible toolkit of semantic wiki refactorings.||0||0|
|Semantically enriched tools for the knowledge society: Case of project management and presentation||Talas J.
|Communications in Computer and Information Science||English||2010||Working with semantically rich data is one of the stepping stones to the knowledge society. In recent years, gathering, processing, and using semantic data have made great progress, particularly in the academic environment. However, the advantages of semantic description remain commonly undiscovered by the "common user", including people from academia and the IT industry who could otherwise profit from the capabilities of contemporary semantic systems in the areas of project management and/or technology-enhanced learning. Mostly, the root cause lies in the complexity and non-transparency of mainstream semantic applications. The semantic tool for project management and presentation consists mainly of a module for the semantic annotation of wiki pages integrated into the project management system Trac. It combines the dynamic, ease-of-use nature and applicability of a wiki for project management with the advantages of a semantically rich and accurate approach. The system is released as open source and is used for the management of students' and research projects at the authors' research lab.||0||0|
|TAGME: on-the-fly annotation of short text fragments (by wikipedia entities)||Paolo Ferragina
|A knowledge workbench for software development||Panagiotou D.
|Proceedings of I-KNOW 2009 - 9th International Conference on Knowledge Management and Knowledge Technologies and Proceedings of I-SEMANTICS 2009 - 5th International Conference on Semantic Systems||English||2009||Modern software development is highly knowledge intensive; it requires that software developers create and share new knowledge during their daily work. However, current software development environments are "syntactic", i.e. they do not facilitate understanding the semantics of software artefacts and hence cannot fully support the knowledge-driven activities of developers. In this paper we present KnowBench, a knowledge workbench environment which focuses on the software development domain and strives to address these problems. KnowBench aims at providing software developers such a tool to ease their daily work and facilitate the articulation and visualization of software artefacts, concept-based source code documentation and related problem solving. Building a knowledge base with software artefacts by using the KnowBench system can then be exploited by semantic search engines or P2P metadata infrastructures in order to foster the dissemination of software development knowledge and facilitate cooperation among software developers.||0||0|
|Customized edit interfaces for wikis via semantic annotations||Angelo Di Iorio
|CEUR Workshop Proceedings||English||2009||Authoring support for semantic annotations represents the wiki way of the Semantic Web, ultimately leading to the wiki version of the Semantic Web's eternal dilemma: why should authors correctly annotate their content? The obvious solution is to make the ratio between the effort needed and the advantages acquired as small as possible. At least two specificities set wikis apart from other Web-accessible content in this respect: social aspects (wikis are often the expression of a community) and technical issues (wikis are edited "on-line"). Being related to a community, wikis are intrinsically associated with the model of knowledge of that community, making the relation between wiki content and ontologies the result of a natural process. Being edited on-line, wikis can benefit from a synergy of Web technologies that support the entire information sharing process, from authoring to delivery. In this paper we present an approach to reduce the authoring effort by providing ontology-based tools to integrate models of knowledge with authoring-support technologies, using a functional approach to content fragment creation that plays nicely with the "wiki way" of managing information.||0||0|
|Exploring Flickr's related tags for semantic annotation of web images||Xu H.
|CIVR 2009 - Proceedings of the ACM International Conference on Image and Video Retrieval||English||2009||Exploring social media resources, such as Flickr and Wikipedia to mitigate the difficulty of semantic gap has attracted much attention from both academia and industry. In this paper, we first propose a novel approach to derive semantic correlation matrix from Flickr's related tags resource. We then develop a novel conditional random field model for Web image annotation, which integrates the keyword correlations derived from Flickr, and the textual and visual features of Web images into an unified graph model to improve the annotation performance. The experimental results on real Web image data set demonstrate the effectiveness of the proposed keyword correlation matrix and the Web image annotation approach. Copyright 2009 ACM.||0||0|
|Information extraction in semantic wikis||Smrz P.
|CEUR Workshop Proceedings||English||2009||This paper deals with information extraction technologies supporting semantic annotation and logical organization of textual content in semantic wikis. We describe our work in the context of the KiWi project which aims at developing a new knowledge management system motivated by the wiki way of collaborative content creation that is enhanced by the semantic web technology. The specific characteristics of semantic wikis as advanced community knowledge-sharing platforms are discussed from the perspective of the functionality providing automatic suggestions of semantic tags. We focus on the innovative aspects of the implemented methods. The interfaces of the user-interaction tools as well as the back-end web services are also tackled. We conclude that though there are many challenges related to the integration of information extraction into semantic wikis, this fusion brings valuable results.||0||0|
|Modeling clinical protocols using semantic mediawiki: The case of the oncocure project||Eccher C.
|Lecture Notes in Computer Science||English||2009||A computerized Decision Support System (DSS) can improve the adherence of clinicians to clinical guidelines and protocols. The building of a prescriptive DSS based on breast cancer treatment protocols and its integration with a legacy Electronic Patient Record is the aim of the Oncocure project. An important task of this project is the encoding of the protocols in computer-executable form - a task that requires the collaboration of physicians and computer scientists in a distributed environment. In this paper, we describe our project and how semantic wiki technology was used for the encoding task. Semantic wiki technology features great flexibility, allowing unstructured information and semantic annotations to be mixed, and the final model to be generated automatically with minimal adaptation cost. These features render semantic wikis natural candidates for small to medium scale modeling tasks, where the adaptation and training effort of bigger systems cannot be justified. This approach is not constrained to a specific protocol modeling language, but can be used as a collaborative tool for other languages. When implemented, our DSS is expected to reduce the cost of care while improving adherence to the guideline and the quality of the documentation.||0||0|
|Parallel annotation and population: A cross-language experience||Sarrafzadeh B.
|Proceedings - 2009 International Conference on Computer Engineering and Technology, ICCET 2009||English||2009||In recent years automatic Ontology Population (OP) from texts has emerged as a new field of application for knowledge acquisition techniques. In OP, instances of ontology classes are extracted from text and added under the corresponding ontology concepts. Semantic annotation, a key task in moving toward the Semantic Web, in turn tags instance data in a text with their corresponding ontology classes; thus ontology population usually accompanies the generation of semantic annotations. In this paper we introduce a cross-lingual population/annotation system called POPTA, which annotates Persian texts according to an English lexicalized ontology and populates the English ontology according to the input Persian texts. It exploits a hybrid approach: a combination of statistical and pattern-based methods, techniques founded on the Web and search engines, and a novel method of resolving translation ambiguities. POPTA also uses Wikipedia as a vast natural-language encyclopedia to extract new instances to populate the input ontology.||0||0|
|RadSem: Semantic annotation and retrieval for medical images||Moller M.
|Lecture Notes in Computer Science||English||2009||We present a tool for semantic medical image annotation and retrieval. It leverages the MEDICO ontology which covers formal background information from various biomedical ontologies such as the Foundational Model of Anatomy (FMA), terminologies like ICD-10 and RadLex and covers various aspects of clinical procedures. This ontology is used during several steps of annotation and retrieval: (1) We developed an ontology-driven metadata extractor for the medical image format DICOM. Its output contains, e. g., person name, age, image acquisition parameters, body region, etc. (2) The output from (1) is used to simplify the manual annotation by providing intuitive visualizations and to provide a preselected subset of annotation concepts. Furthermore, the extracted metadata is linked together with anatomical annotations and clinical findings to generate a unified view of a patient's medical history. (3) On the search side we perform query expansion based on the structure of the medical ontologies. (4) Our ontology for clinical data management allows us to link and combine patients, medical images and annotations together in a comprehensive result list. (5) The medical annotations are further extended by links to external sources like Wikipedia to provide additional information.||0||0|
|SASL: A semantic annotation system for literature||Yuan P.
|Lecture Notes in Computer Science||English||2009||Due to ambiguity, search engines for scientific literature may not return the right search results. One efficient solution to this problem is to automatically annotate the literature and attach semantic information to it. Generally, semantic annotation requires identifying entities before attaching semantic information to them. However, due to abbreviation and other reasons, it is very difficult to identify entities correctly. This paper presents a Semantic Annotation System for Literature (SASL), which utilizes Wikipedia as a knowledge base to annotate literature. SASL mainly attaches semantics to terminology, academic institutions, conferences, journals, etc. Many of these are abbreviations, which introduces ambiguity. SASL uses regular expressions to extract the mapping between the full names of entities and their abbreviations. Since the full names of several entities may map to a single abbreviation, SASL introduces a Hidden Markov Model to implement name disambiguation. Finally, the paper presents experimental results, which confirm the good performance of SASL.||0||0|
|Supporting personal semantic annotations in P2P semantic wikis||Torres D.
|Lecture Notes in Computer Science||English||2009||In this paper, we propose to extend peer-to-peer semantic wikis with personal semantic annotations. Semantic wikis are one of the most successful Semantic Web applications. In semantic wikis, wiki pages are annotated with semantic data to facilitate navigation, information retrieval and ontology emergence. Semantic data represents the shared knowledge base which describes the common understanding of the community. However, in a collaborative knowledge building process the knowledge is basically created by individuals who are involved in a social process. Therefore, it is fundamental to support personal knowledge building in a differentiated way. Currently there are no available semantic wikis that support both personal and shared understandings. In order to overcome this problem, we propose a P2P collaborative knowledge building process and extend semantic wikis with personal annotation facilities to express personal understanding. In this paper, we detail the personal semantic annotation model and show its implementation in P2P semantic wikis. We also detail an evaluation study which shows that personal annotations demand less cognitive effort than semantic data and are very useful for enriching the shared knowledge base.||0||0|
|Exploring the knowledge in semi structured data sets with rich queries||Umbrich J.
|CEUR Workshop Proceedings||English||2008||Semantics can be integrated into search processing during both the document analysis and querying stages. We describe a system that incorporates semantic annotations of Wikipedia articles into the search process and allows for rich annotation search, enabling users to formulate queries based on their knowledge about how entities relate to one another while simultaneously retaining the freedom of free-text search where appropriate. The outcome of this work is an application consisting of semantic annotators, an extended search engine and an interactive user interface.||0||0|
|Information Extraction and Semantic Annotation of Wikipedia||Maria Ruiz-Casado
|Interleaving Ontology Mapping for Online Semantic Annotation on Semantic Wiki||Jinhyun Ahn
Jason J. Jung
|Lexical and semantic resources for NLP: From words to meanings||Gentile A.L.
|Lecture Notes in Computer Science||English||2008||A user expresses her information need through words with a precise meaning, but from the machine's point of view this meaning does not come with the word. A further step is needed to automatically associate it with the words. Techniques that process human language are required, as well as linguistic and semantic knowledge, stored within distinct and heterogeneous resources, which play an important role during all Natural Language Processing (NLP) steps. Resource management is a challenging problem, together with the correct association between URIs coming from the resources and the meanings of the words. This work presents a service that, given a lexeme (an abstract unit of morphological analysis in linguistics, which roughly corresponds to a set of words that are different forms of the same word), returns all syntactic and semantic information collected from a list of lexical and semantic resources. The proposed strategy consists of merging data originating from stable resources, such as WordNet, with data collected dynamically from evolving sources, such as the Web or Wikipedia. This strategy is implemented in a wrapper to a set of popular linguistic resources that provides a single point of access to them, transparently to the user, to accomplish the computational linguistic problem of obtaining a rich set of linguistic and semantic annotations in a compact way.||0||0|
|RDF authoring in wikis||Schmedding F.
|CEUR Workshop Proceedings||English||2008||Although the Semantic Web vision is gaining momentum and the underlying technologies are used in many different areas, there still seems to be no agreement on how they should be used in everyday documents, such as news, blogs or wiki pages. In this paper we argue that two aspects are crucial for the enrichment of these documents with semantic annotations: full support for RDF and close integration of the annotations with the continuous text. The former is necessary because many common relationships cannot be expressed by attribute-value pairs; the latter reduces redundancy and enables Web browsers to help readers use the contained data. To gain further insights, we implemented an RDFa-capable extension for MediaWiki and report on improvements for wiki use cases and other applications on top of the contained data.||0||0|
|SWOOKI: A peer-to-peer semantic wiki||Charbel Rahhal
|CEUR Workshop Proceedings||English||2008||In this paper, we propose to combine the advantages of semantic wikis and P2P wikis in order to design a peer-to-peer semantic wiki. The main challenge is how to merge wiki pages that embed semantic annotations. Merging algorithms used in P2P wiki systems have been designed for linear text and not for semantic data. In this paper, we evaluate two optimistic replication algorithms to build a P2P semantic wiki.||0||0|
|The fast and the numerous - Combining machine and community intelligence for semantic annotation||Sebastian Blohm
|AAAI Workshop - Technical Report||English||2008||Starting from the observation that certain communities have incentive mechanisms in place to create large amounts of unstructured content, we propose in this paper an original model which we expect to lead to the large number of annotations required to semantically enrich Web content at a large scale. The novelty of our model lies in the combination of two key ingredients: the effort that online communities are making to create content and the capability of machines to detect regular patterns in user annotation to suggest new annotations. Provided that the creation of semantic content is made easy enough and incentives are in place, we can assume that these communities will be willing to provide annotations. However, as human resources are clearly limited, we aim at integrating algorithmic support into our model to bootstrap on existing annotations and learn patterns to be used for suggesting new annotations. As the automatically extracted information needs to be validated, our model presents the extracted knowledge to the user in the form of questions, thus allowing for the validation of the information. In this paper, we describe the requirements on our model, its concrete implementation based on Semantic MediaWiki and an information extraction system, and discuss lessons learned from practical experience with real users. These experiences allow us to conclude that our model is a promising approach towards leveraging semantic annotation.||0||0|
|Wikify! Linking documents to encyclopedic knowledge||Rada Mihalcea
|International Conference on Information and Knowledge Management, Proceedings||English||2007||This paper introduces the use of Wikipedia as a resource for automatic keyword extraction and word sense disambiguation, and shows how this online encyclopedia can be used to achieve state-of-the-art results on both these tasks. The paper also shows how the two methods can be combined into a system able to automatically enrich a text with links to encyclopedic knowledge. Given an input document, the system identifies the important concepts in the text and automatically links these concepts to the corresponding Wikipedia pages. Evaluations of the system show that the automatic annotations are reliable and hardly distinguishable from manual annotations. Copyright 2007 ACM.||0||0|
|A semantic web portal for semantic annotation and search||Fernandez-Garcia N.
|Lecture Notes in Computer Science||English||2006||The semantic annotation of the contents of Web resources is a required step in order to allow the Semantic Web vision to become a reality. In this paper we describe an approach to manual semantic annotation which tries to integrate both the semantic annotation task and the information retrieval task. Our approach exploits the information provided by Wikipedia pages and takes the form of a semantic Web portal, which allows a community of users to easily define and share annotations on Web resources.||0||0|
|Annotation and navigation in semantic wikis||Eyal Oren
|CEUR Workshop Proceedings||English||2006||Semantic Wikis allow users to semantically annotate their Wiki content. The particular annotations can differ in expressive power, simplicity, and meaning. We present an elaborate conceptual model for semantic annotations, introduce a unique and rich Wiki syntax for these annotations, and discuss how to best formally represent the augmented Wiki content. We improve existing navigation techniques to automatically construct faceted browsing for semistructured data. By utilising the Wiki annotations we provide greatly enhanced information retrieval. Further we report on our ongoing development of these techniques in our prototype SemperWiki.||0||0|
|How semantics make better wikis||Eyal Oren
John G. Breslin
|World Wide Web||English||2006||0||0|
|SweetWiki: Semantic Web enabled technologies in wiki||Michel Buffa
|CEUR Workshop Proceedings||English||2006||Wikis are social web sites enabling a potentially large number of participants to modify any page or create a new page using their web browser. As they grow, wikis suffer from a number of problems (anarchical structure, large number of pages, aging navigation paths, etc.). We believe that semantic wikis can improve navigation and search. In SweetWiki we investigate the use of semantic web technologies to support and ease the lifecycle of the wiki. The very model of wikis was declaratively described: an OWL schema captures concepts such as WikiWord, wiki page, forward and backward link, author, etc. This ontology is then exploited by an embedded semantic search engine (Corese). In addition, SweetWiki integrates a standard WYSIWYG editor (Kupu) that we extended to support semantic annotation following the "social tagging" approach made popular by web sites such as flickr.com. When editing a page, the user can freely enter some keywords in an AJAX-powered textfield, and an auto-completion mechanism proposes existing keywords by issuing SPARQL queries to identify existing concepts with compatible labels. Thus tagging is both easy (keyword-like) and motivating (real-time display of the number of related pages), and concepts are collected as in folksonomies. To maintain and reengineer the folksonomy, we reused a web-based editor available in the underlying semantic web server to edit semantic web ontologies and annotations. Unlike in other wikis, pages are stored directly in XHTML, ready to be served, and semantic annotations are embedded in the pages themselves using RDF/A. If someone sends or copies a page, the annotations follow it, and if an application crawls the wiki site it can extract the metadata and reuse them.||0||0|
|Exploiting user queries and Web Communities in semantic annotation||Fernandez-Garcia N.
|CEUR Workshop Proceedings||English||2005||In order to make current Web resources understandable by computers and make the Semantic Web vision possible, we need to add semantic metadata to such Web resources. In this paper we describe the SQAPS system, which aims at providing a means of exploiting, for semantic annotation, the effort of users who look for information on the Web every day. We also describe how we can benefit from the information generated and maintained by Web communities such as Wikipedia in order to achieve our goal.||0||0|
|SemperWiki: A semantic personal Wiki||Eyal Oren||CEUR Workshop Proceedings||English||2005||Wikis are collaborative authoring environments, and are very popular. The original concept has recently been extended in two directions: semantic Wikis and personal Wikis. Semantic Wikis focus on better retrieval and querying facilities, by using semantic annotations of pages. Personal Wikis focus on improving usability and on providing an easy-to-use personal information space. We combine these two developments and present a semantic personal Wiki. Our application SemperWiki offers the usability of personal Wikis and the improved retrieval and querying of semantic Wikis. Users can annotate pages with RDF together with their normal text. The system is extremely easy-to-use, provides intelligent navigation based on semantic annotations, and responds instantly to all changes.||0||0|
|Towards a semantic wiki experience - Desktop integration and interactivity in WikSAR||Aumueller D.
|CEUR Workshop Proceedings||English||2005||Common wiki systems such as MediaWiki lack semantic annotations. WikSAR (Semantic Authoring and Retrieval within a Wiki), a prototype of a semantic wiki, offers effortless semantic authoring. Instant gratification of users is achieved by context-aware means of navigation, interactive graph visualisation of the emerging ontology, as well as semantic retrieval possibilities. Embedding queries into wiki pages creates views (as dependent collections) on the information space. Desktop integration includes accessing dates (e.g. reminders) entered in the wiki via local calendar applications, maintaining bookmarks, and collecting web quotes within the wiki. Approaches to referencing documents on the local file system are sketched out, as well as an enhancement of the wiki interface to suggest appropriate semantic annotations to the user.||0||1|
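Several of the publications above (Herzig, Happel, Blohm) study semantic annotation in Semantic MediaWiki itself, the platform this keyword page runs on. As a minimal sketch of what those papers mean by "annotation" and "ask-query" — where the property names `Has keyword`, `Has author`, and `Publication year` are illustrative examples, not necessarily this wiki's actual schema — the wikitext looks like:

```
<!-- Annotating a page: [[Property::value]] stores a typed fact alongside the prose -->
This paper discusses [[Has keyword::semantic annotation]] and was
written by [[Has author::Eyal Oren]] in [[Publication year::2006]].

<!-- An inline #ask query over those annotations (the query type analyzed
     in Happel's "Semantic Need" study above), listing matching pages
     with selected properties printed as extra columns -->
{{#ask: [[Has keyword::semantic annotation]]
 |?Has author
 |?Publication year
 |sort=Publication year
 |order=desc
}}
```

Pages annotated this way are what populate listings like the table above: the `|?` printout statements select the columns, and `sort`/`order` control the row ordering.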