Angela Fahrni

From WikiPapers

Angela Fahrni is an author.


Only those publications related to wikis are shown here.
Title: A Latent Variable Model for Discourse-Aware Concept and Entity Disambiguation
Published in: 14th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2014)
Language: English
Date: 2014
Abstract: This paper takes a discourse-oriented perspective for disambiguating common and proper noun mentions with respect to Wikipedia. Our novel approach models the relationship between disambiguation and aspects of cohesion using Markov Logic Networks with latent variables. Considering cohesive aspects consistently improves the disambiguation results on various commonly used data sets.
Title: CoSyne: Synchronizing Multilingual Wiki Content
Keywords: Context-sensitive machine translation, Cross-lingual textual entailment, Cross-lingual topical alignment, Multilingual content synchronization, RESTful web services, User edits classification, User generated content
Published in: WikiSym 2012
Language: English
Date: 2012
Abstract: CoSyne is a content synchronization system for assisting users and organizations involved in the maintenance of multilingual wikis. The system allows users to explore the diversity of multilingual content using a monolingual view. It provides suggestions for content modification based on additional or more specific information found in other language versions, and enables seamless integration of automatically translated sentences while giving users the flexibility to edit, correct and control eventual changes to the wiki page. To support these tasks, CoSyne employs state-of-the-art machine translation and natural language processing techniques.
Title: Jointly Disambiguating and Clustering Concepts and Entities with Markov Logic
Keywords: Word sense disambiguation
Published in: 24th International Conference on Computational Linguistics - Proceedings of COLING 2012: Technical Papers
Language: English
Date: 2012
Abstract: We present a novel approach for jointly disambiguating and clustering known and unknown concepts and entities with Markov Logic. Concept and entity disambiguation is the task of identifying the correct concept or entity in a knowledge base for a single- or multi-word noun (mention) given its context. Concept and entity clustering is the task of clustering mentions so that all mentions in one cluster refer to the same concept or entity. The proposed model (1) is global, i.e., a group of mentions in a text is disambiguated in one single step combining various global and local features, and (2) performs disambiguation, unknown concept and entity detection, and clustering jointly. The disambiguation is performed with respect to Wikipedia. The model is trained once on Wikipedia articles and then applied to and evaluated on different data sets originating from newspapers, audio transcripts and internet sources.
Title: CoSyne: A Framework for Multilingual Content Synchronization of Wikis
Keywords: Recognizing textual entailment
Published in: WikiSym 2011 Conference Proceedings - 7th Annual International Symposium on Wikis and Open Collaboration
Language: English
Date: 2011
Abstract: Wikis allow a large base of contributors easy access to shared content, and freedom in editing it. One of the side effects of this freedom was the emergence of parallel and independently evolving versions in a variety of languages, reflecting the multilingual background of the pool of contributors. For a wiki to properly represent the user-added content, that content should be fully available in all of its languages. Working on parallel wikis in several European languages, we investigate the possibility of "synchronizing" different language versions of the same document by: i) pinpointing topically related pieces of information in the different languages, ii) identifying information that is missing or less detailed in one of the two versions, iii) translating this into the appropriate language, and iv) inserting it in the appropriate place. Progress along such directions will allow users to share content across language boundaries more easily.
Title: Real Anaphora Resolution Is Hard: The Case of German
Published in: Lecture Notes in Computer Science
Language: English
Date: 2010
Abstract: We introduce a system for anaphora resolution for German that uses various resources in order to develop a real system, as opposed to systems based on idealized assumptions, e.g. the use of true mentions only, or perfect parse trees and perfect morphology. The components that we use to replace such idealizations comprise a full-fledged morphology component, Wikipedia-based named entity recognition, a rule-based dependency parser and a German wordnet. We show that under these conditions coreference resolution is (at least for German) still far from perfect.
Title: Old Wine or Warm Beer: Target-Specific Sentiment Analysis of Adjectives
Published in: AISB 2008 Convention: Communication, Interaction and Social Intelligence - Proceedings of the AISB 2008 Symposium on Affective Language in Human and Machine
Language: English
Date: 2008
Abstract: In this paper, we focus on the target-specific polarity determination of adjectives. A domain-specific noun, the target noun, is modified by a qualifying adjective. Rather than having a prior polarity, adjectives often bear a target-specific polarity. In some cases, a single adjective even switches polarity depending on the accompanying noun. In order to realise such a "sentiment disambiguation", a two-stage model is proposed: identification of domain-specific targets and the construction of a target-specific polarity adjective lexicon. We use Wikipedia for automatic target detection, and a bootstrapping approach to determine the target-specific adjective polarity. It can be shown that our approach outperforms a baseline system that is based on a prior adjective lexicon derived from SentiWordNet.
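The core idea of the abstract above (the same adjective flipping polarity with its target noun, as in the paper's title) can be illustrated with a minimal sketch. The lexicon entries and function below are invented for illustration and are not the paper's actual system, lexicon or data:

```python
# Toy illustration of target-specific adjective polarity:
# the same adjective can carry opposite polarity depending on the
# noun it modifies, so a (adjective, noun) pair lexicon is consulted
# before falling back to a prior (context-free) adjective polarity.
# All entries are invented examples, not data from the paper.
TARGET_SPECIFIC = {
    ("old", "wine"): "positive",
    ("old", "bread"): "negative",
    ("warm", "beer"): "negative",
    ("warm", "welcome"): "positive",
}

PRIOR = {"old": "negative", "warm": "positive"}  # prior-only baseline

def polarity(adjective, target):
    """Target-specific polarity first, then prior polarity, else neutral."""
    return TARGET_SPECIFIC.get((adjective, target),
                               PRIOR.get(adjective, "neutral"))

print(polarity("old", "wine"))   # target-specific entry overrides the prior
print(polarity("warm", "beer"))  # negative despite the positive prior of "warm"
print(polarity("old", "car"))    # unseen pair: falls back to the prior
```

The two-stage model described in the abstract would populate such a pair lexicon automatically (targets from Wikipedia, polarities via bootstrapping) rather than by hand as here.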