|Language(s)||Catalan, Spanish, English|
WikiCorpus is a set of trilingual corpora built from Wikipedia and enriched with linguistic information.
|Title||Wikicorpus: A Word-Sense Disambiguated Multilingual Wikipedia Corpus|
|Author(s)||Samuel Reese et al.|
|Published in||LREC|
|Language||English|
|Date||2010|
|Abstract||This article presents a new freely available trilingual corpus (Catalan, Spanish, English) that contains large portions of the Wikipedia and has been automatically enriched with linguistic information. To our knowledge, this is the largest such corpus that is freely available to the community: in its present version, it contains over 750 million words. The corpora have been annotated with lemma and part-of-speech information using the open-source library FreeLing. They have also been sense-annotated with the state-of-the-art Word Sense Disambiguation algorithm UKB. As UKB assigns WordNet senses, and WordNet has been aligned across languages via the InterLingual Index, this sort of annotation opens the way to massive explorations in lexical semantics that were not possible before. We present a first attempt at creating a trilingual lexical resource from the sense-tagged Wikipedia corpora, namely WikiNet. Moreover, we present two by-products of the project that are of use for the NLP community: an open-source Java-based parser for Wikipedia pages developed for the construction of the corpus, and the integration of the WSD algorithm UKB in FreeLing.|
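Because the sense annotations are WordNet synset identifiers aligned across languages through the InterLingual Index, an annotated token can be mapped to its Catalan and Spanish equivalents with a few lines of code. The sketch below is illustrative only: the four-field token layout (form, lemma, PoS tag, sense) is an assumed simplification of the WikiCorpus tagged format, and NLTK's Open Multilingual WordNet stands in for the aligned wordnets used in the paper.

```python
# Requires: nltk.download('wordnet') and nltk.download('omw-1.4')
from nltk.corpus import wordnet as wn


def parse_token(line):
    """Split one whitespace-separated token line into its fields.

    Assumed layout (not the official spec): form, lemma, PoS tag, sense,
    where sense is a WordNet '<offset>-<pos>' identifier.
    """
    form, lemma, tag, sense = line.split()
    return form, lemma, tag, sense


def cross_lingual_lemmas(sense):
    """Look up a '<offset>-<pos>' sense in English, Catalan, and Spanish.

    The Open Multilingual WordNet exposes the cross-language alignment
    that the InterLingual Index provides, so one synset offset yields
    lemmas in all three corpus languages.
    """
    offset, pos = sense.split("-")
    synset = wn.synset_from_pos_and_offset(pos, int(offset))
    return {lang: synset.lemma_names(lang) for lang in ("eng", "cat", "spa")}


# Build a demo token line from a known synset so the example is
# self-contained rather than relying on a hard-coded offset.
demo = wn.synsets("bank")[0]
line = f"bank bank NN {demo.offset():08d}-{demo.pos()}"

form, lemma, tag, sense = parse_token(line)
print(form, tag, cross_lingual_lemmas(sense))
```

A lookup like this is what makes the corpus useful for cross-lingual lexical semantics: any sense-tagged occurrence in one language immediately indexes the corresponding senses in the other two.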