Information arbitrage across multi-lingual Wikipedia
Author(s): Eytan Adar, Michael Skinner, Daniel S. Weld
The rapid globalization of Wikipedia is generating a parallel, multi-lingual corpus of unprecedented scale. Pages for the same topic in different languages emerge both through manual translation and through independent development. Unfortunately, these pages may appear at different times and vary in size, scope, and quality. Furthermore, differential growth rates make the conceptual mapping between articles in different languages both complex and dynamic. These disparities create the opportunity for a powerful form of information arbitrage -- leveraging articles in one or more languages to improve the content in another. Analyzing four large language domains (English, Spanish, French, and German), we present Ziggurat, an automated system for aligning Wikipedia infoboxes, creating new infoboxes as necessary, filling in missing information, and detecting discrepancies between parallel pages. Our method uses self-supervised learning, and our experiments demonstrate its feasibility even in the absence of dictionaries.
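The core idea -- aligning infobox attributes across language versions and copying values into missing slots -- can be illustrated with a toy sketch. This is not the Ziggurat implementation (which trains self-supervised classifiers over many features); it is a minimal value-overlap heuristic under assumed data. All infobox contents (`population`, `poblacion`, "Jane Doe", etc.) are hypothetical examples.

```python
# Toy sketch of cross-language infobox alignment, in the spirit of the
# approach described above: attribute values that survive translation
# (numbers, dates, proper nouns) provide evidence for aligning attributes,
# and unmatched attributes from the other language become candidate new
# slots. NOT the actual Ziggurat system, which uses learned classifiers.

def normalize(value):
    """Reduce a value to a set of comparable tokens (digits survive translation)."""
    return set(value.lower().replace(",", " ").split())

def align_infoboxes(box_a, box_b, min_overlap=1):
    """Pair attributes of two language versions by overlapping value tokens."""
    alignments = {}
    for attr_a, val_a in box_a.items():
        best, best_score = None, 0
        for attr_b, val_b in box_b.items():
            score = len(normalize(val_a) & normalize(val_b))
            if score > best_score:
                best, best_score = attr_b, score
        if best is not None and best_score >= min_overlap:
            alignments[attr_a] = best
    return alignments

def fill_from_other(box_a, box_b, alignments):
    """Add attributes present only in the other language as candidate new slots."""
    matched_b = set(alignments.values())
    filled = dict(box_a)
    for attr_b, val_b in box_b.items():
        if attr_b not in matched_b:
            filled.setdefault(attr_b, val_b)  # foreign attribute name kept as-is
    return filled

# Hypothetical English and Spanish infoboxes for the same city.
en = {"population": "1,620,343", "mayor": "Jane Doe"}
es = {"poblacion": "1 620 343", "superficie": "213 km2"}

alignments = align_infoboxes(en, es)   # {"population": "poblacion"}
filled = fill_from_other(en, es, alignments)  # gains a "superficie" slot
```

A real system must of course handle attribute renamings, unit conversions, and conflicting values (the discrepancy-detection problem the abstract mentions); the token-overlap score here stands in for the richer feature set a learned aligner would use.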
Cited by: This publication has 2 citations. Only those publications available in WikiPapers are shown here:
- Cultural bias in Wikipedia content on famous persons
- The people's encyclopedia under the gaze of the sages: a systematic review of scholarly research on Wikipedia