An evaluation framework for cross-lingual link discovery
|Author(s)||Tang L.-X., Geva S., Trotman A., Xu Y., Itakura K.Y.|
|Published in||Information Processing and Management|
|Keyword(s)||Assessment, Cross-lingual link discovery, Evaluation framework, Evaluation metrics, Link discovery, Validation, Wikipedia, Knowledge-based systems, Hypertext systems|
An evaluation framework for cross-lingual link discovery is a 2014 journal article written in English by Tang L.-X., Geva S., Trotman A., Xu Y., Itakura K.Y. and published in Information Processing and Management.
Cross-Lingual Link Discovery (CLLD) is a new problem in Information Retrieval: automatically identifying meaningful and relevant hypertext links between documents in different languages. This is particularly helpful for knowledge discovery when a multilingual knowledge base is sparse in one language or another, or when topical coverage differs between languages, as is the case with Wikipedia. Techniques for identifying new and topically relevant cross-lingual links are a current topic of interest at NTCIR, where the CrossLink task has been running since NTCIR-9 in 2011. This paper presents the evaluation framework used to benchmark cross-lingual link discovery algorithms in the context of NTCIR-9. The framework includes topics, document collections, assessments, metrics, and a toolkit for pooling, assessment, and evaluation. The assessments are divided into two sets: manual assessments performed by human assessors, and automatic assessments based on links extracted from Wikipedia itself. Using this framework, the authors show that manual assessment is more robust than automatic assessment in the context of cross-lingual link discovery.
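The evaluation described above scores a system's proposed cross-lingual links against an assessment set, whether produced manually by assessors or automatically from existing Wikipedia language links. A minimal sketch of such link-level scoring is shown below; the function name, the `(source, target)` link representation, and the example links are all illustrative assumptions, not the framework's actual API or metrics suite.

```python
def evaluate_links(proposed, relevant):
    """Set-based precision, recall, and F1 for proposed (source, target) links
    against an assessment set. Illustrative only; the NTCIR CrossLink toolkit
    defines its own metrics and file formats."""
    proposed, relevant = set(proposed), set(relevant)
    hits = len(proposed & relevant)  # links both proposed and assessed relevant
    precision = hits / len(proposed) if proposed else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    denom = precision + recall
    f1 = (2 * precision * recall / denom) if denom else 0.0
    return precision, recall, f1

# Hypothetical example: English-to-Chinese links for one topic document.
system_links = [("en:Tea", "zh:茶"), ("en:Tea", "zh:綠茶"), ("en:Tea", "zh:咖啡")]
assessed_links = [("en:Tea", "zh:茶"), ("en:Tea", "zh:綠茶"), ("en:Tea", "zh:紅茶")]

p, r, f = evaluate_links(system_links, assessed_links)  # 2 of 3 links correct
```

In this sketch, swapping the assessment set between human judgments and Wikipedia-derived links is what lets the two assessment modes be compared under identical metrics.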
This publication is likely cited by other works, but no citing articles are currently available in WikiPapers.