Learning multilingual named entity recognition from Wikipedia
|Author(s)||Nothman J., Ringland N., Radford W., Murphy T., Curran J.R.|
|Published in||Artificial Intelligence|
|Keyword(s)||Annotated corpora, Information extraction, Named entity recognition, Semi-structured resources, Semi-supervised learning, Wikipedia, Gold, Supervised learning, Websites|
Learning multilingual named entity recognition from Wikipedia is a 2013 journal article written in English by Nothman J., Ringland N., Radford W., Murphy T., Curran J.R. and published in Artificial Intelligence.
We automatically create enormous, free and multilingual silver-standard training annotations for named entity recognition (NER) by exploiting the text and structure of Wikipedia. Most NER systems rely on statistical models of annotated data to identify and classify names of people, locations and organisations in text. This dependence on expensive annotation is the knowledge bottleneck our work overcomes. We first classify each Wikipedia article into named entity (NE) types, training and evaluating on 7,200 manually-labelled Wikipedia articles across nine languages. Our cross-lingual approach achieves up to 95% accuracy. We transform the links between articles into NE annotations by projecting the target articles' classifications onto the anchor text. This approach yields reasonable annotations, but does not immediately compete with existing gold-standard data. By inferring additional links and heuristically tweaking the Wikipedia corpora, we better align our automatic annotations to gold standards. We annotate millions of words in nine languages, evaluating English, German, Spanish, Dutch and Russian Wikipedia-trained models against CoNLL shared task data and other gold-standard corpora. Our approach outperforms other approaches to automatic NE annotation (Richman and Schone, 2008; Mika et al., 2008); competes with gold-standard training when tested on an evaluation corpus from a different source; and performs 10% better than newswire-trained models on manually-annotated Wikipedia text. © 2012 Elsevier B.V. All rights reserved.
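The core annotation-projection step described in the abstract (turning Wikipedia links into NE labels by copying the linked article's NE class onto the anchor text) can be illustrated with a minimal sketch. The `ARTICLE_TYPE` mapping, the regex, and the example sentence below are hypothetical placeholders; in the paper the article classifications come from a classifier trained on the 7,200 manually-labelled articles.

```python
import re

# Hypothetical mapping from Wikipedia article titles to NE types
# (in the paper this comes from the cross-lingual article classifier).
ARTICLE_TYPE = {
    "Sydney": "LOC",
    "University_of_Sydney": "ORG",
}

# Matches [[Target]] and [[Target|anchor text]] wiki links.
LINK = re.compile(r"\[\[([^\]|]+)(?:\|([^\]]+))?\]\]")

def project_links(wikitext):
    """Project each linked article's NE class onto its anchor text,
    producing BIO-tagged tokens; everything else is tagged O."""
    tokens = []
    pos = 0
    for m in LINK.finditer(wikitext):
        # Plain text before the link is outside any entity.
        for tok in wikitext[pos:m.start()].split():
            tokens.append((tok, "O"))
        target = m.group(1).replace(" ", "_")
        anchor = m.group(2) or m.group(1)
        ne_type = ARTICLE_TYPE.get(target)
        for i, tok in enumerate(anchor.split()):
            if ne_type is None:
                tokens.append((tok, "O"))
            else:
                tokens.append((tok, ("B-" if i == 0 else "I-") + ne_type))
        pos = m.end()
    for tok in wikitext[pos:].split():
        tokens.append((tok, "O"))
    return tokens

print(project_links("She studied at the [[University of Sydney]] in [[Sydney]] ."))
```

This is only the naive projection; the paper's better-performing corpora additionally infer links for unlinked mentions and apply heuristic adjustments to match gold-standard annotation conventions.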
This publication has been cited 7 times, but no articles citing it are available in WikiPapers.