|Extracting semantic concept relations from Wikipedia|
|Author(s)||Arnold P., Rahm E.|
|Published in||ACM International Conference Proceeding Series|
|Keyword(s)||background knowledge, information extraction, information retrieval, natural language processing, semantic concepts, semantic patterns, semantic relations, Semantic Web, thesauri, Wikipedia|
Abstract

Background knowledge, as provided by repositories such as WordNet, is of critical importance for linking or mapping ontologies and for related tasks. Since current repositories are quite limited in scope and up-to-dateness, we investigate how to automatically build improved repositories by extracting semantic relations (e.g., is-a and part-of relations) from Wikipedia articles. Our approach uses a comprehensive set of semantic patterns, finite state machines, and NLP techniques to process Wikipedia definitions and to identify semantic relations between concepts. It can extract multiple relations from a single Wikipedia article. An evaluation on different domains shows the high quality and effectiveness of the proposed approach.
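The abstract does not reproduce the paper's pattern set, but the general idea of matching semantic patterns against definition sentences can be illustrated with a short sketch. The Python snippet below is a hypothetical, heavily simplified stand-in: the two regular expressions, the extract_relations helper, and the example sentence are invented for illustration, whereas the published approach works with a comprehensive pattern set, finite state machines, and NLP preprocessing rather than plain regexes.

```python
import re

# Hypothetical, simplified stand-ins for the paper's semantic patterns,
# applied to a Wikipedia-style definition sentence. Invented for
# illustration only; not the authors' actual pattern set.
PATTERNS = [
    # The more specific part-of cue must precede the generic is-a cue,
    # or "is a part of X" would be misread as an is-a relation.
    ("part-of", re.compile(
        r"\b(?:is|are)\s+(?:a\s+|an\s+)?part\s+of\s+(?:the\s+)?"
        r"(?P<obj>[a-z][a-z\s-]*[a-z])", re.IGNORECASE)),
    ("is-a", re.compile(
        r"\b(?:is|are)\s+(?:a|an|the)\s+"
        r"(?P<obj>[a-z][a-z\s-]*[a-z])", re.IGNORECASE)),
]


def extract_relations(concept, definition):
    """Return (concept, relation, object) triples from one definition.

    Splitting the sentence into clauses lets a single definition yield
    several triples, mirroring the paper's point that one Wikipedia
    article can contribute multiple relations.
    """
    triples = []
    # Crude clause segmentation on punctuation and "and"; the real
    # approach relies on NLP processing instead.
    for clause in re.split(r",|;|\band\b", definition):
        for relation, pattern in PATTERNS:
            match = pattern.search(clause)
            if match:
                triples.append((concept, relation, match.group("obj").strip()))
                break  # at most one relation per clause in this toy version
    return triples


if __name__ == "__main__":
    sentence = ("The cerebellum is a part of the hindbrain "
                "and is a major motor structure")
    for triple in extract_relations("cerebellum", sentence):
        print(triple)
    # ('cerebellum', 'part-of', 'hindbrain')
    # ('cerebellum', 'is-a', 'major motor structure')
```

Checking the part-of pattern before the is-a pattern reflects a precedence decision any such matcher has to make: the generic is-a cue would otherwise also fire on the surface form of a part-of statement.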