Overview of the INEX 2011 question answering track (QA@INEX)
|Author(s)||SanJuan E., Moriceau V., Tannier X., Bellot P., Mothe J.|
|Published in||Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)|
|Keyword(s)||Automatic summarization, Focus information retrieval, Natural language processing, Question answering, Text informativeness, Text readability, Wikipedia, XML, Information management, Query languages, Websites, Natural language processing systems|
Overview of the INEX 2011 question answering track (QA@INEX) is a 2012 conference paper written in English by SanJuan E., Moriceau V., Tannier X., Bellot P., Mothe J. and published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
The INEX QA track aimed to evaluate complex question-answering tasks in which answers are short texts generated from Wikipedia by extracting relevant short passages and aggregating them into a coherent summary. In such a task, question answering, XML/passage retrieval, and automatic summarization are combined in order to get closer to real information needs. Building on the groundwork carried out in the 2009-2010 editions to define the sub-tasks and a novel evaluation methodology, the 2011 edition experimented with contextualizing tweets using a recent cleaned dump of Wikipedia. Participants had to contextualize 132 tweets from the New York Times (NYT). Both the informativeness and the readability of answers were evaluated. 13 teams from 6 countries actively participated in this track. This tweet contextualization task will continue in 2012 as part of the CLEF INEX lab with the same methodology and baseline, but on a much wider range of tweet types.
This publication has been cited 2 time(s), but no articles citing it are available in WikiPapers.