Overview of the INEX 2010 question answering track (QA@INEX)
Abstract: The INEX Question Answering track (QA@INEX) aims to evaluate a complex question-answering task using the Wikipedia. The set of questions is composed of factoid, precise questions that expect short answers, as well as more complex questions that can be answered by several sentences or by an aggregation of texts from different documents. Long answers have been evaluated based on the Kullback-Leibler (KL) divergence between n-gram distributions. This allowed summarization systems to participate. Most of them generated a readable extract of sentences from the top-ranked documents returned by a state-of-the-art document retrieval engine. Participants also tested several methods of question disambiguation. Evaluation has been carried out on a pool of real questions from OverBlog and Yahoo! Answers. Results tend to show that the baseline-restricted focused IR system minimizes KL divergence but misses readability, while summarization systems tend to use longer, stand-alone sentences, thus improving readability but increasing KL divergence.
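
The abstract describes scoring long answers by comparing the n-gram distribution of a submitted summary against that of reference texts using Kullback-Leibler divergence. As an illustration only, not the official QA@INEX evaluation software, a minimal Python sketch of such a measure could look as follows; the add-epsilon smoothing, tokenization, and function names are assumptions made for this example.

```python
# Minimal sketch of a KL-divergence score between n-gram distributions.
# Not the official QA@INEX tooling; smoothing and tokenization are assumed.
import math
from collections import Counter

def ngrams(tokens, n=2):
    """Return the list of n-grams (as tuples) of a token sequence."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def kl_divergence(reference_tokens, answer_tokens, n=2, epsilon=1e-9):
    """KL(reference || answer) over smoothed n-gram distributions."""
    ref_counts = Counter(ngrams(reference_tokens, n))
    ans_counts = Counter(ngrams(answer_tokens, n))
    vocab = set(ref_counts) | set(ans_counts)
    # Additive smoothing so both distributions are defined on the same vocabulary.
    ref_total = sum(ref_counts.values()) + epsilon * len(vocab)
    ans_total = sum(ans_counts.values()) + epsilon * len(vocab)
    kl = 0.0
    for gram in vocab:
        p = (ref_counts[gram] + epsilon) / ref_total  # reference probability
        q = (ans_counts[gram] + epsilon) / ans_total  # candidate-answer probability
        kl += p * math.log(p / q)
    return kl

# Example usage with toy texts (unigrams):
reference = "the inex question answering track evaluates complex questions".split()
answer = "complex questions are evaluated in the inex track".split()
print(kl_divergence(reference, answer, n=1))
```

Lower divergence indicates that the candidate answer's n-gram distribution is closer to the reference; as the abstract notes, this can be minimized at the cost of readability.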
Bibtex type: inproceedings
DOI: 10.1007/978-3-642-23577-1_24
Authors: SanJuan E., Bellot P., Moriceau V., Tannier X.
Extra keywords: Complex questions, Document Retrieval, KL-divergence, Kullback-Leibler divergence, Question answering, Question Answering track, Summarization systems, Wikipedia, Lakes, XML, Natural language processing systems
ISBN: 9783642235764
Language: English
Number of citations by publication: 0
Number of references by publication: 0
Pages: 269–281
Published in: Lecture Notes in Computer Science
Title: Overview of the INEX 2010 question answering track (QA@INEX)
Type: conference paper
Volume: 6932 LNCS
Year: 2011
Creation date: 8 November 2014 03:33:02
Categories: Publications without keywords parameter, Publications without license parameter, Publications without remote mirror parameter, Publications without archive mirror parameter, Publications without paywall mirror parameter, Conference papers, Publications without references parameter, Publications
Modification date: 8 November 2014 03:33:02
Date: 2011