Concordia University at the TREC-15 QA track
Abstract In this paper, we describe the system we used for the TREC Question Answering Track. For factoid and list questions, two different approaches were exploited: a redundancy-based approach using a modified version of Aranea, and a parse-tree-based unifier. The modified version of Aranea essentially uses Google snippets for extracting answers and then projects them onto the AQUAINT collection. The parse-tree-based unifier is a linguistically based approach that chunks candidate sentences syntactically and uses a heuristic measure to compute the similarity of each chunk in a candidate to its counterpart in the question. To answer the other types of questions, our system extracts from Wikipedia articles a list of interest-marking terms related to the topic and uses them to extract and score sentences from the AQUAINT document collection using various interest-marking triggers. We submitted 3 runs using different variations of the system. For the factoid questions, the average score of our 3 runs is 0.202; for the list questions, we achieved an average of 0.084; and for the "Other" questions, we achieved an average F-score of 0.192.
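To make the "Other"-question strategy in the abstract concrete, here is a minimal sketch in Python of interest-term extraction and sentence scoring. It is an illustration under simplifying assumptions, not the authors' implementation: the capitalization heuristic for picking interest-marking terms, the plain term-overlap score, and all function names are hypothetical stand-ins for the paper's richer interest-marking triggers.

from collections import Counter

PUNCT = ".,;:()\"'"

def extract_interest_terms(article_text, top_n=20):
    # Naive stand-in: treat the most frequent capitalized words of the
    # topic's Wikipedia article as interest-marking terms. A real system
    # would filter stopwords and use proper term extraction.
    tokens = [t.strip(PUNCT) for t in article_text.split()]
    capitalized = [t.lower() for t in tokens if t[:1].isupper() and len(t) > 2]
    return {term for term, _ in Counter(capitalized).most_common(top_n)}

def score_sentence(sentence, interest_terms):
    # Score a candidate AQUAINT sentence by how many interest-marking
    # terms it contains (simple overlap; the paper uses several triggers).
    words = {w.strip(PUNCT).lower() for w in sentence.split()}
    return len(words & interest_terms)

def rank_sentences(sentences, interest_terms):
    # Order candidate sentences by descending interest score.
    return sorted(sentences, key=lambda s: score_sentence(s, interest_terms), reverse=True)

if __name__ == "__main__":
    article = "The Hubble Space Telescope was launched by NASA in 1990."
    candidates = [
        "NASA launched the Hubble Space Telescope aboard the shuttle Discovery.",
        "The weather in Houston was mild that spring.",
    ]
    terms = extract_interest_terms(article, top_n=10)
    for s in rank_sentences(candidates, terms):
        print(score_sentence(s, terms), s)

On these toy inputs the demo ranks the on-topic sentence first (overlap 5 vs. 1), which is the essential behavior the abstract describes: sentences from the document collection are scored by how strongly they match topic terms mined from Wikipedia.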
Bibtextype inproceedings
Has author Kosseim L., Beaudoin A., Keighbadi A., Razmara M.
Has extra keyword Concordia University, Document collection, F-score, QA tracks, Question Answering track, Wikipedia articles, Forestry, Information retrieval
Issn 1048-776X
Language English
Number of citations by publication 0
Number of references by publication 0
Published in NIST Special Publication
Title Concordia University at the TREC-15 QA track
Type conference paper
Year 2006
Creation date 7 November 2014 05:57:04
Categories Publications without keywords parameter, Publications without license parameter, Publications without DOI parameter, Publications without remote mirror parameter, Publications without archive mirror parameter, Publications without paywall mirror parameter, Conference papers, Publications without references parameter, Publications
Modification date 7 November 2014 05:57:04
Date 2006