Overview of the CLEF 2008 Multilingual Question Answering Track
|Title||Overview of the CLEF 2008 Multilingual Question Answering Track|
|Author(s)||Forner P., Penas A., Agirre E., Alegria I., Forascu C., Moreau N., Osenova P., Prokopidis P., Rocha P., Sacaleanu B., Sutcliffe R., Tjong Kim Sang E.|
|Published in||Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)|
|Keyword(s)||Main tasks, Participating systems, Question Answering, Question Answering track, Speech transcriptions, Subtasks, Wikipedia, Word Sense Disambiguation, Natural language processing systems, Speech transmission, Transcription, Linguistics|
Overview of the CLEF 2008 Multilingual Question Answering Track is a 2009 conference paper written in English by Forner P., Penas A., Agirre E., Alegria I., Forascu C., Moreau N., Osenova P., Prokopidis P., Rocha P., Sacaleanu B., Sutcliffe R., Tjong Kim Sang E. and published in Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics).
The QA campaign at CLEF 2008 was largely the same as the one proposed the previous year. The results and analyses reported by the previous year's participants suggested that the changes introduced in that campaign had led to a drop in systems' performance, so for this year's competition it was decided to essentially replicate the previous exercise. Following that experience, some QA pairs were grouped into clusters. Every cluster was characterized by a topic (not given to participants), and the questions in a cluster contained co-references between one of them and the others. Moreover, as in the previous year, systems were allowed to search for answers in Wikipedia as a document corpus besides the usual newswire collection. In addition to the main task, three additional exercises were offered: the Answer Validation Exercise (AVE) and Question Answering on Speech Transcriptions (QAST), which continued the previous year's successful pilots, together with the new Word Sense Disambiguation for Question Answering (QA-WSD) exercise. As a general remark, the main task still proved to be very challenging for participating systems. In a shallow comparison with the previous year's results, the best overall accuracy dropped significantly from 42% to 19% in the multilingual subtasks, but increased somewhat in the monolingual subtasks, going from 54% to 63%.
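The accuracy figures quoted above follow the usual CLEF QA convention of counting the proportion of questions whose returned answer is judged correct. As a minimal, hypothetical sketch (the function and the run data below are assumptions for illustration, not the official evaluation script), such an overall accuracy could be computed as follows:

```python
# Minimal sketch of an overall-accuracy computation over per-question judgments.
# The labels "R" (right), "W" (wrong), "X" (inexact) and "U" (unsupported)
# correspond to the usual CLEF QA assessment categories; the input format is assumed.

def overall_accuracy(judgments):
    """Return the fraction of questions whose answer was judged right ("R")."""
    if not judgments:
        return 0.0
    right = sum(1 for j in judgments if j == "R")
    return right / len(judgments)

# Hypothetical run over five questions.
run = ["R", "W", "R", "X", "W"]
print(f"Overall accuracy: {overall_accuracy(run):.0%}")  # -> 40%
```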
This publication has been cited 4 times, but no entries for the citing articles are available in WikiPapers.