Wikipedia-based Kernels for text categorization


Wikipedia-based Kernels for text categorization is a 2007 conference paper written in English by Minier Z., Bodo Z. and Csato L., published in Proceedings - 9th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing, SYNASC 2007.

Abstract

In recent years several models have been proposed for text categorization. Among these, one of the most widely applied is the vector space model (VSM), in which independence between indexing terms, usually words, is assumed. Since training corpora are relatively small compared with what would be required to cover a realistic number of words, the generalization power of the learning algorithms is low. It is assumed that a larger text corpus can improve the representation and hence the learning process. Based on the work of Gabrilovich and Markovitch [6], we incorporate Wikipedia articles into the system to obtain a word-distribution-based representation for documents. Extending the system with this new corpus increases the dimensionality of the representation, so clustering of features is needed. We use Latent Semantic Analysis (LSA), Kernel Principal Component Analysis (KPCA) and Kernel Canonical Correlation Analysis (KCCA), and present results of these experiments on the Reuters corpus.
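The sketch below is a minimal illustration, not the authors' implementation, of the kind of pipeline the abstract describes: a tf-idf vector space model, an extension in which each word is represented by its distribution over Wikipedia articles (here replaced by toy placeholder texts rather than real Wikipedia or Reuters data), and dimensionality reduction with LSA (truncated SVD) and Kernel PCA. It assumes scikit-learn; the kernel choice, component counts and all texts are illustrative, and KCCA is omitted because scikit-learn provides no kernel CCA.

# Minimal sketch (assumed libraries: scikit-learn, scipy); toy data, not the
# paper's corpus or settings.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD, KernelPCA

# Toy training documents standing in for the Reuters corpus.
docs = [
    "oil prices rise after opec meeting",
    "central bank raises interest rates",
    "new oil field discovered offshore",
    "stock markets react to interest rate decision",
]

# Toy stand-ins for Wikipedia articles used as the concept space.
wiki_articles = [
    "petroleum oil opec production prices barrels",
    "interest rates monetary policy central bank inflation",
    "stock market shares exchange trading investors",
]

# 1. Plain VSM: tf-idf document-term matrix (documents x words).
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(docs)                     # (n_docs, n_words)

# 2. Word distributional representation: each word is described by its
#    tf-idf weights over the Wikipedia articles (words x concepts).
concept_vectorizer = TfidfVectorizer(vocabulary=vectorizer.vocabulary_)
W = concept_vectorizer.fit_transform(wiki_articles).T  # (n_words, n_concepts)

# 3. Concept-space document representation (documents x concepts).
X_concept = X @ W

# 4a. LSA: truncated SVD of the concept-space document matrix.
lsa = TruncatedSVD(n_components=2, random_state=0)
X_lsa = lsa.fit_transform(X_concept)

# 4b. Kernel PCA with an RBF kernel as a nonlinear alternative.
kpca = KernelPCA(n_components=2, kernel="rbf", gamma=1.0)
X_kpca = kpca.fit_transform(X_concept.toarray())

print("LSA features:\n", X_lsa)
print("Kernel PCA features:\n", X_kpca)

The reduced feature matrices would then feed a classifier (e.g. an SVM) for the actual categorization step, which is outside the scope of this sketch.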

References

This section requires expansion. Please help!

Cited by

This publication has been cited 9 times, but the citing articles are not yet available in WikiPapers.