Cross-modal topic correlations for multimedia retrieval

From WikiPapers

Cross-modal topic correlations for multimedia retrieval is a 2012 conference paper written in English by Yu J., Cong Y., Qin Z., Wan T. and published in Proceedings - International Conference on Pattern Recognition.

Abstract

In this paper, we propose a novel approach for cross-modal multimedia retrieval by jointly modeling the text and image components of multimedia documents. In this model, the image component is represented by local SIFT descriptors under the bag-of-features model, and the text component is represented by a topic distribution learned with latent topic models such as latent Dirichlet allocation (LDA). The latent semantic relations between texts and images are captured by correlations between the word topics and the topics of image features. A statistical correlation model conditioned on category information is investigated. Experimental results on a benchmark Wikipedia dataset show that the proposed approach outperforms state-of-the-art cross-modal multimedia retrieval systems.
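The retrieval step described above can be illustrated with a minimal sketch: once both modalities are mapped into topic distributions (text via LDA, images via topics over bag-of-features codewords), a query in one modality is matched against documents in the other by a similarity score in the shared topic space. The code below is a hypothetical illustration using cosine similarity, not the paper's category-conditioned correlation model; all vectors and names are invented for the example.

```python
import math

def cosine(u, v):
    """Cosine similarity between two topic distributions."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def retrieve(query_topics, image_topics):
    """Rank image documents by topic-space similarity to a text query.

    Returns (index, score) pairs, best match first.
    """
    scores = [(i, cosine(query_topics, t)) for i, t in enumerate(image_topics)]
    return sorted(scores, key=lambda s: s[1], reverse=True)

# Hypothetical 3-topic distributions (illustrative, not from the paper's dataset).
query = [0.7, 0.2, 0.1]   # text query's LDA topic distribution
images = [
    [0.1, 0.8, 0.1],      # image document 0
    [0.6, 0.3, 0.1],      # image document 1
    [0.2, 0.2, 0.6],      # image document 2
]
ranking = retrieve(query, images)
print(ranking[0][0])  # index of the best-matching image
```

In the paper's actual model, the plain cosine score would be replaced by a learned statistical correlation between text topics and image-feature topics, conditioned on the document category.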

