A semantic approach to recommending text advertisements for images
|Author(s)||Zhang W., Tian L., Sun X., Wang H., Yu Y.|
|Published in||RecSys'12 - Proceedings of the 6th ACM Conference on Recommender Systems|
|Keyword(s)||Crossmedia mining, Semantic matching, Visual contextual advertising (Extra: Contextual advertising, Cross-media, High quality images, Image annotation, Knowledge bases, Search-based, Semantic approach, State of the art, Target images, Test images, Textual information, Wikipedia, Knowledge based systems, Marketing, Recommender systems, Websites, Semantics)|
A semantic approach to recommending text advertisements for images is a 2012 conference paper written in English by Zhang W., Tian L., Sun X., Wang H., Yu Y. and published in RecSys'12 - Proceedings of the 6th ACM Conference on Recommender Systems.
In recent years, more and more images have been uploaded and published on the Web. Along with text Web pages, images have become an important medium for placing relevant advertisements. Visual contextual advertising, a young research area, refers to finding relevant text advertisements for a target image without any textual information (e.g., tags). There are two existing approaches: advertisement search based on image annotation, and, more recently, advertisement matching based on feature translation between images and texts. However, the state of the art fails to achieve satisfactory results because the recommended advertisements are syntactically matched but semantically mismatched. In this paper, we propose a semantic approach to improving the performance of visual contextual advertising. More specifically, we exploit a large high-quality image knowledge base (ImageNet) and a widely-used text knowledge base (Wikipedia) to build a bridge between target images and advertisements. The image-advertisement match is built by mapping images and advertisements into the respective knowledge bases and then finding semantic matches between the two knowledge bases. The experimental results show that semantic match significantly outperforms syntactic match on test images from Flickr. We also show that our approach yields a large improvement of 16.4% in precision on the top 10 matches over previous work, with more semantically relevant advertisements recommended. Copyright © 2012 by the Association for Computing Machinery, Inc. (ACM).
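The matching step described in the abstract (mapping images and advertisements into concept spaces and ranking ads by semantic similarity) can be illustrated with a small sketch. This is not the paper's actual pipeline: the concept vectors below are toy data standing in for the ImageNet-side image concepts and Wikipedia-side ad concepts, and cosine similarity is one plausible scoring choice.

```python
from math import sqrt

# Hypothetical concept-weight vectors. In the paper's setting, the image vector
# would come from mapping the image into ImageNet, and each ad vector from
# mapping its text into Wikipedia, with concepts linked across the two bases.
image_concepts = {"dog": 0.8, "grass": 0.5, "frisbee": 0.3}

ads = {
    "ad_pet_food":  {"dog": 0.9, "cat": 0.4},
    "ad_lawnmower": {"grass": 0.7, "garden": 0.6},
    "ad_laptop":    {"computer": 0.9, "keyboard": 0.5},
}

def cosine(a, b):
    """Cosine similarity between two sparse concept-weight vectors."""
    dot = sum(w * b.get(c, 0.0) for c, w in a.items())
    na = sqrt(sum(w * w for w in a.values()))
    nb = sqrt(sum(w * w for w in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_ads(image_vec, ad_vecs, k=10):
    """Rank candidate ads by semantic similarity to the image; return top-k names."""
    scored = sorted(ad_vecs.items(),
                    key=lambda kv: cosine(image_vec, kv[1]),
                    reverse=True)
    return [name for name, _ in scored[:k]]

print(rank_ads(image_concepts, ads))
# → ['ad_pet_food', 'ad_lawnmower', 'ad_laptop']
```

A syntactic matcher comparing raw words would score the unrelated laptop ad and the pet-food ad identically (no shared tokens with an untagged image); matching in a shared concept space is what lets semantically related ads rank first.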
This publication has been cited 1 time, but no citing articles are currently available in WikiPapers.