| image processing|
|Related keyword(s)||wikimedia commons|
image processing is included as a keyword or extra keyword in 0 datasets, 0 tools and 5 publications.
There are no datasets for this keyword.
There are no tools for this keyword.
|Title||Author(s)||Published in||Language||Date||Abstract||R||C|
|Position-wise contextual advertising: Placing relevant ads at appropriate positions of a web page||ZongDa Wu||Neurocomputing||English||2013||Web advertising, a form of online advertising which uses the Internet as a medium to post product or service information and attract customers, has become one of the most important marketing channels. As one prevalent type of web advertising, contextual advertising refers to the placement of the most relevant ads at appropriate positions of a web page, so as to provide a better user experience and increase the user's ad-click rate. However, most existing contextual advertising techniques only consider how to select ads as relevant to a given page as possible, without considering the positional effect of the ad placement on the page, resulting in unsatisfactory ad local context relevance. In this paper, we address the novel problem of position-wise contextual advertising, i.e., how to select and place relevant ads properly for a target web page. In our proposed approach, the relevant ads are selected based on not only global context relevance but also local context relevance, so that the embedded ads yield contextual relevance to both the whole target page and the insertion positions where the ads are placed. In addition, to improve the accuracy of the global and local context relevance measures, rich Wikipedia knowledge is used to enhance the semantic feature representation of pages and ad candidates. Last, we evaluate our approach using a set of ads and pages downloaded from the Internet, and demonstrate its effectiveness. © 2013 Elsevier B.V.||0||0|
|Comparison of different ontology-based query expansion algorithms for effective image retrieval||Leung C.H.C.||Communications in Computer and Information Science||English||2011||We study several semantic concept-based query expansion and re-ranking schemes and compare different ontology-based expansion methods in image search and retrieval. In particular, we exploit two concept similarities drawn from different concept-expansion ontologies: WordNet similarity and Wikipedia similarity. Furthermore, we compare the semantic distance between keywords against the precision of image search results with query expansion under the different concept expansion algorithms. We also compare the image retrieval precision of searching with the expanded query and with the original plain query. Preliminary experiments demonstrate that the two proposed retrieval mechanisms have the potential to outperform unaided approaches.||0||0|
|Combining text/image in WikipediaMM task 2009||Moulin C.||Lecture Notes in Computer Science||English||2010||This paper reports our multimedia information retrieval experiments carried out for the ImageCLEF Wikipedia task 2009. We extend our previous multimedia model, defined as a vector of textual and visual information based on a bag-of-words approach. We extract additional textual information from the original Wikipedia articles and compute several image descriptors (local colour and texture features). We show that linearly combining textual and visual information significantly improves the results.||0||0|
|Automated object shape modelling by clustering of web images||Scardino G.||VISAPP 2008 - 3rd International Conference on Computer Vision Theory and Applications, Proceedings||English||2008||The paper describes a framework for creating shape models of an object using images from the web. Results obtained from different image search engines using simple keywords are filtered so as to select images showing a single object with a well-defined contour. In order to obtain a large set of valid images, the implemented system uses lexical web databases (e.g. WordNet) or free web encyclopedias (e.g. Wikipedia) to get more keywords correlated with the given object. The shapes extracted from the selected images are represented by Fourier descriptors and grouped by the K-means algorithm. Finally, the most representative shapes of the main clusters are taken as prototypical contours of the object. Preliminary experimental results illustrate the effectiveness of the proposed approach.||0||0|
|Object image retrieval by exploiting online knowledge resources||Gang Wang||26th IEEE Conference on Computer Vision and Pattern Recognition, CVPR||English||2008||We describe a method to retrieve images found on web pages with specified object class labels, using an analysis of the text around the image and of the image's appearance. Our method determines whether an object is both described in the text and appears in an image, using a discriminative image model and a generative text model. Our models are learnt by exploiting established online knowledge resources (Wikipedia pages for text; the Flickr and Caltech data sets for images). These resources provide rich text and object appearance information. We describe results on two data sets. The first is Berg's collection of ten animal categories; on this data set, we outperform previous approaches [7, 33]. We have also collected five more categories. Experimental results show the effectiveness of our approach on this new data set.||0||0|
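The position-wise advertising abstract above combines a global relevance score (ad versus the whole page) with a local one (ad versus the text block at the insertion position). A minimal sketch of such a weighted combination using cosine similarity over term-weight vectors — the vocabulary, weights, and the `alpha` parameter are illustrative assumptions, not the paper's actual model or data:

```python
import math

def cosine(u, v):
    # Cosine similarity between two sparse term-weight vectors (dicts).
    dot = sum(w * v.get(t, 0.0) for t, w in u.items())
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def score_ad(ad_vec, page_vec, block_vec, alpha=0.5):
    # Weighted mix of global relevance (whole page) and local relevance
    # (the text block around the candidate insertion position).
    return alpha * cosine(ad_vec, page_vec) + (1 - alpha) * cosine(ad_vec, block_vec)

# Toy page, insertion block, and ad candidates (hypothetical terms).
page  = {"camera": 2.0, "lens": 1.0, "review": 1.0}
block = {"lens": 2.0, "zoom": 1.0}
ads = {
    "ad_lens":  {"lens": 1.0, "zoom": 1.0},
    "ad_hotel": {"hotel": 1.0, "travel": 1.0},
}
best = max(ads, key=lambda a: score_ad(ads[a], page, block))
print(best)  # → ad_lens
```

Tuning `alpha` trades off whole-page relevance against relevance to the specific insertion position.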
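The ontology-based query expansion abstract above adds concepts whose similarity to a query term exceeds a threshold. A minimal sketch with a hand-built similarity table standing in for WordNet- or Wikipedia-derived scores — the table entries and threshold are illustrative assumptions only:

```python
# Toy concept-similarity table; in practice these scores would come from
# an ontology measure such as WordNet or Wikipedia similarity.
SIMILARITY = {
    "car": {"automobile": 0.95, "vehicle": 0.7, "wheel": 0.3},
    "dog": {"puppy": 0.9, "canine": 0.85, "animal": 0.5},
}

def expand_query(terms, threshold=0.6):
    # Append, for each query term, the related concepts whose similarity
    # clears the threshold, most similar first.
    expanded = list(terms)
    for t in terms:
        for concept, sim in sorted(SIMILARITY.get(t, {}).items(),
                                   key=lambda kv: -kv[1]):
            if sim >= threshold:
                expanded.append(concept)
    return expanded

print(expand_query(["car"]))  # → ['car', 'automobile', 'vehicle']
```

Raising the threshold yields a more conservative expansion; the abstract's comparison is essentially between different ways of filling this similarity table.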
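The shape-modelling abstract above represents contours by Fourier descriptors and groups them with K-means. A minimal sketch on synthetic contours, assuming NumPy is available — the descriptor length, sample shapes, and cluster count are illustrative choices, not the paper's setup:

```python
import numpy as np

def fourier_descriptors(contour, n=4):
    # Treat (x, y) contour points as complex numbers, take the FFT, and keep
    # the n lowest positive and negative harmonics; normalising by the first
    # harmonic gives rough translation/scale invariance.
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    coeffs = np.concatenate([F[1:n + 1], F[-n:]])
    return np.abs(coeffs) / (np.abs(F[1]) + 1e-12)

def kmeans(X, k, iters=20, seed=0):
    # Plain Lloyd's algorithm with random initial centres.
    rng = np.random.default_rng(seed)
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centres[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if (labels == j).any():
                centres[j] = X[labels == j].mean(axis=0)
    return labels

# Two toy shape families sampled as closed contours:
# near-circles (aspect ratios ~1) and flat ellipses (aspect ratios ~0.3).
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
shapes = [np.stack([np.cos(t), s * np.sin(t)], axis=1)
          for s in (1.0, 1.1, 0.3, 0.35)]
X = np.array([fourier_descriptors(c) for c in shapes])
labels = kmeans(X, 2)
print(labels)  # circles and ellipses fall into separate clusters
```

In the paper's pipeline the contours would come from segmented web images rather than synthetic curves, and the cluster medoids would serve as the prototypical shapes.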