This paper introduces a multimodal approach for reranking image retrieval results based on relevance feedback. We consider the problem of reordering the ranked list of images returned by an image retrieval system so that images relevant to a query are moved to the first positions of the list. We propose a Markov random field (MRF) model that aims to classify the images in the initial retrieval-result list as relevant or irrelevant; the output of the MRF is used to generate a new ranked list of images. The MRF takes into account (1) the rank information provided by the initial retrieval system, (2) similarities among images in the list, and (3) relevance feedback information. Hence, the problem of image reranking is reduced to that of minimizing an energy function that represents a trade-off between image relevance and interimage similarity. The proposed MRF is multimodal, as it can take advantage of both the visual and textual information with which images are described. We report experimental results on the IAPR TC12 collection using visual and textual features to represent images. The results show that our method is able to improve the ranking provided by the base retrieval system. Moreover, the multimodal MRF outperforms the unimodal (i.e., either text-based or image-based) MRFs that we have developed in previous work. Furthermore, the proposed MRF outperforms baseline multimodal methods that combine information from unimodal MRFs.
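For intuition, a generic MRF energy of the kind described above can be sketched as follows; the specific potentials and the weight $\lambda$ here are illustrative assumptions, not the paper's exact formulation:

$$
E(\mathbf{x}) \;=\; \sum_{i} \phi_i(x_i) \;+\; \lambda \sum_{(i,j) \in \mathcal{N}} \psi_{ij}(x_i, x_j),
$$

where $x_i \in \{\text{relevant}, \text{irrelevant}\}$ is the label assigned to image $i$ in the list, the unary potential $\phi_i$ would encode the initial rank and relevance feedback information for image $i$, and the pairwise potential $\psi_{ij}$ would penalize assigning different labels to images that are similar in the visual and/or textual modality. Minimizing $E(\mathbf{x})$ over the label assignment $\mathbf{x}$ then realizes the stated trade-off between image relevance and interimage similarity.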