This paper describes an approach for exploiting the implicit user feedback gathered during interactive video retrieval tasks. We propose a framework in which the video is first indexed according to temporal, textual, and visual features, and implicit user feedback analysis is then performed using a graph-based methodology. The generated graph encodes the semantic relations between video segments based on past user interaction and is subsequently used to generate recommendations. Moreover, we combine the visual features and the implicit feedback information by training a support vector machine classifier with examples generated from the aforementioned graph, in order to optimize the query-by-visual-example search. The proposed framework is evaluated through real-user experiments. The results show a significant improvement in precision and recall after the exploitation of implicit user feedback, as well as an improved ranking for most of the evaluated queries by visual example.
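To make the pipeline concrete, the following is a minimal sketch (not the authors' code) of how training examples for the query-by-visual-example classifier could be derived from an implicit-feedback graph over video segments. All identifiers, the feature representation, and the weight-thresholding rule are illustrative assumptions.

```python
# Minimal sketch, assuming: segments carry fixed-length visual feature vectors,
# and edge weights in the graph reflect how often users interacted with both
# segments within the same retrieval session. Not the paper's implementation.
import numpy as np
import networkx as nx
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Hypothetical visual features for 20 video segments (e.g., color/texture descriptors).
features = rng.normal(size=(20, 16))

# Implicit-feedback graph: nodes are segments, weighted edges encode co-interaction.
G = nx.Graph()
G.add_nodes_from(range(20))
for u, v in [(0, 1), (0, 2), (1, 2), (3, 4), (5, 6)]:
    G.add_edge(u, v, weight=rng.uniform(0.5, 1.0))

# For a given query segment, treat strongly connected segments as positive
# examples and unconnected ones as negatives (illustrative labeling rule).
query = 0
positives = [v for v in G.neighbors(query) if G[query][v]["weight"] > 0.4]
negatives = [v for v in G.nodes if v != query and v not in positives][: len(positives) * 3]

# Train an SVM on the visual features of the graph-derived examples.
X = features[positives + negatives]
y = [1] * len(positives) + [0] * len(negatives)
clf = SVC(kernel="rbf").fit(X, y)

# Re-rank all segments for the query by the classifier's decision score,
# combining visual similarity with the implicit-feedback signal.
scores = clf.decision_function(features)
ranking = np.argsort(-scores)
print(ranking[:5])
```

In this reading, the graph supplies the labels while the SVM generalizes them over the visual feature space, so segments never co-viewed with the query can still be promoted if they look similar to those that were.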