Searching for the co-occurrence of two visual concepts in unlabeled images is an important step towards answering complex user queries. Traditional visual search methods tackle such queries by combining the confidence scores of individual concept detectors. In this paper, we introduce the notion of bi-concepts, a new concept-based retrieval approach in which detectors for concept pairs are learned directly from social-tagged images. As the number of potential bi-concepts is gigantic, manually collecting training examples is infeasible. Instead, we propose a multimedia framework to collect de-noised positive as well as informative negative training examples from the social web, to learn bi-concept detectors from these examples, and to apply them in a search engine for retrieving bi-concepts in unlabeled images.
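As a rough illustration of the harvesting step, the sketch below selects candidate positives (images tagged with both concepts) and candidate negatives (images tagged with only one of the two) from social-tagged data. This is a minimal, naive tag-based selection; the function name, the toy tags, and the selection rule are assumptions for illustration, and the paper's actual framework additionally de-noises the positives and picks informative negatives.

```python
# Hedged sketch, NOT the paper's implementation: naive tag-based harvesting
# of training examples for a bi-concept such as ("beach", "dog").

def harvest_examples(tagged_images, concept_a, concept_b):
    """Split socially tagged images into candidate positives and negatives.

    tagged_images: list of (image_id, tag_list) pairs.
    Positives carry BOTH concept tags; images with exactly one of the two
    tags serve as (potentially informative) negatives.
    """
    positives, negatives = [], []
    for image_id, tags in tagged_images:
        tag_set = set(tags)
        if concept_a in tag_set and concept_b in tag_set:
            positives.append(image_id)
        elif concept_a in tag_set or concept_b in tag_set:
            negatives.append(image_id)
    return positives, negatives

# Toy example with made-up tags.
data = [
    ("img1", ["beach", "dog", "sun"]),  # both concepts -> positive
    ("img2", ["beach", "sea"]),         # one concept   -> negative
    ("img3", ["cat", "indoor"]),        # neither       -> ignored
]
pos, neg = harvest_examples(data, "beach", "dog")
print(pos, neg)  # ['img1'] ['img2']
```

In practice, social tags are noisy, which is exactly why the framework adds a de-noising step on top of this raw tag-based selection.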
We study the behavior of our bi-concept search engine using 1.2 million social-tagged images as a data source. Our experiments indicate that harvesting examples for bi-concepts differs from traditional single-concept methods, yet the examples can be collected with high accuracy using a multi-modal approach. We find that directly learning bi-concepts is better than oracle linear fusion of single-concept detectors, with a relative improvement of 100%. This study reveals the potential of learning high-order semantics from social images, at no annotation cost, suggesting promising new lines of research.
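The linear-fusion baseline mentioned above can be sketched as follows: each unlabeled image gets a confidence score from each of the two single-concept detectors, and images are ranked by a weighted sum of the two scores. This is a toy illustration of the baseline only (the function names, weights, and scores are made up); the paper's contribution is to replace this fusion with a single, directly learned bi-concept detector.

```python
# Hedged sketch of the linear-fusion baseline the paper compares against:
# ranking images for a bi-concept query by a weighted sum of two
# single-concept detector confidences. All names and scores are illustrative.

def fuse_scores(score_a, score_b, weight=0.5):
    """Linear fusion: weight goes to concept A, (1 - weight) to concept B."""
    return weight * score_a + (1.0 - weight) * score_b

def rank_images(images, weight=0.5):
    """Rank (image_id, score_a, score_b) tuples by fused score, best first."""
    return sorted(images, key=lambda t: fuse_scores(t[1], t[2], weight),
                  reverse=True)

# Toy confidence scores in [0, 1].
images = [
    ("img1", 0.90, 0.10),  # strong on concept A only
    ("img2", 0.60, 0.70),  # moderate on both -> best bi-concept candidate
    ("img3", 0.20, 0.95),  # strong on concept B only
]
print([i for i, _, _ in rank_images(images)])  # ['img2', 'img3', 'img1']
```

Even an oracle choice of the fusion weight cannot model how the two concepts interact visually when they co-occur, which is one intuition for why a directly learned bi-concept detector can do better.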
- Training data: Positive and negative training examples automatically harvested from the social web.
- Test data: Ground-truth annotations, image data, image search results, etc.
Xirong Li, Cees G.M. Snoek, Marcel Worring, and Arnold W.M. Smeulders, Harvesting Social Images for Bi-Concept Search, in IEEE Transactions on Multimedia (T-MM), volume 14, issue 4, pages 1091-1104, August 2012 [ PDF | BibTeX | datasets ]
We have released tag relevance learning 1.0 software.