Our paper “Pseudo Test Collections for Training and Tuning Microblog Rankers” by Richard Berendsen, Manos Tsagkias, Maarten de Rijke, and Wouter Weerkamp has been accepted at SIGIR 2013, in Dublin, Ireland, 28 July–1 August. The abstract follows.
Recent years have witnessed a persistent interest in generating pseudo test collections, both for training and for evaluation purposes. We describe a method for generating queries and relevance judgments for microblog search in an unsupervised way. Our starting point is this intuition: tweets with a hashtag are relevant to the topic covered by the hashtag, and hence to a suitable query derived from the hashtag. Our baseline method selects all commonly used hashtags and takes all associated tweets as relevance judgments; we then generate a query from these tweets. Next, we generate a timestamp for each query, allowing us to use temporal information in the training process. We then enrich the generation process with knowledge derived from an editorial test collection for microblog search.
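To make the baseline idea concrete, here is a minimal sketch of hashtag-based pseudo test collection generation. It is not the paper's actual pipeline: the threshold for "commonly used" hashtags, the choice of query terms (most frequent non-hashtag words in the tagged tweets), and the query timestamp (median tweet time) are all illustrative assumptions.

```python
from collections import Counter

def build_pseudo_collection(tweets, min_hashtag_count=2, query_terms=2):
    """Build pseudo queries and relevance judgments from hashtags.

    tweets: list of dicts with 'id', 'text', 'timestamp' (seconds).
    Intuition from the paper: a tweet containing a hashtag is relevant
    to the topic covered by that hashtag.
    """
    # Group tweets by the hashtags they contain.
    by_tag = {}
    for t in tweets:
        for token in t["text"].split():
            if token.startswith("#") and len(token) > 1:
                by_tag.setdefault(token.lower(), []).append(t)

    collection = {}
    for tag, tagged in by_tag.items():
        if len(tagged) < min_hashtag_count:
            continue  # keep only commonly used hashtags (threshold assumed)
        # Derive a query from the tagged tweets: here, simply their most
        # frequent non-hashtag words (one possible instantiation).
        words = Counter(
            w.lower()
            for t in tagged
            for w in t["text"].split()
            if not w.startswith("#")
        )
        query = " ".join(w for w, _ in words.most_common(query_terms))
        # Assign a query timestamp: the median tweet time (an assumption;
        # the abstract only says a timestamp is generated per query).
        times = sorted(t["timestamp"] for t in tagged)
        collection[tag] = {
            "query": query,
            "timestamp": times[len(times) // 2],
            "relevant": [t["id"] for t in tagged],  # relevance judgments
        }
    return collection

tweets = [
    {"id": 1, "text": "great goal tonight #worldcup", "timestamp": 100},
    {"id": 2, "text": "what a goal #worldcup", "timestamp": 200},
    {"id": 3, "text": "new phone day", "timestamp": 300},
]
pseudo = build_pseudo_collection(tweets)
```

In this toy run, only `#worldcup` clears the frequency threshold, so the collection contains one pseudo query whose relevant set is tweets 1 and 2.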
We use our pseudo test collections in two ways. First, we tune parameters of a variety of well-known retrieval methods on them. Correlations with parameter sweeps on an editorial test collection are high on average, with a large variance over retrieval algorithms. Second, we use the pseudo test collections as training sets in a learning to rank scenario. Performance close to the training error on the editorial collection is achieved in all cases. Our results demonstrate the utility of tuning and training microblog search retrieval algorithms on automatically generated training material.
We are working towards releasing a pre-print soon.