Tony Russell-Rose is director of UXLabs, a UX research and design consultancy specializing in complex search and information access applications. Previously Tony has led R&D teams at Canon, Reuters, HP Labs and BT Labs, and seems happy to work for pretty much any organization that has 'Labs' in the title. He has a PhD in Artificial Intelligence and is author of Designing the Search Experience (Morgan Kaufmann, 2012).

Measuring Search Quality: Retrieval and Relevance Metrics

02.26.2012

I am trying to put together a framework for search quality evaluation for a specialist information provider.

At the moment quality is measured by counting the number of hits for certain key docs across various queries, and monitoring changes on a regular schedule. I’d like to broaden this out into something more scalable and robust, from which a more extensive range of metrics can be calculated. (As an aside, I know there are many ways of evaluating the overall search experience, but I’m focusing solely on ranked retrieval and relevance here).
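For concreteness, the current approach amounts to something like the following sketch, which checks whether each query's key documents appear in the top N results so the hit counts can be tracked over time. The search_top_n function and the data shapes are assumptions for illustration, not a description of our actual system:

# Hypothetical sketch of the existing measurement: for each tracked query,
# count how many of its key documents appear in the top N results, so the
# figures can be monitored on a regular schedule.

def count_key_doc_hits(search_top_n, tracked_queries, n=10):
    """tracked_queries maps query text -> set of key document IDs.
    search_top_n(query, n) is assumed to return a ranked list of doc IDs."""
    report = {}
    for query, key_docs in tracked_queries.items():
        results = search_top_n(query, n)
        hits = key_docs.intersection(results)
        report[query] = {"hits": len(hits), "expected": len(key_docs)}
    return report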

We are in the fortunate position of being able to acquire binary relevance judgements from SMEs, so we can aspire to something like the TREC approach:

http://trec.nist.gov/data/reljudge_eng.html
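With binary relevance judgements from SMEs in hand, standard ranked-retrieval metrics follow directly. A minimal sketch, assuming each query gives a ranked list of doc IDs (results) and a set of SME-judged relevant doc IDs (relevant):

# Minimal sketch of TREC-style metrics over binary relevance judgements.
# Mean average precision (MAP) is then just the mean of average_precision
# across all evaluated queries.

def precision_at_k(results, relevant, k):
    """Fraction of the top k results judged relevant."""
    return sum(1 for doc in results[:k] if doc in relevant) / k

def average_precision(results, relevant):
    """Average of precision at each rank where a relevant doc is retrieved."""
    if not relevant:
        return 0.0
    hits = 0
    precisions = []
    for rank, doc in enumerate(results, start=1):
        if doc in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / len(relevant)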

But of course we are running just a single site search engine here, so we can't pool results across runs to produce a consolidated 'gold standard' result set as you would in the TREC framework.

I am sure this scenario is repeated the world over. One solution I can think of is to run your existing search engine with various alternative configurations, e.g. precision-oriented, recall-oriented, freshness-oriented, and so on, and aggregate the top N results from each to emulate the pooling approach (a rough sketch follows below). Can anyone suggest any others? Or perhaps an alternative method entirely?
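A rough sketch of that pooling idea, assuming a run_query(query, config, n) interface and some illustrative configuration names (both are assumptions, not part of any particular engine's API):

# Emulate TREC-style pooling with a single engine: run each query against
# several alternative configurations, take the top N from each, and merge
# them into one deduplicated pool for the SMEs to judge.

def build_judgment_pool(run_query, query, configs, n=20):
    pool = []
    seen = set()
    for config in configs:
        for doc_id in run_query(query, config, n):
            if doc_id not in seen:
                seen.add(doc_id)
                pool.append(doc_id)
    return pool

# e.g. build_judgment_pool(run_query, "pension tax relief",
#                          configs=["precision", "recall", "freshness"])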

Jason Hull, one of DZone's MVBs, recommended looking at this article for some answers.

Published at DZone with permission of Tony Russell-Rose, author and DZone MVB.


Comments

Goel Yatendra replied on Thu, 2012/03/15 - 3:34pm

I think you are going to need to make some more decisions about what exactly you want to learn from the evaluation. The (not-always-successfully-obtained) goal in TREC is to create reusable collections that support a wide variety of measures.
