BUPT_WILDCAT at TREC Crowdsourcing Track: Crowdsourcing for Relevance Evaluation.
Tao Xia, Chuang Zhang, Tai Li, Jingjing Xie. Published in: TREC (2011)
Keyphrases
- interactive information retrieval
- relevance assessments
- relevance judgments
- test collection
- information retrieval
- user feedback
- ir evaluation
- relevance judgements
- relevance feedback
- information seeking
- evaluation campaigns
- retrieval systems
- crowdsourced
- information retrieval systems
- amazon mechanical turk
- human computation
- human judgments
- average precision
- graded relevance
- user studies
- evaluation measures
- retrieval effectiveness
- question answering
- implicit feedback
- retrieved documents
- evaluation metrics
- evaluation method
- user interaction
- image retrieval