[1] O. Alonso, D. E. Rose, and B. Stewart. Crowdsourcing for relevance evaluation. SIGIR Forum, 42(2):9-15, Nov. 2008.
[2] J. A. Aslam, V. Pavlu, and E. Yilmaz. A statistical method for system evaluation using incomplete judgments. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06, pages 541-548, New York, NY, USA, 2006. ACM.
[3] C. Buckley and E. M. Voorhees. Retrieval evaluation with incomplete information. In Proceedings of the 27th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '04, pages 25-32, New York, NY, USA, 2004. ACM.
[5] B. Carterette, J. Allan, and R. Sitaraman. Minimal test collections for retrieval evaluation. In Proceedings of the 29th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '06, pages 268-275, New York, NY, USA, 2006. ACM.
[6] C. Cleverdon. The Cranfield tests on index language devices. In K. Sparck Jones and P. Willett, editors, Readings in Information Retrieval, pages 47-59. Morgan Kaufmann Publishers Inc., San Francisco, CA, USA, 1997.
[9] D. E. Difallah, G. Demartini, and P. Cudré-Mauroux. Pick-a-crowd: Tell me what you like, and I'll tell you what to do. In Proceedings of the 22nd International Conference on World Wide Web, pages 367-374. International World Wide Web Conferences Steering Committee, 2013.
[10] C. Eickhoff and A. P. de Vries. Increasing cheat robustness of crowdsourcing tasks. Information Retrieval, 16(2):121-137, 2013.
[11] C. Eickhoff, C. G. Harris, A. P. de Vries, and P. Srinivasan. Quality through flow and immersion: Gamifying crowdsourced relevance assessments. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 871-880. ACM, 2012.
[12] C. Grady and M. Lease. Crowdsourcing document relevance assessment with Mechanical Turk. In Proceedings of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk, CSLDAMT '10, pages 172-179, Stroudsburg, PA, USA, 2010. Association for Computational Linguistics.
[13] M. Hirth, T. Hoßfeld, and P. Tran-Gia. Cheat-detection mechanisms for crowdsourcing. Technical Report 474, University of Würzburg, 2010.
[14] J. Howe. Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business. Crown Publishing Group, New York, NY, USA, 1st edition, 2008.
[15] T. Joachims, L. Granka, B. Pan, H. Hembrooke, and G. Gay. Accurately interpreting clickthrough data as implicit feedback. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '05, pages 154-161, New York, NY, USA, 2005. ACM.
[16] G. Kazai, J. Kamps, M. Koolen, and N. Milic-Frayling. Crowdsourcing for book search evaluation: Impact of HIT design on comparative system ranking. In Proceedings of the 34th International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 205-214. ACM, 2011.
[18] G. Kazai, J. Kamps, and N. Milic-Frayling. An analysis of human factors and label accuracy in crowdsourcing relevance judgments. Information Retrieval, 16(2):138-178, 2013.
[19] G. Kazai and N. Milic-Frayling. On the evaluation of the quality of relevance assessments collected through crowdsourcing. In SIGIR Workshop on the Future of IR Evaluation. ACM, July 2009.
[20] A. Kittur, E. H. Chi, and B. Suh. Crowdsourcing user studies with Mechanical Turk. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 453-456, New York, NY, USA, 2008. ACM.
[21] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory, efficient algorithms and empirical studies. The Journal of Machine Learning Research, 9:235-284, 2008.
[23] M. Lease. On quality control and machine learning in crowdsourcing. Human Computation, 11:11, 2011.
[25] D. J. MacKay. Information-based objective functions for active data selection. Neural Computation, 4(4):590-604, 1992.
[27] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing submodular set functions - I. Mathematical Programming, 14(1):265-294, 1978.
[28] J. Pennington, R. Socher, and C. D. Manning. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, 2014.
[29] N. Ramakrishnan, C. Bailey-Kellogg, S. Tadepalli, and V. Pandey. Gaussian processes for active data mining of spatial aggregates. In Proceedings of the SIAM International Conference on Data Mining (SDM), pages 427-438. SIAM, 2005.
[30] M. Sabou, K. Bontcheva, L. Derczynski, and A. Scharl. Corpus annotation through crowdsourcing: Towards best practice guidelines. In Proceedings of the Ninth International Conference on Language Resources and Evaluation (LREC), pages 859-866, 2014.
[32] C. J. van Rijsbergen and K. Sparck Jones. A test for the separation of relevant and non-relevant documents in experimental retrieval collections. Journal of Documentation, 29(3):251-257, 1973.
[35] F. Wilcoxon. Individual comparisons by ranking methods. Biometrics Bulletin, 1(6):80-83, 1945.
[36] Y. Yan, G. M. Fung, R. Rosales, and J. G. Dy. Active learning from crowds. In Proceedings of the 28th International Conference on Machine Learning (ICML-11), pages 1161-1168, 2011.