[1] O. Alonso. Implementing crowdsourcing-based relevance experimentation: an industrial perspective. Information Retrieval, 16(2):101-120, Apr. 2013.
[2] Y. Bachrach, T. Graepel, T. Minka, and J. Guiver. How to grade a test without knowing the answers - a Bayesian graphical model for adaptive crowdsourcing and aptitude testing. In Proc. of the 29th International Conference on Machine Learning (ICML-12), pages 1183-1190, 2012.
[3] P. Bailey, N. Craswell, I. Soboroff, P. Thomas, A. P. de Vries, and E. Yilmaz. Relevance assessment: are judges exchangeable and does it matter? In Proc. of the 31st ACM SIGIR Conf. on Research and Development in IR, SIGIR '08, pages 667-674, New York, NY, USA, 2008. ACM.
[4] R. Blanco, H. Halpin, D. M. Herzig, P. Mika, J. Pound, H. S. Thompson, and D. T. Tran. Repeatable and reliable search system evaluation using crowdsourcing. In W.-Y. Ma, J.-Y. Nie, R. A. Baeza-Yates, T.-S. Chua, and W. B. Croft, editors, SIGIR, pages 923-932. ACM, 2011.
[5] J. S. Downs, M. B. Holbrook, S. Sheng, and L. F. Cranor. Are your participants gaming the system?: screening Mechanical Turk workers. In E. D. Mynatt, D. Schoner, G. Fitzpatrick, S. E. Hudson, W. K. Edwards, and T. Rodden, editors, CHI, pages 2399-2402. ACM, 2010.
[7] J. Friedman. Greedy function approximation: a gradient boosting machine. The Annals of Statistics, 29(5):1189-1232, 2001.
[8] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. The Annals of Statistics, 28(2):337-407, 2000.
[10] M. Hirth, S. Scheuring, T. Hoßfeld, C. Schwartz, and P. Tran-Gia. Predicting result quality in crowdsourcing using application layer monitoring. In 5th International Conference on Communications and Electronics (ICCE 2014), Da Nang, Vietnam, July 2014.
[11] D. Hovy, T. Berg-Kirkpatrick, A. Vaswani, and E. Hovy. Learning whom to trust with MACE. In Proceedings of NAACL-HLT, pages 1120-1130, 2013.
[12] J. Howe. Crowdsourcing: Why the Power of the Crowd Is Driving the Future of Business. Crown Publishing Group, New York, NY, USA, 1 edition, 2008.
[13] P. G. Ipeirotis, F. Provost, and J. Wang. Quality management on Amazon Mechanical Turk. In Proc. of the ACM SIGKDD Workshop on Human Computation, HCOMP '10, pages 64-67, New York, NY, USA, 2010. ACM.
[14] E. Kamar, S. Hacker, and E. Horvitz. Combining human and machine intelligence in large-scale crowdsourcing. In Proc. of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 1, pages 467-474. International Foundation for Autonomous Agents and Multiagent Systems, 2012.
[16] G. Kasneci, J. Van Gael, R. Herbrich, and T. Graepel. Bayesian knowledge corroboration with logical rules and user feedback. In Proc. of the 2010 European Conference on Machine Learning and Knowledge Discovery in Databases: Part II, ECML PKDD'10, pages 1-18, Berlin, Heidelberg, 2010. Springer-Verlag.
[18] G. Kazai, J. Kamps, and N. Milic-Frayling. An analysis of human factors and label accuracy in crowdsourcing relevance judgments. Information Retrieval, 16(2):138-178, 2013.
[19] A. Kittur, E. H. Chi, and B. Suh. Crowdsourcing user studies with Mechanical Turk. In Proceedings of the 26th SIGCHI Conference on Human Factors in Computing Systems, CHI '08, pages 453-456, New York, NY, USA, 2008. ACM.
[20] A. Kittur, J. V. Nickerson, M. Bernstein, E. Gerber, A. Shaw, J. Zimmerman, M. Lease, and J. Horton. The future of crowd work. In Proceedings of the 2013 Conference on Computer Supported Cooperative Work, pages 1301-1318. ACM, 2013.
[21] J. Le, A. Edmonds, V. Hester, and L. Biewald. Ensuring quality in crowdsourced search relevance evaluation: the effects of training question distribution. In SIGIR Workshop on Crowdsourcing for Search Evaluation, pages 21-26, 2010.
[22] M. Lease and G. Kazai. Overview of the TREC 2011 crowdsourcing track. In Proceedings of TREC, 2011.
[23] S. Nowak and S. Rüger. How reliable are annotations via crowdsourcing: a study about inter-annotator agreement for multi-label image annotation. In Proc. of the International Conference on Multimedia Information Retrieval, MIR '10, pages 557-566, New York, NY, USA, 2010. ACM.
[24] A. J. Quinn and B. B. Bederson. Human computation: a survey and taxonomy of a growing field. In Proc. of the 2011 Annual Conference on Human Factors in Computing Systems, CHI '11, pages 1403-1412, New York, NY, USA, 2011. ACM.
[25] V. C. Raykar and S. Yu. Ranking annotators for crowdsourced labeling tasks. In NIPS, pages 1809-1817, 2011.
[26] V. C. Raykar, S. Yu, L. H. Zhao, G. H. Valadez, C. Florin, L. Bogoni, and L. Moy. Learning from crowds. The Journal of Machine Learning Research, 11:1297-1322, 2010.
[27] J. Rzeszotarski and A. Kittur. CrowdScape: interactively visualizing user behavior and output. In Proc. of the 25th Annual ACM Symposium on User Interface Software and Technology, UIST '12, pages 55-62, New York, NY, USA, 2012. ACM.
[28] F. Scholer, A. Turpin, and M. Sanderson. Quantifying test collection quality based on the consistency of relevance judgements. In Proc. of the 34th ACM SIGIR Conf. on Research and Development in IR, SIGIR '11, pages 1063-1072, New York, NY, USA, 2011. ACM.
[30] R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proc. of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 254-263, Stroudsburg, PA, USA, 2008. Association for Computational Linguistics.
[31] L. Tran-Thanh, M. Venanzi, A. Rogers, and N. R. Jennings. Efficient budget allocation with accuracy guarantees for crowdsourcing classification tasks. In Proc. of the 2013 International Conference on Autonomous Agents and Multi-agent Systems, pages 901-908. International Foundation for Autonomous Agents and Multiagent Systems, 2013.