Volume, Issue, 2016, Pages 267-276

Quality management in crowdsourcing using gold judges behavior

Author keywords

Experimentation; Measurement

Indexed keywords

CROWDSOURCING; DATA MINING; GOLD; INFORMATION RETRIEVAL; MEASUREMENTS; QUALITY MANAGEMENT; WEBSITES; WORLD WIDE WEB;

EID: 84964397358     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1145/2835776.2835835     Document Type: Conference Paper
Times cited: 41

References (32)
  • 1
    • O. Alonso. Implementing crowdsourcing-based relevance experimentation: An industrial perspective. Information Retrieval, 16(2):101-120, Apr. 2013.
  • 4
    • R. Blanco, H. Halpin, D. M. Herzig, P. Mika, J. Pound, H. S. Thompson, and D. T. Tran. Repeatable and reliable search system evaluation using crowdsourcing. In W.-Y. Ma, J.-Y. Nie, R. A. Baeza-Yates, T.-S. Chua, and W. B. Croft, editors, SIGIR, pages 923-932. ACM, 2011.
  • 5
    • J. S. Downs, M. B. Holbrook, S. Sheng, and L. F. Cranor. Are your participants gaming the system?: Screening Mechanical Turk workers. In E. D. Mynatt, D. Schoner, G. Fitzpatrick, S. E. Hudson, W. K. Edwards, and T. Rodden, editors, CHI, pages 2399-2402. ACM, 2010.
  • 7
    • J. Friedman. Greedy function approximation: A gradient boosting machine. The Annals of Statistics, 29(5):1189-1232, 2001.
  • 8
    • J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337-407, 2000.
  • 14
    • E. Kamar, S. Hacker, and E. Horvitz. Combining human and machine intelligence in large-scale crowdsourcing. In Proc. of the 11th International Conference on Autonomous Agents and Multiagent Systems, Volume 1, pages 467-474. International Foundation for Autonomous Agents and Multiagent Systems, 2012.
  • 18
    • G. Kazai, J. Kamps, and N. Milic-Frayling. An analysis of human factors and label accuracy in crowdsourcing relevance judgments. Information Retrieval, 16(2):138-178, 2013.
  • 21
    • J. Le, A. Edmonds, V. Hester, and L. Biewald. Ensuring quality in crowdsourced search relevance evaluation: The effects of training question distribution. In SIGIR Workshop on Crowdsourcing for Search Evaluation, pages 21-26, 2010.
  • 22
    • M. Lease and G. Kazai. Overview of the TREC 2011 crowdsourcing track. In Proceedings of TREC, 2011.
  • 23
    • S. Nowak and S. Rüger. How reliable are annotations via crowdsourcing: A study about inter-annotator agreement for multi-label image annotation. In Proc. of the International Conference on Multimedia Information Retrieval, MIR '10, pages 557-566, New York, NY, USA, 2010. ACM.
  • 25
    • V. C. Raykar and S. Yu. Ranking annotators for crowdsourced labeling tasks. In NIPS, pages 1809-1817, 2011.
  • 30
    • R. Snow, B. O'Connor, D. Jurafsky, and A. Y. Ng. Cheap and fast -- but is it good?: Evaluating non-expert annotations for natural language tasks. In Proc. of the Conference on Empirical Methods in Natural Language Processing, EMNLP '08, pages 254-263, Stroudsburg, PA, USA, 2008. Association for Computational Linguistics.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS DB.