Volume 19-23-Oct-2015, 2015, Pages 783-790

Exploiting document content for efficient aggregation of crowdsourcing votes

Author keywords

Clustering hypothesis; Crowdsourcing; Relevance assessment

Indexed keywords

KNOWLEDGE MANAGEMENT;

EID: 84958231872     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1145/2806416.2806460     Document Type: Conference Paper
Times cited: 16

References (33)
  • 1. Omar Alonso, Daniel E. Rose, and Benjamin Stewart. Crowdsourcing for relevance evaluation. In ACM SIGIR Forum, volume 42, pages 9-15. ACM, 2008.
  • 3. Daren C. Brabham. Moving the crowd at Threadless: Motivations for participation in a crowdsourcing application. Information, Communication & Society, 13(8):1122-1145, 2010.
  • 7. Vitor R. Carvalho, Matthew Lease, and Emine Yilmaz. Crowdsourcing for search evaluation. In ACM SIGIR Forum, volume 44, pages 17-22. ACM, 2011.
  • 8. Alexander Philip Dawid and Allan M. Skene. Maximum likelihood estimation of observer error-rates using the EM algorithm. Applied Statistics, pages 20-28, 1979.
  • 9. Djellel Eddine Difallah, Gianluca Demartini, and Philippe Cudré-Mauroux. Pick-a-crowd: Tell me what you like, and I'll tell you what to do. In Proceedings of the 22nd International Conference on World Wide Web, pages 367-374. International World Wide Web Conferences Steering Committee, 2013.
  • 10. Carsten Eickhoff and Arjen P. de Vries. Increasing cheat robustness of crowdsourcing tasks. Information Retrieval, 16(2):121-137, 2013.
  • 14. Mehdi Hosseini, Ingemar J. Cox, Nataša Milić-Frayling, Gabriella Kazai, and Vishwa Vinay. On aggregating labels from multiple crowd workers to infer relevance of documents. In Advances in Information Retrieval, pages 182-194. Springer, 2012.
  • 17. David R. Karger, Sewoong Oh, and Devavrat Shah. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research, 62(1):1-24, 2014.
  • 19. Gabriella Kazai, Jaap Kamps, and Natasa Milic-Frayling. An analysis of human factors and label accuracy in crowdsourcing relevance judgments. Information Retrieval, 16(2):138-178, 2013.
  • 20. Gabriella Kazai and Natasa Milic-Frayling. On the evaluation of the quality of relevance assessments collected through crowdsourcing. In SIGIR 2009 Workshop on the Future of IR Evaluation, page 21, 2009.
  • 23. Matthew Lease and Emine Yilmaz. Crowdsourcing for information retrieval. In ACM SIGIR Forum, volume 45, pages 66-75. ACM, 2012.
  • 27. Cornelis J. van Rijsbergen and Karen Spärck Jones. A test for the separation of relevant and non-relevant documents in experimental retrieval collections. Journal of Documentation, 29(3):251-257, 1973.
  • 33. Yu Zhang and Mihaela van der Schaar. Reputation-based incentive protocols in crowdsourcing applications. In INFOCOM, 2012 Proceedings IEEE, pages 2140-2148. IEEE, 2012.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.