ACM Computing Surveys, Volume 51, Issue 1, 2019, Pages

Quality control in crowdsourcing: A survey of quality attributes, assessment techniques, and assurance actions

Author keywords

Assessment; Assurance; Attributes; Crowdsourcing; Quality model

Indexed keywords

PERSONAL COMPUTERS; QUALITY CONTROL; SURVEYS

EID: 85040668008     PISSN: 0360-0300     EISSN: 1557-7341     Source Type: Journal
DOI: 10.1145/3148148     Document Type: Review
Times cited: 321

References: 220
  • 1
    • 84980317079 scopus 로고    scopus 로고
    • How many workers to ask?: Adaptive exploration for collecting high quality labels
    • Ittai Abraham, Omar Alonso, Vasilis Kandylas, Rajesh Patel, Steven Shelford, and Aleksandrs Slivkins. 2016. How many workers to ask?: Adaptive exploration for collecting high quality labels. In ACM SIGIR 2016. 473–482. http://doi.acm.org/10.1145/2911451.2911514
    • (2016) ACM SIGIR 2016 , pp. 473-482
    • Abraham, I.1    Alonso, O.2    Kandylas, V.3    Patel, R.4    Shelford, S.5    Slivkins, A.6
  • 2
    • 35348837893 scopus 로고    scopus 로고
    • A content-driven reputation system for the Wikipedia
    • Bo Thomas Adler and Luca De Alfaro. 2007. A content-driven reputation system for the Wikipedia. In WWW 2007. 261–270.
    • (2007) WWW 2007 , pp. 261-270
    • Adler, B.T.1    De Alfaro, L.2
  • 4
    • 84877297434 scopus 로고    scopus 로고
    • An introduction to outlier analysis
    • Springer
    • Charu C. Aggarwal. 2013. An introduction to outlier analysis. In Outlier Analysis. Springer, 1–40.
    • (2013) Outlier Analysis , pp. 1-40
    • Aggarwal, C.C.1
  • 5
    • 80755169524 scopus 로고    scopus 로고
    • The Jabberwocky programming environment for structured social computing
    • Salman Ahmad, Alexis Battle, Zahan Malkani, and Sepander Kamvar. 2011. The Jabberwocky programming environment for structured social computing. In UIST’11. 53–64.
    • (2011) UIST’11 , pp. 53-64
    • Ahmad, S.1    Battle, A.2    Malkani, Z.3    Kamvar, S.4
  • 6
    • 34247540250 scopus 로고    scopus 로고
    • Games with a purpose
    • June 2006
    • Luis von Ahn. 2006. Games with a purpose. Computer 39, 6 (June 2006), 92–94.
    • (2006) Computer , vol.39 , Issue.6 , pp. 92-94
    • Von Ahn, L.1
  • 7
    • 84900392478 scopus 로고    scopus 로고
    • Cognitively inspired task design to improve user performance on crowdsourcing platforms
    • Harini Alagarai Sampath, Rajeev Rajeshuni, and Bipin Indurkhya. 2014. Cognitively inspired task design to improve user performance on crowdsourcing platforms. In CHI 2014. 3665–3674.
    • (2014) CHI 2014 , pp. 3665-3674
    • Sampath, H.A.1    Rajeshuni, R.2    Indurkhya, B.3
  • 10
    • 84920512028 scopus 로고    scopus 로고
    • Harnessing implicit teamwork knowledge to improve quality in crowdsourcing processes
    • Mohammad Allahbakhsh, Samira Samimi, Hamid Reza Motahari-Nezhad, and Boualem Benatallah. 2014. Harnessing implicit teamwork knowledge to improve quality in crowdsourcing processes. In SOCA 2014. 17–24.
    • (2014) SOCA 2014 , pp. 17-24
    • Allahbakhsh, M.1    Samimi, S.2    Motahari-Nezhad, H.R.3    Benatallah, B.4
  • 11
    • 84858211339 scopus 로고    scopus 로고
    • Collaborative workflow for crowdsourcing translation
    • Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2012. Collaborative workflow for crowdsourcing translation. In CSCW 2012. 1191–1194.
    • (2012) CSCW 2012 , pp. 1191-1194
    • Ambati, V.1    Vogel, S.2    Carbonell, J.3
  • 12
    • 84968834788 scopus 로고    scopus 로고
    • Discovering best teams for data leak-aware crowdsourcing in social networks
    • (2016), Article
    • Iheb Ben Amor, Salima Benbernou, Mourad Ouziri, Zaki Malik, and Brahim Medjahed. 2016. Discovering best teams for data leak-aware crowdsourcing in social networks. ACM Transactions on the Web 10, 1 (2016), Article 2, 27 pages.
    • (2016) ACM Transactions on The Web , vol.10 , Issue.1 , pp. 27
    • Amor, I.B.1    Benbernou, S.2    Ouziri, M.3    Malik, Z.4    Medjahed, B.5
  • 14
    • 84889603841 scopus 로고    scopus 로고
    • An analysis of crowd workers mistakes for specific and complex relevance assessment task
    • ACM
    • Jesse Anderton, Maryam Bashir, Virgil Pavlu, and Javed A. Aslam. 2013. An analysis of crowd workers mistakes for specific and complex relevance assessment task. In CIKM 2013. ACM, 1873–1876.
    • (2013) CIKM 2013 , pp. 1873-1876
    • Anderton, J.1    Bashir, M.2    Pavlu, V.3    Aslam, J.A.4
  • 15
    • 84900431356 scopus 로고    scopus 로고
    • Effects of simultaneous and sequential work structures on distributed collaborative interdependent tasks
    • Paul André, Robert E. Kraut, and Aniket Kittur. 2014. Effects of simultaneous and sequential work structures on distributed collaborative interdependent tasks. In CHI 2014. 139–148.
    • (2014) CHI 2014 , pp. 139-148
    • André, P.1    Kraut, R.E.2    Kittur, A.3
  • 18
    • 84996520882 scopus 로고    scopus 로고
    • Active content-based crowdsourcing task selection
    • Piyush Bansal, Carsten Eickhoff, and Thomas Hofmann. 2016. Active content-based crowdsourcing task selection. In CIKM 2016. 529–538.
    • (2016) CIKM 2016 , pp. 529-538
    • Bansal, P.1    Eickhoff, C.2    Hofmann, T.3
  • 20
    • 84868532270 scopus 로고    scopus 로고
    • Analytic methods for optimizing realtime crowdsourcing
    • 2012
    • Michael S. Bernstein, David R. Karger, Robert C. Miller, and Joel Brandt. 2012. Analytic methods for optimizing realtime crowdsourcing. CoRR abs/1204.2995 (2012). http://arxiv.org/abs/1204.2995
    • (2012) CoRR Abs/1204.2995
    • Bernstein, M.S.1    Karger, D.R.2    Miller, R.C.3    Brandt, J.4
  • 24
    • 84991466889 scopus 로고    scopus 로고
    • Location privacy for crowdsourcing applications
    • Ioannis Boutsis and Vana Kalogeraki. 2016. Location privacy for crowdsourcing applications. In UbiComp 2016. 694–705. DOI:http://dx.doi.org/10.1145/2971648.2971741
    • (2016) UbiComp 2016 , pp. 694-705
    • Boutsis, I.1    Kalogeraki, V.2
  • 25
    • 84860842462 scopus 로고    scopus 로고
    • Answering search queries with crowdsearcher
    • Alessandro Bozzon, Marco Brambilla, and Stefano Ceri. 2012. Answering search queries with crowdsearcher. In WWW 2012. 1009–1018.
    • (2012) WWW 2012 , pp. 1009-1018
    • Bozzon, A.1    Brambilla, M.2    Ceri, S.3
  • 28
    • 84907031159 scopus 로고    scopus 로고
    • From labor to trader: Opinion elicitation via online crowds as a market
    • Caleb Chen Cao, Lei Chen, and Hosagrahar Visvesvaraya Jagadish. 2014. From labor to trader: Opinion elicitation via online crowds as a market. In KDD 2014. 1067–1076.
    • (2014) KDD 2014 , pp. 1067-1076
    • Cao, C.C.1    Chen, L.2    Jagadish, H.V.3
  • 29
  • 30
    • 84908206322 scopus 로고    scopus 로고
    • Modal ranking: A uniquely robust voting rule
    • Ioannis Caragiannis, Ariel D. Procaccia, and Nisarg Shah. 2014. Modal ranking: A uniquely robust voting rule. In AAAI 2014. 616–622.
    • (2014) AAAI 2014 , pp. 616-622
    • Caragiannis, I.1    Procaccia, A.D.2    Shah, N.3
  • 31
    • 84893185641 scopus 로고    scopus 로고
    • Efficient crowdsourcing contests
    • Ruggiero Cavallo and Shaili Jain. 2012. Efficient crowdsourcing contests. In Proceedings of AAMAS 2012 - Volume 2. 677–686.
    • (2012) Proceedings of AAMAS 2012 , vol.2 , pp. 677-686
    • Cavallo, R.1    Jain, S.2
  • 33
    • 84901398917 scopus 로고    scopus 로고
    • Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing
    • Xi Chen, Qihang Lin, and Dengyong Zhou. 2013. Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing. In ICML 2013, Vol. 28, 64–72.
    • (2013) ICML 2013 , vol.28 , pp. 64-72
    • Chen, X.1    Lin, Q.2    Zhou, D.3
  • 34
    • 84951038767 scopus 로고    scopus 로고
    • Measuring crowdsourcing effort with error-time curves
    • ACM, New York
    • Justin Cheng, Jaime Teevan, and Michael S. Bernstein. 2015a. Measuring crowdsourcing effort with error-time curves. In CHI 2015. ACM, New York, 1365–1374.
    • (2015) CHI 2015 , pp. 1365-1374
    • Cheng, J.1    Teevan, J.2    Bernstein, M.S.3
  • 35
    • 84951025363 scopus 로고    scopus 로고
    • Break it down: A comparison of macro- And microtasks
    • Justin Cheng, Jaime Teevan, Shamsi T. Iqbal, and Michael S. Bernstein. 2015b. Break it down: A comparison of macro- and microtasks. In CHI 2015. 4061–4064. http://doi.acm.org/10.1145/2702123.2702146
    • (2015) CHI 2015 , pp. 4061-4064
    • Cheng, J.1    Teevan, J.2    Iqbal, S.T.3    Bernstein, M.S.4
  • 36
    • 77956240720 scopus 로고    scopus 로고
    • Task search in a human computation market
    • Lydia B. Chilton, John J. Horton, Robert C. Miller, and Shiri Azenkot. 2010. Task search in a human computation market. In HCOMP 2010. 1–9. http://doi.acm.org/10.1145/1837885.1837889
    • (2010) HCOMP 2010 , pp. 1-9
    • Chilton, L.B.1    Horton, J.J.2    Miller, R.C.3    Azenkot, S.4
  • 37
    • 84968909143 scopus 로고    scopus 로고
    • And now for something completely different: Improving crowdsourcing workflows with micro-diversions
    • ACM, New York
    • Peng Dai, Jeffrey M. Rzeszotarski, Praveen Paritosh, and Ed H. Chi. 2015. And now for something completely different: Improving crowdsourcing workflows with micro-diversions. In CSCW 2015. ACM, New York, 628–638. DOI:http://dx. doi.org/10.1145/2675133.2675260
    • (2015) CSCW 2015 , pp. 628-638
    • Dai, P.1    Rzeszotarski, J.M.2    Paritosh, P.3    Chi, E.H.4
  • 38
    • 84893096721 scopus 로고    scopus 로고
    • Aggregating crowdsourced binary ratings
    • Nilesh Dalvi, Anirban Dasgupta, Ravi Kumar, and Vibhor Rastogi. 2013. Aggregating crowdsourced binary ratings. In WWW 2013. 285–294.
    • (2013) WWW 2013 , pp. 285-294
    • Dalvi, N.1    Dasgupta, A.2    Kumar, R.3    Rastogi, V.4
  • 39
    • 84958231872 scopus 로고    scopus 로고
    • Exploiting document content for efficient aggregation of crowdsourcing votes
    • Martin Davtyan, Carsten Eickhoff, and Thomas Hofmann. 2015. Exploiting document content for efficient aggregation of crowdsourcing votes. In CIKM 2015. 783–790.
    • (2015) CIKM 2015 , pp. 783-790
    • Davtyan, M.1    Eickhoff, C.2    Hofmann, T.3
  • 42
    • 84884591987 scopus 로고    scopus 로고
    • Large-scale linked data integration using probabilistic reasoning and crowdsourcing
    • 2013
    • Gianluca Demartini, Djellel Eddine Difallah, and Philippe Cudré-Mauroux. 2013. Large-scale linked data integration using probabilistic reasoning and crowdsourcing. The VLDB Journal 22, 5 (2013), 665–687.
    • (2013) The VLDB Journal , vol.22 , Issue.5 , pp. 665-687
    • Demartini, G.1    Difallah, D.E.2    Cudré-Mauroux, P.3
  • 43
    • 85072829610 scopus 로고    scopus 로고
    • Scaling-up the crowd: Micro-task pricing schemes for worker retention and latency improvement
    • Djellel Eddine Difallah, Michele Catasta, Gianluca Demartini, and Philippe Cudré-Mauroux. 2014. Scaling-up the crowd: Micro-task pricing schemes for worker retention and latency improvement. In HCOMP 2014.
    • (2014) HCOMP 2014
    • Difallah, D.E.1    Catasta, M.2    Demartini, G.3    Cudré-Mauroux, P.4
  • 44
    • 84887483983 scopus 로고    scopus 로고
    • Mechanical cheat: Spamming schemes and adversarial techniques on crowdsourcing platforms
    • Djellel Eddine Difallah, Gianluca Demartini, and Philippe Cudré-Mauroux. 2012. Mechanical cheat: Spamming schemes and adversarial techniques on crowdsourcing platforms. In CrowdSearch. 26–30.
    • (2012) CrowdSearch , pp. 26-30
    • Difallah, D.E.1    Demartini, G.2    Cudré-Mauroux, P.3
  • 45
    • 84891921026 scopus 로고    scopus 로고
    • Pick-a-crowd: Tell me what you like, and I’ll tell you what to do
    • Djellel Eddine Difallah, Gianluca Demartini, and Philippe Cudré-Mauroux. 2013. Pick-a-crowd: Tell me what you like, and I’ll tell you what to do. In WWW 2013. 367–374.
    • (2013) WWW 2013 , pp. 367-374
    • Difallah, D.E.1    Demartini, G.2    Cudré-Mauroux, P.3
  • 46
    • 85011386598 scopus 로고    scopus 로고
    • Scheduling human intelligence tasks in multi-tenant crowd-powered systems
    • Djellel Eddine Difallah, Gianluca Demartini, and Philippe Cudré-Mauroux. 2016. Scheduling human intelligence tasks in multi-tenant crowd-powered systems. In WWW 2016. 855–865.
    • (2016) WWW 2016 , pp. 855-865
    • Difallah, D.E.1    Demartini, G.2    Cudré-Mauroux, P.3
  • 47
    • 84900457719 scopus 로고    scopus 로고
    • Combining crowdsourcing and learning to improve engagement and performance
    • Mira Dontcheva, Robert R. Morris, Joel R. Brandt, and Elizabeth M. Gerber. 2014. Combining crowdsourcing and learning to improve engagement and performance. In CHI 2014. 3379–3388.
    • (2014) CHI 2014 , pp. 3379-3388
    • Dontcheva, M.1    Morris, R.R.2    Brandt, J.R.3    Gerber, E.M.4
  • 48
    • 84858168620 scopus 로고    scopus 로고
    • Flexible Social Workflows: Collaborations as human architecture
    • March 2012
    • Christoph Dorn, R. N. Taylor, and S. Dustdar. 2012. Flexible Social Workflows: Collaborations as human architecture. IEEE Internet Computing 16, 2 (March 2012), 72–77.
    • (2012) IEEE Internet Computing , vol.16 , Issue.2 , pp. 72-77
    • Dorn, C.1    Taylor, R.N.2    Dustdar, S.3
  • 49
    • 85014764511 scopus 로고    scopus 로고
    • Toward a learning science for complex crowdsourcing tasks
    • ACM, New York, NY, USA
    • Shayan Doroudi, Ece Kamar, Emma Brunskill, and Eric Horvitz. 2016. Toward a learning science for complex crowdsourcing tasks. In CHI 2016. ACM, New York, NY, USA, 2623–2634.
    • (2016) CHI 2016 , pp. 2623-2634
    • Doroudi, S.1    Kamar, E.2    Brunskill, E.3    Horvitz, E.4
  • 50
    • 84858213232 scopus 로고    scopus 로고
    • Shepherding the crowd yields better work
    • Steven Dow, Anand Kulkarni, Scott Klemmer, and Björn Hartmann. 2012. Shepherding the crowd yields better work. In CSCW 2012. 1013–1022.
    • (2012) CSCW 2012 , pp. 1013-1022
    • Dow, S.1    Kulkarni, A.2    Klemmer, S.3    Hartmann, B.4
  • 51
    • 85167627193 scopus 로고    scopus 로고
    • MicroTalk: Using argumentation to improve crowdsourcing accuracy
    • Ryan Drapeau, Lydia B. Chilton, Jonathan Bragg, and Daniel S. Weld. 2016. MicroTalk: Using argumentation to improve crowdsourcing accuracy. In HCOMP 2016.
    • (2016) HCOMP 2016
    • Drapeau, R.1    Chilton, L.B.2    Bragg, J.3    Weld, D.S.4
  • 52
    • 84875647836 scopus 로고    scopus 로고
    • Increasing cheat robustness of crowdsourcing tasks
    • 2013
    • Carsten Eickhoff and Arjen P. de Vries. 2013. Increasing cheat robustness of crowdsourcing tasks. Information Retrieval 16, 2 (2013), 121–137.
    • (2013) Information Retrieval , vol.16 , Issue.2 , pp. 121-137
    • Eickhoff, C.1    De Vries, A.P.2
  • 53
    • 84866626350 scopus 로고    scopus 로고
    • Quality through flow and immersion: Gamifying crowdsourced relevance assessments
    • Carsten Eickhoff, Christopher G. Harris, Arjen P. de Vries, and Padmini Srinivasan. 2012. Quality through flow and immersion: Gamifying crowdsourced relevance assessments. In SIGIR 2012. 871–880.
    • (2012) SIGIR 2012 , pp. 871-880
    • Eickhoff, C.1    Harris, C.G.2    De Vries, A.P.3    Srinivasan, P.4
  • 54
    • 85206035168 scopus 로고    scopus 로고
    • A majority of wrongs doesn’t make it right - On crowdsourcing quality for skewed domain tasks
    • Kinda El Maarry, Ulrich Güntzer, and Wolf-Tilo Balke. 2015. A majority of wrongs doesn’t make it right - On crowdsourcing quality for skewed domain tasks. In WISE 2015. 293–308.
    • (2015) WISE , vol.2015 , pp. 293-308
    • Maarry, K.E.1    Güntzer, U.2    Balke, W.-T.3
  • 55
    • 85167441521 scopus 로고    scopus 로고
    • Incentives to counter bias in human computation
    • Boi Faltings, Radu Jurca, Pearl Pu, and Bao Duy Tran. 2014. Incentives to counter bias in human computation. In HCOMP 2014. http://www.aaai.org/ocs/index.php/HCOMP/HCOMP14/paper/view/8945.
    • (2014) HCOMP 2014
    • Faltings, B.1    Jurca, R.2    Pu, P.3    Tran, B.D.4
  • 57
    • 84896861075 scopus 로고    scopus 로고
    • What’s the right price? Pricing tasks for finishing on time
    • 2011
    • Siamak Faradani, Björn Hartmann, and Panagiotis G. Ipeirotis. 2011. What’s the right price? Pricing tasks for finishing on time.Human Computation 11 (2011).
    • (2011) Human Computation , vol.11
    • Faradani, S.1    Hartmann, B.2    Ipeirotis, P.G.3
  • 58
    • 84977597084 scopus 로고    scopus 로고
    • Please stay vs let’s play: Social pressure incentives in paid collaborative crowdsourcing
    • Oluwaseyi Feyisetan and Elena Simperl. 2016. Please stay vs let’s play: Social pressure incentives in paid collaborative crowdsourcing. In ICWE 2016. 405–412.
    • (2016) ICWE 2016 , pp. 405-412
    • Feyisetan, O.1    Simperl, E.2
  • 59
    • 84939609466 scopus 로고    scopus 로고
    • Improving paid microtasks through gamification and adaptive furtherance incentives
    • Oluwaseyi Feyisetan, Elena Simperl, Max Van Kleek, and Nigel Shadbolt. 2015. Improving paid microtasks through gamification and adaptive furtherance incentives. In WWW 2015. 333–343.
    • (2015) WWW 2015 , pp. 333-343
    • Feyisetan, O.1    Simperl, E.2    Van Kleek, M.3    Shadbolt, N.4
  • 60
    • 84865730611 scopus 로고
    • A set of measures of centrality based on betweenness
    • 1977
    • Linton C. Freeman. 1977. A set of measures of centrality based on betweenness. Sociometry (1977), 35–41.
    • (1977) Sociometry , pp. 35-41
    • Freeman, L.C.1
  • 61
    • 84940423017 scopus 로고    scopus 로고
    • Understanding malicious behavior in crowdsourcing platforms: The case of online surveys
    • Ujwal Gadiraju, Ricardo Kawase, Stefan Dietze, and Gianluca Demartini. 2015. Understanding malicious behavior in crowdsourcing platforms: The case of online surveys. In CHI 2015, Vol. 15.
    • (2015) CHI 2015 , vol.15
    • Gadiraju, U.1    Kawase, R.2    Dietze, S.3    Demartini, G.4
  • 63
    • 85044475619 scopus 로고    scopus 로고
    • Exact exponent in optimal rates for crowdsourcing
    • Chao Gao, Yu Lu, and Denny Zhou. 2016. Exact exponent in optimal rates for crowdsourcing. In ICML 2016. 603–611.
    • (2016) ICML 2016 , pp. 603-611
    • Gao, C.1    Lu, Y.2    Zhou, D.3
  • 64
    • 84871066615 scopus 로고    scopus 로고
    • Map to humans and reduce error: Crowdsourcing for deduplication applied to digital libraries
    • ACM
    • Mihai Georgescu, Dang Duc Pham, Claudiu S. Firan, Wolfgang Nejdl, and Julien Gaugaz. 2012. Map to humans and reduce error: Crowdsourcing for deduplication applied to digital libraries. In CIKM 2012. ACM, 1970–1974.
    • (2012) CIKM 2012 , pp. 1970-1974
    • Georgescu, M.1    Pham, D.D.2    Firan, C.S.3    Nejdl, W.4    Gaugaz, J.5
  • 65
    • 84874863913 scopus 로고    scopus 로고
    • Quality control mechanisms for crowdsourcing: Peer review, arbitration, & expertise at familysearch indexing
    • Derek L. Hansen, Patrick J. Schone, Douglas Corey, Matthew Reid, and Jake Gehring. 2013. Quality control mechanisms for crowdsourcing: Peer review, arbitration, & expertise at familysearch indexing. In CSCW 2013. 649–660.
    • (2013) CSCW 2013 , pp. 649-660
    • Hansen, D.L.1    Schone, P.J.2    Corey, D.3    Reid, M.4    Gehring, J.5
  • 66
    • 84877979429 scopus 로고    scopus 로고
    • Combining crowdsourcing and google street view to identify street-level accessibility problems
    • Kotaro Hara, Vicki Le, and Jon Froehlich. 2013. Combining crowdsourcing and google street view to identify street-level accessibility problems. In CHI 2013. 631–640.
    • (2013) CHI 2013 , pp. 631-640
    • Hara, K.1    Le, V.2    Froehlich, J.3
  • 68
    • 85014781330 scopus 로고    scopus 로고
    • A glimpse far into the future: Understanding long-term crowd worker quality
    • Kenji Hata, Ranjay Krishna, Li Fei-Fei, and Michael S. Bernstein. 2017. A glimpse far into the future: Understanding long-term crowd worker quality. In CSCW 2017. 889–901.
    • (2017) CSCW 2017 , pp. 889-901
    • Hata, K.1    Krishna, R.2    Fei-Fei, L.3    Bernstein, M.S.4
  • 69
    • 77953992027 scopus 로고    scopus 로고
    • Crowdsourcing graphical perception: Using Mechanical Turk to assess visualization design
    • Jeffrey Heer and Michael Bostock. 2010. Crowdsourcing graphical perception: Using Mechanical Turk to assess visualization design. In CHI 2010. 203–212.
    • (2010) CHI 2010 , pp. 203-212
    • Heer, J.1    Bostock, M.2
  • 70
    • 84862102144 scopus 로고    scopus 로고
    • CommunitySourcing: Engaging local crowds to perform expert work via physical kiosks
    • Kurtis Heimerl, Brian Gawalt, Kuang Chen, Tapan Parikh, and Björn Hartmann. 2012. CommunitySourcing: Engaging local crowds to perform expert work via physical kiosks. In CHI 2012. 1539–1548.
    • (2012) CHI 2012 , pp. 1539-1548
    • Heimerl, K.1    Gawalt, B.2    Chen, K.3    Parikh, T.4    Hartmann, B.5
  • 72
    • 80054054806 scopus 로고    scopus 로고
    • Turkalytics: Analytics for human computation
    • Paul Heymann and Hector Garcia-Molina. 2011. Turkalytics: Analytics for human computation. In WWW 2011. 477–486.
    • (2011) WWW 2011 , pp. 477-486
    • Heymann, P.1    Garcia-Molina, H.2
  • 73
    • 85018921016 scopus 로고    scopus 로고
    • Eliciting categorical data for optimal aggregation
    • Curran Associates, Inc
    • Chien-Ju Ho, Rafael Frongillo, and Yiling Chen. 2016. Eliciting categorical data for optimal aggregation. In NIPS 2016. Curran Associates, Inc., 2450–2458.
    • (2016) NIPS 2016 , pp. 2450-2458
    • Ho, C.-J.1    Frongillo, R.2    Chen, Y.3
  • 74
    • 84968835134 scopus 로고    scopus 로고
    • Incentivizing high quality crowdwork
    • Chien-Ju Ho, Aleksandrs Slivkins, Siddharth Suri, and Jennifer Wortman Vaughan. 2015. Incentivizing high quality crowdwork. In WWW 2015. 419–429. DOI:http://dx.doi.org/10.1145/2736277.2741102
    • (2015) WWW 2015 , pp. 419-429
    • Ho, C.-J.1    Slivkins, A.2    Suri, S.3    Vaughan, J.W.4
  • 75
    • 84886397297 scopus 로고    scopus 로고
    • Online task assignment in crowdsourcing markets
    • Chien-Ju Ho and Jennifer Wortman Vaughan. 2012. Online task assignment in crowdsourcing markets. In AAAI, Vol. 12. 45–51.
    • (2012) AAAI , vol.12 , pp. 45-51
    • Ho, C.-J.1    Vaughan, J.W.2
  • 77
    • 84907462746 scopus 로고    scopus 로고
    • Crowdsourcing quality-of-experience assessments
    • Sept. 2014
    • Tobias Hossfeld, Christian Keimel, and Christian Timmerer. 2014. Crowdsourcing quality-of-experience assessments. Computer 47, 9 (Sept. 2014), 98–102.
    • (2014) Computer , vol.47 , Issue.9 , pp. 98-102
    • Hossfeld, T.1    Keimel, C.2    Timmerer, C.3
  • 78
    • 33847246935 scopus 로고    scopus 로고
    • The rise of crowdsourcing
    • June 2006
    • Jeff. Howe. 2006. The rise of crowdsourcing. Wired (June 2006).
    • (2006) Wired
    • Howe, J.1
  • 79
    • 84862095677 scopus 로고    scopus 로고
    • Deploying MonoTrans widgets in the wild
    • Chang Hu, Philip Resnik, Yakov Kronrod, and Benjamin Bederson. 2012. Deploying MonoTrans widgets in the wild. In CHI 2012. 2935–2938.
    • (2012) CHI 2012 , pp. 2935-2938
    • Hu, C.1    Resnik, P.2    Kronrod, Y.3    Bederson, B.4
  • 80
    • 84877989068 scopus 로고    scopus 로고
    • Don’t hide in the crowd!: Increasing social transparency between peer workers improves crowdsourcing outcomes
    • Shih-Wen Huang and Wai-Tat Fu. 2013a. Don’t hide in the crowd!: Increasing social transparency between peer workers improves crowdsourcing outcomes. In CHI 2013. 621–630.
    • (2013) CHI 2013 , pp. 621-630
    • Huang, S.-W.1    Fu, W.-T.2
  • 81
    • 84874875815 scopus 로고    scopus 로고
    • Enhancing reliability using peer consistency evaluation in human computation
    • Shih-Wen Huang and Wai-Tat Fu. 2013b. Enhancing reliability using peer consistency evaluation in human computation. In CSCW 2013. 639–648.
    • (2013) CSCW 2013 , pp. 639-648
    • Huang, S.-W.1    Fu, W.-T.2
  • 82
    • 84883094853 scopus 로고    scopus 로고
    • BATC: A benchmark for aggregation techniques in crowdsourcing
    • Nguyen Quoc Viet Hung, Nguyen Thanh Tam, Ngoc Tran Lam, and Karl Aberer. 2013a. BATC: A benchmark for aggregation techniques in crowdsourcing. In SIGIR 2013. 1079–1080.
    • (2013) SIGIR 2013 , pp. 1079-1080
    • Hung, N.Q.V.1    Tam, N.T.2    Lam, N.T.3    Aberer, K.4
  • 83
    • 84887447348 scopus 로고    scopus 로고
    • An evaluation of aggregation techniques in crowdsourcing
    • Springer
    • Nguyen Quoc Viet Hung, Nguyen Thanh Tam, Lam Ngoc Tran, and Karl Aberer. 2013b. An evaluation of aggregation techniques in crowdsourcing. In WISE 2013. Springer, 1–15.
    • (2013) WISE 2013 , pp. 1-15
    • Hung, N.Q.V.1    Tam, N.T.2    Tran, L.N.3    Aberer, K.4
  • 84
    • 84957602716 scopus 로고    scopus 로고
    • Minimizing efforts in validating crowd answers
    • Nguyen Quoc Viet Hung, Duong Chi Thang, Matthias Weidlich, and Karl Aberer. 2015. Minimizing efforts in validating crowd answers. In SIGMOD 2015. 999–1014.
    • (2015) SIGMOD 2015 , pp. 999-1014
    • Hung, N.Q.V.1    Thang, D.C.2    Weidlich, M.3    Aberer, K.4
  • 87
    • 62949137213 scopus 로고    scopus 로고
    • An analytic approach to reputation ranking of participants in online transactions
    • Aleksandar Ignjatovic, Norman Foo, and Chung Tong Lee. 2008. An analytic approach to reputation ranking of participants in online transactions. In WI/IAT 2008. 587–590.
    • (2008) WI/IAT 2008 , pp. 587-590
    • Ignjatovic, A.1    Foo, N.2    Lee, C.T.3
  • 88
    • 85014757422 scopus 로고    scopus 로고
    • Pay it backward: Per-task payments on crowdsourcing platforms reduce productivity
    • Kazushi Ikeda and Michael S. Bernstein. 2016. Pay it backward: Per-task payments on crowdsourcing platforms reduce productivity. In CHI 2016. 4111–4121. http://doi.acm.org/10.1145/2858036.2858327
    • (2016) CHI 2016 , pp. 4111-4121
    • Ikeda, K.1    Bernstein, M.S.2
  • 89
    • 79958122721 scopus 로고    scopus 로고
    • Analyzing the Amazon Mechanical Turk marketplace
    • Dec. 2010
    • Panagiotis G. Ipeirotis. 2010. Analyzing the Amazon Mechanical Turk marketplace. XRDS 17, 2 (Dec. 2010), 16–21.
    • (2010) XRDS , vol.17 , Issue.2 , pp. 16-21
    • Ipeirotis, P.G.1
  • 90
    • 84909594574 scopus 로고    scopus 로고
    • Quizz: Targeted crowdsourcing with a billion (potential) users
    • Panagiotis G. Ipeirotis and Evgeniy Gabrilovich. 2014. Quizz: Targeted crowdsourcing with a billion (potential) users. In WWW 2014. 143–154.
    • (2014) WWW 2014 , pp. 143-154
    • Ipeirotis, P.G.1    Gabrilovich, E.2
  • 91
    • 84877960208 scopus 로고    scopus 로고
    • Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk
    • Lilly C. Irani and M. Silberman. 2013. Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. In CHI 2013. 611–620.
    • (2013) CHI 2013 , pp. 611-620
    • Irani, L.C.1    Silberman, M.2
  • 92
    • 84937907918 scopus 로고    scopus 로고
    • Reputation-based worker filtering in crowdsourcing
    • Curran Associates, Inc
    • Srikanth Jagabathula, Lakshminarayanan Subramanian, and Ashwin Venkataraman. 2014. Reputation-based worker filtering in crowdsourcing. In NIPS 2014. Curran Associates, Inc., 2492–2500.
    • (2014) NIPS 2014 , pp. 2492-2500
    • Jagabathula, S.1    Subramanian, L.2    Venkataraman, A.3
  • 93
  • 95
    • 0003162764 scopus 로고    scopus 로고
    • The big five trait taxonomy: History, measurement, and theoretical perspectives
    • Guilford Press, New York
    • Oliver P. John and Sanjay Srivastava. 1999. The big five trait taxonomy: History, measurement, and theoretical perspectives. Handbook of Personality: Theory and Research (2nd ed.). Guilford Press, New York, 102–138.
    • (1999) Handbook of Personality: Theory and Research (2nd Ed.) , pp. 102-138
    • John, O.P.1    Srivastava, S.2
  • 96
    • 84903574585 scopus 로고    scopus 로고
    • Improving consensus accuracy via Z-score and weighted voting
    • Hyun Joon Jung and Matthew Lease. 2011. Improving consensus accuracy via Z-score and weighted voting. In Human Computation.
    • (2011) Human Computation
    • Jung, H.J.1    Lease, M.2
  • 97
    • 84866631974 scopus 로고    scopus 로고
    • Inferring missing relevance judgments from crowd workers via probabilistic matrix factorization
    • Hyun Joon Jung and Matthew Lease. 2012. Inferring missing relevance judgments from crowd workers via probabilistic matrix factorization. In SIGIR 2012. 1095–1096.
    • (2012) SIGIR 2012 , pp. 1095-1096
    • Jung, H.J.1    Lease, M.2
  • 98
    • 85167424088 scopus 로고    scopus 로고
    • Predicting next label quality: A time-series model of crowdwork
    • Hyun Joon Jung, Yubin Park, and Matthew Lease. 2014. Predicting next label quality: A time-series model of crowdwork. In HCOMP 2014.
    • (2014) HCOMP 2014
    • Jung, H.J.1    Park, Y.2    Lease, M.3
  • 99
    • 4644328140 scopus 로고    scopus 로고
    • Measuring software product quality: A survey of ISO/IEC 9126
    • 2004
    • Ho-Won Jung, Seung-Gweon Kim, and Chang-Shin Chung. 2004. Measuring software product quality: A survey of ISO/IEC 9126. IEEE Software 5 (2004), 88–92.
    • (2004) IEEE Software , vol.5 , pp. 88-92
    • Jung, H.-W.1    Kim, S.-G.2    Chung, C.-S.3
  • 100
    • 84963512374 scopus 로고    scopus 로고
    • Parting crowds: Characterizing divergent interpretations in crowdsourced annotation tasks
    • Sanjay Kairam and Jeffrey Heer. 2016. Parting crowds: Characterizing divergent interpretations in crowdsourced annotation tasks. In CSCW 2016. 1637–1648. DOI:http://dx.doi.org/10.1145/2818048.2820016
    • (2016) CSCW 2016 , pp. 1637-1648
    • Kairam, S.1    Heer, J.2
  • 101
    • 85162483531 scopus 로고    scopus 로고
    • Iterative learning for reliable crowdsourcing systems
    • Curran Associates, Inc
    • David R. Karger, Sewoong Oh, and Devavrat Shah. 2011. Iterative learning for reliable crowdsourcing systems. In NIPS 2011. Curran Associates, Inc., 1953–1961.
    • (2011) NIPS 2011 , pp. 1953-1961
    • Karger, D.R.1    Oh, S.2    Shah, D.3
  • 102
    • 84896842331 scopus 로고    scopus 로고
    • Budget-optimal task allocation for reliable crowdsourcing systems
    • 2014
    • David R. Karger, Sewoong Oh, and Devavrat Shah. 2014. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research 62, 1 (2014), 1–24.
    • (2014) Operations Research , vol.62 , Issue.1 , pp. 1-24
    • Karger, D.R.1    Oh, S.2    Shah, D.3
  • 103
    • 84995473513 scopus 로고    scopus 로고
    • Investigating the impact of ‘emphasis frames’ and social loafing on player motivation and performance in a crowdsourcing game
    • Geoff Kaufman, Mary Flanagan, and Sukdith Punjasthitkul. 2016. Investigating the impact of ‘emphasis frames’ and social loafing on player motivation and performance in a crowdsourcing game. In CHI 2016. 4122–4128.
    • (2016) CHI 2016 , pp. 4122-4128
    • Kaufman, G.1    Flanagan, M.2    Punjasthitkul, S.3
  • 104
    • 80052132873 scopus 로고    scopus 로고
    • Crowdsourcing for book search evaluation: Impact of hit design on comparative system ranking
    • Gabriella Kazai, Jaap Kamps, Marijn Koolen, and Natasa Milic-Frayling. 2011. Crowdsourcing for book search evaluation: Impact of hit design on comparative system ranking. In SIGIR 2011. 205–214.
    • (2011) SIGIR 2011 , pp. 205-214
    • Kazai, G.1    Kamps, J.2    Koolen, M.3    Milic-Frayling, N.4
  • 105
    • 83055165986 scopus 로고    scopus 로고
    • Worker types and personality traits in crowdsourcing relevance labels
    • ACM
    • Gabriella Kazai, Jaap Kamps, and Natasa Milic-Frayling. 2011. Worker types and personality traits in crowdsourcing relevance labels. In CIKM 2011. ACM, 1941–1944.
    • (2011) CIKM 2011 , pp. 1941-1944
    • Kazai, G.1    Kamps, J.2    Milic-Frayling, N.3
  • 106
    • 84871089459 scopus 로고    scopus 로고
    • The face of quality in crowdsourcing relevance labels: Demographics, personality and labeling accuracy
    • ACM
    • Gabriella Kazai, Jaap Kamps, and Natasa Milic-Frayling. 2012. The face of quality in crowdsourcing relevance labels: Demographics, personality and labeling accuracy. In CIKM 2012. ACM, 2583–2586.
    • (2012) CIKM 2012 , pp. 2583-2586
    • Kazai, G.1    Kamps, J.2    Milic-Frayling, N.3
  • 107
    • 84964397358 scopus 로고    scopus 로고
    • Quality management in crowdsourcing using gold judges behavior
    • Gabriella Kazai and Imed Zitouni. 2016. Quality management in crowdsourcing using gold judges behavior. In WSDM 2016. 267–276. DOI:http://dx.doi.org/10.1145/2835776.2835835
    • (2016) WSDM 2016 , pp. 267-276
    • Kazai, G.1    Zitouni, I.2
  • 108
    • 78649890126 scopus 로고    scopus 로고
    • Quality assurance for human-based electronic services: A decision matrix for choosing the right approach
    • Robert Kern, Hans Thies, Cordula Bauer, and Gerhard Satzger. 2010. Quality assurance for human-based electronic services: A decision matrix for choosing the right approach. In ICWE 2010 Workshops. 421–424.
    • (2010) ICWE 2010 Workshops , pp. 421-424
    • Kern, R.1    Thies, H.2    Bauer, C.3    Satzger, G.4
  • 109
    • 79952341648 scopus 로고    scopus 로고
    • Evaluating and improving the usability of mechanical turk for low-income workers in India
    • ACM
    • Shashank Khanna, Aishwarya Ratan, James Davis, and William Thies. 2010. Evaluating and improving the usability of mechanical turk for low-income workers in india. In 1st ACM Symposium on Computing for Development. ACM, 12.
    • (2010) 1st ACM Symposium on Computing for Development , vol.12
    • Khanna, S.1    Ratan, A.2    Davis, J.3    Thies, W.4
  • 110
    • 84867773513 scopus 로고    scopus 로고
    • Predicting QoS in scheduled crowdsourcing
    • Roman Khazankin, Daniel Schall, and Schahram Dustdar. 2012. Predicting QoS in scheduled crowdsourcing. In CAISE 2012. 460–472.
    • (2012) CAISE 2012 , pp. 460-472
    • Khazankin, R.1    Schall, D.2    Dustdar, S.3
  • 112
    • 79960271461 scopus 로고    scopus 로고
    • Crowdsourcing, collaboration and creativity
    • 2010
    • Aniket Kittur. 2010. Crowdsourcing, collaboration and creativity. ACM Crossroads 17, 2 (2010), 22–26.
    • (2010) ACM Crossroads , vol.17 , Issue.2 , pp. 22-26
    • Kittur, A.1
  • 114
    • 84858202589 scopus 로고    scopus 로고
    • CrowdWeaver: Visually managing complex crowd work
    • Aniket Kittur, Susheel Khamkar, Paul André, and Robert Kraut. 2012. CrowdWeaver: Visually managing complex crowd work. In CSCW 2012. 1033–1036.
    • (2012) CSCW 2012 , pp. 1033-1036
    • Kittur, A.1    Khamkar, S.2    André, P.3    Kraut, R.4
  • 116
    • 80755168388 scopus 로고    scopus 로고
    • Crowdforge: Crowdsourcing complex work
    • Aniket Kittur, Boris Smus, Susheel Khamkar, and Robert E. Kraut. 2011. Crowdforge: Crowdsourcing complex work. In UIST’11. 43–52.
    • (2011) UIST’11 , pp. 43-52
    • Kittur, A.1    Smus, B.2    Khamkar, S.3    Kraut, R.E.4
  • 117
    • 84968866065 scopus 로고    scopus 로고
    • Motivating multi-generational crowd workers in social-purpose work
    • Masatomo Kobayashi, Shoma Arita, Toshinari Itoko, Shin Saito, and Hironobu Takagi. 2015. Motivating multi-generational crowd workers in social-purpose work. In CSCW 2015. 1813–1824.
    • (2015) CSCW 2015 , pp. 1813-1824
    • Kobayashi, M.1    Arita, S.2    Itoko, T.3    Saito, S.4    Takagi, H.5
  • 118
    • 84968817591 scopus 로고    scopus 로고
    • Getting more for less: Optimized crowdsourcing with dynamic tasks and goals
    • Ari Kobren, Chun How Tan, Panagiotis Ipeirotis, and Evgeniy Gabrilovich. 2015. Getting more for less: Optimized crowdsourcing with dynamic tasks and goals. In WWW 2015. 592–602.
    • (2015) WWW 2015 , pp. 592-602
    • Kobren, A.1    Tan, C.H.2    Ipeirotis, P.3    Gabrilovich, E.4
  • 119
    • 84995473547 scopus 로고    scopus 로고
    • To play or not to play: Interactions between response quality and task complexity in games and paid crowdsourcing
    • Markus Krause and René F. Kizilcec. 2015. To play or not to play: Interactions between response quality and task complexity in games and paid crowdsourcing. In HCOMP 2015. 102–109.
    • (2015) HCOMP 2015 , pp. 102-109
    • Krause, M.1    Kizilcec, R.F.2
  • 122
    • 84963836261 scopus 로고    scopus 로고
    • Crowdsourcing processes: A survey of approaches and opportunities
    • 2016
    • Pavel Kucherbaev, Florian Daniel, Stefano Tranquillini, and Maurizio Marchese. 2016b. Crowdsourcing processes: A survey of approaches and opportunities. IEEE Internet Computing 20, 2 (2016), 50–56.
    • (2016) IEEE Internet Computing , vol.20 , Issue.2 , pp. 50-56
    • Kucherbaev, P.1    Daniel, F.2    Tranquillini, S.3    Marchese, M.4
  • 123
    • 84963623512 scopus 로고    scopus 로고
    • ReLauncher: Crowdsourcing microtasks runtime controller
    • Pavel Kucherbaev, Florian Daniel, Stefano Tranquillini, and Maurizio Marchese. 2016a. ReLauncher: Crowdsourcing microtasks runtime controller. In CSCW 2016. 1607–1612.
    • (2016) CSCW 2016 , pp. 1607-1612
    • Kucherbaev, P.1    Daniel, F.2    Tranquillini, S.3    Marchese, M.4
  • 124
    • 84858187792 scopus 로고    scopus 로고
    • Collaboratively crowdsourcing workflows with Turkomatic
    • ACM, New York
    • Anand Kulkarni, Matthew Can, and Björn Hartmann. 2012a. Collaboratively crowdsourcing workflows with Turkomatic. In CSCW’12. ACM, New York, 1003–1012.
    • (2012) CSCW’12 , pp. 1003-1012
    • Kulkarni, A.1    Can, M.2    Hartmann, B.3
  • 125
    • 84867322288 scopus 로고    scopus 로고
    • MobileWorks: Designing for quality in a managed crowdsourcing architecture
    • Sept. 2012
    • Anand Kulkarni, Philipp Gutheim, Prayag Narula, David Rolnitzky, Tapan Parikh, and Björn Hartmann. 2012b. MobileWorks: Designing for quality in a managed crowdsourcing architecture. IEEE Internet Computing 16, 5 (Sept. 2012), 28–35.
    • (2012) IEEE Internet Computing , vol.16 , Issue.5 , pp. 28-35
    • Kulkarni, A.1    Gutheim, P.2    Narula, P.3    Rolnitzky, D.4    Parikh, T.5    Hartmann, B.6
  • 127
    • 84877983946 scopus 로고    scopus 로고
    • Warping time for more effective real-time crowdsourcing
    • Walter S. Lasecki, Christopher D. Miller, and Jeffrey P. Bigham. 2013. Warping time for more effective real-time crowdsourcing. In CHI 2013. 2033–2036.
    • (2013) CHI 2013 , pp. 2033-2036
    • Lasecki, W.S.1    Miller, C.D.2    Bigham, J.P.3
  • 128
    • 84950997204 scopus 로고    scopus 로고
    • The effects of sequence and delay on crowd work
    • Walter S. Lasecki, Jeffrey M. Rzeszotarski, Adam Marcus, and Jeffrey P. Bigham. 2015. The effects of sequence and delay on crowd work. In CHI 2015. 1375–1378.
    • (2015) CHI 2015 , pp. 1375-1378
    • Lasecki, W.S.1    Rzeszotarski, J.M.2    Marcus, A.3    Bigham, J.P.4
  • 129
    • 84874844446 scopus 로고    scopus 로고
    • Real-time crowd labeling for deployable activity recognition
    • Walter S. Lasecki, Young Chol Song, Henry Kautz, and Jeffrey P. Bigham. 2013. Real-time crowd labeling for deployable activity recognition. In CSCW 2013. 1203–1212.
    • (2013) CSCW 2013 , pp. 1203-1212
    • Lasecki, W.S.1    Song, Y.C.2    Kautz, H.3    Bigham, J.P.4
  • 130
    • 84898987373 scopus 로고    scopus 로고
    • Information extraction and manipulation threats in crowd-powered systems
    • ACM
    • Walter S. Lasecki, Jaime Teevan, and Ece Kamar. 2014. Information extraction and manipulation threats in crowd-powered systems. In CSCW 2014. ACM, 248–256.
    • (2014) CSCW 2014 , pp. 248-256
    • Lasecki, W.S.1    Teevan, J.2    Kamar, E.3
  • 131
    • 33749170463 scopus 로고    scopus 로고
    • Information filtering via iterative refinement
    • 2006
    • Paolo Laureti, Lionel Moret, Yi-Cheng Zhang, and Yi-Kuo Yu. 2006. Information filtering via iterative refinement. Euro-physics Letters 75 (2006), 1006.
    • (2006) Euro-Physics Letters , vol.75 , pp. 1006
    • Laureti, P.1    Moret, L.2    Zhang, Y.-C.3    Yu, Y.-K.4
  • 132
    • 85015044190 scopus 로고    scopus 로고
    • Curiosity killed the cat, but makes crowdwork better
    • ACM, New York
    • Edith Law, Ming Yin, Joslin Goh, Kevin Chen, Michael A. Terry, and Krzysztof Z. Gajos. 2016. Curiosity killed the cat, but makes crowdwork better. In CHI 2016. ACM, New York, 4098–4110.
    • (2016) CHI 2016 , pp. 4098-4110
    • Law, E.1    Yin, M.2    Goh, J.3    Chen, K.4    Terry, M.A.5    Gajos, K.Z.6
  • 133
    • 80052123756 scopus 로고    scopus 로고
    • Ensuring quality in crowdsourced search relevance evaluation: The effects of training question distribution
    • John Le, Andy Edmonds, Vaughn Hester, and Lukas Biewald. 2010. Ensuring quality in crowdsourced search relevance evaluation: The effects of training question distribution. In SIGIR 2010 Workshop on Crowdsourcing for Search Evaluation. 21–26.
    • (2010) SIGIR 2010 Workshop on Crowdsourcing for Search Evaluation , pp. 21-26
    • Le, J.1    Edmonds, A.2    Hester, V.3    Biewald, L.4
  • 134
    • 0001925995 scopus 로고
    • Emerging Perspectives on Service Marketing
    • Robert C. Lewis and Bernhard H. Booms. 1983. Emerging Perspectives on Service Marketing. American Marketing, 99–107.
    • (1983) American Marketing , pp. 99-107
    • Lewis, R.C.1    Booms, B.H.2
  • 135
    • 84909630641 scopus 로고    scopus 로고
    • The wisdom of minority: Discovering and targeting the right group of workers for crowdsourcing
    • Hongwei Li, Bo Zhao, and Ariel Fuxman. 2014. The wisdom of minority: Discovering and targeting the right group of workers for crowdsourcing. In WWW 2014. 165–176.
    • (2014) WWW 2014 , pp. 165-176
    • Li, H.1    Zhao, B.2    Fuxman, A.3
  • 136
    • 84964378377 scopus 로고    scopus 로고
    • Crowdsourcing high quality labels with a tight budget
    • Qi Li, Fenglong Ma, Jing Gao, Lu Su, and Christopher J. Quinn. 2016. Crowdsourcing high quality labels with a tight budget. In WSDM 2016. 237–246.
    • (2016) WSDM 2016 , pp. 237-246
    • Li, Q.1    Ma, F.2    Gao, J.3    Su, L.4    Quinn, C.J.5
  • 137
    • 84908151429 scopus 로고    scopus 로고
    • Signals in the silence: Models of implicit feedback in a recommendation system for crowdsourcing
    • Christopher H. Lin, Ece Kamar, and Eric Horvitz. 2014. Signals in the silence: Models of implicit feedback in a recommendation system for crowdsourcing. In AAAI 2014. 908–915.
    • (2014) AAAI 2014 , pp. 908-915
    • Lin, C.H.1    Kamar, E.2    Horvitz, E.3
  • 140
    • 78649569877 scopus 로고    scopus 로고
    • Turkit: Human computation algorithms on Mechanical Turk
    • ACM, New York
    • Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller. 2010c. Turkit: Human computation algorithms on Mechanical Turk. In UIST’10. ACM, New York, 57–66.
    • (2010) UIST’10 , pp. 57-66
    • Little, G.1    Chilton, L.B.2    Goldman, M.3    Miller, R.C.4
  • 141
    • 84867130103 scopus 로고    scopus 로고
    • TrueLabel + confusions: A spectrum of probabilistic models in analyzing multiple ratings
    • icml.cc/ Omnipress
    • Chao Liu and Yi-Min Wang. 2012. TrueLabel + confusions: A spectrum of probabilistic models in analyzing multiple ratings.. In ICML 2012. icml.cc/ Omnipress.
    • (2012) ICML 2012
    • Liu, C.1    Wang, Y.-M.2
  • 142
    • 84898947098 scopus 로고    scopus 로고
    • Scoring workers in crowdsourcing: How many control questions are enough?
    • Curran Associates, Inc
    • Qiang Liu, Alexander T. Ihler, and Mark Steyvers. 2013. Scoring workers in crowdsourcing: How many control questions are enough? In NIPS 2013. Curran Associates, Inc., 1914–1922.
    • (2013) NIPS 2013 , pp. 1914-1922
    • Liu, Q.1    Ihler, A.T.2    Steyvers, M.3
  • 143
    • 84959060536 scopus 로고    scopus 로고
    • Saving money while polling with interpoll using power analysis
    • Benjamin Livshits and Todd Mytkowicz. 2014. Saving money while polling with interpoll using power analysis. In HCOMP 2014.
    • (2014) HCOMP 2014
    • Livshits, B.1    Mytkowicz, T.2
  • 147
    • 84877995682 scopus 로고    scopus 로고
    • Using crowdsourcing to support pro-environmental Community Activism
    • Elaine Massung, David Coyle, Kirsten F. Cater, Marc Jay, and Chris Preist. 2013. Using crowdsourcing to support pro-environmental Community Activism. In CHI 2013. 371–380.
    • (2013) CHI 2013 , pp. 371-380
    • Massung, E.1    Coyle, D.2    Cater, K.F.3    Jay, M.4    Preist, C.5
  • 148
    • 85020381392 scopus 로고    scopus 로고
    • Using hierarchical skills for optimized task assignment in knowledge-intensive crowdsourcing
    • Panagiotis Mavridis, David Gross-Amblard, and Zoltán Miklós. 2016. Using hierarchical skills for optimized task assignment in knowledge-intensive crowdsourcing. In WWW 2016. 843–853.
    • (2016) WWW 2016 , pp. 843-853
    • Mavridis, P.1    Gross-Amblard, D.2    Miklós, Z.3
  • 149
    • 85107998643 scopus 로고    scopus 로고
    • Why is that relevant? Collecting annotator rationales for relevance judgments
    • Tyler McDonnell, Matthew Lease, Mucahid Kutlu, and Tamer Elsayed. 2016. Why is that relevant? Collecting annotator rationales for relevance judgments. In HCOMP 2016.
    • (2016) HCOMP 2016
    • McDonnell, T.1    Lease, M.2    Kutlu, M.3    Elsayed, T.4
  • 150
    • 84962419909 scopus 로고    scopus 로고
    • Crowdlang: A programming language for the systematic exploration of human computation systems
    • Springer
    • Patrick Minder and Abraham Bernstein. 2012. Crowdlang: A programming language for the systematic exploration of human computation systems. In Social Informatics. Springer, 124–137.
    • (2012) Social Informatics , pp. 124-137
    • Minder, P.1    Bernstein, A.2
  • 151
    • 85015436553 scopus 로고    scopus 로고
    • Visual diversity and user interface quality
    • Aliaksei Miniukovich and Antonella De Angeli. 2015. Visual diversity and user interface quality. In British HCI 2015. 101–109.
    • (2015) British HCI 2015 , pp. 101-109
    • Miniukovich, A.1    De Angeli, A.2
  • 152
    • 84916196366 scopus 로고    scopus 로고
    • Cross-task crowdsourcing
    • Kaixiang Mo, Erheng Zhong, and Qiang Yang. 2013. Cross-task crowdsourcing. In KDD 2013. 677–685.
    • (2013) KDD 2013 , pp. 677-685
    • Mo, K.1    Zhong, E.2    Yang, Q.3
  • 153
    • 84867294283 scopus 로고    scopus 로고
    • Priming for better performance in microtask crowdsourcing environments
    • Sept. 2012
    • Robert R. Morris, Mira Dontcheva, and Elizabeth M. Gerber. 2012. Priming for better performance in microtask crowdsourcing environments. IEEE Internet Computing 16, 5 (Sept. 2012), 13–19.
    • (2012) IEEE Internet Computing , vol.16 , Issue.5 , pp. 13-19
    • Morris, R.R.1    Dontcheva, M.2    Gerber, E.M.3
  • 154
    • 84980378623 scopus 로고    scopus 로고
    • Identifying careless workers in crowdsourcing platforms: A game theory approach
    • Yashar Moshfeghi, Alvaro F. Huertas-Rosero, and Joemon M. Jose. 2016. Identifying careless workers in crowdsourcing platforms: A game theory approach. In ACM SIGIR 2016. 857–860.
    • (2016) ACM SIGIR 2016 , pp. 857-860
    • Moshfeghi, Y.1    Huertas-Rosero, A.F.2    Jose, J.M.3
  • 155
    • 84933538091 scopus 로고    scopus 로고
    • Threats and trade-offs in resource critical crowdsourcing tasks over networks
    • Swaprava Nath, Pankaj Dayama, Dinesh Garg, Y. Narahari, and James Y. Zou. 2012. Threats and trade-offs in resource critical crowdsourcing tasks over networks. In AAAI 2012.
    • (2012) AAAI 2012
    • Nath, S.1    Dayama, P.2    Garg, D.3    Narahari, Y.4    Zou, J.Y.5
  • 156
    • 85015067002 scopus 로고    scopus 로고
    • How one microtask affects another
    • Edward Newell and Derek Ruths. 2016. How one microtask affects another. In CHI 2016. 3155–3166.
    • (2016) CHI 2016 , pp. 3155-3166
    • Newell, E.1    Ruths, D.2
  • 157
    • 84937543086 scopus 로고    scopus 로고
    • Using crowdsourcing to investigate perception of narrative similarity
    • ACM
    • Dong Nguyen, Dolf Trieschnigg, and Mariët Theune. 2014. Using crowdsourcing to investigate perception of narrative similarity. In CIKM 2014. ACM, 321–330.
    • (2014) CIKM 2014 , pp. 321-330
    • Nguyen, D.1    Trieschnigg, D.2    Theune, M.3
  • 159
    • 84977471736 scopus 로고    scopus 로고
    • WeatherUSI: User-based weather crowdsourcing on public displays
    • Evangelos Niforatos, Ivan Elhart, and Marc Langheinrich. 2016. WeatherUSI: User-based weather crowdsourcing on public displays. In ICWE 2016. 567–570.
    • (2016) ICWE 2016 , pp. 567-570
    • Niforatos, E.1    Elhart, I.2    Langheinrich, M.3
  • 160
    • 80755187853 scopus 로고    scopus 로고
    • Platemate: Crowdsourcing nutritional analysis from food photographs
    • ACM
    • Jon Noronha, Eric Hysen, Haoqi Zhang, and Krzysztof Z. Gajos. 2011. Platemate: Crowdsourcing nutritional analysis from food photographs. In UIST 2011. ACM, 1–12.
    • (2011) UIST 2011 , pp. 1-12
    • Noronha, J.1    Hysen, E.2    Zhang, H.3    Gajos, K.Z.4
  • 162
    • 85050940841 scopus 로고    scopus 로고
    • Optimality of belief propagation for crowdsourced classification
    • JMLR.org
    • Jungseul Ok, Sewoong Oh, Jinwoo Shin, and Yung Yi. 2016. Optimality of belief propagation for crowdsourced classification. In ICML 2016. JMLR.org, 535–544.
    • (2016) ICML 2016 , pp. 535-544
    • Ok, J.1    Oh, S.2    Shin, J.3    Yi, Y.4
  • 163
    • 84923421915 scopus 로고    scopus 로고
    • Programmatic gold: Targeted and scalable quality assurance in crowdsourcing
    • 2011
    • David Oleson, Alexander Sorokin, Greg P. Laughlin, Vaughn Hester, John Le, and Lukas Biewald. 2011. Programmatic gold: Targeted and scalable quality assurance in crowdsourcing. HCOMP 2011 11, 11 (2011).
    • (2011) HCOMP 2011 , vol.11 , pp. 11
    • Oleson, D.1    Sorokin, A.2    Laughlin, G.P.3    Hester, V.4    Le, J.5    Biewald, L.6
  • 164
    • 84977569709 scopus 로고    scopus 로고
    • On the invitation of expert contributors from online communities for knowledge crowdsourcing tasks
    • Jasper Oosterman and Geert-Jan Houben. 2016. On the invitation of expert contributors from online communities for knowledge crowdsourcing tasks. In ICWE 2016. 413–421.
    • (2016) ICWE 2016 , pp. 413-421
    • Oosterman, J.1    Houben, G.-J.2
  • 166
    • 84899010623 scopus 로고    scopus 로고
    • Competing or aiming to be average?: Normification as a means of engaging digital volunteers
    • Chris Preist, Elaine Massung, and David Coyle. 2014. Competing or aiming to be average?: Normification as a means of engaging digital volunteers. In CSCW 2014. 1222–1233.
    • (2014) CSCW 2014 , pp. 1222-1233
    • Preist, C.1    Massung, E.2    Coyle, D.3
  • 167
    • 84856351776 scopus 로고    scopus 로고
    • Strategies for community based crowdsourcing
    • Cindy Puah, Ahmad Zaki Abu Bakar, and Chu Wei Ching. 2011. Strategies for community based crowdsourcing. In ICRIIS 2011. 1–4.
    • (2011) ICRIIS 2011 , pp. 1-4
    • Puah, C.1    Bakar, A.Z.A.2    Ching, C.W.3
  • 168
    • 84899033611 scopus 로고    scopus 로고
    • AskSheet: Efficient human computation for decision making with spreadsheets
    • Alexander J. Quinn and Benjamin B. Bederson. 2014. AskSheet: Efficient human computation for decision making with spreadsheets. In CSCW 2014. 1456–1466.
    • (2014) CSCW 2014 , pp. 1456-1466
    • Quinn, A.J.1    Bederson, B.B.2
  • 169
    • 85112829112 scopus 로고    scopus 로고
    • Learning to scale payments in crowdsourcing with properboost
    • Goran Radanovic and Boi Faltings. 2016. Learning to scale payments in crowdsourcing with properboost. In HCOMP 2016.
    • (2016) HCOMP 2016
    • Radanovic, G.1    Faltings, B.2
  • 170
    • 84937711788 scopus 로고    scopus 로고
    • Effective crowdsourcing for software feature ideation in online co-creation forums
    • Karthikeyan Rajasekharan, Aditya P. Mathur, and See-Kiong Ng. 2013. Effective crowdsourcing for software feature ideation in online co-creation forums. In SEKE 2013. 119–124.
    • (2013) SEKE 2013 , pp. 119-124
    • Rajasekharan, K.1    Mathur, A.P.2    Ng, S.-K.3
  • 171
    • 84908448107 scopus 로고    scopus 로고
    • What will others choose? How a majority vote reward scheme can improve human computation in a spatial location identification task
    • Huaming Rao, Shih-Wen Huang, and Wai-Tat Fu. 2013. What will others choose? How a majority vote reward scheme can improve human computation in a spatial location identification task. In HCOMP 2013.
    • (2013) HCOMP 2013
    • Rao, H.1    Huang, S.-W.2    Fu, W.-T.3
  • 172
    • 85162536261 scopus 로고    scopus 로고
    • Ranking annotators for crowdsourced labeling tasks
    • Curran Associates Inc
    • Vikas C. Raykar and Shipeng Yu. 2011. Ranking annotators for crowdsourced labeling tasks. In NIPS 2011. Curran Associates Inc., 1809–1817.
    • (2011) NIPS 2011 , pp. 1809-1817
    • Raykar, V.C.1    Yu, S.2
  • 174
    • 85055086357 scopus 로고    scopus 로고
    • An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets
    • Jakob Rogstadius, Vassilis Kostakos, Aniket Kittur, Boris Smus, Jim Laredo, and Maja Vukovic. 2011. An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets. In ICWSM.
    • (2011) ICWSM
    • Rogstadius, J.1    Kostakos, V.2    Kittur, A.3    Smus, B.4    Laredo, J.5    Vukovic, M.6
  • 175
    • 84937598773 scopus 로고    scopus 로고
    • Competitive game designs for improving the cost effectiveness of crowdsourcing
    • ACM
    • Markus Rokicki, Sergiu Chelaru, Sergej Zerr, and Stefan Siersdorfer. 2014. Competitive game designs for improving the cost effectiveness of crowdsourcing. In CICM 2014. ACM, 1469–1478.
    • (2014) CICM 2014 , pp. 1469-1478
    • Rokicki, M.1    Chelaru, S.2    Zerr, S.3    Siersdorfer, S.4
  • 176
    • 84968763642 scopus 로고    scopus 로고
    • Groupsourcing: Team competition designs for crowdsourcing
    • Markus Rokicki, Sergej Zerr, and Stefan Siersdorfer. 2015. Groupsourcing: Team competition designs for crowdsourcing. In WWW 2015. 906–915.
    • (2015) WWW 2015 , pp. 906-915
    • Rokicki, M.1    Zerr, S.2    Siersdorfer, S.3
  • 177
    • 84937968933 scopus 로고    scopus 로고
    • Task assignment optimization in knowledge-intensive crowdsourcing
    • 2015
    • Senjuti Basu Roy, Ioanna Lykourentzou, Saravanan Thirumuruganathan, Sihem Amer-Yahia, and Gautam Das. 2015. Task assignment optimization in knowledge-intensive crowdsourcing. The VLDB Journal 24, 4 (2015), 467–491.
    • (2015) The VLDB Journal , vol.24 , Issue.4 , pp. 467-491
    • Roy, S.B.1    Lykourentzou, I.2    Thirumuruganathan, S.3    Amer-Yahia, S.4    Das, G.5
  • 178
    • 80755168394 scopus 로고    scopus 로고
    • Instrumenting the crowd: Using implicit behavioral measures to predict task performance
    • ACM
    • Jeffrey M. Rzeszotarski and Aniket Kittur. 2011. Instrumenting the crowd: Using implicit behavioral measures to predict task performance. In UIST 2011. ACM, 13–22.
    • (2011) UIST 2011 , pp. 13-22
    • Rzeszotarski, J.M.1    Kittur, A.2
  • 179
    • 84869013273 scopus 로고    scopus 로고
    • CrowdScape: Interactively visualizing user behavior and output
    • ACM
    • Jeffrey M. Rzeszotarski and Aniket Kittur. 2012. CrowdScape: Interactively visualizing user behavior and output. In UIST 2012. ACM, 55–62.
    • (2012) UIST 2012 , pp. 55-62
    • Rzeszotarski, J.M.1    Kittur, A.2
  • 180
    • 84992197431 scopus 로고    scopus 로고
    • Ability grouping of crowd workers via reward discrimination
    • Yuko Sakurai, Tenda Okimoto, Masaaki Oka, Masato Shinoda, and Makoto Yokoo. 2013. Ability grouping of crowd workers via reward discrimination. In HCOMP 2013.
    • (2013) HCOMP 2013
    • Sakurai, Y.1    Okimoto, T.2    Oka, M.3    Shinoda, M.4    Yokoo, M.5
  • 181
    • 84892430704 scopus 로고    scopus 로고
    • Auction-based crowdsourcing supporting skill management
    • June 2013
    • Benjamin Satzger, Harald Psaier, Daniel Schall, and Schahram Dustdar. 2013. Auction-based crowdsourcing supporting skill management. Information Systems 38, 4 (June 2013), 547–560.
    • (2013) Information Systems , vol.38 , Issue.4 , pp. 547-560
    • Satzger, B.1    Psaier, H.2    Schall, D.3    Dustdar, S.4
  • 182
    • 84879474878 scopus 로고    scopus 로고
    • Incentives and rewarding in social computing
    • 2013
    • Ognjen Scekic, Hong-Linh Truong, and Schahram Dustdar. 2013a. Incentives and rewarding in social computing. Communications of the ACM 56, 6 (2013), 72–82.
    • (2013) Communications of The ACM , vol.56 , Issue.6 , pp. 72-82
    • Scekic, O.1    Truong, H.-L.2    Dustdar, S.3
  • 184
    • 85028223118 scopus 로고    scopus 로고
    • Crowdsourcing tasks to social networks in BPEL4People
    • 2014
    • Daniel Schall, Benjamin Satzger, and Harald Psaier. 2014. Crowdsourcing tasks to social networks in BPEL4People. World Wide Web 17, 1 (2014), 1–32.
    • (2014) World Wide Web , vol.17 , Issue.1 , pp. 1-32
    • Schall, D.1    Satzger, B.2    Psaier, H.3
  • 185
    • 84861947956 scopus 로고    scopus 로고
    • Expert discovery and interactions in mixed service-oriented systems
    • 2012
    • Daniel Schall, Florian Skopik, and Schahram Dustdar. 2012. Expert discovery and interactions in mixed service-oriented systems. IEEE Transactions on Services Computing 5, 2 (2012), 233–245.
    • (2012) IEEE Transactions on Services Computing , vol.5 , Issue.2 , pp. 233-245
    • Schall, D.1    Skopik, F.2    Dustdar, S.3
  • 186
    • 84893325426
    • Worker perception of quality assurance mechanisms in crowdsourcing and human computation markets
    • Thimo Schulze, Dennis Nordheimer, and Martin Schader. 2013. Worker perception of quality assurance mechanisms in crowdsourcing and human computation markets. In AMCIS 2013.
    • (2013) AMCIS 2013
    • Schulze, T.1    Nordheimer, D.2    Schader, M.3
  • 187
    • 84965134289
    • Double or nothing: Multiplicative incentive mechanisms for crowdsourcing
    • Curran Associates, Inc
    • Nihar Bhadresh Shah and Denny Zhou. 2015. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In NIPS 2015. Curran Associates, Inc., 1–9.
    • (2015) NIPS 2015 , pp. 1-9
    • Shah, N.B.1    Zhou, D.2
  • 188
    • 84997755074
    • No oops, you won’t do it again: Mechanisms for self-correction in crowdsourcing
    • Nihar Bhadresh Shah and Dengyong Zhou. 2016. No oops, you won’t do it again: Mechanisms for self-correction in crowdsourcing. In ICML 2016. 1–10.
    • (2016) ICML 2016 , pp. 1-10
    • Shah, N.B.1    Zhou, D.2
  • 189
    • 84979038141
    • SQUARE: A benchmark for research on computing crowd consensus
    • Aashish Sheshadri and Matthew Lease. 2013. SQUARE: A benchmark for research on computing crowd consensus. In HCOMP 2013.
    • (2013) HCOMP 2013
    • Sheshadri, A.1    Lease, M.2
  • 190
    • 84893063797
    • Pricing mechanisms for crowdsourcing markets
    • Yaron Singer and Manas Mittal. 2013. Pricing mechanisms for crowdsourcing markets. In WWW 2013. 1157–1166.
    • (2013) WWW , pp. 1157-1166
    • Singer, Y.1    Mittal, M.2
  • 192
    • 84994141774
    • Two’s company, three’s a crowd: A case study of crowdsourcing software development
    • Klaas-Jan Stol and Brian Fitzgerald. 2014. Two’s company, three’s a crowd: A case study of crowdsourcing software development. In ICSE 2014. 187–198.
    • (2014) ICSE 2014 , pp. 187-198
    • Stol, K.-J.1    Fitzgerald, B.2
  • 195
    • 84867323091
    • Analyzing crowd labor and designing incentives for humans in the loop
    • Sept. 2012
    • Oksana Tokarchuk, Roberta Cuel, and Marco Zamarian. 2012. Analyzing crowd labor and designing incentives for humans in the loop. IEEE Internet Computing 16, 5 (Sept. 2012), 45–51.
    • (2012) IEEE Internet Computing , vol.16 , Issue.5 , pp. 45-51
    • Tokarchuk, O.1    Cuel, R.2    Zamarian, M.3
  • 198
    • 85025459096
    • Crowdsourced nonparametric density estimation using relative distances
    • Antti Ukkonen, Behrouz Derakhshan, and Hannes Heikinheimo. 2015. Crowdsourced nonparametric density estimation using relative distances. In HCOMP 2015.
    • (2015) HCOMP 2015
    • Ukkonen, A.1    Derakhshan, B.2    Heikinheimo, H.3
  • 199
    • 84900443529
    • Twitch crowdsourcing: Crowd contributions in short bursts of time
    • Rajan Vaish, Keith Wyngarden, Jingshu Chen, Brandon Cheung, and Michael S. Bernstein. 2014. Twitch crowdsourcing: Crowd contributions in short bursts of time. In CHI 2014. 3645–3654.
    • (2014) CHI 2014 , pp. 3645-3654
    • Vaish, R.1    Wyngarden, K.2    Chen, J.3    Cheung, B.4    Bernstein, M.S.5
  • 200
    • 84905112588
    • Crowdsourcing algorithms for entity resolution
    • 2014
    • Norases Vesdapunt, Kedar Bellare, and Nilesh Dalvi. 2014. Crowdsourcing algorithms for entity resolution. Proceedings of the VLDB Endowment 7, 12 (2014), 1071–1082.
    • (2014) Proceedings of the VLDB Endowment , vol.7 , Issue.12 , pp. 1071-1082
    • Vesdapunt, N.1    Bellare, K.2    Dalvi, N.3
  • 202
    • 51749107030
    • reCAPTCHA: Human-based character recognition via web security measures
    • 2008
    • Luis von Ahn, Benjamin Maurer, Colin McMillen, David Abraham, and Manuel Blum. 2008. reCAPTCHA: Human-based character recognition via web security measures. Science 321, 5895 (2008), 1465–1468.
    • (2008) Science , vol.321 , Issue.5895 , pp. 1465-1468
    • Ahn, L.V.1    Maurer, B.2    McMillen, C.3    Abraham, D.4    Blum, M.5
  • 205
    • 85167458387
    • Output agreement mechanisms and common knowledge
    • Bo Waggoner and Yiling Chen. 2014. Output agreement mechanisms and common knowledge. In HCOMP 2014.
    • (2014) HCOMP 2014
    • Waggoner, B.1    Chen, Y.2
  • 207
    • 85162481803
    • Bayesian bias mitigation for crowdsourcing
    • Curran Associates, Inc
    • Fabian L. Wauthier and Michael I. Jordan. 2011. Bayesian bias mitigation for crowdsourcing. In NIPS 2011. Curran Associates, Inc., 1800–1808.
    • (2011) NIPS 2011 , pp. 1800-1808
    • Wauthier, F.L.1    Jordan, M.I.2
  • 209
    • 84862069681
    • Strategies for crowdsourcing social data analysis
    • Wesley Willett, Jeffrey Heer, and Maneesh Agrawala. 2012. Strategies for crowdsourcing social data analysis. In CHI 2012. 227–236.
    • (2012) CHI 2012 , pp. 227-236
    • Willett, W.1    Heer, J.2    Agrawala, M.3
  • 211
    • 80053455236
    • Active learning from crowds
    • Yan Yan, Glenn M. Fung, Rómer Rosales, and Jennifer G. Dy. 2011. Active learning from crowds. In ICML 2011. 1161–1168.
    • (2011) ICML 2011 , pp. 1161-1168
    • Yan, Y.1    Fung, G.M.2    Rosales, R.3    Dy, J.G.4
  • 212
    • 85016497602
    • Modeling task complexity in crowdsourcing
    • Jie Yang, Judith Redi, Gianluca Demartini, and Alessandro Bozzon. 2016. Modeling task complexity in crowdsourcing. In HCOMP 2016. 249–258.
    • (2016) HCOMP 2016 , pp. 249-258
    • Yang, J.1    Redi, J.2    Demartini, G.3    Bozzon, A.4
  • 213
    • 84973438529
    • Monetary interventions in crowdsourcing task switching
    • Ming Yin, Yiling Chen, and Yu-An Sun. 2014. Monetary interventions in crowdsourcing task switching. In HCOMP 2014.
    • (2014) HCOMP 2014
    • Yin, M.1    Chen, Y.2    Sun, Y.-A.3
  • 214
    • 84898974524
    • A comparison of social, learning, and financial strategies on crowd engagement and output quality
    • Lixiu Yu, Paul André, Aniket Kittur, and Robert Kraut. 2014. A comparison of social, learning, and financial strategies on crowd engagement and output quality. In CSCW 2014. 967–978.
    • (2014) CSCW 2014 , pp. 967-978
    • Yu, L.1    André, P.2    Kittur, A.3    Kraut, R.4
  • 216
    • 84924852080
    • TaskRec: A task recommendation framework in crowdsourcing systems
    • 2015
    • Man-Ching Yuen, Irwin King, and Kwong-Sak Leung. 2015. TaskRec: A task recommendation framework in crowdsourcing systems. Neural Processing Letters 41, 2 (2015), 223–238.
    • (2015) Neural Processing Letters , vol.41 , Issue.2 , pp. 223-238
    • Yuen, M.-C.1    King, I.2    Leung, K.-S.3
  • 218
    • 84937559546
    • A transfer learning based framework of crowd-selection on Twitter
    • ACM
    • Zhou Zhao, Da Yan, Wilfred Ng, and Shi Gao. 2013. A transfer learning based framework of crowd-selection on Twitter. In KDD 2013. ACM, 1514–1517.
    • (2013) KDD 2013 , pp. 1514-1517
    • Zhao, Z.1    Yan, D.2    Ng, W.3    Gao, S.4
  • 219
    • 84898939077
    • Reviewing versus doing: Learning and performance in crowd assessment
    • Haiyi Zhu, Steven P. Dow, Robert E. Kraut, and Aniket Kittur. 2014. Reviewing versus doing: Learning and performance in crowd assessment. In CSCW 2014. 1445–1455.
    • (2014) CSCW 2014 , pp. 1445-1455
    • Zhu, H.1    Dow, S.P.2    Kraut, R.E.3    Kittur, A.4
  • 220
    • 84928737713
    • Leveraging in-batch annotation bias for crowdsourced active learning
    • Honglei Zhuang and Joel Young. 2015. Leveraging in-batch annotation bias for crowdsourced active learning. In WSDM 2015. 243–252. DOI:http://dx.doi.org/10.1145/2684822.2685301
    • (2015) WSDM 2015 , pp. 243-252
    • Zhuang, H.1    Young, J.2


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.