-
1
-
-
84980317079
-
How many workers to ask?: Adaptive exploration for collecting high quality labels
-
Ittai Abraham, Omar Alonso, Vasilis Kandylas, Rajesh Patel, Steven Shelford, and Aleksandrs Slivkins. 2016. How many workers to ask?: Adaptive exploration for collecting high quality labels. In ACM SIGIR 2016. 473–482. http://doi.acm.org/10.1145/2911451.2911514
-
(2016)
ACM SIGIR 2016
, pp. 473-482
-
-
Abraham, I.1
Alonso, O.2
Kandylas, V.3
Patel, R.4
Shelford, S.5
Slivkins, A.6
-
2
-
-
35348837893
-
A content-driven reputation system for the Wikipedia
-
Bo Thomas Adler and Luca De Alfaro. 2007. A content-driven reputation system for the Wikipedia. In WWW 2007. 261–270.
-
(2007)
WWW 2007
, pp. 261-270
-
-
Adler, B.T.1
De Alfaro, L.2
-
3
-
-
79952274979
-
Wikipedia vandalism detection: Combining natural language, metadata, and reputation features
-
Springer
-
Bo Thomas Adler, Luca De Alfaro, Santiago M. Mola-Velasco, Paolo Rosso, and Andrew G. West. 2011. Wikipedia vandalism detection: Combining natural language, metadata, and reputation features. In Computational Linguistics and Intelligent Text Processing. Springer, 277–288.
-
(2011)
Computational Linguistics and Intelligent Text Processing
, pp. 277-288
-
-
Adler, B.T.1
De Alfaro, L.2
Mola-Velasco, S.M.3
Rosso, P.4
West, A.G.5
-
4
-
-
84877297434
-
An introduction to outlier analysis
-
Springer
-
Charu C. Aggarwal. 2013. An introduction to outlier analysis. In Outlier Analysis. Springer, 1–40.
-
(2013)
Outlier Analysis
, pp. 1-40
-
-
Aggarwal, C.C.1
-
5
-
-
80755169524
-
The Jabberwocky programming environment for structured social computing
-
Salman Ahmad, Alexis Battle, Zahan Malkani, and Sepander Kamvar. 2011. The Jabberwocky programming environment for structured social computing. In UIST’11. 53–64.
-
(2011)
UIST’11
, pp. 53-64
-
-
Ahmad, S.1
Battle, A.2
Malkani, Z.3
Kamvar, S.4
-
6
-
-
34247540250
-
Games with a purpose
-
June 2006
-
Luis von Ahn. 2006. Games with a purpose. Computer 39, 6 (June 2006), 92–94.
-
(2006)
Computer
, vol.39
, Issue.6
, pp. 92-94
-
-
Von Ahn, L.1
-
7
-
-
84900392478
-
Cognitively inspired task design to improve user performance on crowdsourcing platforms
-
Harini Alagarai Sampath, Rajeev Rajeshuni, and Bipin Indurkhya. 2014. Cognitively inspired task design to improve user performance on crowdsourcing platforms. In CHI 2014. 3665–3674.
-
(2014)
CHI 2014
, pp. 3665-3674
-
-
Sampath, H.A.1
Rajeshuni, R.2
Indurkhya, B.3
-
8
-
-
84875749706
-
Quality control in crowdsourcing systems: Issues and directions
-
March 2013
-
Mohammad Allahbakhsh, Boualem Benatallah, Aleksandar Ignjatovic, Hamid Reza Motahari-Nezhad, Elisa Bertino, and Schahram Dustdar. 2013. Quality control in crowdsourcing systems: Issues and directions. IEEE Internet Computing 17, 2 (March 2013), 76–81.
-
(2013)
IEEE Internet Computing
, vol.17
, Issue.2
, pp. 76-81
-
-
Allahbakhsh, M.1
Benatallah, B.2
Ignjatovic, A.3
Motahari-Nezhad, H.R.4
Bertino, E.5
Dustdar, S.6
-
9
-
-
84874414989
-
Reputation management in crowdsourcing systems
-
Mohammad Allahbakhsh, Aleksandar Ignjatovic, Boualem Benatallah, Seyed-Mehdi-Reza Beheshti, Elisa Bertino, and Norman Foo. 2012. Reputation management in crowdsourcing systems. In CollaborateCom 2012. 664–671.
-
(2012)
CollaborateCom 2012
, pp. 664-671
-
-
Allahbakhsh, M.1
Ignjatovic, A.2
Benatallah, B.3
Beheshti, S.-M.-R.4
Bertino, E.5
Foo, N.6
-
10
-
-
84920512028
-
Harnessing implicit teamwork knowledge to improve quality in crowdsourcing processes
-
Mohammad Allahbakhsh, Samira Samimi, Hamid Reza Motahari-Nezhad, and Boualem Benatallah. 2014. Harnessing implicit teamwork knowledge to improve quality in crowdsourcing processes. In SOCA 2014. 17–24.
-
(2014)
SOCA 2014
, pp. 17-24
-
-
Allahbakhsh, M.1
Samimi, S.2
Motahari-Nezhad, H.R.3
Benatallah, B.4
-
11
-
-
84858211339
-
Collaborative workflow for crowdsourcing translation
-
Vamshi Ambati, Stephan Vogel, and Jaime Carbonell. 2012. Collaborative workflow for crowdsourcing translation. In CSCW 2012. 1191–1194.
-
(2012)
CSCW 2012
, pp. 1191-1194
-
-
Ambati, V.1
Vogel, S.2
Carbonell, J.3
-
12
-
-
84968834788
-
Discovering best teams for data leak-aware crowdsourcing in social networks
-
(2016), Article
-
Iheb Ben Amor, Salima Benbernou, Mourad Ouziri, Zaki Malik, and Brahim Medjahed. 2016. Discovering best teams for data leak-aware crowdsourcing in social networks. ACM Transactions on the Web 10, 1 (2016), Article 2, 27 pages.
-
(2016)
ACM Transactions on The Web
, vol.10
, Issue.1
, pp. 27
-
-
Amor, I.B.1
Benbernou, S.2
Ouziri, M.3
Malik, Z.4
Medjahed, B.5
-
14
-
-
84889603841
-
An analysis of crowd workers mistakes for specific and complex relevance assessment task
-
ACM
-
Jesse Anderton, Maryam Bashir, Virgil Pavlu, and Javed A. Aslam. 2013. An analysis of crowd workers mistakes for specific and complex relevance assessment task. In CIKM 2013. ACM, 1873–1876.
-
(2013)
CIKM 2013
, pp. 1873-1876
-
-
Anderton, J.1
Bashir, M.2
Pavlu, V.3
Aslam, J.A.4
-
15
-
-
84900431356
-
Effects of simultaneous and sequential work structures on distributed collaborative interdependent tasks
-
Paul André, Robert E. Kraut, and Aniket Kittur. 2014. Effects of simultaneous and sequential work structures on distributed collaborative interdependent tasks. In CHI 2014. 139–148.
-
(2014)
CHI 2014
, pp. 139-148
-
-
André, P.1
Kraut, R.E.2
Kittur, A.3
-
17
-
-
84908213945
-
Crowdsourcing for multiple-choice question answering
-
Bahadir Ismail Aydin, Yavuz Selim Yilmaz, Yaliang Li, Qi Li, Jing Gao, and Murat Demirbas. 2014. Crowdsourcing for multiple-choice question answering. In 26th IAAI Conference.
-
(2014)
26th IAAI Conference
-
-
Aydin, B.I.1
Yilmaz, Y.S.2
Li, Y.3
Li, Q.4
Gao, J.5
Demirbas, M.6
-
18
-
-
84996520882
-
Active content-based crowdsourcing task selection
-
Piyush Bansal, Carsten Eickhoff, and Thomas Hofmann. 2016. Active content-based crowdsourcing task selection. In CIKM 2016. 529–538.
-
(2016)
CIKM 2016
, pp. 529-538
-
-
Bansal, P.1
Eickhoff, C.2
Hofmann, T.3
-
19
-
-
70349690001
-
Methodologies for data quality assessment and improvement
-
2009
-
Carlo Batini, Cinzia Cappiello, Chiara Francalanci, and Andrea Maurino. 2009. Methodologies for data quality assessment and improvement. ACM Computing Surveys (CSUR) 41, 3 (2009), 16.
-
(2009)
ACM Computing Surveys (CSUR)
, vol.41
, Issue.3
, pp. 16
-
-
Batini, C.1
Cappiello, C.2
Francalanci, C.3
Maurino, A.4
-
21
-
-
78649604206
-
Soylent: A word processor with a crowd inside
-
ACM
-
Michael S. Bernstein, Greg Little, Robert C. Miller, Björn Hartmann, Mark S. Ackerman, David R. Karger, David Crowell, and Katrina Panovich. 2010. Soylent: A word processor with a crowd inside. In UIST 2010. ACM, 313–322.
-
(2010)
UIST 2010
, pp. 313-322
-
-
Bernstein, M.S.1
Little, G.2
Miller, R.C.3
Hartmann, B.4
Ackerman, M.S.5
Karger, D.R.6
Crowell, D.7
Panovich, K.8
-
22
-
-
78649587763
-
VizWiz: Nearly real-time answers to visual questions
-
Jeffrey P. Bigham, Chandrika Jayant, Hanjie Ji, Greg Little, Andrew Miller, Robert C. Miller, Robin Miller, Aubrey Tatarowicz, Brandyn White, Samual White, and Tom Yeh. 2010. VizWiz: Nearly real-time answers to visual questions. In UIST 2010 (UIST’10). 333–342.
-
(2010)
UIST 2010 (UIST’10)
, pp. 333-342
-
-
Bigham, J.P.1
Jayant, C.2
Ji, H.3
Little, G.4
Miller, A.5
Miller, R.C.6
Miller, R.7
Tatarowicz, A.8
White, B.9
White, S.10
Yeh, T.11
-
24
-
-
84991466889
-
Location privacy for crowdsourcing applications
-
Ioannis Boutsis and Vana Kalogeraki. 2016. Location privacy for crowdsourcing applications. In UbiComp 2016. 694–705. DOI:http://dx.doi.org/10.1145/2971648.2971741
-
(2016)
UbiComp 2016
, pp. 694-705
-
-
Boutsis, I.1
Kalogeraki, V.2
-
25
-
-
84860842462
-
Answering search queries with crowdsearcher
-
Alessandro Bozzon, Marco Brambilla, and Stefano Ceri. 2012. Answering search queries with crowdsearcher. In WWW 2012. 1009–1018.
-
(2012)
WWW 2012
, pp. 1009-1018
-
-
Bozzon, A.1
Brambilla, M.2
Ceri, S.3
-
28
-
-
84907031159
-
From labor to trader: Opinion elicitation via online crowds as a market
-
Caleb Chen Cao, Lei Chen, and Hosagrahar Visvesvaraya Jagadish. 2014. From labor to trader: Opinion elicitation via online crowds as a market. In KDD 2014. 1067–1076.
-
(2014)
KDD 2014
, pp. 1067-1076
-
-
Cao, C.C.1
Chen, L.2
Jagadish, H.V.3
-
29
-
-
79960271433
-
A quality model for mashups
-
Cinzia Cappiello, Florian Daniel, Agnes Koschmider, Maristella Matera, and Matteo Picozzi. 2011. A quality model for mashups. In ICWE 2011. 137–151. DOI:http://dx.doi.org/10.1007/978-3-642-22233-7_10
-
(2011)
ICWE 2011
, pp. 137-151
-
-
Cappiello, C.1
Daniel, F.2
Koschmider, A.3
Matera, M.4
Picozzi, M.5
-
30
-
-
84908206322
-
Modal ranking: A uniquely robust voting rule
-
Ioannis Caragiannis, Ariel D. Procaccia, and Nisarg Shah. 2014. Modal ranking: A uniquely robust voting rule. In AAAI 2014. 616–622.
-
(2014)
AAAI 2014
, pp. 616-622
-
-
Caragiannis, I.1
Procaccia, A.D.2
Shah, N.3
-
31
-
-
84893185641
-
Efficient crowdsourcing contests
-
Ruggiero Cavallo and Shaili Jain. 2012. Efficient crowdsourcing contests. In Proceedings of AAMAS 2012 - Volume 2. 677–686.
-
(2012)
Proceedings of AAMAS 2012
, vol.2
, pp. 677-686
-
-
Cavallo, R.1
Jain, S.2
-
33
-
-
84901398917
-
Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing
-
Xi Chen, Qihang Lin, and Dengyong Zhou. 2013. Optimistic knowledge gradient policy for optimal budget allocation in crowdsourcing. In ICML 2013, Vol. 28, 64–72.
-
(2013)
ICML 2013
, vol.28
, pp. 64-72
-
-
Chen, X.1
Lin, Q.2
Zhou, D.3
-
34
-
-
84951038767
-
Measuring crowdsourcing effort with error-time curves
-
ACM, New York
-
Justin Cheng, Jaime Teevan, and Michael S. Bernstein. 2015a. Measuring crowdsourcing effort with error-time curves. In CHI 2015. ACM, New York, 1365–1374.
-
(2015)
CHI 2015
, pp. 1365-1374
-
-
Cheng, J.1
Teevan, J.2
Bernstein, M.S.3
-
35
-
-
84951025363
-
Break it down: A comparison of macro- And microtasks
-
Justin Cheng, Jaime Teevan, Shamsi T. Iqbal, and Michael S. Bernstein. 2015b. Break it down: A comparison of macro- and microtasks. In CHI 2015. 4061–4064. http://doi.acm.org/10.1145/2702123.2702146
-
(2015)
CHI 2015
, pp. 4061-4064
-
-
Cheng, J.1
Teevan, J.2
Iqbal, S.T.3
Bernstein, M.S.4
-
36
-
-
77956240720
-
Task search in a human computation market
-
Lydia B. Chilton, John J. Horton, Robert C. Miller, and Shiri Azenkot. 2010. Task search in a human computation market. In HCOMP 2010. 1–9. http://doi.acm.org/10.1145/1837885.1837889
-
(2010)
HCOMP 2010
, pp. 1-9
-
-
Chilton, L.B.1
Horton, J.J.2
Miller, R.C.3
Azenkot, S.4
-
37
-
-
84968909143
-
And now for something completely different: Improving crowdsourcing workflows with micro-diversions
-
ACM, New York
-
Peng Dai, Jeffrey M. Rzeszotarski, Praveen Paritosh, and Ed H. Chi. 2015. And now for something completely different: Improving crowdsourcing workflows with micro-diversions. In CSCW 2015. ACM, New York, 628–638. DOI:http://dx. doi.org/10.1145/2675133.2675260
-
(2015)
CSCW 2015
, pp. 628-638
-
-
Dai, P.1
Rzeszotarski, J.M.2
Paritosh, P.3
Chi, E.H.4
-
39
-
-
84958231872
-
Exploiting document content for efficient aggregation of crowdsourcing votes
-
Martin Davtyan, Carsten Eickhoff, and Thomas Hofmann. 2015. Exploiting document content for efficient aggregation of crowdsourcing votes. In CIKM 2015. 783–790.
-
(2015)
CIKM 2015
, pp. 783-790
-
-
Davtyan, M.1
Eickhoff, C.2
Hofmann, T.3
-
40
-
-
79961090049
-
Reputation systems for open collaboration
-
2011
-
Luca De Alfaro, Ashutosh Kulshreshtha, Ian Pye, and Bo Thomas Adler. 2011. Reputation systems for open collaboration. Communications of the ACM 54, 8 (2011), 81–87.
-
(2011)
Communications of The ACM
, vol.54
, Issue.8
, pp. 81-87
-
-
De Alfaro, L.1
Kulshreshtha, A.2
Pye, I.3
Adler, B.T.4
-
42
-
-
84884591987
-
Large-scale linked data integration using probabilistic reasoning and crowdsourcing
-
2013
-
Gianluca Demartini, Djellel Eddine Difallah, and Philippe Cudré-Mauroux. 2013. Large-scale linked data integration using probabilistic reasoning and crowdsourcing. The VLDB Journal 22, 5 (2013), 665–687.
-
(2013)
The VLDB Journal
, vol.22
, Issue.5
, pp. 665-687
-
-
Demartini, G.1
Difallah, D.E.2
Cudré-Mauroux, P.3
-
43
-
-
85072829610
-
Scaling-up the crowd: Micro-task pricing schemes for worker retention and latency improvement
-
Djellel Eddine Difallah, Michele Catasta, Gianluca Demartini, and Philippe Cudré-Mauroux. 2014. Scaling-up the crowd: Micro-task pricing schemes for worker retention and latency improvement. In HCOMP 2014.
-
(2014)
HCOMP 2014
-
-
Difallah, D.E.1
Catasta, M.2
Demartini, G.3
Cudré-Mauroux, P.4
-
44
-
-
84887483983
-
Mechanical cheat: Spamming schemes and adversarial techniques on crowdsourcing platforms
-
Djellel Eddine Difallah, Gianluca Demartini, and Philippe Cudré-Mauroux. 2012. Mechanical cheat: Spamming schemes and adversarial techniques on crowdsourcing platforms. In CrowdSearch. 26–30.
-
(2012)
CrowdSearch
, pp. 26-30
-
-
Difallah, D.E.1
Demartini, G.2
Cudré-Mauroux, P.3
-
45
-
-
84891921026
-
Pick-a-crowd: Tell me what you like, and I’ll tell you what to do
-
Djellel Eddine Difallah, Gianluca Demartini, and Philippe Cudré-Mauroux. 2013. Pick-a-crowd: Tell me what you like, and I’ll tell you what to do. In WWW 2013. 367–374.
-
(2013)
WWW 2013
, pp. 367-374
-
-
Difallah, D.E.1
Demartini, G.2
Cudré-Mauroux, P.3
-
46
-
-
85011386598
-
Scheduling human intelligence tasks in multi-tenant crowd-powered systems
-
Djellel Eddine Difallah, Gianluca Demartini, and Philippe Cudré-Mauroux. 2016. Scheduling human intelligence tasks in multi-tenant crowd-powered systems. In WWW 2016. 855–865.
-
(2016)
WWW 2016
, pp. 855-865
-
-
Difallah, D.E.1
Demartini, G.2
Cudré-Mauroux, P.3
-
47
-
-
84900457719
-
Combining crowdsourcing and learning to improve engagement and performance
-
Mira Dontcheva, Robert R. Morris, Joel R. Brandt, and Elizabeth M. Gerber. 2014. Combining crowdsourcing and learning to improve engagement and performance. In CHI 2014. 3379–3388.
-
(2014)
CHI 2014
, pp. 3379-3388
-
-
Dontcheva, M.1
Morris, R.R.2
Brandt, J.R.3
Gerber, E.M.4
-
48
-
-
84858168620
-
Flexible Social Workflows: Collaborations as human architecture
-
March 2012
-
Christoph Dorn, R. N. Taylor, and S. Dustdar. 2012. Flexible Social Workflows: Collaborations as human architecture. IEEE Internet Computing 16, 2 (March 2012), 72–77.
-
(2012)
IEEE Internet Computing
, vol.16
, Issue.2
, pp. 72-77
-
-
Dorn, C.1
Taylor, R.N.2
Dustdar, S.3
-
49
-
-
85014764511
-
Toward a learning science for complex crowdsourcing tasks
-
ACM, New York, NY, USA
-
Shayan Doroudi, Ece Kamar, Emma Brunskill, and Eric Horvitz. 2016. Toward a learning science for complex crowdsourcing tasks. In CHI 2016. ACM, New York, NY, USA, 2623–2634.
-
(2016)
CHI 2016
, pp. 2623-2634
-
-
Doroudi, S.1
Kamar, E.2
Brunskill, E.3
Horvitz, E.4
-
51
-
-
85167627193
-
MicroTalk: Using argumentation to improve crowdsourcing accuracy
-
Ryan Drapeau, Lydia B. Chilton, Jonathan Bragg, and Daniel S. Weld. 2016. MicroTalk: Using argumentation to improve crowdsourcing accuracy. In HCOMP 2016.
-
(2016)
HCOMP 2016
-
-
Drapeau, R.1
Chilton, L.B.2
Bragg, J.3
Weld, D.S.4
-
52
-
-
84875647836
-
Increasing cheat robustness of crowdsourcing tasks
-
2013
-
Carsten Eickhoff and Arjen P. de Vries. 2013. Increasing cheat robustness of crowdsourcing tasks. Information Retrieval 16, 2 (2013), 121–137.
-
(2013)
Information Retrieval
, vol.16
, Issue.2
, pp. 121-137
-
-
Eickhoff, C.1
De Vries, A.P.2
-
53
-
-
84866626350
-
Quality through flow and immersion: Gamifying crowdsourced relevance assessments
-
Carsten Eickhoff, Christopher G. Harris, Arjen P. de Vries, and Padmini Srinivasan. 2012. Quality through flow and immersion: Gamifying crowdsourced relevance assessments. In SIGIR 2012. 871–880.
-
(2012)
SIGIR 2012
, pp. 871-880
-
-
Eickhoff, C.1
Harris, C.G.2
De Vries, A.P.3
Srinivasan, P.4
-
54
-
-
85206035168
-
A majority of wrongs doesn’t make it right - On crowdsourcing quality for skewed domain tasks
-
Kinda El Maarry, Ulrich Güntzer, and Wolf-Tilo Balke. 2015. A majority of wrongs doesn’t make it right - On crowdsourcing quality for skewed domain tasks. In WISE 2015. 293–308.
-
(2015)
WISE
, vol.2015
, pp. 293-308
-
-
Maarry, K.E.1
Güntzer, U.2
Balke, W.-T.3
-
55
-
-
85167441521
-
Incentives to counter bias in human computation
-
Boi Faltings, Radu Jurca, Pearl Pu, and Bao Duy Tran. 2014. Incentives to counter bias in human computation. In HCOMP 2014. http://www.aaai.org/ocs/index.php/HCOMP/HCOMP14/paper/view/8945.
-
(2014)
HCOMP 2014
-
-
Faltings, B.1
Jurca, R.2
Pu, P.3
Tran, B.D.4
-
57
-
-
84896861075
-
What’s the right price? Pricing tasks for finishing on time
-
2011
-
Siamak Faradani, Björn Hartmann, and Panagiotis G. Ipeirotis. 2011. What’s the right price? Pricing tasks for finishing on time.Human Computation 11 (2011).
-
(2011)
Human Computation
, vol.11
-
-
Faradani, S.1
Hartmann, B.2
Ipeirotis, P.G.3
-
58
-
-
84977597084
-
Please stay vs let’s play: Social pressure incentives in paid collaborative crowdsourcing
-
Oluwaseyi Feyisetan and Elena Simperl. 2016. Please stay vs let’s play: Social pressure incentives in paid collaborative crowdsourcing. In ICWE 2016. 405–412.
-
(2016)
ICWE 2016
, pp. 405-412
-
-
Feyisetan, O.1
Simperl, E.2
-
59
-
-
84939609466
-
Improving paid microtasks through gamification and adaptive furtherance incentives
-
Oluwaseyi Feyisetan, Elena Simperl, Max Van Kleek, and Nigel Shadbolt. 2015. Improving paid microtasks through gamification and adaptive furtherance incentives. In WWW 2015. 333–343.
-
(2015)
WWW 2015
, pp. 333-343
-
-
Feyisetan, O.1
Simperl, E.2
Van Kleek, M.3
Shadbolt, N.4
-
60
-
-
84865730611
-
A set of measures of centrality based on betweenness
-
1977
-
Linton C. Freeman. 1977. A set of measures of centrality based on betweenness. Sociometry (1977), 35–41.
-
(1977)
Sociometry
, pp. 35-41
-
-
Freeman, L.C.1
-
61
-
-
84940423017
-
Understanding malicious behavior in crowdsourcing platforms: The case of online surveys
-
Ujwal Gadiraju, Ricardo Kawase, Stefan Dietze, and Gianluca Demartini. 2015. Understanding malicious behavior in crowdsourcing platforms: The case of online surveys. In CHI 2015, Vol. 15.
-
(2015)
CHI 2015
, vol.15
-
-
Gadiraju, U.1
Kawase, R.2
Dietze, S.3
Demartini, G.4
-
62
-
-
84995751216
-
Boomerang: Rebounding the consequences of reputation feedback on crowdsourcing platforms
-
Snehalkumar (Neil) S. Gaikwad, Durim Morina, Adam Ginzberg, Catherine Mullings, Shirish Goyal, Dilrukshi Gamage, Christopher Diemert, Mathias Burton, Sharon Zhou, Mark Whiting, Karolina Ziulkoski, Alipta Ballav, Aaron Gilbee, Senadhipathige S. Niranga, Vibhor Sehgal, Jasmine Lin, Leonardy Kristianto, Angela Richmond-Fuller, Jeff Regino, Nalin Chhibber, Dinesh Majeti, Sachin Sharma, Kamila Mananova, Dinesh Dhakal, William Dai, Victoria Purynova, Samarth Sandeep, Varshine Chandrakanthan, Tejas Sarma, Sekandar Matin, Ahmed Nasser, Rohit Nistala, Alexander Stolzoff, Kristy Milland, Vinayak Mathur, Rajan Vaish, and Michael S. Bernstein. 2016. Boomerang: Rebounding the consequences of reputation feedback on crowdsourcing platforms. In UIST 2016. 625–637.
-
(2016)
UIST 2016
, pp. 625-637
-
-
Neil, S.1
Gaikwad, S.2
Morina, D.3
Ginzberg, A.4
Mullings, C.5
Goyal, S.6
Gamage, D.7
Diemert, C.8
Burton, M.9
Zhou, S.10
Whiting, M.11
Ziulkoski, K.12
Ballav, A.13
Gilbee, A.14
Niranga, S.S.15
Sehgal, V.16
Lin, J.17
Kristianto, L.18
Richmond-Fuller, A.19
Regino, J.20
Chhibber, N.21
Majeti, D.22
Sharma, S.23
Mananova, K.24
Dhakal, D.25
Dai, W.26
Purynova, V.27
Sandeep, S.28
Chandrakanthan, V.29
Sarma, T.30
Matin, S.31
Nasser, A.32
Nistala, R.33
Stolzoff, A.34
Milland, K.35
Mathur, V.36
Vaish, R.37
Bernstein, M.S.38
more..
-
63
-
-
85044475619
-
Exact exponent in optimal rates for crowdsourcing
-
Chao Gao, Yu Lu, and Denny Zhou. 2016. Exact exponent in optimal rates for crowdsourcing. In ICML 2016. 603–611.
-
(2016)
ICML 2016
, pp. 603-611
-
-
Gao, C.1
Lu, Y.2
Zhou, D.3
-
64
-
-
84871066615
-
Map to humans and reduce error: Crowdsourcing for deduplication applied to digital libraries
-
ACM
-
Mihai Georgescu, Dang Duc Pham, Claudiu S. Firan, Wolfgang Nejdl, and Julien Gaugaz. 2012. Map to humans and reduce error: Crowdsourcing for deduplication applied to digital libraries. In CIKM 2012. ACM, 1970–1974.
-
(2012)
CIKM 2012
, pp. 1970-1974
-
-
Georgescu, M.1
Pham, D.D.2
Firan, C.S.3
Nejdl, W.4
Gaugaz, J.5
-
65
-
-
84874863913
-
Quality control mechanisms for crowdsourcing: Peer review, arbitration, & expertise at familysearch indexing
-
Derek L. Hansen, Patrick J. Schone, Douglas Corey, Matthew Reid, and Jake Gehring. 2013. Quality control mechanisms for crowdsourcing: Peer review, arbitration, & expertise at familysearch indexing. In CSCW 2013. 649–660.
-
(2013)
CSCW 2013
, pp. 649-660
-
-
Hansen, D.L.1
Schone, P.J.2
Corey, D.3
Reid, M.4
Gehring, J.5
-
66
-
-
84877979429
-
Combining crowdsourcing and google street view to identify street-level accessibility problems
-
Kotaro Hara, Vicki Le, and Jon Froehlich. 2013. Combining crowdsourcing and google street view to identify street-level accessibility problems. In CHI 2013. 631–640.
-
(2013)
CHI 2013
, pp. 631-640
-
-
Hara, K.1
Le, V.2
Froehlich, J.3
-
67
-
-
59449103450
-
Towards a theory of user judgment of aesthetics and user interface quality
-
2008
-
Jan Hartmann, Alistair Sutcliffe, and Antonella De Angeli. 2008. Towards a theory of user judgment of aesthetics and user interface quality. ACM Transactions on Computer-Human Interaction 15, 4 (2008), 15.
-
(2008)
ACM Transactions on Computer-Human Interaction
, vol.15
, Issue.4
, pp. 15
-
-
Hartmann, J.1
Sutcliffe, A.2
De Angeli, A.3
-
68
-
-
85014781330
-
A glimpse far into the future: Understanding long-term crowd worker quality
-
Kenji Hata, Ranjay Krishna, Li Fei-Fei, and Michael S. Bernstein. 2017. A glimpse far into the future: Understanding long-term crowd worker quality. In CSCW 2017. 889–901.
-
(2017)
CSCW 2017
, pp. 889-901
-
-
Hata, K.1
Krishna, R.2
Fei-Fei, L.3
Bernstein, M.S.4
-
69
-
-
77953992027
-
Crowdsourcing graphical perception: Using Mechanical Turk to assess visualization design
-
Jeffrey Heer and Michael Bostock. 2010. Crowdsourcing graphical perception: Using Mechanical Turk to assess visualization design. In CHI 2010. 203–212.
-
(2010)
CHI 2010
, pp. 203-212
-
-
Heer, J.1
Bostock, M.2
-
70
-
-
84862102144
-
CommunitySourcing: Engaging local crowds to perform expert work via physical kiosks
-
Kurtis Heimerl, Brian Gawalt, Kuang Chen, Tapan Parikh, and Björn Hartmann. 2012. CommunitySourcing: Engaging local crowds to perform expert work via physical kiosks. In CHI 2012. 1539–1548.
-
(2012)
CHI 2012
, pp. 1539-1548
-
-
Heimerl, K.1
Gawalt, B.2
Chen, K.3
Parikh, T.4
Hartmann, B.5
-
71
-
-
0031166786
-
Software quality and the capability maturity model
-
1997
-
James Herbsleb, David Zubrow, Dennis Goldenson, Will Hayes, and Mark Paulk. 1997. Software quality and the capability maturity model. Communications of the ACM 40, 6 (1997), 30–40.
-
(1997)
Communications of The ACM
, vol.40
, Issue.6
, pp. 30-40
-
-
Herbsleb, J.1
Zubrow, D.2
Goldenson, D.3
Hayes, W.4
Paulk, M.5
-
72
-
-
80054054806
-
Turkalytics: Analytics for human computation
-
Paul Heymann and Hector Garcia-Molina. 2011. Turkalytics: Analytics for human computation. In WWW 2011. 477–486.
-
(2011)
WWW 2011
, pp. 477-486
-
-
Heymann, P.1
Garcia-Molina, H.2
-
73
-
-
85018921016
-
Eliciting categorical data for optimal aggregation
-
Curran Associates, Inc
-
Chien-Ju Ho, Rafael Frongillo, and Yiling Chen. 2016. Eliciting categorical data for optimal aggregation. In NIPS 2016. Curran Associates, Inc., 2450–2458.
-
(2016)
NIPS 2016
, pp. 2450-2458
-
-
Ho, C.-J.1
Frongillo, R.2
Chen, Y.3
-
74
-
-
84968835134
-
Incentivizing high quality crowdwork
-
Chien-Ju Ho, Aleksandrs Slivkins, Siddharth Suri, and Jennifer Wortman Vaughan. 2015. Incentivizing high quality crowdwork. In WWW 2015. 419–429. DOI:http://dx.doi.org/10.1145/2736277.2741102
-
(2015)
WWW 2015
, pp. 419-429
-
-
Ho, C.-J.1
Slivkins, A.2
Suri, S.3
Vaughan, J.W.4
-
75
-
-
84886397297
-
Online task assignment in crowdsourcing markets
-
Chien-Ju Ho and Jennifer Wortman Vaughan. 2012. Online task assignment in crowdsourcing markets. In AAAI, Vol. 12. 45–51.
-
(2012)
AAAI
, vol.12
, pp. 45-51
-
-
Ho, C.-J.1
Vaughan, J.W.2
-
76
-
-
84912023144
-
Situated crowdsourcing using a market model
-
ACM
-
Simo Hosio, Jorge Goncalves, Vili Lehdonvirta, Denzil Ferreira, and Vassilis Kostakos. 2014. Situated crowdsourcing using a market model. In UIST 2014. ACM, 55–64.
-
(2014)
UIST 2014
, pp. 55-64
-
-
Hosio, S.1
Goncalves, J.2
Lehdonvirta, V.3
Ferreira, D.4
Kostakos, V.5
-
77
-
-
84907462746
-
Crowdsourcing quality-of-experience assessments
-
Sept. 2014
-
Tobias Hossfeld, Christian Keimel, and Christian Timmerer. 2014. Crowdsourcing quality-of-experience assessments. Computer 47, 9 (Sept. 2014), 98–102.
-
(2014)
Computer
, vol.47
, Issue.9
, pp. 98-102
-
-
Hossfeld, T.1
Keimel, C.2
Timmerer, C.3
-
78
-
-
33847246935
-
The rise of crowdsourcing
-
June 2006
-
Jeff. Howe. 2006. The rise of crowdsourcing. Wired (June 2006).
-
(2006)
Wired
-
-
Howe, J.1
-
79
-
-
84862095677
-
Deploying MonoTrans widgets in the wild
-
Chang Hu, Philip Resnik, Yakov Kronrod, and Benjamin Bederson. 2012. Deploying MonoTrans widgets in the wild. In CHI 2012. 2935–2938.
-
(2012)
CHI 2012
, pp. 2935-2938
-
-
Hu, C.1
Resnik, P.2
Kronrod, Y.3
Bederson, B.4
-
80
-
-
84877989068
-
Don’t hide in the crowd!: Increasing social transparency between peer workers improves crowdsourcing outcomes
-
Shih-Wen Huang and Wai-Tat Fu. 2013a. Don’t hide in the crowd!: Increasing social transparency between peer workers improves crowdsourcing outcomes. In CHI 2013. 621–630.
-
(2013)
CHI 2013
, pp. 621-630
-
-
Huang, S.-W.1
Fu, W.-T.2
-
81
-
-
84874875815
-
Enhancing reliability using peer consistency evaluation in human computation
-
Shih-Wen Huang and Wai-Tat Fu. 2013b. Enhancing reliability using peer consistency evaluation in human computation. In CSCW 2013. 639–648.
-
(2013)
CSCW 2013
, pp. 639-648
-
-
Huang, S.-W.1
Fu, W.-T.2
-
82
-
-
84883094853
-
BATC: A benchmark for aggregation techniques in crowdsourcing
-
Nguyen Quoc Viet Hung, Nguyen Thanh Tam, Ngoc Tran Lam, and Karl Aberer. 2013a. BATC: A benchmark for aggregation techniques in crowdsourcing. In SIGIR 2013. 1079–1080.
-
(2013)
SIGIR 2013
, pp. 1079-1080
-
-
Hung, N.Q.V.1
Tam, N.T.2
Lam, N.T.3
Aberer, K.4
-
83
-
-
84887447348
-
An evaluation of aggregation techniques in crowdsourcing
-
Springer
-
Nguyen Quoc Viet Hung, Nguyen Thanh Tam, Lam Ngoc Tran, and Karl Aberer. 2013b. An evaluation of aggregation techniques in crowdsourcing. In WISE 2013. Springer, 1–15.
-
(2013)
WISE 2013
, pp. 1-15
-
-
Hung, N.Q.V.1
Tam, N.T.2
Tran, L.N.3
Aberer, K.4
-
84
-
-
84957602716
-
Minimizing efforts in validating crowd answers
-
Nguyen Quoc Viet Hung, Duong Chi Thang, Matthias Weidlich, and Karl Aberer. 2015. Minimizing efforts in validating crowd answers. In SIGMOD 2015. 999–1014.
-
(2015)
SIGMOD 2015
, pp. 999-1014
-
-
Hung, N.Q.V.1
Thang, D.C.2
Weidlich, M.3
Aberer, K.4
-
86
-
-
85040661977
-
Interpretation of crowdsourced activities using provenance network analysis
-
Trung Dong Huynh, Mark Ebden, Matteo Venanzi, Sarvapali D. Ramchurn, Stephen J. Roberts, and Luc Moreau. 2013. Interpretation of crowdsourced activities using provenance network analysis. In HCOMP 2013.
-
(2013)
HCOMP 2013
-
-
Huynh, T.D.1
Ebden, M.2
Venanzi, M.3
Ramchurn, S.D.4
Roberts, S.J.5
Moreau, L.6
-
87
-
-
62949137213
-
An analytic approach to reputation ranking of participants in online transactions
-
Aleksandar Ignjatovic, Norman Foo, and Chung Tong Lee. 2008. An analytic approach to reputation ranking of participants in online transactions. In WI/IAT 2008. 587–590.
-
(2008)
WI/IAT 2008
, pp. 587-590
-
-
Ignjatovic, A.1
Foo, N.2
Lee, C.T.3
-
88
-
-
85014757422
-
Pay it backward: Per-task payments on crowdsourcing platforms reduce productivity
-
Kazushi Ikeda and Michael S. Bernstein. 2016. Pay it backward: Per-task payments on crowdsourcing platforms reduce productivity. In CHI 2016. 4111–4121. http://doi.acm.org/10.1145/2858036.2858327
-
(2016)
CHI 2016
, pp. 4111-4121
-
-
Ikeda, K.1
Bernstein, M.S.2
-
89
-
-
79958122721
-
Analyzing the Amazon Mechanical Turk marketplace
-
Dec. 2010
-
Panagiotis G. Ipeirotis. 2010. Analyzing the Amazon Mechanical Turk marketplace. XRDS 17, 2 (Dec. 2010), 16–21.
-
(2010)
XRDS
, vol.17
, Issue.2
, pp. 16-21
-
-
Ipeirotis, P.G.1
-
90
-
-
84909594574
-
Quizz: Targeted crowdsourcing with a billion (potential) users
-
Panagiotis G. Ipeirotis and Evgeniy Gabrilovich. 2014. Quizz: Targeted crowdsourcing with a billion (potential) users. In WWW 2014. 143–154.
-
(2014)
WWW 2014
, pp. 143-154
-
-
Ipeirotis, P.G.1
Gabrilovich, E.2
-
91
-
-
84877960208
-
Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk
-
Lilly C. Irani and M. Silberman. 2013. Turkopticon: Interrupting worker invisibility in Amazon Mechanical Turk. In CHI 2013. 611–620.
-
(2013)
CHI 2013
, pp. 611-620
-
-
Irani, L.C.1
Silberman, M.2
-
92
-
-
84937907918
-
Reputation-based worker filtering in crowdsourcing
-
Curran Associates, Inc
-
Srikanth Jagabathula, Lakshminarayanan Subramanian, and Ashwin Venkataraman. 2014. Reputation-based worker filtering in crowdsourcing. In NIPS 2014. Curran Associates, Inc., 2492–2500.
-
(2014)
NIPS 2014
, pp. 2492-2500
-
-
Jagabathula, S.1
Subramanian, L.2
Venkataraman, A.3
-
95
-
-
0003162764
-
The big five trait taxonomy: History, measurement, and theoretical perspectives
-
Guilford Press, New York
-
Oliver P. John and Sanjay Srivastava. 1999. The big five trait taxonomy: History, measurement, and theoretical perspectives. Handbook of Personality: Theory and Research (2nd ed.). Guilford Press, New York, 102–138.
-
(1999)
Handbook of Personality: Theory and Research (2nd Ed.)
, pp. 102-138
-
-
John, O.P.1
Srivastava, S.2
-
96
-
-
84903574585
-
Improving consensus accuracy via Z-score and weighted voting
-
Hyun Joon Jung and Matthew Lease. 2011. Improving consensus accuracy via Z-score and weighted voting. In Human Computation.
-
(2011)
Human Computation
-
-
Jung, H.J.1
Lease, M.2
-
97
-
-
84866631974
-
Inferring missing relevance judgments from crowd workers via probabilistic matrix factorization
-
Hyun Joon Jung and Matthew Lease. 2012. Inferring missing relevance judgments from crowd workers via probabilistic matrix factorization. In SIGIR 2012. 1095–1096.
-
(2012)
SIGIR 2012
, pp. 1095-1096
-
-
Jung, H.J.1
Lease, M.2
-
98
-
-
85167424088
-
Predicting next label quality: A time-series model of crowdwork
-
Hyun Joon Jung, Yubin Park, and Matthew Lease. 2014. Predicting next label quality: A time-series model of crowdwork. In HCOMP 2014.
-
(2014)
HCOMP 2014
-
-
Jung, H.J.1
Park, Y.2
Lease, M.3
-
99
-
-
4644328140
-
Measuring software product quality: A survey of ISO/IEC 9126
-
2004
-
Ho-Won Jung, Seung-Gweon Kim, and Chang-Shin Chung. 2004. Measuring software product quality: A survey of ISO/IEC 9126. IEEE Software 5 (2004), 88–92.
-
(2004)
IEEE Software
, vol.5
, pp. 88-92
-
-
Jung, H.-W.1
Kim, S.-G.2
Chung, C.-S.3
-
100
-
-
84963512374
-
Parting crowds: Characterizing divergent interpretations in crowdsourced annotation tasks
-
Sanjay Kairam and Jeffrey Heer. 2016. Parting crowds: Characterizing divergent interpretations in crowdsourced annotation tasks. In CSCW 2016. 1637–1648. DOI:http://dx.doi.org/10.1145/2818048.2820016
-
(2016)
CSCW 2016
, pp. 1637-1648
-
-
Kairam, S.1
Heer, J.2
-
101
-
-
85162483531
-
Iterative learning for reliable crowdsourcing systems
-
Curran Associates, Inc
-
David R. Karger, Sewoong Oh, and Devavrat Shah. 2011. Iterative learning for reliable crowdsourcing systems. In NIPS 2011. Curran Associates, Inc., 1953–1961.
-
(2011)
NIPS 2011
, pp. 1953-1961
-
-
Karger, D.R.1
Oh, S.2
Shah, D.3
-
102
-
-
84896842331
-
Budget-optimal task allocation for reliable crowdsourcing systems
-
2014
-
David R. Karger, Sewoong Oh, and Devavrat Shah. 2014. Budget-optimal task allocation for reliable crowdsourcing systems. Operations Research 62, 1 (2014), 1–24.
-
(2014)
Operations Research
, vol.62
, Issue.1
, pp. 1-24
-
-
Karger, D.R.1
Oh, S.2
Shah, D.3
-
103
-
-
84995473513
-
Investigating the impact of ‘emphasis frames’ and social loafing on player motivation and performance in a crowdsourcing game
-
Geoff Kaufman, Mary Flanagan, and Sukdith Punjasthitkul. 2016. Investigating the impact of ‘emphasis frames’ and social loafing on player motivation and performance in a crowdsourcing game. In CHI 2016. 4122–4128.
-
(2016)
CHI 2016
, pp. 4122-4128
-
-
Kaufman, G.1
Flanagan, M.2
Punjasthitkul, S.3
-
104
-
-
80052132873
-
Crowdsourcing for book search evaluation: Impact of hit design on comparative system ranking
-
Gabriella Kazai, Jaap Kamps, Marijn Koolen, and Natasa Milic-Frayling. 2011. Crowdsourcing for book search evaluation: Impact of hit design on comparative system ranking. In SIGIR 2011. 205–214.
-
(2011)
SIGIR 2011
, pp. 205-214
-
-
Kazai, G.1
Kamps, J.2
Koolen, M.3
Milic-Frayling, N.4
-
105
-
-
83055165986
-
Worker types and personality traits in crowdsourcing relevance labels
-
ACM
-
Gabriella Kazai, Jaap Kamps, and Natasa Milic-Frayling. 2011. Worker types and personality traits in crowdsourcing relevance labels. In CIKM 2011. ACM, 1941–1944.
-
(2011)
CIKM 2011
, pp. 1941-1944
-
-
Kazai, G.1
Kamps, J.2
Milic-Frayling, N.3
-
106
-
-
84871089459
-
The face of quality in crowdsourcing relevance labels: Demographics, personality and labeling accuracy
-
ACM
-
Gabriella Kazai, Jaap Kamps, and Natasa Milic-Frayling. 2012. The face of quality in crowdsourcing relevance labels: Demographics, personality and labeling accuracy. In CIKM 2012. ACM, 2583–2586.
-
(2012)
CIKM 2012
, pp. 2583-2586
-
-
Kazai, G.1
Kamps, J.2
Milic-Frayling, N.3
-
107
-
-
84964397358
-
Quality management in crowdsourcing using gold judges behavior
-
Gabriella Kazai and Imed Zitouni. 2016. Quality management in crowdsourcing using gold judges behavior. In WSDM 2016. 267–276. DOI:http://dx.doi.org/10.1145/2835776.2835835
-
(2016)
WSDM 2016
, pp. 267-276
-
-
Kazai, G.1
Zitouni, I.2
-
108
-
-
78649890126
-
Quality assurance for human-based electronic services: A decision matrix for choosing the right approach
-
Robert Kern, Hans Thies, Cordula Bauer, and Gerhard Satzger. 2010. Quality assurance for human-based electronic services: A decision matrix for choosing the right approach. In ICWE 2010 Workshops. 421–424.
-
(2010)
ICWE 2010 Workshops
, pp. 421-424
-
-
Kern, R.1
Thies, H.2
Bauer, C.3
Satzger, G.4
-
109
-
-
79952341648
-
Evaluating and improving the usability of mechanical turk for low-income workers in India
-
ACM
-
Shashank Khanna, Aishwarya Ratan, James Davis, and William Thies. 2010. Evaluating and improving the usability of mechanical turk for low-income workers in india. In 1st ACM Symposium on Computing for Development. ACM, 12.
-
(2010)
1st ACM Symposium on Computing for Development
, vol.12
-
-
Khanna, S.1
Ratan, A.2
Davis, J.3
Thies, W.4
-
110
-
-
84867773513
-
Predicting QoS in scheduled crowdsourcing
-
Roman Khazankin, Daniel Schall, and Schahram Dustdar. 2012. Predicting QoS in scheduled crowdsourcing. In CAISE 2012. 460–472.
-
(2012)
CAISE 2012
, pp. 460-472
-
-
Khazankin, R.1
Schall, D.2
Dustdar, S.3
-
112
-
-
79960271461
-
Crowdsourcing, collaboration and creativity
-
2010
-
Aniket Kittur. 2010. Crowdsourcing, collaboration and creativity. ACM Crossroads 17, 2 (2010), 22–26.
-
(2010)
ACM Crossroads
, vol.17
, Issue.2
, pp. 22-26
-
-
Kittur, A.1
-
114
-
-
84858202589
-
CrowdWeaver: Visually managing complex crowd work
-
Aniket Kittur, Susheel Khamkar, Paul André, and Robert Kraut. 2012. CrowdWeaver: Visually managing complex crowd work. In CSCW 2012. 1033–1036.
-
(2012)
CSCW 2012
, pp. 1033-1036
-
-
Kittur, A.1
Khamkar, S.2
André, P.3
Kraut, R.4
-
115
-
-
84874886217
-
The future of crowd work
-
Aniket Kittur, Jeffrey V. Nickerson, Michael Bernstein, Elizabeth Gerber, Aaron Shaw, John Zimmerman, Matt Lease, and John Horton. 2013. The future of crowd work. In CSCW 2013. 1301–1318.
-
(2013)
CSCW 2013
, pp. 1301-1318
-
-
Kittur, A.1
Nickerson, J.V.2
Bernstein, M.3
Gerber, E.4
Shaw, A.5
Zimmerman, J.6
Lease, M.7
Horton, J.8
-
116
-
-
80755168388
-
Crowdforge: Crowdsourcing complex work
-
Aniket Kittur, Boris Smus, Susheel Khamkar, and Robert E. Kraut. 2011. Crowdforge: Crowdsourcing complex work. In UIST’11. 43–52.
-
(2011)
UIST’11
, pp. 43-52
-
-
Kittur, A.1
Smus, B.2
Khamkar, S.3
Kraut, R.E.4
-
117
-
-
84968866065
-
Motivating multi-generational crowd workers in social-purpose work
-
Masatomo Kobayashi, Shoma Arita, Toshinari Itoko, Shin Saito, and Hironobu Takagi. 2015. Motivating multi-generational crowd workers in social-purpose work. In CSCW 2015. 1813–1824.
-
(2015)
CSCW 2015
, pp. 1813-1824
-
-
Kobayashi, M.1
Arita, S.2
Itoko, T.3
Saito, S.4
Takagi, H.5
-
118
-
-
84968817591
-
Getting more for less: Optimized crowdsourcing with dynamic tasks and goals
-
Ari Kobren, Chun How Tan, Panagiotis Ipeirotis, and Evgeniy Gabrilovich. 2015. Getting more for less: Optimized crowdsourcing with dynamic tasks and goals. In WWW 2015. 592–602.
-
(2015)
WWW 2015
, pp. 592-602
-
-
Kobren, A.1
Tan, C.H.2
Ipeirotis, P.3
Gabrilovich, E.4
-
119
-
-
84995473547
-
To play or not to play: Interactions between response quality and task complexity in games and paid crowdsourcing
-
Markus Krause and René F. Kizilcec. 2015. To play or not to play: Interactions between response quality and task complexity in games and paid crowdsourcing. In HCOMP 2015. 102–109.
-
(2015)
HCOMP 2015
, pp. 102-109
-
-
Krause, M.1
Kizilcec, R.F.2
-
120
-
-
85014738021
-
Embracing error to enable rapid crowdsourcing
-
Ranjay A. Krishna, Kenji Hata, Stephanie Chen, Joshua Kravitz, David A. Shamma, Li Fei-Fei, and Michael S. Bernstein. 2016. Embracing error to enable rapid crowdsourcing. In CHI 2016. 3167–3179.
-
(2016)
CHI 2016
, pp. 3167-3179
-
-
Krishna, R.A.1
Hata, K.2
Chen, S.3
Kravitz, J.4
Shamma, D.A.5
Fei-Fei, L.6
Bernstein, M.S.7
-
121
-
-
84887460956
-
A survey on service quality description
-
2013
-
Kyriakos Kritikos, Barbara Pernici, Pierluigi Plebani, Cinzia Cappiello, Marco Comuzzi, Salima Benrernou, Ivona Brandic, Attila Kertész, Michael Parkin, and Manuel Carro. 2013. A survey on service quality description. ACM Computing Surveys (CSUR) 46, 1 (2013), 1.
-
(2013)
ACM Computing Surveys (CSUR)
, vol.46
, Issue.1
, pp. 1
-
-
Kritikos, K.1
Pernici, B.2
Plebani, P.3
Cappiello, C.4
Comuzzi, M.5
Benrernou, S.6
Brandic, I.7
Kertész, A.8
Parkin, M.9
Carro, M.10
-
122
-
-
84963836261
-
Crowdsourcing processes: A survey of approaches and opportunities
-
2016
-
Pavel Kucherbaev, Florian Daniel, Stefano Tranquillini, and Maurizio Marchese. 2016b. Crowdsourcing processes: A survey of approaches and opportunities. IEEE Internet Computing 20, 2 (2016), 50–56.
-
(2016)
IEEE Internet Computing
, vol.20
, Issue.2
, pp. 50-56
-
-
Kucherbaev, P.1
Daniel, F.2
Tranquillini, S.3
Marchese, M.4
-
123
-
-
84963623512
-
ReLauncher: Crowdsourcing microtasks runtime controller
-
Pavel Kucherbaev, Florian Daniel, Stefano Tranquillini, and Maurizio Marchese. 2016a. ReLauncher: Crowdsourcing microtasks runtime controller. In CSCW 2016. 1607–1612.
-
(2016)
CSCW 2016
, pp. 1607-1612
-
-
Kucherbaev, P.1
Daniel, F.2
Tranquillini, S.3
Marchese, M.4
-
124
-
-
84858187792
-
Collaboratively crowdsourcing workflows with Turkomatic
-
ACM, New York
-
Anand Kulkarni, Matthew Can, and Björn Hartmann. 2012a. Collaboratively crowdsourcing workflows with Turkomatic. In CSCW’12. ACM, New York, 1003–1012.
-
(2012)
CSCW’12
, pp. 1003-1012
-
-
Kulkarni, A.1
Can, M.2
Hartmann, B.3
-
125
-
-
84867322288
-
MobileWorks: Designing for quality in a managed crowdsourcing architecture
-
Sept. 2012
-
Anand Kulkarni, Philipp Gutheim, Prayag Narula, David Rolnitzky, Tapan Parikh, and Björn Hartmann. 2012b. MobileWorks: Designing for quality in a managed crowdsourcing architecture. IEEE Internet Computing 16, 5 (Sept. 2012), 28–35.
-
(2012)
IEEE Internet Computing
, vol.16
, Issue.5
, pp. 28-35
-
-
Kulkarni, A.1
Gutheim, P.2
Narula, P.3
Rolnitzky, D.4
Parikh, T.5
Hartmann, B.6
-
127
-
-
84877983946
-
Warping time for more effective real-time crowdsourcing
-
Walter S. Lasecki, Christopher D. Miller, and Jeffrey P. Bigham. 2013. Warping time for more effective real-time crowdsourcing. In CHI 2013. 2033–2036.
-
(2013)
CHI 2013
, pp. 2033-2036
-
-
Lasecki, W.S.1
Miller, C.D.2
Bigham, J.P.3
-
129
-
-
84874844446
-
Real-time crowd labeling for deployable activity recognition
-
Walter S. Lasecki, Young Chol Song, Henry Kautz, and Jeffrey P. Bigham. 2013. Real-time crowd labeling for deployable activity recognition. In CSCW 2013. 1203–1212.
-
(2013)
CSCW 2013
, pp. 1203-1212
-
-
Lasecki, W.S.1
Song, Y.C.2
Kautz, H.3
Bigham, J.P.4
-
130
-
-
84898987373
-
Information extraction and manipulation threats in crowd-powered systems
-
ACM
-
Walter S. Lasecki, Jaime Teevan, and Ece Kamar. 2014. Information extraction and manipulation threats in crowd-powered systems. In CSCW 2014. ACM, 248–256.
-
(2014)
CSCW 2014
, pp. 248-256
-
-
Lasecki, W.S.1
Teevan, J.2
Kamar, E.3
-
131
-
-
33749170463
-
Information filtering via iterative refinement
-
2006
-
Paolo Laureti, Lionel Moret, Yi-Cheng Zhang, and Yi-Kuo Yu. 2006. Information filtering via iterative refinement. Euro-physics Letters 75 (2006), 1006.
-
(2006)
Euro-Physics Letters
, vol.75
, pp. 1006
-
-
Laureti, P.1
Moret, L.2
Zhang, Y.-C.3
Yu, Y.-K.4
-
132
-
-
85015044190
-
Curiosity killed the cat, but makes crowdwork better
-
ACM, New York
-
Edith Law, Ming Yin, Joslin Goh, Kevin Chen, Michael A. Terry, and Krzysztof Z. Gajos. 2016. Curiosity killed the cat, but makes crowdwork better. In CHI 2016. ACM, New York, 4098–4110.
-
(2016)
CHI 2016
, pp. 4098-4110
-
-
Law, E.1
Yin, M.2
Goh, J.3
Chen, K.4
Terry, M.A.5
Gajos, K.Z.6
-
134
-
-
0001925995
-
Emerging Perspectives on Service Marketing
-
Robert C. Lewis and Bernhard H. Booms. 1983. Emerging Perspectives on Service Marketing. American Marketing, 99–107.
-
(1983)
American Marketing
, pp. 99-107
-
-
Lewis, R.C.1
Booms, B.H.2
-
135
-
-
84909630641
-
The wisdom of minority: Discovering and targeting the right group of workers for crowdsourcing
-
Hongwei Li, Bo Zhao, and Ariel Fuxman. 2014. The wisdom of minority: Discovering and targeting the right group of workers for crowdsourcing. In WWW 2014. 165–176.
-
(2014)
WWW 2014
, pp. 165-176
-
-
Li, H.1
Zhao, B.2
Fuxman, A.3
-
136
-
-
84964378377
-
Crowdsourcing high quality labels with a tight budget
-
Qi Li, Fenglong Ma, Jing Gao, Lu Su, and Christopher J. Quinn. 2016. Crowdsourcing high quality labels with a tight budget. In WSDM 2016. 237–246.
-
(2016)
WSDM 2016
, pp. 237-246
-
-
Li, Q.1
Ma, F.2
Gao, J.3
Su, L.4
Quinn, C.J.5
-
137
-
-
84908151429
-
Signals in the silence: Models of implicit feedback in a recommendation system for crowdsourcing
-
Christopher H. Lin, Ece Kamar, and Eric Horvitz. 2014. Signals in the silence: Models of implicit feedback in a recommendation system for crowdsourcing. In AAAI 2014. 908–915.
-
(2014)
AAAI 2014
, pp. 908-915
-
-
Lin, C.H.1
Kamar, E.2
Horvitz, E.3
-
140
-
-
78649569877
-
Turkit: Human computation algorithms on Mechanical Turk
-
ACM, New York
-
Greg Little, Lydia B. Chilton, Max Goldman, and Robert C. Miller. 2010c. Turkit: Human computation algorithms on Mechanical Turk. In UIST’10. ACM, New York, 57–66.
-
(2010)
UIST’10
, pp. 57-66
-
-
Little, G.1
Chilton, L.B.2
Goldman, M.3
Miller, R.C.4
-
141
-
-
84867130103
-
TrueLabel + confusions: A spectrum of probabilistic models in analyzing multiple ratings
-
icml.cc/ Omnipress
-
Chao Liu and Yi-Min Wang. 2012. TrueLabel + confusions: A spectrum of probabilistic models in analyzing multiple ratings.. In ICML 2012. icml.cc/ Omnipress.
-
(2012)
ICML 2012
-
-
Liu, C.1
Wang, Y.-M.2
-
142
-
-
84898947098
-
Scoring workers in crowdsourcing: How many control questions are enough?
-
Curran Associates, Inc
-
Qiang Liu, Alexander T. Ihler, and Mark Steyvers. 2013. Scoring workers in crowdsourcing: How many control questions are enough? In NIPS 2013. Curran Associates, Inc., 1914–1922.
-
(2013)
NIPS 2013
, pp. 1914-1922
-
-
Liu, Q.1
Ihler, A.T.2
Steyvers, M.3
-
143
-
-
84959060536
-
Saving money while polling with interpoll using power analysis
-
Benjamin Livshits and Todd Mytkowicz. 2014. Saving money while polling with interpoll using power analysis. In HCOMP 2014.
-
(2014)
HCOMP 2014
-
-
Livshits, B.1
Mytkowicz, T.2
-
144
-
-
79959611982
-
The collective intelligence genome
-
2010
-
Thomas W. Malone, Robert Laubacher, and Chrysanthos Dellarocas. 2010. The collective intelligence genome. IEEE Engineering Management Review 38, 3 (2010), 38.
-
(2010)
IEEE Engineering Management Review
, vol.38
, Issue.3
, pp. 38
-
-
Malone, T.W.1
Laubacher, R.2
Dellarocas, C.3
-
145
-
-
85135855703
-
Volunteering versus work for pay: Incentives and tradeoffs in crowdsourcing
-
Andrew Mao, Ece Kamar, Yiling Chen, Eric Horvitz, Megan E. Schwamb, Chris J. Lintott, and Arfon M. Smith. 2013. Volunteering versus work for pay: Incentives and tradeoffs in crowdsourcing. In HCOMP 2013.
-
(2013)
HCOMP 2013
-
-
Mao, A.1
Kamar, E.2
Chen, Y.3
Horvitz, E.4
Schwamb, M.E.5
Lintott, C.J.6
Smith, A.M.7
-
146
-
-
84875121323
-
Counting with the crowd
-
VLDB Endowment
-
Adam Marcus, David Karger, Samuel Madden, Robert Miller, and Sewoong Oh. 2012. Counting with the crowd. In Proceedings of the VLDB Endowment, Vol. 6. VLDB Endowment, 109–120.
-
(2012)
Proceedings of The VLDB Endowment
, vol.6
, pp. 109-120
-
-
Marcus, A.1
Karger, D.2
Madden, S.3
Miller, R.4
Oh, S.5
-
147
-
-
84877995682
-
Using crowdsourcing to support pro-environmental Community Activism
-
Elaine Massung, David Coyle, Kirsten F. Cater, Marc Jay, and Chris Preist. 2013. Using crowdsourcing to support pro-environmental Community Activism. In CHI 2013. 371–380.
-
(2013)
CHI 2013
, pp. 371-380
-
-
Massung, E.1
Coyle, D.2
Cater, K.F.3
Jay, M.4
Preist, C.5
-
148
-
-
85020381392
-
Using hierarchical skills for optimized task assignment in knowledge-intensive crowdsourcing
-
Panagiotis Mavridis, David Gross-Amblard, and Zoltán Miklós. 2016. Using hierarchical skills for optimized task assignment in knowledge-intensive crowdsourcing. In WWW 2016. 843–853.
-
(2016)
WWW 2016
, pp. 843-853
-
-
Mavridis, P.1
Gross-Amblard, D.2
Miklós, Z.3
-
149
-
-
85107998643
-
Why is that relevant? Collecting annotator rationales for relevance judgments
-
Tyler McDonnell, Matthew Lease, Mucahid Kutlu, and Tamer Elsayed. 2016. Why is that relevant? Collecting annotator rationales for relevance judgments. In HCOMP 2016.
-
(2016)
HCOMP 2016
-
-
McDonnell, T.1
Lease, M.2
Kutlu, M.3
Elsayed, T.4
-
150
-
-
84962419909
-
Crowdlang: A programming language for the systematic exploration of human computation systems
-
Springer
-
Patrick Minder and Abraham Bernstein. 2012. Crowdlang: A programming language for the systematic exploration of human computation systems. In Social Informatics. Springer, 124–137.
-
(2012)
Social Informatics
, pp. 124-137
-
-
Minder, P.1
Bernstein, A.2
-
151
-
-
85015436553
-
Visual diversity and user interface quality
-
Aliaksei Miniukovich and Antonella De Angeli. 2015. Visual diversity and user interface quality. In British HCI 2015. 101–109.
-
(2015)
British HCI 2015
, pp. 101-109
-
-
Miniukovich, A.1
De Angeli, A.2
-
152
-
-
84916196366
-
Cross-task crowdsourcing
-
Kaixiang Mo, Erheng Zhong, and Qiang Yang. 2013. Cross-task crowdsourcing. In KDD 2013. 677–685.
-
(2013)
KDD 2013
, pp. 677-685
-
-
Mo, K.1
Zhong, E.2
Yang, Q.3
-
153
-
-
84867294283
-
Priming for better performance in microtask crowdsourcing environments
-
Sept. 2012
-
Robert R. Morris, Mira Dontcheva, and Elizabeth M. Gerber. 2012. Priming for better performance in microtask crowdsourcing environments. IEEE Internet Computing 16, 5 (Sept. 2012), 13–19.
-
(2012)
IEEE Internet Computing
, vol.16
, Issue.5
, pp. 13-19
-
-
Morris, R.R.1
Dontcheva, M.2
Gerber, E.M.3
-
154
-
-
84980378623
-
Identifying careless workers in crowdsourcing platforms: A game theory approach
-
Yashar Moshfeghi, Alvaro F. Huertas-Rosero, and Joemon M. Jose. 2016. Identifying careless workers in crowdsourcing platforms: A game theory approach. In ACM SIGIR 2016. 857–860.
-
(2016)
ACM SIGIR 2016
, pp. 857-860
-
-
Moshfeghi, Y.1
Huertas-Rosero, A.F.2
Jose, J.M.3
-
155
-
-
84933538091
-
Threats and trade-offs in resource critical crowdsourcing tasks over networks
-
Swaprava Nath, Pankaj Dayama, Dinesh Garg, Y. Narahari, and James Y. Zou. 2012. Threats and trade-offs in resource critical crowdsourcing tasks over networks. In AAAI 2012.
-
(2012)
AAAI 2012
-
-
Nath, S.1
Dayama, P.2
Garg, D.3
Narahari, Y.4
Zou, J.Y.5
-
156
-
-
85015067002
-
How one microtask affects another
-
Edward Newell and Derek Ruths. 2016. How one microtask affects another. In CHI 2016. 3155–3166.
-
(2016)
CHI 2016
, pp. 3155-3166
-
-
Newell, E.1
Ruths, D.2
-
157
-
-
84937543086
-
Using crowdsourcing to investigate perception of narrative similarity
-
ACM
-
Dong Nguyen, Dolf Trieschnigg, and Mariët Theune. 2014. Using crowdsourcing to investigate perception of narrative similarity. In CIKM 2014. ACM, 321–330.
-
(2014)
CIKM 2014
, pp. 321-330
-
-
Nguyen, D.1
Trieschnigg, D.2
Theune, M.3
-
159
-
-
84977471736
-
WeatherUSI: User-based weather crowdsourcing on public displays
-
Evangelos Niforatos, Ivan Elhart, and Marc Langheinrich. 2016. WeatherUSI: User-based weather crowdsourcing on public displays. In ICWE 2016. 567–570.
-
(2016)
ICWE 2016
, pp. 567-570
-
-
Niforatos, E.1
Elhart, I.2
Langheinrich, M.3
-
160
-
-
80755187853
-
Platemate: Crowdsourcing nutritional analysis from food photographs
-
ACM
-
Jon Noronha, Eric Hysen, Haoqi Zhang, and Krzysztof Z. Gajos. 2011. Platemate: Crowdsourcing nutritional analysis from food photographs. In UIST 2011. ACM, 1–12.
-
(2011)
UIST 2011
, pp. 1-12
-
-
Noronha, J.1
Hysen, E.2
Zhang, H.3
Gajos, K.Z.4
-
161
-
-
85032712776
-
Crowd access path optimization: Diversity matters
-
Besmira Nushi, Adish Singla, Anja Gruenheid, Erfan Zamanian, Andreas Krause, and Donald Kossmann. 2015. Crowd access path optimization: Diversity matters. In HCOMP 2015.
-
(2015)
HCOMP 2015
-
-
Nushi, B.1
Singla, A.2
Gruenheid, A.3
Zamanian, E.4
Krause, A.5
Kossmann, D.6
-
162
-
-
85050940841
-
Optimality of belief propagation for crowdsourced classification
-
JMLR.org
-
Jungseul Ok, Sewoong Oh, Jinwoo Shin, and Yung Yi. 2016. Optimality of belief propagation for crowdsourced classification. In ICML 2016. JMLR.org, 535–544.
-
(2016)
ICML 2016
, pp. 535-544
-
-
Ok, J.1
Oh, S.2
Shin, J.3
Yi, Y.4
-
163
-
-
84923421915
-
Programmatic gold: Targeted and scalable quality assurance in crowdsourcing
-
2011
-
David Oleson, Alexander Sorokin, Greg P. Laughlin, Vaughn Hester, John Le, and Lukas Biewald. 2011. Programmatic gold: Targeted and scalable quality assurance in crowdsourcing. HCOMP 2011 11, 11 (2011).
-
(2011)
HCOMP 2011
, vol.11
, pp. 11
-
-
Oleson, D.1
Sorokin, A.2
Laughlin, G.P.3
Hester, V.4
Le, J.5
Biewald, L.6
-
164
-
-
84977569709
-
On the invitation of expert contributors from online communities for knowledge crowdsourcing tasks
-
Jasper Oosterman and Geert-Jan Houben. 2016. On the invitation of expert contributors from online communities for knowledge crowdsourcing tasks. In ICWE 2016. 413–421.
-
(2016)
ICWE 2016
, pp. 413-421
-
-
Oosterman, J.1
Houben, G.-J.2
-
166
-
-
84899010623
-
Competing or aiming to be average?: Normification as a means of engaging digital volunteers
-
Chris Preist, Elaine Massung, and David Coyle. 2014. Competing or aiming to be average?: Normification as a means of engaging digital volunteers. In CSCW 2014. 1222–1233.
-
(2014)
CSCW 2014
, pp. 1222-1233
-
-
Preist, C.1
Massung, E.2
Coyle, D.3
-
167
-
-
84856351776
-
Strategies for community based crowdsourcing
-
Cindy Puah, Ahmad Zaki Abu Bakar, and Chu Wei Ching. 2011. Strategies for community based crowdsourcing. In ICRIIS 2011. 1–4.
-
(2011)
ICRIIS 2011
, pp. 1-4
-
-
Puah, C.1
Bakar, A.Z.A.2
Ching, C.W.3
-
168
-
-
84899033611
-
AskSheet: Efficient human computation for decision making with spreadsheets
-
Alexander J. Quinn and Benjamin B. Bederson. 2014. AskSheet: Efficient human computation for decision making with spreadsheets. In CSCW 2014. 1456–1466.
-
(2014)
CSCW 2014
, pp. 1456-1466
-
-
Quinn, A.J.1
Bederson, B.B.2
-
169
-
-
85112829112
-
Learning to scale payments in crowdsourcing with properboost
-
Goran Radanovic and Boi Faltings. 2016. Learning to scale payments in crowdsourcing with properboost. In HCOMP 2016.
-
(2016)
HCOMP 2016
-
-
Radanovic, G.1
Faltings, B.2
-
170
-
-
84937711788
-
Effective crowdsourcing for software feature ideation in online co-creation forums
-
Karthikeyan Rajasekharan, Aditya P. Mathur, and See-Kiong Ng. 2013. Effective crowdsourcing for software feature ideation in online co-creation forums. In SEKE 2013. 119–124.
-
(2013)
SEKE 2013
, pp. 119-124
-
-
Rajasekharan, K.1
Mathur, A.P.2
Ng, S.-K.3
-
171
-
-
84908448107
-
What will others choose? How a majority vote reward scheme can improve human computation in a spatial location identification task
-
Huaming Rao, Shih-Wen Huang, and Wai-Tat Fu. 2013. What will others choose? How a majority vote reward scheme can improve human computation in a spatial location identification task. In HCOMP 2013.
-
(2013)
HCOMP 2013
-
-
Rao, H.1
Huang, S.-W.2
Fu, W.-T.3
-
172
-
-
85162536261
-
Ranking annotators for crowdsourced labeling tasks
-
Curran Associates Inc
-
Vikas C. Raykar and Shipeng Yu. 2011. Ranking annotators for crowdsourced labeling tasks. In NIPS 2011. Curran Associates Inc., 1809–1817.
-
(2011)
NIPS 2011
, pp. 1809-1817
-
-
Raykar, V.C.1
Yu, S.2
-
173
-
-
84912020129
-
Expert crowdsourcing with flash teams
-
Daniela Retelny, Sébastien Robaszkiewicz, Alexandra To, Walter S. Lasecki, Jay Patel, Negar Rahmati, Tulsee Doshi, Melissa Valentine, and Michael S. Bernstein. 2014. Expert crowdsourcing with flash teams. In UIST. ACM, 75–85.
-
(2014)
UIST. ACM
, pp. 75-85
-
-
Retelny, D.1
Robaszkiewicz, S.2
To, A.3
Lasecki, W.S.4
Patel, J.5
Rahmati, N.6
Doshi, T.7
Valentine, M.8
Bernstein, M.S.9
-
174
-
-
85055086357
-
An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets
-
Jakob Rogstadius, Vassilis Kostakos, Aniket Kittur, Boris Smus, Jim Laredo, and Maja Vukovic. 2011. An assessment of intrinsic and extrinsic motivation on task performance in crowdsourcing markets. In ICWSM.
-
(2011)
ICWSM
-
-
Rogstadius, J.1
Kostakos, V.2
Kittur, A.3
Smus, B.4
Laredo, J.5
Vukovic, M.6
-
175
-
-
84937598773
-
Competitive game designs for improving the cost effectiveness of crowdsourcing
-
ACM
-
Markus Rokicki, Sergiu Chelaru, Sergej Zerr, and Stefan Siersdorfer. 2014. Competitive game designs for improving the cost effectiveness of crowdsourcing. In CICM 2014. ACM, 1469–1478.
-
(2014)
CICM 2014
, pp. 1469-1478
-
-
Rokicki, M.1
Chelaru, S.2
Zerr, S.3
Siersdorfer, S.4
-
176
-
-
84968763642
-
Groupsourcing: Team competition designs for crowdsourcing
-
Markus Rokicki, Sergej Zerr, and Stefan Siersdorfer. 2015. Groupsourcing: Team competition designs for crowdsourcing. In WWW 2015. 906–915.
-
(2015)
WWW 2015
, pp. 906-915
-
-
Rokicki, M.1
Zerr, S.2
Siersdorfer, S.3
-
177
-
-
84937968933
-
Task assignment optimization in knowledge-intensive crowdsourcing
-
2015
-
Senjuti Basu Roy, Ioanna Lykourentzou, Saravanan Thirumuruganathan, Sihem Amer-Yahia, and Gautam Das. 2015. Task assignment optimization in knowledge-intensive crowdsourcing. The VLDB Journal 24, 4 (2015), 467–491.
-
(2015)
The VLDB Journal
, vol.24
, Issue.4
, pp. 467-491
-
-
Roy, S.B.1
Lykourentzou, I.2
Thirumuruganathan, S.3
Amer-Yahia, S.4
Das, G.5
-
178
-
-
80755168394
-
Instrumenting the crowd: Using implicit behavioral measures to predict task performance
-
ACM
-
Jeffrey M. Rzeszotarski and Aniket Kittur. 2011. Instrumenting the crowd: Using implicit behavioral measures to predict task performance. In UIST 2011. ACM, 13–22.
-
(2011)
UIST 2011
, pp. 13-22
-
-
Rzeszotarski, J.M.1
Kittur, A.2
-
179
-
-
84869013273
-
CrowdScape: Interactively visualizing user behavior and output
-
ACM
-
Jeffrey M. Rzeszotarski and Aniket Kittur. 2012. CrowdScape: Interactively visualizing user behavior and output. In UIST 2012. ACM, 55–62.
-
(2012)
UIST 2012
, pp. 55-62
-
-
Rzeszotarski, J.M.1
Kittur, A.2
-
181
-
-
84892430704
-
Auction-based crowdsourcing supporting skill management
-
June 2013
-
Benjamin Satzger, Harald Psaier, Daniel Schall, and Schahram Dustdar. 2013. Auction-based crowdsourcing supporting skill management. Information Systems 38, 4 (June 2013), 547–560.
-
(2013)
Information Systems
, vol.38
, Issue.4
, pp. 547-560
-
-
Satzger, B.1
Psaier, H.2
Schall, D.3
Dustdar, S.4
-
182. Ognjen Scekic, Hong-Linh Truong, and Schahram Dustdar. 2013a. Incentives and rewarding in social computing. Communications of the ACM 56, 6 (2013), 72–82.
184. Daniel Schall, Benjamin Satzger, and Harald Psaier. 2014. Crowdsourcing tasks to social networks in BPEL4People. World Wide Web 17, 1 (2014), 1–32.
185. Daniel Schall, Florian Skopik, and Schahram Dustdar. 2012. Expert discovery and interactions in mixed service-oriented systems. IEEE Transactions on Services Computing 5, 2 (2012), 233–245.
186. Thimo Schulze, Dennis Nordheimer, and Martin Schader. 2013. Worker perception of quality assurance mechanisms in crowdsourcing and human computation markets. In AMCIS 2013.
187. Nihar Bhadresh Shah and Dengyong Zhou. 2015. Double or nothing: Multiplicative incentive mechanisms for crowdsourcing. In NIPS 2015. Curran Associates, Inc., 1–9.
188. Nihar Bhadresh Shah and Dengyong Zhou. 2016. No oops, you won’t do it again: Mechanisms for self-correction in crowdsourcing. In ICML 2016. 1–10.
189. Aashish Sheshadri and Matthew Lease. 2013. SQUARE: A benchmark for research on computing crowd consensus. In HCOMP 2013.
190. Yaron Singer and Manas Mittal. 2013. Pricing mechanisms for crowdsourcing markets. In WWW 2013. 1157–1166.
191. Adish Singla, Ilija Bogunovic, Gábor Bartók, Amin Karbasi, and Andreas Krause. 2014. Near-optimally teaching the crowd to classify. In ICML 2014. JMLR.org, II-154–II-162.
192. Klaas-Jan Stol and Brian Fitzgerald. 2014. Two’s company, three’s a crowd: A case study of crowdsourcing software development. In ICSE 2014. 187–198.
195. Oksana Tokarchuk, Roberta Cuel, and Marco Zamarian. 2012. Analyzing crowd labor and designing incentives for humans in the loop. IEEE Internet Computing 16, 5 (Sept. 2012), 45–51.
197. Long Tran-Thanh, Trung Dong Huynh, Avi Rosenfeld, Sarvapali D. Ramchurn, and Nicholas R. Jennings. 2015. Crowdsourcing complex workflows under budget constraints. In AAAI 2015. 1298–1304.
198. Antti Ukkonen, Behrouz Derakhshan, and Hannes Heikinheimo. 2015. Crowdsourced nonparametric density estimation using relative distances. In HCOMP 2015.
199. Rajan Vaish, Keith Wyngarden, Jingshu Chen, Brandon Cheung, and Michael S. Bernstein. 2014. Twitch crowdsourcing: Crowd contributions in short bursts of time. In CHI 2014. 3645–3654.
200. Norases Vesdapunt, Kedar Bellare, and Nilesh Dalvi. 2014. Crowdsourcing algorithms for entity resolution. Proceedings of the VLDB Endowment 7, 12 (2014), 1071–1082.
202. Luis von Ahn, Benjamin Maurer, Colin McMillen, David Abraham, and Manuel Blum. 2008. reCAPTCHA: Human-based character recognition via web security measures. Science 321, 5895 (2008), 1465–1468.
205. Bo Waggoner and Yiling Chen. 2014. Output agreement mechanisms and common knowledge. In HCOMP 2014.
206. Gang Wang, Christo Wilson, Xiaohan Zhao, Yibo Zhu, Manish Mohanlal, Haitao Zheng, and Ben Y. Zhao. 2012. Serf and turf: Crowdturfing for fun and profit. In WWW 2012. 679–688.
207. Fabian L. Wauthier and Michael I. Jordan. 2011. Bayesian bias mitigation for crowdsourcing. In NIPS 2011. Curran Associates, Inc., 1800–1808.
208. Mark E. Whiting, Dilrukshi Gamage, Snehalkumar (Neil) S. Gaikwad, Aaron Gilbee, Shirish Goyal, Alipta Ballav, Dinesh Majeti, Nalin Chhibber, Angela Richmond-Fuller, Freddie Vargus, Tejas Seshadri Sarma, Varshine Chandrakanthan, Teogenes Moura, Mohamed Hashim Salih, Gabriel Bayomi Tinoco Kalejaiye, Adam Ginzberg, Catherine A. Mullings, Yoni Dayan, Kristy Milland, Henrique Orefice, Jeff Regino, Sayna Parsi, Kunz Mainali, Vibhor Sehgal, Sekandar Matin, Akshansh Sinha, Rajan Vaish, and Michael S. Bernstein. 2017. Crowd guilds: Worker-led reputation and feedback on crowdsourcing platforms. In CSCW 2017. 1902–1913. DOI:http://dx.doi.org/10.1145/2998181.2998234
209. Wesley Willett, Jeffrey Heer, and Maneesh Agrawala. 2012. Strategies for crowdsourcing social data analysis. In CHI 2012. 227–236.
212. Jie Yang, Judith Redi, Gianluca Demartini, and Alessandro Bozzon. 2016. Modeling task complexity in crowdsourcing. In HCOMP 2016. 249–258.
213. Ming Yin, Yiling Chen, and Yu-An Sun. 2014. Monetary interventions in crowdsourcing task switching. In HCOMP 2014.
214. Lixiu Yu, Paul André, Aniket Kittur, and Robert Kraut. 2014. A comparison of social, learning, and financial strategies on crowd engagement and output quality. In CSCW 2014. 967–978.
215. Yi-Kuo Yu, Yi-Cheng Zhang, Paolo Laureti, and Lionel Moret. 2006. Decoding information from noisy, redundant, and intentionally distorted sources. Physica A: Statistical Mechanics and its Applications 371, 2 (2006), 732–744.
216. Man-Ching Yuen, Irwin King, and Kwong-Sak Leung. 2015. TaskRec: A task recommendation framework in crowdsourcing systems. Neural Processing Letters 41, 2 (2015), 223–238.
217. Jing Zhang, Xindong Wu, and Victor S. Sheng. 2015. Imbalanced multiple noisy labeling. IEEE Transactions on Knowledge and Data Engineering 27, 2 (2015), 489–503.
218. Zhou Zhao, Da Yan, Wilfred Ng, and Shi Gao. 2013. A transfer learning based framework of crowd-selection on Twitter. In KDD 2013. ACM, 1514–1517.
219. Haiyi Zhu, Steven P. Dow, Robert E. Kraut, and Aniket Kittur. 2014. Reviewing versus doing: Learning and performance in crowd assessment. In CSCW 2014. 1445–1455.
220. Honglei Zhuang and Joel Young. 2015. Leveraging in-batch annotation bias for crowdsourced active learning. In WSDM 2015. 243–252. DOI:http://dx.doi.org/10.1145/2684822.2685301