Volume, Issue, 2012, Pages 1191-1194

Collaborative workflow for crowdsourcing translation

Author keywords

amazon mechanical turk; collaborative workflow; crowdsourcing; language translation

Indexed keywords

ASSISTIVE; COLLABORATIVE WORKFLOW; CROWDSOURCING; LANGUAGE PAIRS; LANGUAGE TRANSLATION; MECHANICAL TURKS; PIPELINE MODELS;

EID: 84858211339     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1145/2145204.2145382     Document Type: Conference Paper
Times cited : (61)

References (9)
  • 2
    • EID: 84858170184
    • A. Kittur, B. Smus, and R. E. Kraut. CrowdForge: Crowdsourcing complex work. Technical report, Human-Computer Interaction Institute, SCS, Carnegie Mellon University, January 2011.
  • 4
    • EID: 80053402398
    • C. Callison-Burch. Fast, cheap, and creative: Evaluating translation quality using Amazon's Mechanical Turk. In EMNLP 2009, pages 286-295, Singapore, August 2009. Association for Computational Linguistics.
  • 8
    • EID: 85133336275
    • K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu. BLEU: A method for automatic evaluation of machine translation. In ACL 2002, pages 311-318, Morristown, NJ, USA, 2002.
  • 9
    • EID: 80053360508
    • R. Snow, B. O'Connor, D. Jurafsky, and A. Ng. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of EMNLP 2008, pages 254-263, Honolulu, Hawaii, October 2008.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.