Volume –, Issue –, 2014, Pages 293-301

Results of the WMT14 Metrics Shared Task

Author keywords

[No Author keywords available]

Indexed keywords

COMPUTATIONAL LINGUISTICS

EID: 85122610649     PISSN: 0736-587X     EISSN: None     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited: 82

References (21)
  • 1
    • Barancikova, P. (2014). Parmesan: Improving Meteor by More Fine-grained Paraphrasing. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA. Association for Computational Linguistics.
  • 4
    • Chen, B. and Cherry, C. (2014). A Systematic Comparison of Smoothing Techniques for Sentence-Level BLEU. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA. Association for Computational Linguistics.
  • 6
    • Denkowski, M. and Lavie, A. (2014). Meteor Universal: Language Specific Translation Evaluation for Any Target Language. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA. Association for Computational Linguistics.
  • 7
    • Doddington, G. (2002). Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. In Proceedings of the Second International Conference on Human Language Technology Research, HLT '02, pages 138-145, San Francisco, CA, USA. Morgan Kaufmann Publishers Inc.
  • 8
    • Echizenya, H. (2014). Application of Prize based on Sentence Length in Chunk-based Automatic Evaluation of Machine Translation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA. Association for Computational Linguistics.
  • 9
    • Gautam, S. and Bhattacharyya, P. (2014). LAYERED: Description of Metric for Machine Translation Evaluation in WMT14 Metrics Task. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA. Association for Computational Linguistics.
  • 10
    • Gonzalez, M., Barron-Cedeno, A., and Marquez, L. (2014). IPA and STOUT: Leveraging Linguistic and Source-based Features for Machine Translation Evaluation. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA. Association for Computational Linguistics.
  • 12
    • Koehn, P. and Monz, C. (2006). Manual and automatic evaluation of machine translation between European languages. In Proceedings on the Workshop on Statistical Machine Translation, pages 102-121, New York City. Association for Computational Linguistics.
  • 13
    • Leusch, G., Ueffing, N., and Ney, H. (2006). CDER: Efficient MT Evaluation Using Block Movements. In Proceedings of EACL, pages 241-248.
  • 19
    • Stanojevic, M. and Simaan, K. (2014). BEER: A Smooth Sentence Level Evaluation Metric with Rich Ingredients. In Proceedings of the Ninth Workshop on Statistical Machine Translation, Baltimore, USA. Association for Computational Linguistics.
  • 20
    • Vazquez-Alvarez, Y. and Huckvale, M. (2002). The reliability of the ITU-T P.85 standard for the evaluation of text-to-speech systems. In Hansen, J. H. L. and Pellom, B. L., editors, INTERSPEECH. ISCA.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.