Volume , Issue , 2015, Pages 820-829

Pragmatic neural language modelling in machine translation

Author keywords

[No Author keywords available]

Indexed keywords

COMPUTATIONAL LINGUISTICS; MODELING LANGUAGES; NATURAL LANGUAGE PROCESSING SYSTEMS;

EID: 84944029121     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.3115/v1/n15-1083     Document Type: Conference Paper
Times cited: 25

References (27)
  • 1
    • Michael Auli and Jianfeng Gao. Decoder integration and expected BLEU training for recurrent neural network language models. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (ACL '14), pages 136-142, Baltimore, Maryland, June 2014. Association for Computational Linguistics.
  • 2
    • Michael Auli, Michel Galley, Chris Quirk, and Geoffrey Zweig. Joint language and translation modeling with recurrent neural networks. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1044-1054, Seattle, Washington, USA, October 2013. Association for Computational Linguistics.
  • 3
    • Paul Baltescu, Phil Blunsom, and Hieu Hoang. OxLM: A neural language modelling framework for machine translation. The Prague Bulletin of Mathematical Linguistics, 102(1):81-92, October 2014.
  • 4
    • Yoshua Bengio and Jean-Sébastien Senécal. Adaptive importance sampling to accelerate training of a neural probabilistic language model. IEEE Transactions on Neural Networks, 19(4):713-722, 2008.
  • 8
    • Ciprian Chelba, Tomas Mikolov, Mike Schuster, Qi Ge, Thorsten Brants, and Phillipp Koehn. One billion word benchmark for measuring progress in statistical language modeling. CoRR, 2013.
  • 9
    • Stanley F. Chen and Joshua Goodman. An empirical study of smoothing techniques for language modeling. Computer Speech & Language, 13(4):359-393, 1999.
  • 10
    • KyungHyun Cho, Bart van Merrienboer, Dzmitry Bahdanau, and Yoshua Bengio. On the properties of neural machine translation: Encoder-decoder approaches. CoRR, 2014.
  • 12
    • Joshua Goodman. Classes for fast maximum entropy training. CoRR, 2001.
  • 13
    • Kenneth Heafield. KenLM: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation (WMT '11), pages 187-197, Edinburgh, Scotland, July 2011. Association for Computational Linguistics.
  • 14
    • David A. Huffman. A method for the construction of minimum-redundancy codes. Proceedings of the Institute of Radio Engineers, 40(9):1098-1101, September 1952.
  • 19
  • 23
    • Holger Schwenk. Continuous space language models. Computer Speech & Language, 21(3):492-518, 2007.
  • 24
    • Holger Schwenk. Continuous-space language models for statistical machine translation. The Prague Bulletin of Mathematical Linguistics, 93:137-146, 2010.
  • 25
    • Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. CoRR, 2014.
  • 26
    • Ashish Vaswani, Yinggong Zhao, Victoria Fossum, and David Chiang. Decoding with large-scale neural language models improves translation. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1387-1392, Seattle, Washington, USA, October 2013. Association for Computational Linguistics.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS DB.