2011, Pages 196-201

Strategies for training large scale neural network language models

Author keywords

[No Author keywords available]

Indexed keywords

BROADCAST NEWS; FAST CONVERGENCE; LANGUAGE MODEL; LARGE DATASETS; MAXIMUM ENTROPY MODELS; NETWORK LANGUAGE; NEURAL NETWORK MODEL; RELATIVE REDUCTION; TRAINING DATA; WORD ERROR RATE;

EID: 84858966958     PISSN: None     EISSN: None     Source Type: Conference Proceeding
DOI: 10.1109/ASRU.2011.6163930     Document Type: Conference Paper
Times cited: 499

References (21)
  • 1. R. Rosenfeld, "A maximum entropy approach to adaptive statistical language modeling," Computer, Speech and Language, vol. 10, pp. 187-228, 1996.
  • 4. G. Zweig, P. Nguyen et al., "Speech recognition with segmental conditional random fields: A summary of the JHU CLSP summer workshop," in Proceedings of ICASSP, 2011.
  • 8. T. Alumae and M. Kurimo, "Efficient estimation of maximum entropy language models with N-gram features: An SRILM extension," in Proceedings of Interspeech, 2010.
  • 10. H. Schwenk and J.-L. Gauvain, "Training neural network language models on very large corpora," in Proceedings of EMNLP, 2005.
  • 13. J. Goodman, "Classes for fast maximum entropy training," in Proceedings of ICASSP, 2001.
  • 14. F. Morin and Y. Bengio, "Hierarchical probabilistic neural network language model," in AISTATS, 2005, pp. 246-252.
  • 16. H. Schwenk, "Continuous space language models," Computer Speech and Language, vol. 21, no. 3, pp. 492-518, July 2007. DOI: 10.1016/j.csl.2006.09.003.
  • 18. J. L. Elman, "Learning and development in neural networks: The importance of starting small," Cognition, vol. 48, pp. 71-99, 1993.
  • 19. A. Deoras, T. Mikolov, and K. Church, "A fast re-scoring strategy to capture long-distance dependencies," in Proceedings of EMNLP, 2011.
  • 21. S. F. Chen, "Shrinking exponential language models," in Proceedings of NAACL HLT, 2009.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.