Volume 2015-January, 2015, Pages 3511-3515

Recurrent neural network language model adaptation for multi-genre broadcast speech recognition

Author keywords

Language model adaptation; Latent Dirichlet allocation; RNNLM; Speech recognition; Topic model

Indexed keywords

COMPUTATIONAL LINGUISTICS; DATA MINING; RECURRENT NEURAL NETWORKS; SEMANTICS; SPEECH; SPEECH COMMUNICATION; STATISTICS; VARIATIONAL TECHNIQUES;

EID: 84959155988     PISSN: 2308-457X     EISSN: 1990-9772     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited: 92

References (34)
  • 3
    • M. Sundermeyer, I. Oparin, J.-L. Gauvain, B. Freiberg, R. Schluter, and H. Ney, "Comparison of feedforward and recurrent neural network language models," Proc. ICASSP, Vancouver, Canada, May 2013, pp. 8430-8434.
  • 4
    • K. Yao, G. Zweig, M.-Y. Hwang, Y. Shi, and D. Yu, "Recurrent neural networks for language understanding," Proc. Interspeech, 2013, pp. 2524-2528.
  • 5
    • G. Mesnil, X. He, L. Deng, and Y. Bengio, "Investigation of recurrent-neural-network architectures and learning methods for spoken language understanding," Proc. Interspeech, 2013.
  • 6
    • C. Chelba, T. Mikolov, M. Schuster, Q. Ge, T. Brants, P. Koehn, and T. Robinson, "One billion word benchmark for measuring progress in statistical language modeling," Google, Tech. Rep., 2013. [Online]. Available: http://arxiv.org/abs/1312.3005
  • 8
    • Y. Wu, H. Yamamoto, X. Lu, S. Matsuda, C. Hori, and H. Kashioka, "Factored recurrent neural network language model in TED lecture transcription," Proc. IWSLT, 2012, pp. 222-228.
  • 9
    • T. Mikolov and G. Zweig, "Context dependent recurrent neural network language model," Proc. SLT, 2012, pp. 234-239.
  • 10
    • T.-H. Wen, A. Heidel, H.-Y. Lee, Y. Tsao, and L.-S. Lee, "Recurrent neural network based personalized language modeling by social network crowdsourcing," Proc. Interspeech, 2013.
  • 11
    • Y. Shi, "Language models with meta-information," Ph.D. dissertation, Delft University of Technology, 2014.
  • 12
    • O. Tilk and T. Alumae, "Multi-domain recurrent neural network language model for medical speech recognition," Proc. HLT, vol. 268, 2014.
  • 13
    • T. Hofmann, "Probabilistic latent semantic indexing," Proc. 22nd ACM SIGIR Conference, 1999, pp. 50-57.
  • 17
    • H. Schwenk, "Continuous space language models," Computer Speech & Language, vol. 21, no. 3, pp. 492-518, 2007.
  • 18
    • A. Emami and L. Mangu, "Empirical study of neural network language models for Arabic speech recognition," Proc. IEEE Workshop on ASRU, 2007, pp. 147-152.
  • 19
    • J. Park, X. Liu, M. J. F. Gales, and P. C. Woodland, "Improved neural network based language modelling and adaptation," Proc. ISCA Interspeech, 2010, pp. 1041-1044.
  • 21
    • X. Liu, Y. Wang, X. Chen, M. Gales, and P. C. Woodland, "Efficient lattice rescoring using recurrent neural network language models," Proc. ICASSP, 2014.
  • 22
    • X. Chen, Y. Wang, X. Liu, M. Gales, and P. Woodland, "Efficient GPU-based training of recurrent neural network language models using spliced sentence bunch," Proc. Interspeech, 2014.
  • 23
    • X. Chen, X. Liu, M. Gales, and P. C. Woodland, "Improving the training and evaluation efficiency of recurrent neural network language models," Proc. ICASSP, 2015.
  • 24
    • X. Chen, X. Liu, M. Gales, and P. C. Woodland, "Recurrent neural network language model training with noise contrastive estimation for speech recognition," Proc. ICASSP, 2015.
  • 25
    • T. Mikolov and G. Zweig, "Context dependent recurrent neural network language model," Proc. SLT, 2012, pp. 234-239.
  • 26
    • D. Mrva and P. C. Woodland, "A PLSA-based language model for conversational telephone speech," Proc. Interspeech, 2004.
  • 28
    • Y. C. Tam and T. Schultz, "Unsupervised language model adaptation using latent semantic marginals," Proc. Interspeech, 2006.
  • 30
    • F. Grezl and P. Fousek, "Optimizing bottle-neck features for LVCSR," Proc. ICASSP, 2008, pp. 4729-4732.
  • 31
    • J. Park, F. Diehl, M. J. F. Gales, M. Tomalin, and P. C. Woodland, "The efficient incorporation of MLP features into automatic speech recognition systems," Computer Speech & Language, vol. 25, no. 3, pp. 519-534, 2011.
  • 33
    • X. Chen, Y. Wang, X. Liu, M. Gales, and P. C. Woodland, "Efficient training of recurrent neural network language models using spliced sentence bunch," Proc. Interspeech, 2014.
  • 34
    • L. Mangu, E. Brill, and A. Stolcke, "Finding consensus in speech recognition: word error minimization and other applications of confusion networks," Computer Speech & Language, vol. 14, no. 4, pp. 373-400, 2000.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.