Volume, Issue, 2014, Pages 2635-2639

One billion word benchmark for measuring progress in statistical language modeling

Author keywords

Benchmark; Language modeling; Reproducible research

Indexed keywords

BENCHMARKING; NATURAL LANGUAGE PROCESSING SYSTEMS; RECURRENT NEURAL NETWORKS; SPEECH COMMUNICATION

EID: 84910091099     PISSN: 2308-457X     EISSN: 1990-9772     Source Type: Conference Proceeding
DOI: None     Document Type: Conference Paper
Times cited: 283

References (32)
  • 4
    • [Elman, 1990] J. Elman. 1990. Finding Structure in Time. Cognitive Science, 14:179-211.
  • 5
    • [Emami, 2006] A. Emami. 2006. A Neural Syntactic Language Model. Ph.D. thesis, Johns Hopkins University.
  • 6
    • [Goodman, 2001a] J. T. Goodman. 2001a. A bit of progress in language modeling, extended version. Technical Report MSR-TR-2001-72.
  • 7
    • [Goodman, 2001b] J. T. Goodman. 2001b. Classes for fast maximum entropy training. In Proceedings of ICASSP.
  • 9
  • 10
    • [Chelba et al., 2010] C. Chelba, T. Brants, W. Neveitt, and P. Xu. 2010. Study on Interaction between Entropy Pruning and Kneser-Ney Smoothing. In Proceedings of Interspeech.
  • 11
    • [Chen and Goodman, 1996] S. F. Chen and J. T. Goodman. 1996. An empirical study of smoothing techniques for language modeling. In Proceedings of ACL.
  • 12
    • [Chen, 2009] S. F. Chen. 2009. Shrinking exponential language models. In Proceedings of NAACL-HLT.
  • 13
    • [Katz, 1987] S. Katz. 1987. Estimation of probabilities from sparse data for the language model component of a speech recognizer. IEEE Transactions on Acoustics, Speech and Signal Processing.
  • 14
    • [Kneser and Ney, 1995] R. Kneser and H. Ney. 1995. Improved Backing-Off for M-Gram Language Modeling. In Proceedings of ICASSP.
  • 17
  • 18
    • [Mikolov et al., 2011c] T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. Černocký. 2011c. Strategies for Training Large Scale Neural Network Language Models. In Proceedings of ASRU.
  • 20
    • [Mnih and Hinton, 2007] A. Mnih and G. Hinton. 2007. Three new graphical models for statistical language modelling. In Proceedings of ICML.
  • 21
    • [Morin and Bengio, 2005] F. Morin and Y. Bengio. 2005. Hierarchical Probabilistic Neural Network Language Model. In Proceedings of AISTATS.
  • 22
    • [Roark et al., 2004] B. Roark, M. Saraclar, M. Collins, and M. Johnson. 2004. Discriminative language modeling with conditional random fields and the perceptron algorithm. In Proceedings of ACL.
  • 24
    • [Rumelhart et al., 1986] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. 1986. Learning representations by back-propagating errors. Nature, 323:533-536.
  • 25
    • [Schwenk, 2007] H. Schwenk. 2007. Continuous space language models. Computer Speech and Language, vol. 21.
  • 28
    • [Teh, 2006] Y. W. Teh. 2006. A hierarchical Bayesian language model based on Pitman-Yor processes. In Proceedings of COLING/ACL.
  • 29
    • [Wu et al., 2012] Y. Wu, H. Yamamoto, X. Lu, S. Matsuda, C. Hori, and H. Kashioka. 2012. Factored Recurrent Neural Network Language Model in TED Lecture Transcription. In Proceedings of IWSLT.
  • 31
    • [Xu et al., 2011] P. Xu, A. Gunawardana, and S. Khudanpur. 2011. Efficient Subsampling for Training Complex Language Models. In Proceedings of EMNLP.
  • 32
    • [Zweig and Makarychev, 2013] G. Zweig and K. Makarychev. 2013. Speed Regularization and Optimality in Word Classing. In Proceedings of ICASSP.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.