[1] E. Arisoy, S. F. Chen, B. Ramabhadran, and A. Sethy, "Converting neural network language models into back-off language models for efficient decoding in automatic speech recognition," in Proc. ICASSP, 2013, pp. 8242-8246.
[2] Y. Bengio, R. Ducharme, P. Vincent, and C. Jauvin, "A neural probabilistic language model," J. Mach. Learn. Res., vol. 3, pp. 1137-1155, 2003.
[3] H. Schwenk and J.-L. Gauvain, "Training neural network language models on very large corpora," in Proc. HLT-EMNLP, 2005, pp. 201-208.
[4] H. Schwenk, "Continuous space language models," Comput. Speech Lang., vol. 21, no. 3, pp. 492-518, Jul. 2007.
[5] H.-K. J. Kuo, E. Arisoy, A. Emami, and P. Vozila, "Large scale hierarchical neural network language models," in Proc. Interspeech, Portland, OR, USA, 2012.
[6] T. Mikolov, M. Karafiat, L. Burget, J. Cernocky, and S. Khudanpur, "Recurrent neural network based language model," in Proc. Interspeech, 2010, pp. 1045-1048.
[7] T. Mikolov, S. Kombrink, L. Burget, J. Cernocky, and S. Khudanpur, "Extensions of recurrent neural network language model," in Proc. ICASSP, 2011, pp. 5528-5531.
[8] H. Schwenk and J.-L. Gauvain, "Connectionist language modeling for large vocabulary continuous speech recognition," in Proc. ICASSP, Orlando, FL, USA, 2002, pp. 765-768.
[9] A. Stolcke, "Entropy-based pruning of backoff language models," in Proc. DARPA Broadcast News Transcription and Understanding Workshop, Lansdowne, VA, USA, 1998, pp. 270-274.
[10] M. Siu and M. Ostendorf, "Variable n-grams and extensions for conversational speech language modeling," IEEE Trans. Speech Audio Process., vol. 8, no. 1, pp. 63-75, Jan. 2000.
[12] V. Siivola, T. Hirsimaki, and S. Virpioja, "On growing and pruning Kneser-Ney smoothed n-gram models," IEEE Trans. Audio, Speech, Lang. Process., vol. 15, no. 5, pp. 1617-1624, Jul. 2007.
[13] S. Virpioja and M. Kurimo, "Compact n-gram models by incremental growing and clustering of histories," in Proc. Interspeech - ICSLP, Pittsburgh, PA, USA, 2006, pp. 1037-1040.
[14] S. F. Chen and J. Goodman, "An empirical study of smoothing techniques for language modeling," Comput. Speech Lang., vol. 13, no. 4, pp. 359-394, 1999.
[15] P. F. Brown, V. J. Della Pietra, P. V. deSouza, J. C. Lai, and R. L. Mercer, "Class-based n-gram models of natural language," Comput. Linguist., vol. 18, no. 4, pp. 467-479, 1992.
[16] E. Arisoy, T. N. Sainath, B. Kingsbury, and B. Ramabhadran, "Deep neural network language models," in Proc. NAACL-HLT Workshop: Will We Ever Really Replace the N-gram Model? On the Future of Lang. Model. for HLT, Montreal, QC, Canada, Jun. 2012, pp. 20-28.
[17] J. Goodman, "Classes for fast maximum entropy training," in Proc. ICASSP, 2001, pp. 561-564.
[18] A. Emami, "A neural syntactic language model," Ph.D. dissertation, Johns Hopkins Univ., Baltimore, MD, USA, 2006.
[19] G. Zweig and K. Makarychev, "Speed regularization and optimality in word classing," in Proc. ICASSP, Vancouver, BC, Canada, 2013, pp. 8237-8241.
[20] F. Morin and Y. Bengio, "Hierarchical probabilistic neural network language model," in Proc. AISTATS, 2005, pp. 246-252.
[21] A. Mnih and G. Hinton, "A scalable hierarchical distributed language model," in Proc. NIPS, 2008, pp. 1081-1088.
[22] H.-S. Le, I. Oparin, A. Allauzen, J.-L. Gauvain, and F. Yvon, "Structured output layer neural network language models for speech recognition," IEEE Trans. Audio, Speech, Lang. Process., vol. 21, no. 1, pp. 197-206, Jan. 2013.
[23] Y. Bengio and J.-S. Senecal, "Quick training of probabilistic neural nets by importance sampling," in Proc. AISTATS, 2003.
[24] A. Mnih and Y. W. Teh, "A fast and simple algorithm for training neural probabilistic language models," in Proc. ICML, Edinburgh, U.K., 2012.
[25] H.-K. J. Kuo, L. Mangu, A. Emami, I. Zitouni, and Y.-S. Lee, "Syntactic features for Arabic speech recognition," in Proc. ASRU, Merano, Italy, 2009, pp. 327-332.
[26] T. Mikolov, A. Deoras, D. Povey, L. Burget, and J. Cernocky, "Strategies for training large scale neural network language models," in Proc. ASRU, 2011, pp. 196-201.
[27] A. Deoras, T. Mikolov, S. Kombrink, M. Karafiat, and S. Khudanpur, "Variational approximation of long-span language models for LVCSR," in Proc. ICASSP, Prague, Czech Republic, 2011, pp. 5532-5535.
[28] A. Deoras, T. Mikolov, S. Kombrink, and K. Church, "Approximate inference: A sampling based modeling technique to capture complex dependencies in a language model," Speech Commun., vol. 55, no. 1, pp. 162-177, Jan. 2013.
[29] G. Lecorvé and P. Motlicek, "Conversion of recurrent neural network language models to weighted finite state transducers for automatic speech recognition," in Proc. Interspeech, Portland, OR, USA, 2012.
[30] W. Wang, A. Stolcke, and M. P. Harper, "The use of a linguistically motivated language model in conversational speech recognition," in Proc. ICASSP, 2004, pp. 261-264.
[31] S. F. Chen, B. Kingsbury, L. Mangu, D. Povey, G. Saon, H. Soltau, and G. Zweig, "Advances in speech transcription at IBM under the DARPA EARS program," IEEE Trans. Audio, Speech, Lang. Process., vol. 14, no. 5, pp. 1596-1608, Sep. 2006.