[2] M. Bisani and H. Ney, "Joint-sequence models for grapheme-to-phoneme conversion," Speech Communication, vol. 50, no. 5, pp. 434-451, 2008.
[3] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Computation, vol. 9, no. 8, pp. 1735-1780, 1997.
[4] L. Galescu and J. F. Allen, "Pronunciation of proper names with a joint n-gram model for bi-directional grapheme-to-phoneme conversion," in Proceedings of InterSpeech, 2002.
[5] J. R. Novak et al., "Improving WFST-based G2P conversion with alignment constraints and RNNLM N-best rescoring," in Proceedings of InterSpeech, 2012.
[7] S. F. Chen, "Conditional and joint models for grapheme-to-phoneme conversion," in Proceedings of InterSpeech, 2003.
[8] D. Wang and S. King, "Letter-to-sound pronunciation prediction using conditional random fields," IEEE Signal Processing Letters, vol. 18, no. 2, pp. 122-125, 2011.
[9] P. Lehnen, A. Allauzen, T. Lavergne, F. Yvon, S. Hahn, and H. Ney, "Structure learning in hidden conditional random fields for grapheme-to-phoneme conversion," in Proceedings of InterSpeech, 2013.
[10] S. Jiampojamarn, C. Cherry, and G. Kondrak, "Joint processing and discriminative training for letter-to-phoneme conversion," in Proceedings of ACL, 2008, pp. 905-913.
[12] K. Wu et al., "Encoding linear models as weighted finite-state transducers," in Proceedings of InterSpeech, 2014.
[13] S. Hahn, P. Vozila, and M. Bisani, "Comparison of grapheme-to-phoneme methods on large pronunciation dictionaries and LVCSR tasks," in Proceedings of InterSpeech, 2012.
[14] A. Graves, A. Mohamed, and G. Hinton, "Speech recognition with deep recurrent neural networks," in Proceedings of ICASSP, 2013, pp. 6645-6649.
[15] H. Sak, A. Senior, and F. Beaufays, "Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition," in Proceedings of InterSpeech, 2014.
[16] A. Graves and J. Schmidhuber, "Offline handwriting recognition with multidimensional recurrent neural networks," in Proceedings of NIPS, 2008, pp. 545-552.
[17] M. Sundermeyer, R. Schlüter, and H. Ney, "LSTM neural networks for language modeling," in Proceedings of InterSpeech, 2012, pp. 194-197.
[18] A. Graves, S. Fernández, F. Gomez, and J. Schmidhuber, "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks," in Proceedings of ICML, 2006.