1. Beck, A. and Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.

2. Bengio, Y., Boulanger-Lewandowski, N., and Pascanu, R. Advances in optimizing recurrent networks. In Proc. ICASSP, Vancouver, Canada, May 2013.

4. Dahl, G., Yu, D., Deng, L., and Acero, A. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. on Audio, Speech and Language Processing, 20(1):30–42, January 2012.

5. Dahl, G. E., Yu, D., Deng, L., and Acero, A. Large vocabulary continuous speech recognition with context-dependent DBN-HMMs. In Proc. IEEE ICASSP, pp. 4688–4691, Prague, Czech Republic, May 2011.

6. Dahl, G. E., Sainath, T. N., and Hinton, G. E. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Proc. ICASSP, pp. 8609–8613, Vancouver, Canada, May 2013. IEEE.

7. Deng, L., Hassanein, K., and Elmasry, M. Analysis of the correlation structure for a neural predictive model with application to speech recognition. Neural Networks, 7(2):331–339, 1994.

8. Deng, L., Abdel-Hamid, O., and Yu, D. A deep convolutional neural network using heterogeneous pooling for trading acoustic invariance with phonetic confusion. In Proc. IEEE ICASSP, Vancouver, Canada, May 2013a.

9. Deng, L., Hinton, G., and Kingsbury, B. New types of deep neural network learning for speech recognition and related applications: An overview. In Proc. IEEE ICASSP, Vancouver, Canada, May 2013b.

10. Deng, L., Li, J., Huang, J.-T., Yao, K., Yu, D., Seide, F., Seltzer, M., Zweig, G., He, X., Williams, J., Gong, Y., and Acero, A. Recent advances in deep learning for speech research at Microsoft. In Proc. ICASSP, Vancouver, Canada, 2013c.

12. Graves, A., Fernández, S., Gomez, F., and Schmidhuber, J. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proc. ICML, pp. 369–376, Pittsburgh, PA, June 2006. ACM.

13. Graves, A., Mohamed, A., and Hinton, G. Speech recognition with deep recurrent neural networks. In Proc. ICASSP, Vancouver, Canada, May 2013.

14. Hinton, G., Deng, L., Yu, D., Dahl, G. E., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T. N., and Kingsbury, B. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, November 2012.

15. Jaeger, H. The “echo state” approach to analysing and training recurrent neural networks. GMD Report 148, GMD - German National Research Institute for Computer Science, 2001a.

16. Jaeger, H. Short term memory in echo state networks. GMD Report 152, GMD - German National Research Institute for Computer Science, 2001b.

17. Jaeger, H. Tutorial on training recurrent neural networks, covering BPPT, RTRL, EKF and the “echo state network” approach. GMD Report 159, GMD - German National Research Institute for Computer Science, 2002.

18. Kingsbury, B., Sainath, T. N., and Soltau, H. Scalable minimum Bayes risk training of deep neural network acoustic models using distributed Hessian-free optimization. In Proc. INTERSPEECH, Portland, OR, September 2012.

19. LeCun, Y. Learning invariant feature hierarchies. In Proc. ECCV, pp. 496–505, Firenze, Italy, October 2012. Springer.

20. Lee, K.-F. and Hon, H.-W. Speaker-independent phone recognition using hidden Markov models. IEEE Transactions on Acoustics, Speech and Signal Processing, 37(11):1641–1648, November 1989.

21. Maas, A. L., Le, Q., O’Neil, T. M., Vinyals, O., Nguyen, P., and Ng, A. Y. Recurrent neural networks for noise reduction in robust ASR. In Proc. INTERSPEECH, Portland, OR, September 2012.

22. Manjunath, G. and Jaeger, H. Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks. Neural Computation, 25(3):671–696, 2013.

23. Martens, J. and Sutskever, I. Learning recurrent neural networks with Hessian-free optimization. In Proc. ICML, pp. 1033–1040, Bellevue, WA, June 2011.

25. Mikolov, T., Deoras, A., Povey, D., Burget, L., and Cernocky, J. Strategies for training large scale neural network language models. In Proc. IEEE ASRU, pp. 196–201, Honolulu, HI, December 2011. IEEE.

26. Pascanu, R., Mikolov, T., and Bengio, Y. On the difficulty of training recurrent neural networks. In Proc. ICML, Atlanta, GA, June 2013.

28. Robinson, A. J. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298–305, August 1994.

29. Sainath, T. N., Kingsbury, B., Soltau, H., and Ramabhadran, B. Optimization techniques to improve training speed of deep neural networks for large speech tasks. IEEE Transactions on Audio, Speech, and Language Processing, 21(11):2267–2276, November 2013.

30. Seide, F., Li, G., and Yu, D. Conversational speech transcription using context-dependent deep neural networks. In Proc. INTERSPEECH, pp. 437–440, Florence, Italy, August 2011.

32. Sutskever, I., Martens, J., Dahl, G., and Hinton, G. E. On the importance of initialization and momentum in deep learning. In Proc. ICML, Atlanta, GA, June 2013.

33. Tüske, Z., Sundermeyer, M., Schlüter, R., and Ney, H. Context-dependent MLPs for LVCSR: Tandem, hybrid or both? In Proc. INTERSPEECH, Portland, OR, September 2012.

34. Vinyals, O., Ravuri, S. V., and Povey, D. Revisiting recurrent neural networks for robust ASR. In Proc. ICASSP, pp. 4085–4088, Kyoto, Japan, March 2012. IEEE.

36. Yu, D., Deng, L., and Seide, F. The deep tensor neural network with applications to large vocabulary speech recognition. IEEE Trans. on Audio, Speech and Language Processing, 21(2):388–396, 2013.