



Volume, Issue, 2014, Pages

A primal-dual method for training recurrent neural networks constrained by the echo-state property

Author keywords

[No Author keywords available]

Indexed keywords

CONSTRAINT THEORY; DEEP NEURAL NETWORKS;

EID: 85083950550     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 18

References (36)
  • 1
    • Beck, A. and Teboulle, M. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183–202, 2009.
  • 4
    • Dahl, G., Yu, D., Deng, L., and Acero, A. Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition. IEEE Trans. on Audio, Speech and Language Processing, 20(1):30–42, January 2012.
  • 5
    • Dahl, G. E., Yu, D., Deng, L., and Acero, A. Large vocabulary continuous speech recognition with context-dependent DBN-HMMs. In Proc. IEEE ICASSP, pp. 4688–4691, Prague, Czech Republic, May 2011.
  • 6
    • Dahl, G. E., Sainath, T. N., and Hinton, G. E. Improving deep neural networks for LVCSR using rectified linear units and dropout. In Proc. ICASSP, pp. 8609–8613, Vancouver, Canada, May 2013. IEEE.
  • 7
    • Deng, L., Hassanein, K., and Elmasry, M. Analysis of the correlation structure for a neural predictive model with application to speech recognition. Neural Networks, 7(2):331–339, 1994.
  • 8
    • Deng, L., Abdel-Hamid, O., and Yu, D. A deep convolutional neural network using heterogeneous pooling for trading acoustic invariance with phonetic confusion. In Proc. IEEE ICASSP, Vancouver, Canada, May 2013a.
  • 9
    • Deng, L., Hinton, G., and Kingsbury, B. New types of deep neural network learning for speech recognition and related applications: An overview. In Proc. IEEE ICASSP, Vancouver, Canada, May 2013b.
  • 12
    • Graves, A., Fernández, S., Gomez, F., and Schmidhuber, J. Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks. In Proc. ICML, pp. 369–376, Pittsburgh, PA, June 2006. ACM.
  • 13
    • Graves, A., Mohamed, A., and Hinton, G. Speech recognition with deep recurrent neural networks. In Proc. ICASSP, Vancouver, Canada, May 2013.
  • 15
    • Jaeger, H. The “echo state” approach to analysing and training recurrent neural networks. GMD Report 148, GMD - German National Research Institute for Computer Science, 2001a.
  • 16
    • Jaeger, H. Short term memory in echo state networks. GMD Report 152, GMD - German National Research Institute for Computer Science, 2001b.
  • 17
    • Jaeger, H. Tutorial on training recurrent neural networks, covering BPTT, RTRL, EKF and the “echo state network” approach. GMD Report 159, GMD - German National Research Institute for Computer Science, 2002.
  • 18
    • Kingsbury, B., Sainath, T. N., and Soltau, H. Scalable minimum Bayes risk training of deep neural network acoustic models using distributed Hessian-free optimization. In Proc. INTERSPEECH, Portland, OR, September 2012.
  • 19
    • LeCun, Y. Learning invariant feature hierarchies. In Proc. ECCV, pp. 496–505, Firenze, Italy, October 2012. Springer.
  • 22
    • Manjunath, G. and Jaeger, H. Echo state property linked to an input: Exploring a fundamental characteristic of recurrent neural networks. Neural Computation, 25(3):671–696, 2013.
  • 23
    • Martens, J. and Sutskever, I. Learning recurrent neural networks with Hessian-free optimization. In Proc. ICML, pp. 1033–1040, Bellevue, WA, June 2011.
  • 25
    • Mikolov, T., Deoras, A., Povey, D., Burget, L., and Cernocky, J. Strategies for training large scale neural network language models. In Proc. IEEE ASRU, pp. 196–201, Honolulu, HI, December 2011. IEEE.
  • 26
    • Pascanu, R., Mikolov, T., and Bengio, Y. On the difficulty of training recurrent neural networks. In Proc. ICML, Atlanta, GA, June 2013.
  • 28
    • Robinson, A. J. An application of recurrent nets to phone probability estimation. IEEE Transactions on Neural Networks, 5(2):298–305, August 1994.
  • 30
    • Seide, F., Li, G., and Yu, D. Conversational speech transcription using context-dependent deep neural networks. In Proc. INTERSPEECH, pp. 437–440, Florence, Italy, August 2011.
  • 32
    • Sutskever, I., Martens, J., Dahl, G., and Hinton, G. E. On the importance of initialization and momentum in deep learning. In Proc. ICML, Atlanta, GA, June 2013.
  • 33
    • Tüske, Z., Sundermeyer, M., Schlüter, R., and Ney, H. Context-dependent MLPs for LVCSR: Tandem, hybrid or both? In Proc. INTERSPEECH, Portland, OR, September 2012.
  • 34
    • Vinyals, O., Ravuri, S. V., and Povey, D. Revisiting recurrent neural networks for robust ASR. In Proc. ICASSP, pp. 4085–4088, Kyoto, Japan, March 2012. IEEE.
  • 36
    • Yu, D., Deng, L., and Seide, F. The deep tensor neural network with applications to large vocabulary speech recognition. IEEE Trans. on Audio, Speech and Language Processing, 21(2):388–396, 2013.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.