Volume , Issue , 2013, Pages 321-325

Combining stochastic average gradient and Hessian-free optimization for sequence training of deep neural networks

Author keywords

Deep Neural Network; Hessian-free Optimization; Sequence Training; Stochastic Average Gradient; Stochastic Training

Indexed keywords

AUTOMATIC SPEECH RECOGNITION; AVERAGE GRADIENT; DEEP NEURAL NETWORKS; GRADIENT INFORMATIONS; MINIMUM PHONE ERROR; MODEL PARAMETERS; STOCHASTIC APPROACH; STOCHASTIC GRADIENT DESCENT;
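For context on the first technique named in the keywords: the stochastic average gradient (SAG) method (reference 4 below) stores one gradient per training example and steps along the average of the stored gradients, refreshing one entry per iteration. The following is a minimal sketch, not the paper's implementation; the function name `sag`, the per-example gradient oracle `grad_fn`, and all defaults are illustrative assumptions.

```python
import numpy as np

def sag(grad_fn, x0, n, lr=0.05, iters=2000, seed=0):
    """Minimal SAG sketch: minimize (1/n) * sum_i f_i(x) given a
    per-example gradient oracle grad_fn(i, x) -> grad of f_i at x."""
    rng = np.random.default_rng(seed)
    x = x0.astype(float)
    memory = np.zeros((n, x.size))   # last-seen gradient of each example
    total = np.zeros_like(x)         # running sum of the stored gradients
    for _ in range(iters):
        i = rng.integers(n)          # pick one example uniformly at random
        g = grad_fn(i, x)            # fresh gradient for that example only
        total += g - memory[i]       # refresh the running sum in O(d)
        memory[i] = g
        x -= lr * total / n          # step with the average stored gradient
    return x

# Illustrative usage: least squares, f_i(x) = 0.5 * (a_i @ x - b_i) ** 2
A = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
b = np.array([1.0, 2.0, 3.0])
x = sag(lambda i, x: (A[i] @ x - b[i]) * A[i], np.zeros(2), n=3)
```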

EID: 84893690218     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1109/ASRU.2013.6707750     Document Type: Conference Paper
Times cited: 9
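The Hessian-free half of the combination comes from Martens (reference 5 below): each outer step approximately solves the Newton system H d = -g by conjugate gradient, touching the curvature matrix only through matrix-vector products (in practice Martens uses Gauss-Newton products so the curvature is positive semi-definite). A minimal sketch of that inner CG loop, assuming a hypothetical `hvp` callable that returns H @ v:

```python
import numpy as np

def solve_newton_step(hvp, grad, iters=50, tol=1e-10):
    """Approximately solve H d = -grad by conjugate gradient, using only
    curvature-vector products hvp(v); the 'free' part of Hessian-free."""
    d = np.zeros_like(grad)
    r = -grad.copy()                # residual: -grad - H @ d, with d = 0
    p = r.copy()                    # initial search direction
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)       # exact line search along p
        d += alpha * p
        r -= alpha * Hp
        rs_next = r @ r
        if rs_next < tol:           # residual small enough: stop early
            break
        p = r + (rs_next / rs) * p  # next conjugate direction
        rs = rs_next
    return d
```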

References (7)
  • 1. Brian Kingsbury, "Lattice-based optimization of sequence classification criteria for neural-network acoustic modeling," in ICASSP, April 2009, pp. 3761-3764.
  • 2. Daniel Povey and Philip C. Woodland, "Minimum phone error and I-smoothing for improved discriminative training," in ICASSP, May 2002, pp. 105-108.
  • 3. Hang Su, Gang Li, Dong Yu, and Frank Seide, "Error back propagation for sequence training of context-dependent deep networks for conversational speech transcription," in ICASSP, May 2013, pp. 6664-6668.
  • 4. Nicolas Le Roux, Mark Schmidt, and Francis Bach, "A stochastic gradient method with an exponential convergence rate for finite training sets," in Advances in Neural Information Processing Systems 25, 2012, pp. 2672-2680.
  • 5. James Martens, "Deep learning via Hessian-free optimization," in Proceedings of the 27th International Conference on Machine Learning (ICML-10), Johannes Fürnkranz and Thorsten Joachims, Eds., Haifa, Israel: Omnipress, June 2010, pp. 735-742.
  • 7. Léon Bottou and Yann LeCun, "Large scale online learning," in Advances in Neural Information Processing Systems 16, Sebastian Thrun, Lawrence Saul, and Bernhard Schölkopf, Eds. Cambridge, MA: MIT Press, 2004.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.