Neurocomputing, Volume 74, Issue 5, 2011, Pages 765-770

Boundedness and convergence of online gradient method with penalty and momentum

Author keywords

Boundedness; Convergence; Feedforward neural network; Momentum; Online gradient method; Penalty

Indexed keywords

Boundedness; Convergence; Error function; Monotonicity; Online gradient method; Penalty; Penalty term; Sufficient conditions; Training process; Two layers; Uniformly bounded; Weak and strong convergence

EID: 78650719006     PISSN: 0925-2312     EISSN: None     Source Type: Journal
DOI: 10.1016/j.neucom.2010.10.005     Document Type: Article
Times cited: 25
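The title and keywords describe training a two-layer feedforward network with an online (sample-by-sample) gradient method whose weight update combines a penalty term and a momentum term. As a rough illustration of that update rule, and not the paper's exact algorithm or its convergence conditions, the following is a minimal NumPy sketch; the hyperparameter names (eta, lam, alpha), the network sizes, and the toy data are all illustrative assumptions.

```python
# Minimal sketch of an online gradient method with an L2 penalty and
# momentum for a two-layer feedforward network. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n samples, d inputs, scalar target (assumed for illustration).
n, d, hidden = 100, 3, 5
X = rng.standard_normal((n, d))
y = np.sin(X.sum(axis=1))

# Two-layer network: sigmoid hidden layer, linear output.
W = rng.standard_normal((hidden, d)) * 0.1   # input-to-hidden weights
v = rng.standard_normal(hidden) * 0.1        # hidden-to-output weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eta, lam, alpha = 0.05, 1e-4, 0.5            # step size, penalty, momentum
dW_prev, dv_prev = np.zeros_like(W), np.zeros_like(v)

for epoch in range(50):
    for i in rng.permutation(n):             # online: one sample per update
        h = sigmoid(W @ X[i])                # hidden activations
        err = v @ h - y[i]                   # output error on this sample
        # Gradients of 0.5*err^2 plus the penalty term 0.5*lam*||w||^2.
        g_v = err * h + lam * v
        g_W = np.outer(err * v * h * (1 - h), X[i]) + lam * W
        # Momentum: reuse a fraction of the previous weight increment.
        dv = -eta * g_v + alpha * dv_prev
        dW = -eta * g_W + alpha * dW_prev
        v += dv
        W += dW
        dv_prev, dW_prev = dv, dW

mse = np.mean((sigmoid(X @ W.T) @ v - y) ** 2)
print(f"final training MSE: {mse:.4f}")
```

In this reading, the penalty term keeps the weight sequence bounded during training while the momentum term accelerates it, which is the interplay the paper's boundedness and convergence results concern.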

References (17)
  • 3. Wu W., Xu Y.S. Deterministic convergence of an online gradient method for neural networks. Journal of Computational and Applied Mathematics 2002, 144:335-347.
  • 4. Wu W., Feng G.R., Li X. Training multilayer perceptrons via minimization of sum of ridge functions. Advances in Computational Mathematics 2002, 17:331-347.
  • 5. Torii M., Hagan M.T. Stability of steepest descent with momentum for quadratic functions. IEEE Transactions on Neural Networks 2002, 13(3):752-756.
  • 6. Bhaya A., Kaszkurewicz E. Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method. Neural Networks 2004, 17:65-71.
  • 7. Zeng Z.G. Analysis of global convergence and learning parameters of the back-propagation algorithm for quadratic functions. Lecture Notes in Computer Science 2007, 4682:7-13.
  • 8. Zhang N.M., Wu W., Zheng G.F. Convergence of gradient method with momentum for two-layer feedforward neural networks. IEEE Transactions on Neural Networks 2006, 17(2):522-525.
  • 9. Zhang N.M. Deterministic convergence of an online gradient method with momentum. Lecture Notes in Computer Science 2007, 4113:94-105.
  • 10. Zhang N.M. An online gradient method with momentum for two-layer feedforward neural networks. Applied Mathematics and Computation 2009, 212:488-498.
  • 11. Bartlett P.L. For valid generalization, the size of the weights is more important than the size of the network. Advances in Neural Information Processing Systems 1997, 9:134-140.
  • 13. Hinton G.E. Connectionist learning procedures. Artificial Intelligence 1989, 40:185-234.
  • 15. Setiono R. A penalty-function approach for pruning feedforward neural networks. Neural Computation 1997, 9(1):185-204.
  • 16. Zhang H.S., Wu W. Boundedness and convergence of online gradient method with penalty for linear output feedforward neural networks. Neural Processing Letters 2009, 29:205-212.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.