Volume 24, Issue 1, 2011, Pages 91-98

Convergence analysis of online gradient method for BP neural networks

Author keywords

Backpropagation learning; Neural networks; Online gradient method; Strong convergence; Weak convergence

Indexed keywords

ACTIVATION FUNCTIONS; BACK-PROPAGATION NEURAL NETWORKS; BACKPROPAGATION LEARNING; BP NEURAL NETWORKS; CONVERGENCE ANALYSIS; CONVERGENCE RESULTS; ERROR FUNCTION; FIXED POINTS; GRADIENT LEARNING METHODS; HIDDEN LAYERS; HIDDEN NEURONS; LEARNING METHODS; LEARNING RATES; ONLINE GRADIENT METHOD; P-TYPE; POLYNOMIAL FUNCTIONS; SIGMOID FUNCTION; STOCHASTIC LEARNING; STOCHASTIC ORDER; STRONG CONVERGENCE; TRAINING SETS; WEAK AND STRONG CONVERGENCE; WEAK CONVERGENCE
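
The record does not reproduce the paper's derivations, but the keywords name its setting: training a BP (backpropagation) network by an online gradient method, i.e. updating the weights after each training sample rather than after a full pass over the training set. The following Python sketch is purely illustrative of that setting; the network size, toy XOR data, sigmoid activation, and fixed learning rate eta are assumptions for the demo, not details taken from the paper.

```python
# Illustrative sketch only: online (per-sample) gradient training of a
# one-hidden-layer BP network with sigmoid units. All hyperparameters
# and the toy data are assumptions, not taken from the paper.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Toy training set: XOR, a classic BP example.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([0., 1., 1., 0.])

n_hidden = 4
W = rng.normal(scale=0.5, size=(n_hidden, 2))  # input -> hidden weights
b = np.zeros(n_hidden)                         # hidden biases
v = rng.normal(scale=0.5, size=n_hidden)       # hidden -> output weights
c = 0.0                                        # output bias
eta = 0.5                                      # fixed learning rate

for epoch in range(5000):
    for i in rng.permutation(len(X)):          # stochastic sample order
        h = sigmoid(W @ X[i] + b)              # hidden activations
        out = sigmoid(v @ h + c)               # network output
        # Gradient of the per-sample squared error 0.5 * (out - y_i)^2.
        g_out = (out - y[i]) * out * (1.0 - out)
        g_hidden = g_out * v * h * (1.0 - h)
        # Online update: the weights change after EACH sample.
        v -= eta * g_out * h
        c -= eta * g_out
        W -= eta * np.outer(g_hidden, X[i])
        b -= eta * g_hidden

preds = sigmoid(sigmoid(X @ W.T + b) @ v + c)
print(np.round(preds, 2))  # should approach [0, 1, 1, 0]
```

With a small fixed learning rate, these per-sample updates track batch gradient descent on the total error function, which is the regime in which weak and strong convergence results of the kind indexed above are typically stated.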

EID: 78649659993     PISSN: 08936080     EISSN: None     Source Type: Journal    
DOI: 10.1016/j.neunet.2010.09.007     Document Type: Article
Times cited: 160

References (25)
• 2  Chakraborty D., Pal N.R. A novel training scheme for multilayered perceptrons to realize proper generalization and incremental learning. IEEE Transactions on Neural Networks 2003, 14:1-14.
• 3  Fine T.L., Mukherjee S. Parameter convergence and learning curves for neural networks. Neural Computation 1999, 11:747-769.
• 4  Finnoff W. Diffusion approximations for the constant learning rate backpropagation algorithm and resistance to local minima. Neural Computation 1994, 6:285-295.
• 5  Heskes T., Wiegerinck W. A theoretical comparison of batch-mode, on-line, cyclic, and almost-cyclic learning. IEEE Transactions on Neural Networks 1996, 7:919-925.
• 7  Li Z.X., Ding X.S. Prediction of stock market by BP neural networks with technical indexes as input. Numerical Mathematics: A Journal of Chinese Universities 2005, 27:373-377.
• 8  Li Z.X., Wu W., Tian Y.L. Convergence of an online gradient method for feedforward neural networks with stochastic inputs. Journal of Computational and Applied Mathematics 2004, 163:165-176.
• 9  Liang Y.C., Feng D.P., Lee H.P., Lim S.P., Lee K.H. Successive approximation training algorithm for feedforward neural networks. Neurocomputing 2002, 42:311-322.
• 10  Mangasarian O.L., Solodov M.V. Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. Optimization Methods and Software 1994, 4:117-134.
• 11  Nakama T. Theoretical analysis of batch and on-line training for gradient descent learning in neural networks. Neurocomputing 2009, 73:151-159.
• 12  Parker D.B. Learning-logic (invention report). Stanford University, Stanford, CA, 1982.
• 13  Rumelhart D.E., Hinton G.E., Williams R.J. Learning representations by back-propagating errors. Nature 1986, 323:533-536.
• 15  Tadic V., Stankovic S. Learning in neural networks by normalized stochastic gradient algorithm: local convergence. In: Proceedings of the 5th seminar on neural network applications in electronic engineering, 2000.
• 16  Sanger T.D. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks 1989, 2:459-473.
• 17  Werbos P.J. Beyond regression: new tools for prediction and analysis in the behavioral sciences. Ph.D. thesis, Harvard University, Cambridge, MA, 1974.
• 18  Wilson D.R., Martinez T.R. The general inefficiency of batch training for gradient descent learning. Neural Networks 2003, 16:1429-1451.
• 19  Wu W., Xu Y.S. Deterministic convergence of an on-line gradient method for neural networks. Journal of Computational and Applied Mathematics 2002, 144:335-347.
• 20  Wu W., Feng G.R., Li Z.X., Xu Y.S. Deterministic convergence of an online gradient method for BP neural networks. IEEE Transactions on Neural Networks 2005, 16:533-540.
• 21  Wu W., Feng G.R., Li X. Training multilayer perceptrons via minimization of sum of ridge functions. Advances in Computational Mathematics 2002, 17:331-347.
• 22  Wu W., Shao Z.Q. Convergence of online gradient methods for continuous perceptrons with linearly separable training patterns. Applied Mathematics Letters 2003, 16:999-1002.
• 23  Wu W., Shao H.M., Qu D. Strong convergence of gradient methods for BP networks training. In: Proceedings of 2005 international conference on neural networks and brains, 2005, pp. 332-334.
• 25  Zhang H.S., Wu W., Liu F., Yao M.C. Boundedness and convergence of online gradient method with penalty for feedforward neural networks. IEEE Transactions on Neural Networks 2009, 20:1050-1054.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.