
Volume 20, Issue 10, 2009, Pages 1529-1539

When does online BP training converge?

Author keywords

Backpropagation (BP) neural networks; Convergence analysis; Online BP training procedure

Indexed keywords

ANALYSIS RESULTS; BACKPROPAGATION (BP) NEURAL NETWORKS; BP NEURAL NETWORKS; CONVERGENCE ANALYSIS; ERROR FUNCTION; FUNDAMENTAL THEOREMS; NEURAL NETWORK LEARNING; ONLINE BP TRAINING PROCEDURE; SCIENTIFIC RESEARCHES; SIGMOID ACTIVATION FUNCTION; STRONG CONVERGENCE; TRAINING PROCEDURES; WEAK CONVERGENCE;

EID: 70350336479     PISSN: 1045-9227     EISSN: None     Source Type: Journal    
DOI: 10.1109/TNN.2009.2025946     Document Type: Article
Times cited : (52)

References (26)
  • 1
    • Y. C. Liang, W. Z. Lin, H. P. Lee, S. P. Lim, K. H. Lee, and H. Sun, "Proper orthogonal decomposition and its application - Part II: Model reduction for MEMS dynamical analysis," J. Sound Vib., vol. 256, pp. 515-532, 2002.
  • 3
    • T. L. Fine and S. Mukherjee, "Parameter convergence and learning curves for neural networks," Neural Comput., vol. 11, pp. 747-769, 1999.
  • 4
    • W. Finnoff, "Diffusion approximations for the constant learning rate BP algorithm and resistance to local minima," Neural Comput., vol. 6, no. 2, pp. 285-295, 1994.
  • 5
    • Z. Luo, "On the convergence of the LMS algorithm with adaptive learning rate for linear feedforward networks," Neural Comput., vol. 3, no. 2, pp. 226-245, 1991.
  • 6
    • Z. Luo and P. Tseng, "Analysis of an approximate gradient projection method with application to the backpropagation algorithm," Optim. Methods Softw., vol. 4, no. 2, pp. 85-101, 1994.
  • 7
    • P. Sollich and D. Barber, "Online learning from finite training sets and robustness to input bias," Neural Comput., vol. 10, no. 8, pp. 2201-2217, 1998.
  • 9
    • S. H. Oh, "Improving the error BP algorithm with a modified error function," IEEE Trans. Neural Netw., vol. 8, no. 3, pp. 799-803, May 1997.
  • 10
    • C. M. Kuan and K. Hornik, "Convergence of learning algorithms with constant learning rates," IEEE Trans. Neural Netw., vol. 2, no. 5, pp. 484-489, Sep. 1991.
  • 11
    • A. A. Gaivoronski, "Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Part I," Optim. Methods Softw., vol. 4, no. 2, pp. 117-134, 1994.
  • 12
    • O. L. Mangasarian and M. V. Solodov, "Serial and parallel backpropagation convergence via nonmonotone perturbed minimization," Optim. Methods Softw., vol. 4, pp. 103-116, 1994.
  • 13
    • Z. X. Li, W. Wu, and W. Q. Chen, "Prediction of stock market by BP neural networks with technical indexes as input," J. Math. Res. Exposition, vol. 23, no. 1, pp. 83-97, 2003.
  • 15
    • H. White, "Some asymptotic results for learning in single hidden-layer feedforward neural network models," J. Amer. Statist. Assoc., vol. 84, no. 408, pp. 1003-1013, 1989.
  • 16
    • W. Wu and Z. Shao, "Convergence of an online gradient method for continuous perceptrons with linearly separable training patterns," Appl. Math. Lett., vol. 16, no. 2, pp. 999-1002, 2003.
  • 17
    • W. Wu and Y. S. Xu, "Deterministic convergence of an online gradient method for neural networks," J. Comput. Appl. Math., vol. 144, pp. 335-347, 2002.
  • 18
    • W. Wu, G. R. Feng, and X. Li, "Training multilayer perceptrons via minimization of sum of ridge functions," Adv. Comput. Math., vol. 17, no. 4, pp. 331-347, 2002.
  • 19
    • Z. Li, W. Wu, and Y. Tian, "Convergence of an online gradient method for FNN with stochastic inputs," J. Comput. Appl. Math., vol. 163, pp. 165-176, 2004.
  • 20
    • W. Wu, G. R. Feng, Z. X. Li, and Y. S. Xu, "Deterministic convergence of an online gradient method for BP neural networks," IEEE Trans. Neural Netw., vol. 16, no. 3, pp. 533-540, May 2005.
  • 21
    • Z. X. Li, W. Wu, G. R. Feng, and H. Lu, "Convergence of an online gradient method for BP neural networks with stochastic inputs," in Lecture Notes in Computer Science, vol. 3601. Berlin, Germany: Springer-Verlag, 2005, pp. 720-729.
  • 22
    • W. Wu, H. Shao, and D. Qu, "Strong convergence for gradient methods for BP networks training," in Proc. Int. Conf. Neural Netw. Brains, 2005, pp. 332-334.
  • 23
    • N. Zhang, W. Wu, and G. Zheng, "Convergence of gradient method with momentum for two-layer feedforward neural networks with stochastic inputs," IEEE Trans. Neural Netw., vol. 17, no. 2, pp. 522-525, Mar. 2006.
  • 24
    • D. P. Bertsekas and J. N. Tsitsiklis, "Gradient convergence in gradient methods with errors," SIAM J. Optim., vol. 10, no. 3, pp. 627-642, 2000.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.