Volume 23, Issue 2, 2012, Pages 330-341

Global convergence of online BP training with dynamic learning rate

Author keywords

Backpropagation (BP) neural networks; dynamic learning rate; global convergence analysis; online BP training procedure

Indexed keywords

BACK-PROPAGATION NEURAL NETWORKS; DYNAMIC LEARNING RATES; ENGINEERING APPLICATIONS; ERROR FUNCTION; GLOBAL CONVERGENCE; LEARNING RATES; SCIENTIFIC RESEARCHES; TRAINING PROCEDURES;

EID: 84868489803     PISSN: 2162-237X     EISSN: 2162-2388     Source Type: Journal
DOI: 10.1109/TNNLS.2011.2178315     Document Type: Article
Times cited : (77)

References (34)
  • 5. D. Wang and P. Bao, "Enhancing the estimation of plant Jacobian for adaptive neural inverse control," Neurocomputing, vol. 34, nos. 1-4, pp. 99-115, Sep. 2000.
  • 6. D. Wang and X. Ma, "A hybrid image retrieval system with user's relevance feedback using neurocomputing," Informatica, vol. 29, no. 3, pp. 271-280, 2005.
  • 7. K. I.-J. Ho, C.-S. Leung, and J. Sum, "Convergence and objective functions of some fault/noise-injection-based online learning algorithms for RBF networks," IEEE Trans. Neural Netw., vol. 21, no. 6, pp. 938-947, Jun. 2010.
  • 8. B. Chen, Y. Zhu, and J. Hu, "Mean-square convergence analysis of ADALINE training with minimum error entropy criterion," IEEE Trans. Neural Netw., vol. 21, no. 7, pp. 1168-1178, Jul. 2010.
  • 9. X. Wang and Y. Huang, "Convergence study in extended Kalman filter-based training of recurrent neural networks," IEEE Trans. Neural Netw., vol. 22, no. 4, pp. 588-600, Apr. 2011.
  • 10. T. L. Fine and S. Mukherjee, "Parameter convergence and learning curves for neural networks," Neural Comput., vol. 11, no. 3, pp. 747-769, 1998.
  • 11. W. Finnoff, "Diffusion approximations for the constant learning rate BP algorithm and resistance to local minima," Neural Comput., vol. 6, no. 2, pp. 285-295, 1994.
  • 12. A. A. Gaivoronski, "Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods. Part I," Optim. Methods Softw., vol. 4, no. 2, pp. 117-134, 1994.
  • 13. C.-M. Kuan and K. Hornik, "Convergence of learning algorithms with constant learning rates," IEEE Trans. Neural Netw., vol. 2, no. 5, pp. 484-489, Sep. 1991.
  • 15. S.-H. Oh, "Improving the error BP algorithm with a modified error function," IEEE Trans. Neural Netw., vol. 8, no. 3, pp. 799-803, May 1997.
  • 16. H. White, "Some asymptotic results for learning in single hidden-layer feedforward neural network models," J. Amer. Stat. Assoc., vol. 84, no. 408, pp. 1003-1013, Dec. 1989.
  • 18. Z.-X. Li, W. Wu, and Y.-L. Tian, "Convergence of an online gradient method for FNN with stochastic inputs," J. Comput. Appl. Math., vol. 163, no. 1, pp. 165-176, 2004.
  • 19. Z.-Q. Luo and P. Tseng, "Analysis of an approximate gradient projection method with application to the backpropagation algorithm," Optim. Methods Softw., vol. 4, no. 2, pp. 85-101, 1994.
  • 20. O. L. Mangasarian and M. V. Solodov, "Serial and parallel backpropagation convergence via nonmonotone perturbed minimization," Optim. Methods Softw., vol. 4, no. 2, pp. 103-116, 1994.
  • 21. W. Wu, G.-R. Feng, and X. Li, "Training multilayer perceptrons via minimization of sum of ridge functions," Adv. Comput. Math., vol. 17, no. 4, pp. 331-347, 2002.
  • 22. W. Wu, G.-R. Feng, Z.-X. Li, and Y.-S. Xu, "Deterministic convergence of an online gradient method for BP neural networks," IEEE Trans. Neural Netw., vol. 16, no. 3, pp. 533-540, May 2005.
  • 23. W. Wu and Z.-Q. Shao, "Convergence of online gradient methods for continuous perceptrons with linearly separable training patterns," Appl. Math. Lett., vol. 16, no. 7, pp. 999-1002, 2003.
  • 24. W. Wu and Y.-S. Xu, "Deterministic convergence of an online gradient method for neural networks," J. Comput. Appl. Math., vol. 144, no. 1, pp. 335-347, 2002.
  • 25. Z.-B. Xu, R. Zhang, and W.-F. Jing, "When does online BP training converge?" IEEE Trans. Neural Netw., vol. 20, no. 10, pp. 1529-1539, Oct. 2009.
  • 26. H. Zhang, W. Wu, F. Liu, and M. Yao, "Boundedness and convergence of online gradient method with penalty for feedforward neural networks," IEEE Trans. Neural Netw., vol. 20, no. 6, pp. 1050-1054, Jun. 2009.
  • 27. N.-M. Zhang, W. Wu, and G.-F. Zheng, "Convergence of gradient method with momentum for two-layer feedforward neural networks," IEEE Trans. Neural Netw., vol. 17, no. 2, pp. 522-525, Mar. 2006.
  • 28. A. Nedic and D. P. Bertsekas, "Incremental subgradient methods for nondifferentiable optimization," SIAM J. Optim., vol. 12, no. 1, pp. 109-138, 2001.
  • 29. S. Duffner and C. Garcia, "An online backpropagation algorithm with validation error-based adaptive learning rate," in Proc. Int. Conf. Artif. Neural Netw., vol. 1, 2007, pp. 249-258.
  • 30. D. P. Bertsekas and J. N. Tsitsiklis, "Gradient convergence in gradient methods with errors," SIAM J. Optim., vol. 10, no. 3, pp. 627-642, 2000.
  • 32. D. Liu, M. E. Hohil, and S. H. Smith, "N-bit parity neural networks: New solutions based on linear programming," Neurocomputing, vol. 48, nos. 1-4, pp. 477-488, Oct. 2002.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS DB.