Neurocomputing, Volume 89, 2012, Pages 141-146

Boundedness and convergence of batch back-propagation algorithm with penalty for feedforward neural networks

Author keywords

Batch back propagation algorithm; Boundedness; Convergence; Feedforward neural networks; Penalty

Indexed keywords

Boundedness; Convergence; Convergence results; Learning rates; Network training; Penalty; Weak and strong convergence

EID: 84860249350     PISSN: 0925-2312     EISSN: 1872-8286     Source Type: Journal
DOI: 10.1016/j.neucom.2012.02.029     Document Type: Article
Times cited: 47
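The title and keywords describe batch back-propagation training with a penalty term. A minimal sketch of that setup might look as follows; the one-hidden-layer network, the quadratic (weight-decay) penalty form, and all names (`forward`, `penalized_error`, `eta`, `lam`, etc.) are illustrative assumptions, not taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: a batch of J samples approximating y = sin(pi*x).
X = np.linspace(-1.0, 1.0, 20).reshape(-1, 1)   # inputs, shape (J, 1)
Y = np.sin(np.pi * X)                            # targets, shape (J, 1)

n_hidden = 8
W = rng.normal(scale=0.5, size=(1, n_hidden))    # input-to-hidden weights
V = rng.normal(scale=0.5, size=(n_hidden, 1))    # hidden-to-output weights

eta = 0.1    # learning rate
lam = 1e-3   # penalty coefficient

def forward(X, W, V):
    H = np.tanh(X @ W)          # hidden-layer activations
    return H, H @ V             # network output

def penalized_error(X, Y, W, V, lam):
    _, out = forward(X, W, V)
    mse = 0.5 * np.mean((out - Y) ** 2)
    # Quadratic penalty on all weights; this is what keeps the weight
    # sequence bounded during training in penalty-based analyses.
    return mse + lam * (np.sum(W ** 2) + np.sum(V ** 2))

loss0 = penalized_error(X, Y, W, V, lam)
for epoch in range(500):
    H, out = forward(X, W, V)
    err = (out - Y) / len(X)                 # dE/d(out), averaged over the batch
    grad_V = H.T @ err + 2 * lam * V         # batch gradient + penalty term
    grad_H = (err @ V.T) * (1 - H ** 2)      # back-propagate through tanh
    grad_W = X.T @ grad_H + 2 * lam * W
    V -= eta * grad_V                        # one full-batch gradient update
    W -= eta * grad_W
loss1 = penalized_error(X, Y, W, V, lam)
print(loss1 < loss0)   # check that training reduced the penalized error
```

"Batch" here means the gradient is accumulated over all training samples before each weight update, as opposed to the online (per-sample) updates studied in several of the references below.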

References (21)
  • 2 Bertsekas D.P., Tsitsiklis J.N. Gradient convergence in gradient methods with errors. SIAM J. Optim. 2000, 3:627.
  • 3 Fine T.L., Mukherjee S. Parameter convergence and learning curves for neural networks. Neural Comput. 1999, 11:747.
  • 4 Mangasarian O.L., Solodov M.V. Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. Optim. Methods Software 1994, 4:117.
  • 5 Wu W., Feng G., Li Z., Xu Y. Deterministic convergence of an online gradient method for BP neural networks. IEEE Trans. Neural Networks 2005, 16:533.
  • 6 Wu W., Shao H., Li Z. Convergence of batch BP algorithm with penalty for FNN training. Lect. Notes Comput. Sci. 2006, 4232:562.
  • 7 Xu D., Li Z., Wu W. Convergence of approximated gradient method for Elman network. Neural Network World 2008, 18(3):171.
  • 8 Xu D., Zhang H., Liu L. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks. Neural Comput. 2010, 22(10):2655.
  • 9 Zhang C., Wu W., Xiong Y. Convergence analysis of batch gradient algorithm for three classes of sigma-pi neural networks. Neural Process. Lett. 2007, 26(3):177.
  • 10 Didandeh A., Mirbakhsh N., Amiri A., Fathy M. AVLR-EBP: a variable step size approach to speed-up the convergence of error back-propagation algorithm. Neural Process. Lett. 2011, 33(2):201.
  • 11 Karnin E.D. A simple procedure for pruning back-propagation trained neural networks. IEEE Trans. Neural Networks 1990, 1:239.
  • 12
  • 13 Wang J., Wu W., Zurada J.M. Deterministic convergence of conjugate gradient method for feedforward neural networks. Neurocomputing 2011, 74:2368.
  • 14 Zhang H., Wu W., Liu F., Yao M. Boundedness and convergence of online gradient method with penalty for feedforward neural networks. IEEE Trans. Neural Networks 2009, 20(6):1050.
  • 15 Shao H.M., Zheng G.F. Boundedness and convergence of online gradient method with penalty and momentum. Neurocomputing 2011, 74:765.
  • 16 Shao H.M., Xu D.P., Zheng G.F., Liu L.J. Convergence of an online gradient method with inner-product penalty and adaptive momentum. Neurocomputing 2012, 77:243.
  • 17 Yu W., Rubio J.J. Recurrent neural networks training with stable bounding ellipsoid algorithm. IEEE Trans. Neural Networks 2009, 20(6):983.
  • 18 Rubio J.J. SOFMLS: online self-organizing fuzzy modified least square network. IEEE Trans. Fuzzy Systems 2009, 17(6):1296.
  • 19 Rubio J.J., Angelov P., Pacheco J. Uniformly stable backpropagation algorithm to train a feedforward neural network. IEEE Trans. Neural Networks 2011, 22(3):256.
  • 20 Rubio J.J., Yu W. A new discrete-time sliding-mode control with time-varying gain and neural identification. Int. J. Control 2006, 79(4):338.
  • 21 Yu W. Nonlinear system identification using discrete-time recurrent neural networks with stable learning algorithms. Inf. Sci. 2004, 158(1):131.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.