Neurocomputing, Volume 77, Issue 1, 2012, Pages 243-252

Convergence of an online gradient method with inner-product penalty and adaptive momentum

Author keywords

Adaptive momentum; Convergence; Feedforward neural network; Inner product penalty; Online gradient method

Indexed keywords

ADAPTIVE MOMENTUM; CONVERGENCE; CONVERGENCE THEOREM; GABOR FUNCTION; INNER PRODUCT; ONLINE GRADIENT METHOD; SUFFICIENT CONDITIONS; THREE-LAYER NEURAL NETWORKS; TRAINING SAMPLE; TWO LAYERS; TWO-SPIRAL PROBLEM; WEAK AND STRONG CONVERGENCE;

EID: 80955130842     PISSN: 0925-2312     EISSN: 1872-8286     Source Type: Journal
DOI: 10.1016/j.neucom.2011.09.003     Document Type: Article
Times cited: 10

References (31)
  • 3. Wu W., Xu Y.S. Deterministic convergence of an online gradient method for neural networks. J. Comput. Appl. Math. 2002, 144:335-347.
  • 4. Wu W., Feng G.R., Li X. Training multilayer perceptrons via minimization of sum of ridge functions. Adv. Comput. Math. 2002, 17:331-347.
  • 5. Bartlett P.L. For valid generalization, the size of the weights is more important than the size of the network. Advances in Neural Information Processing Systems 1997, 9:134-140.
  • 6
  • 7. Hinton G.E. Connectionist learning procedures. Artif. Intell. 1989, 40:185-234.
  • 8. Reed R. Pruning algorithms-a survey. IEEE Trans. Neural Networks 1993, 4(5):740-747.
  • 9. Setiono R. A penalty-function approach for pruning feedforward neural networks. Neural Comput. 1997, 9(1):185-204.
  • 10. Ishikawa M. Structural learning with forgetting. Neural Networks 1996, 9(3):509-521.
  • 11. Zhang H.S., Wu W. Boundedness and convergence of online gradient method with penalty for linear output feedforward neural networks. Neural Process. Lett. 2009, 29:205-212.
  • 12. Kong J., Wu W. Online gradient methods with a punishing term for neural networks. Northeast. Math. J. 2001, 17(3):371-378.
  • 13. Zhang L.Q., Wu W. Online gradient methods with a penalty term for neural networks with large training set. J. Nonlinear Dyn. Sci. Technol. 2004, 11:53-58.
  • 14. Li Z.X., Wu W., Tian Y.L. Convergence of an online gradient method for feedforward neural networks with stochastic inputs. J. Comput. Appl. Math. 2004, 163(1):165-176.
  • 15. Shao H.M., Wu W., Li F. Convergence of online gradient method with a penalty term for feedforward neural networks with stochastic inputs. Numer. Math. J. Chin. Universities Eng. Ser. 2005, 14(10):87-96.
  • 16. Chan L.W., Fallside F. An adaptive training algorithm for backpropagation networks. Comput. Speech Lang. 1987, 2:205-218.
  • 17. Qiu G., Varley M.R., Terrell T.J. Accelerated training of backpropagation networks by using adaptive momentum step. IEE Electron. Lett. 1992, 28(4):377-379.
  • 18. Istook E., Martinez T. Improved backpropagation learning in neural networks with windowed momentum. Int. J. Neural Syst. 2002, 12(3-4):303-318.
  • 19. Torii M., Hagan M.T. Stability of steepest descent with momentum for quadratic functions. IEEE Trans. Neural Networks 2002, 13(3):752-756.
  • 20. Bhaya A., Kaszkurewicz E. Steepest descent with momentum for quadratic functions is a version of the conjugate gradient method. Neural Networks 2004, 17:65-71.
  • 21. Zeng Z.G. Analysis of global convergence and learning parameters of the back-propagation algorithm for quadratic functions. Lecture Notes in Computer Science 2007, 4682:7-13.
  • 22. Zhang N.M., Wu W., Zheng G.F. Convergence of gradient method with momentum for two-layer feedforward neural networks. IEEE Trans. Neural Networks 2006, 17(2):522-525.
  • 23. Zhang N.M. Deterministic convergence of an online gradient method with momentum. Lecture Notes in Computer Science 2006, 4113:94-105.
  • 24. Zhang N.M. An online gradient method with momentum for two-layer feedforward neural networks. Appl. Math. Comput. 2009, 212:488-498.
  • 25. Wu W., Feng G.R., Li Z.X., Xu Y.S. Convergence of an online gradient method for BP neural networks. IEEE Trans. Neural Networks 2005, 16(3):533-540.
  • 26. Wu W., Shao H.M., Li Z.X. Convergence of batch BP algorithm with penalty for FNN training. Lecture Notes in Computer Science 2006, 4232:562-569.
  • 27. Wu W., et al. Convergence of gradient method with momentum for back-propagation neural networks. J. Comput. Math. 2008, 26(4):613-623.
  • 28. Xu D.P., Zhang H.S., Liu L.J. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks. Neural Comput. 2010, 22(10):2655-2677.
  • 30. Wu W., et al. A modified gradient-based neuro-fuzzy learning algorithm and its convergence. Inf. Sci. 2010, 180(9):1630-1642.
  • 31. Kathirvalavakumar T., Jeyaseeli Subavathi S. Neighborhood based modified backpropagation algorithm using adaptive learning parameters for training feedforward neural networks. Neurocomputing 2009, 72:3915-3921.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.