Neurocomputing, Volume 74, Issues 14-15, 2011, Pages 2368-2376

Deterministic convergence of conjugate gradient method for feedforward neural networks

Author keywords

Backpropagation; Conjugate gradient; Deterministic convergence; Feedforward neural networks

Indexed keywords

BACK PROPAGATION NEURAL NETWORKS; BATCH MODES; CONJUGATE GRADIENT; CONVERGENCE PROPERTIES; CONVERGENCE RESULTS; DETERMINISTIC CONVERGENCE; ERROR FUNCTION; FAST CONVERGENCE; FEED-FORWARD; FIXED POINTS; LEARNING MODE; LEARNING RATES; LOW MEMORY; NUMERICAL EXAMPLE; NUMERICAL EXPERIMENTS; THREE-LAYER; WEAK AND STRONG CONVERGENCE
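As context for the title and keywords above, the following is a minimal illustrative sketch of batch conjugate-gradient training of a three-layer feedforward network. It is not the paper's exact algorithm: the network sizes, data, fixed learning rate, and the clipped Polak-Ribiere update are all assumptions made here for demonstration only.

```python
import numpy as np

# Illustrative sketch (not the paper's algorithm): batch training of a
# three-layer sigmoid network using a Polak-Ribiere conjugate-gradient
# direction. All sizes and constants below are arbitrary choices.
rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))                     # 20 samples, 3 inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float) # toy binary targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w):
    """Batch squared error and its gradient via backpropagation."""
    W1, W2 = w[:15].reshape(3, 5), w[15:].reshape(5, 1)
    H = sigmoid(X @ W1)            # hidden layer
    out = sigmoid(H @ W2)          # output layer
    err = out - y
    loss = 0.5 * np.mean(err ** 2)
    d_out = err * out * (1 - out) / len(X)
    gW2 = H.T @ d_out
    d_hid = (d_out @ W2.T) * H * (1 - H)
    gW1 = X.T @ d_hid
    return loss, np.concatenate([gW1.ravel(), gW2.ravel()])

w = 0.1 * rng.standard_normal(20)  # flattened weights of both layers
loss, g = loss_and_grad(w)
d = -g                             # first direction: steepest descent
eta = 0.1                          # fixed learning rate (illustrative)
losses = [loss]
for _ in range(300):
    w = w + eta * d
    loss, new_g = loss_and_grad(w)
    # Polak-Ribiere coefficient, clipped at zero (a common safeguard).
    beta = max(0.0, new_g @ (new_g - g) / (g @ g))
    d = -new_g + beta * d          # new conjugate direction
    g = new_g
    losses.append(loss)
```

Under these assumptions the batch error typically decreases monotonically, which is the kind of behavior the paper's deterministic convergence analysis formalizes.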

EID: 79957959758     PISSN: 0925-2312     EISSN: 1872-8286     Source Type: Journal
DOI: 10.1016/j.neucom.2011.03.016     Document Type: Article
Times cited : (51)

References (41)
  • 1. Rumelhart D.E., Hinton G.E., Williams R.J. Learning representations by back-propagating errors. Nature 1986, 323:533-536.
  • 2. Heskes T., Wiegerinck W. A theoretical comparison of batch-mode, on-line, cyclic, and almost-cyclic learning. IEEE Transactions on Neural Networks 1996, 7:919-925.
  • 3. Wilson D.R., Martinez T.R. The general inefficiency of batch training for gradient descent learning. Neural Networks 2003, 16:1429-1451.
  • 5. Wu W., Wang J., Cheng M.S., Li Z.X. Convergence analysis of online gradient method for BP neural networks. Neural Networks 2011, 24:91-98.
  • 6. Wang J., Yang J., Wu W. Convergence of cyclic and almost-cyclic learning with momentum for feedforward neural networks. IEEE Transactions on Neural Networks, submitted for publication.
  • 8. Saini L.M., Soni M.K. Artificial neural network-based peak load forecasting using conjugate gradient methods. IEEE Transactions on Power Systems 2002, 17:907-912.
  • 10. Papalexopoulos A.D., Hao S.Y., Peng T.M. An implementation of a neural-network-based load forecasting model for the EMS. IEEE Transactions on Power Systems 1994, 9:1956-1962.
  • 11. Goodband J.H., Haas O.C.L., Mills J.A. A comparison of neural network approaches for on-line prediction in IGRT. Medical Physics 2008, 35:1113-1122.
  • 14. Hestenes M.R., Stiefel E.L. Methods of Conjugate Gradients for Solving Linear Systems. National Bureau of Standards, Washington, 1952.
  • 15. Fletcher R., Reeves C.M. Function minimization by conjugate gradients. The Computer Journal 1964, 7:149-154.
  • 17. Gonzalez A., Dorronsoro J.R. Natural conjugate gradient training of multilayer perceptrons. Neurocomputing 2008, 71:2499-2506.
  • 20. Shi Z.J., Shen J. Convergence of the Polak-Ribiere-Polyak conjugate gradient method. Nonlinear Analysis-Theory 2007, 66:1428-1441.
  • 22. Cichocki A., Orsier B., Back A., Amari S.I. On-line adaptive algorithms in non-stationary environments using a modified conjugate gradient approach. In: Neural Networks for Signal Processing VII, Proceedings of the 1997 IEEE Workshop, 1997, pp. 316-325.
  • 23. Shen X.Z., Shi X.Z., Meng G. Online algorithm of blind source separation based on conjugate gradient method. Circuits, Systems, and Signal Processing 2006, 25:381-388.
  • 24. Liu Q.S., Dang C.Y., Cao J.D. A novel recurrent neural network with one neuron and finite-time convergence k-winners-take-all operation. IEEE Transactions on Neural Networks 2010, 21:1140-1148.
  • 25. Liu Q., Cao J., Chen G. A novel recurrent neural network with finite-time convergence for linear programming. Neural Computation 2010, 22:2962-2978.
  • 26. Huang G.B., Zhu Q.Y., Siew C.K. Extreme learning machine: theory and applications. Neurocomputing 2006, 70:489-501.
  • 27. Han F., Huang D.S. Improved extreme learning machine for function approximation by encoding a priori information. Neurocomputing 2006, 69:2369-2373.
  • 28. Huang G.B., Ding X., Zhou H. Optimization method based extreme learning machine for classification. Neurocomputing 2010, 74:155-163.
  • 29. Xu D.P., Zhang H.S., Liu L.J. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks. Neural Computation 2010, 22:2655-2677.
  • 30. Sanger T.D. Optimal unsupervised learning in a single-layer linear feedforward neural network. Neural Networks 1989, 2:459-473.
  • 31. Finnoff W. Diffusion approximations for the constant learning rate backpropagation algorithm and resistance to local minima. Neural Computation 1994, 6:285-295.
  • 32. Chakraborty D., Pal N.R. A novel training scheme for multilayered perceptrons to realize proper generalization and incremental learning. IEEE Transactions on Neural Networks 2003, 14:1-14.
  • 33. Zhang H.S., Wu W., Liu F., Yao M.C. Boundedness and convergence of online gradient method with penalty for feedforward neural networks. IEEE Transactions on Neural Networks 2009, 20:1050-1054.
  • 34. Wu W., Feng G.R., Li Z.X., Xu Y.S. Deterministic convergence of an online gradient method for BP neural networks. IEEE Transactions on Neural Networks 2005, 16:533-540.
  • 35. Li Z.X., Wu W., Tian Y.L. Convergence of an online gradient method for feedforward neural networks with stochastic inputs. Journal of Computational and Applied Mathematics 2004, 163:165-176.
  • 36. Grippo L., Lucidi S. A globally convergent version of the Polak-Ribiere conjugate gradient method. Mathematical Programming 1997, 78:375-391.
  • 38. Nocedal J. Theory of algorithms for unconstrained optimization. Acta Numerica 1992, 1:199-242.
  • 39. Shi Z.J. Restricted PR conjugate gradient method and its global convergence. Advances in Mathematics 2002, 1:47-55.
  • 40. Shi Z.J., Guo J.H. A new algorithm of nonlinear conjugate gradient method with strong convergence. Computational and Applied Mathematics 2008, 27:93-106.
  • 41. Gorman R.P., Sejnowski T.J. Analysis of hidden units in a layered network trained to classify sonar targets. Neural Networks 1988, 1:75-89.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.