Volume 10, Issue 3, 1997, Pages 517-527

Efficient backpropagation learning using optimal learning rate and momentum

Author keywords

backpropagation learning; multilayer feedforward neural networks

Indexed keywords

ALGORITHMS; BACKPROPAGATION; COMPUTER SIMULATION; FEEDFORWARD NEURAL NETWORKS; OPTIMAL SYSTEMS;

EID: 0031127257     PISSN: 0893-6080     EISSN: None     Source Type: Journal
DOI: 10.1016/S0893-6080(96)00102-5     Document Type: Article
Times cited: 129

References (21)
  • 2 Battiti, R. (1992). First- and second-order methods for learning: Between steepest descent and Newton's method. Neural Computation, 4, 141-166.
  • 7 Lippmann, R.P. (1987). An introduction to computing with neural networks. IEEE ASSP Magazine, 4(2), 4-22.
  • 8 Jacobs, R.A. (1988). Increased rates of convergence through learning rate adaptation. Neural Networks, 1(4), 295-308.
  • 9 Kollias, S., & Anastassiou, D. (1989). An adaptive least squares algorithm for the efficient training of artificial neural networks. IEEE Transactions on Circuits and Systems, 36, 1092-1101.
  • 10 Møller, M.F. (1993). A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6(4), 525-534.
  • 11 Mohandes, M., Codrington, C.W., & Gelfand, S.B. (1994). Two adaptive stepsize rules for gradient descent and their application to the training of feedforward artificial neural networks. In Proceedings of the IEEE International Conference on Neural Networks, Vol. 1, Orlando (pp. 555-560).
  • 19 Yu, X.-H., & Cheng, S.-X. (1990). Training algorithms for backpropagation neural networks with optimal descent factor. Electronics Letters, 26(20), 1698-1700.
  • 20 Yu, X.-H., Chen, G.-A., & Cheng, S.-X. (1993). Acceleration of back-propagation learning using optimized learning rate and momentum. Electronics Letters, 29(14), 1288-1290.
  • 21 Yu, X.-H., Chen, G.-A., & Cheng, S.-X. (1995). Dynamic learning rate optimization of the backpropagation algorithm. IEEE Transactions on Neural Networks, 6(3), 669-677.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.