
Neurocomputing, Volume 73, Issues 7-9, 2010, Pages 1303-1323

A heuristically enhanced gradient approximation (HEGA) algorithm for training neural networks

Author keywords

Gradient approximation; Local search; Neural networks learning; Weight perturbation

Indexed keywords

Artificial neural networks; Benchmark problems; Gradient approximation; Local search; Training algorithms; Weight perturbation

EID: 77949267043     PISSN: 0925-2312     EISSN: None     Source Type: Journal
DOI: 10.1016/j.neucom.2009.12.014     Document Type: Article
Times cited : (7)

References (20)
  • 1. Melkikh, A.V. "Can organism pick up new valuable information from the environment?" Biophysics (Biofizika) 47(6) (2002) 1053-1058.
  • 4. Fletcher, R., and Reeves, C.M. "Function minimization by conjugate gradients." Computer Journal 7 (1964) 149-154.
  • 5. Moller, M.F. "A scaled conjugate gradient algorithm for fast supervised learning." Neural Networks 6 (1993) 525-533.
  • 6. Hagan, M.T., and Menhaj, M. "Training feedforward networks with the Marquardt algorithm." IEEE Transactions on Neural Networks 5(6) (1994) 989-993.
  • 7. Kirkpatrick, S., Gelatt, C., and Vecchi, M.P. "Optimization by simulated annealing." Science 220 (1983) 671-680.
  • 8. Jabri, M., and Flower, B. "Weight perturbation: an optimal architecture and learning technique for analog VLSI feedforward and recurrent multilayer networks." IEEE Transactions on Neural Networks 3(1) (1992) 154-157.
  • 9. Flower, B., and Jabri, M. "Summed weight neuron perturbation: an O(n) improvement over weight perturbation." Advances in Neural Information Processing Systems, vol. 5, Morgan Kaufmann Publishers, San Mateo, CA (1993) 212-219.
  • 10. Cauwenberghs, G. "A fast stochastic error-descent algorithm for supervised learning and optimization." Advances in Neural Information Processing Systems, vol. 5, Morgan Kaufmann Publishers, San Mateo, CA (1993) 244-251.
  • 11. Alspector, J., Meir, R., Yuhas, B., Jayakumar, A., and Lippe, D. "A parallel gradient descent method for learning in analog VLSI neural networks." Advances in Neural Information Processing Systems, vol. 5, Morgan Kaufmann Publishers, San Mateo, CA (1993) 836-844.
  • 12. Unnikrishnan, K.P., and Venugopal, K.P. "Alopex: a correlation-based learning algorithm for feedforward and recurrent neural networks." Neural Computation 6 (1994) 469-490.
  • 13. Bia, A. "Alopex-B: a new, simpler, but yet faster version of the Alopex training algorithm." International Journal of Neural Systems 11(6) (2001) 497-507.
  • 14. Rowland, B.A., Maida, A.S., and Berkeley, I.S.N. "Synaptic noise as a means of implementing weight-perturbation learning." CACS Technical Report TR-2005-2-1, University of Louisiana at Lafayette, Center for Advanced Computer Studies, 2005.
  • 16. Werfel, J., Xie, X., and Seung, H.S. "Learning curves for stochastic gradient descent in linear feedforward networks." Neural Computation 17(12) (2005) 2699-2718.
  • 18. Widrow, B., and Lehr, M.A. "30 years of adaptive neural networks: Perceptron, Madaline, and backpropagation." Proceedings of the IEEE 78(9) (1990) 1415-1442.
  • 20. Thrun, S.B., et al. "The MONK's problems: a performance comparison of different learning algorithms." Technical Report CMU-CS-91-197, Carnegie Mellon University, December 1991.


* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.