Journal of Machine Learning Research, Volume 7, 2006, Pages 1159-1182

A very fast learning method for neural networks based on sensitivity analysis

Author keywords

Initialization method; Least squares; Linear optimization; Neural networks; Sensitivity analysis; Supervised learning

Indexed keywords

KNOWLEDGE ENGINEERING; LEARNING ALGORITHMS; LEARNING SYSTEMS; LEAST SQUARES APPROXIMATIONS; LINEAR EQUATIONS; NEURAL NETWORKS;

EID: 33745751396     PISSN: 15337928     EISSN: 15337928     Source Type: Journal    
DOI: None     Document Type: Article
Times cited : (113)

References (49)
  • 1
    • L. B. Almeida, T. Langlois, J. D. Amaral, and A. Plakhov. Parameter adaptation in stochastic optimization. In D. Saad, editor, On-line Learning in Neural Networks, chapter 6, pages 111-134. Cambridge University Press, 1999.
  • 2
    • R. Battiti. First and second order methods for learning: Between steepest descent and Newton's method. Neural Computation, 4(2):141-166, 1992.
  • 3
    • E. M. L. Beale. A derivation of conjugate gradients. In F. A. Lootsma, editor, Numerical Methods for Nonlinear Optimization, pages 39-43. Academic Press, London, 1972.
  • 4
    • F. Biegler-König and F. Bärmann. A learning algorithm for multilayered neural networks based on linear least-squares problems. Neural Networks, 6:127-131, 1993.
  • 5
    • W. L. Buntine and A. S. Weigend. Computing second derivatives in feed-forward networks: A review. IEEE Transactions on Neural Networks, 5(3):480-488, 1993.
  • 7
    • E. Castillo, A. Cobo, J. M. Gutiérrez, and R. E. Pruneda. Working with differential, functional and difference equations using functional networks. Applied Mathematical Modelling, 23(2):89-107, 1999.
  • 11
    • E. Castillo, A. S. Hadi, A. Conejo, and A. Fernández-Canteli. A general method for local sensitivity analysis with application to regression models and other optimization problems. Technometrics, 46(4):430-445, 2004.
  • 17
    • G. P. Drago and S. Ridella. Statistically controlled activation weight initialization (SCAWI). IEEE Transactions on Neural Networks, 3:899-905, 1992.
  • 18
    • R. Fletcher and C. M. Reeves. Function minimization by conjugate gradients. Computer Journal, 7:149-154, 1964.
  • 20
    • M. T. Hagan and M. Menhaj. Training feedforward networks with the Marquardt algorithm. IEEE Transactions on Neural Networks, 5(6):989-993, 1994.
  • 24
    • D. R. Hush and J. M. Salas. Improving the learning rate of back-propagation with the gradient reuse algorithm. Proceedings of the IEEE Conference on Neural Networks, 1:441-447, 1988.
  • 26
    • R. A. Jacobs. Increased rates of convergence through learning rate adaptation. Neural Networks, 1(4):295-308, 1988.
  • 27
    • Y. LeCun, I. Kanter, and S. A. Solla. Second order properties of error surfaces: Learning time and generalization. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Neural Information Processing Systems, volume 3, pages 918-924, San Mateo, CA, 1991. Morgan Kaufmann.
  • 29
    • K. Levenberg. A method for the solution of certain non-linear problems in least squares. Quarterly of Applied Mathematics, 2(2):164-168, 1944.
  • 30
    • E. Ley. On the peculiar distribution of the U.S. stock indexes' first digits. The American Statistician, 50(4):311-314, 1996.
  • 33
    • M. F. Møller. A scaled conjugate gradient algorithm for fast supervised learning. Neural Networks, 6:525-533, 1993.
  • 34
    • D. Nguyen and B. Widrow. Improving the learning speed of 2-layer neural networks by choosing initial values of the adaptive weights. Proceedings of the International Joint Conference on Neural Networks, 3:21-26, 1990.
  • 35
    • G. B. Orr and T. K. Leen. Using curvature information for fast stochastic search. In M. I. Jordan, M. C. Mozer, and T. Petsche, editors, Neural Information Processing Systems, volume 9, pages 606-612, Cambridge, 1996. MIT Press.
  • 36
    • D. B. Parker. Optimal algorithms for adaptive networks: Second order back propagation, second order direct propagation, and second order Hebbian learning. Proceedings of the IEEE Conference on Neural Networks, 2:593-600, 1987.
  • 37
    • S. Pethel, C. Bowden, and M. Scalora. Characterization of optical instabilities and chaos using MLP training algorithms. SPIE Chaos Opt, 2039:129-140, 1993.
  • 38
    • M. J. D. Powell. Restart procedures for the conjugate gradient method. Mathematical Programming, 12:241-254, 1977.
  • 40
    • A. K. Rigler, J. M. Irvine, and T. P. Vogl. Rescaling of variables in back propagation learning. Neural Networks, 4:225-229, 1991.
  • 41
    • D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning representations by back-propagating errors. Nature, 323:533-536, 1986.
  • 42
    • N. N. Schraudolph. Fast curvature matrix-vector products for second order gradient descent. Neural Computation, 14(7):1723-1738, 2002.
  • 43
    • A. Sperduti and A. Starita. Speed up learning and network optimization with extended back propagation. Neural Networks, 6:365-383, 1993.
  • 44
    • J. A. K. Suykens and J. Vandewalle, editors. Nonlinear Modeling: Advanced Black-box Techniques. Kluwer Academic Publishers, Boston, 1998.
  • 45
    • T. Tollenaere. SuperSAB: Fast adaptive back propagation with good scaling properties. Neural Networks, 3:561-573, 1990.
  • 47
    • M. K. Weir. A method for self-determination of adaptive learning rates in back propagation. Neural Networks, 4:371-379, 1991.
  • 49
    • J. Y. F. Yam, T. W. S. Chow, and C. T. Leung. A new method in determining the initial weights of feedforward neural networks. Neurocomputing, 16(1):23-32, 1997.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.