



Neurocomputing, Volume 25, Issues 1-3, 1999, Pages 55-72

A neural network training algorithm utilizing multiple sets of linear equations

Author keywords

Backpropagation; Conjugate gradient method; Fast training; Hidden weight optimization; Learning factor calculation; Levenberg-Marquardt algorithm; Multilayer perceptron; Output weight optimization; Second order methods
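The "output weight optimization" keyword refers to the general idea that, once the hidden weights of a multilayer perceptron are fixed, the output weights minimize a quadratic error and can therefore be obtained by solving linear equations rather than by gradient descent. A minimal sketch of that general idea (not the paper's exact algorithm; all names and parameter choices here are illustrative):

```python
import numpy as np

# Output weight optimization (OWO) sketch: with hidden weights fixed,
# the output weights of an MLP minimize a quadratic error, so they can
# be found by solving a set of linear equations (least squares).

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x)
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X)

n_hidden = 20
W_in = rng.normal(size=(1, n_hidden))   # fixed (here: random) hidden weights
b_in = rng.normal(size=n_hidden)

# Hidden-layer activations, with a bias column appended
H = np.tanh(X @ W_in + b_in)
H = np.hstack([H, np.ones((X.shape[0], 1))])

# Solve the linear system H @ W_out = y in the least-squares sense
W_out, *_ = np.linalg.lstsq(H, y, rcond=None)

mse = float(np.mean((H @ W_out - y) ** 2))
print(f"training MSE: {mse:.4e}")
```

In the paper's setting this linear solve replaces output-layer gradient steps, while the hidden weights are refined separately; the sketch above freezes random hidden weights purely for brevity.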

Indexed keywords

BACKPROPAGATION; CONVERGENCE OF NUMERICAL METHODS; ERROR ANALYSIS; ITERATIVE METHODS; LEARNING ALGORITHMS; LINEAR EQUATIONS; OPTIMIZATION;

EID: 0032938952     PISSN: 09252312     EISSN: None     Source Type: Journal    
DOI: 10.1016/S0925-2312(98)00109-X     Document Type: Article
Times cited: 65

References (31)
  • 2. Baldi, P., Hornik, K., "Neural networks and principal component analysis: Learning from examples without local minima," Neural Networks, vol. 2, 1989, pp. 53-58.
  • 3. Barton, S.A., "A matrix method for optimizing a neural network," Neural Comput., vol. 3, no. 3, 1991, pp. 450-459.
  • 4. Battiti, R., "First- and second-order methods for learning: Between steepest descent and Newton's method," Neural Comput., vol. 4, no. 2, 1992, pp. 141-166.
  • 5. Chen, M.S., Manry, M.T., "Back-propagation representation theorem using power series," Proc. Int. Joint Conf. on Neural Networks, San Diego, vol. 1, 1990, pp. 643-648.
  • 7. Chen, M.S., Manry, M.T., "Conventional modeling of the multi-layer perceptron using polynomial basis function," IEEE Trans. Neural Networks, vol. 4, no. 1, 1993, pp. 164-166.
  • 8. Dawson, M.S., et al., "Inversion of surface parameters using fast learning neural networks," Proc. Int. Geoscience and Remote Sensing Symp., Houston, TX, vol. 2, 1992, pp. 910-912.
  • 9. Dawson, M.S., Fung, A.K., Manry, M.T., "Surface parameter retrieval using fast learning neural networks," Remote Sensing Rev., vol. 7, no. 1, 1993, pp. 1-18.
  • 12. Fung, A.K., Li, Z., Chen, K.S., "Backscattering from a randomly rough dielectric surface," IEEE Trans. Geosci. Remote Sensing, vol. 30, no. 2, 1992, pp. 356-369.
  • 16. Hagan, M.T., Menhaj, M.B., "Training feedforward networks with the Marquardt algorithm," IEEE Trans. Neural Networks, vol. 5, no. 6, 1994, pp. 989-993.
  • 17. Kollias, S., Anastassiou, D., "An adaptive least squares algorithm for the efficient training of artificial neural networks," IEEE Trans. Circuits Systems, vol. 36, no. 8, 1989, pp. 1092-1101.
  • 18. Manry, M.T., et al., "Fast training of neural networks for remote sensing," Remote Sensing Rev., vol. 9, 1994, pp. 77-96.
  • 20. McLoone, S., Brown, M.D., Irwin, G., Lightbody, G., "A hybrid linear/nonlinear training algorithm for feedforward neural networks," IEEE Trans. Neural Networks, vol. 9, no. 9, 1998, pp. 669-683.
  • 22. Parker, D.B., "Learning logic," Invention Report S81-64, File 1, Office of Technology Licensing, Stanford University, 1982.
  • 23. Press, W.H., et al., Numerical Recipes, Cambridge University Press, New York, 1986.
  • 24. Rohani, K., Chen, M.S., Manry, M.T., "Neural subnet design by direct polynomial mapping," IEEE Trans. Neural Networks, vol. 3, no. 6, 1992, pp. 1024-1026.
  • 25. Rumelhart, D.E., Hinton, G.E., Williams, R.J., "Learning internal representations by error propagation," in: Rumelhart, D.E., McClelland, J.L. (Eds.), Parallel Distributed Processing, vol. 1, The MIT Press, Cambridge, MA, 1986.
  • 26. Sartori, M.A., Antsaklis, P.J., "A simple method to derive bounds on the size and to train multilayer neural networks," IEEE Trans. Neural Networks, vol. 2, no. 4, 1991, pp. 467-471.
  • 27. Scalero, R.S., Tepedelenlioglu, N., "A fast new algorithm for training feedforward neural networks," IEEE Trans. Signal Process., vol. 40, no. 1, 1992, pp. 202-210.
  • 31. Yam, J.Y.F., Chow, T.W.S., "Extended least squares based algorithm for training feedforward networks," IEEE Trans. Neural Networks, vol. 8, no. 3, 1997, pp. 803-810.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.