Volume 4, Issue 1, 2007, Pages 251-255

Convergence of batch gradient algorithm for feedforward neural network training

Author keywords

Convergence; Feedforward neural network; Gradient algorithm

Indexed keywords

CONVERGENCE OF NUMERICAL METHODS; ERRORS; FUNCTIONS; ITERATIVE METHODS; LEARNING ALGORITHMS; OPTIMIZATION;
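The title and keywords of this record refer to batch gradient training of a feedforward network, i.e., taking one gradient step per iteration on the error summed over the whole training set. The following is a minimal illustrative sketch of that setup only, not the algorithm or convergence analysis of the indexed article; the toy data, network width, learning rate, and epoch count are all assumptions chosen for the example.

# A minimal sketch of batch gradient descent for a one-hidden-layer feedforward
# network (illustrative only; not the method or analysis of the indexed article).
# The toy data, network width, learning rate, and epoch count are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(x) on [-pi, pi].
X = np.linspace(-np.pi, np.pi, 64).reshape(-1, 1)
Y = np.sin(X)

hidden, lr, epochs = 16, 0.05, 2000

W1 = rng.normal(scale=0.5, size=(1, hidden)); b1 = np.zeros(hidden)
W2 = rng.normal(scale=0.5, size=(hidden, 1)); b2 = np.zeros(1)

for _ in range(epochs):
    # Forward pass over the entire training set ("batch" mode).
    H = np.tanh(X @ W1 + b1)          # hidden-layer activations
    Y_hat = H @ W2 + b2               # linear output layer
    err = Y_hat - Y
    loss = 0.5 * np.mean(err ** 2)    # mean squared error over the batch

    # Backward pass: gradients of the batch loss w.r.t. all parameters.
    n = X.shape[0]
    dY = err / n
    dW2 = H.T @ dY
    db2 = dY.sum(axis=0)
    dH = (dY @ W2.T) * (1.0 - H ** 2)   # tanh'(z) = 1 - tanh(z)^2
    dW1 = X.T @ dH
    db1 = dH.sum(axis=0)

    # One fixed-learning-rate gradient step per full pass over the data.
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(f"final batch loss: {loss:.6f}")

A fixed, sufficiently small learning rate is assumed here; conditions under which such iterations converge are the subject of the article and of works in its reference list, such as Armijo (1966), Wu et al. (2002), and Wilson and Martinez (2001).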

EID: 34547738577     PISSN: 1548-7741     EISSN: None     Source Type: Journal
DOI: None     Document Type: Article
Times cited: 4

References (8)
  • 1. L. Armijo, Minimization of functions having Lipschitz continuous first partial derivatives, Pacific J. Math. 16 (1966) 1-3.
  • 2. A. A. Goldstein, Cauchy's method of minimization, Numer. Math. 4 (1962) 146-150.
  • 6. W. Wu, G. R. Feng and X. Li, Training multilayer perceptrons via minimization of sum of ridge functions, Advances in Computational Mathematics 17 (2002) 331-347.
  • 7. D. R. Wilson and T. R. Martinez, The need for small learning rates on large problems, in: IJCNN'01, 2001, pp. 115-119.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.