Volume 144, Issue 1-2, 2002, Pages 335-347

Deterministic convergence of an online gradient method for neural networks

Author keywords

Constant learning rate; Deterministic convergence; Monotonicity; Nonlinear feedforward neural networks; Online stochastic gradient method
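
The author keywords above name the paper's setting: an online gradient method with a constant learning rate, applied to a nonlinear feedforward network. As a rough illustration only (not the paper's algorithm or notation; the network size, toy data, and the step size eta below are invented for the example), the following Python sketch updates the weights of a one-hidden-layer network after each training sample with a fixed step size:

```python
# Illustrative sketch (not the paper's code): an online gradient method with a
# constant learning rate for a one-hidden-layer feedforward network.
# Weights are updated after every training sample, in cyclic order.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: J samples, d inputs (all sizes here are illustrative).
J, d, n_hidden = 20, 3, 5
X = rng.standard_normal((J, d))
y = np.tanh(X @ rng.standard_normal(d))          # targets from a smooth map

W = 0.1 * rng.standard_normal((n_hidden, d))     # input-to-hidden weights
v = 0.1 * rng.standard_normal(n_hidden)          # hidden-to-output weights
eta = 0.05                                       # constant learning rate

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    for j in range(J):                           # one update per sample: "online"
        h = sigmoid(W @ X[j])                    # hidden activations
        out = v @ h                              # network output
        err = out - y[j]                         # instantaneous error
        # Gradients of the per-sample squared error 0.5 * err**2:
        grad_v = err * h
        grad_W = np.outer(err * v * h * (1.0 - h), X[j])
        v -= eta * grad_v                        # constant-step updates
        W -= eta * grad_W

print("final mean squared error:",
      np.mean((sigmoid(X @ W.T) @ v - y) ** 2))
```

The per-sample update order and the fixed eta are what the keywords' "online" and "constant learning rate" refer to; per the title and keywords, the paper's contribution is a deterministic (non-probabilistic) convergence and monotonicity analysis for such an iteration.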

Indexed keywords

CONVERGENCE OF NUMERICAL METHODS; ERRORS; GRADIENT METHODS

EID: 0036644480     PISSN: 03770427     EISSN: None     Source Type: Journal    
DOI: 10.1016/S0377-0427(01)00571-4     Document Type: Article
Times cited: 42

References (18)
  • 1. Battiti, R. (1992). First- and second-order methods for learning: Between steepest descent and Newton's method. Neural Computation, vol. 4, pp. 141-166.
  • 5. Finnoff, W. (1994). Diffusion approximations for the constant learning rate backpropagation algorithm and resistance to local minima. Neural Computation, vol. 6, pp. 285-295.
  • 12. Luo, Z. (1991). On the convergence of the LMS algorithm with adaptive learning rate for linear feedforward networks. Neural Computation, vol. 3, pp. 226-245.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS DB.