Neural Processing Letters, Volume 29, Issue 3, 2009, Pages 205-212

Boundedness and convergence of online gradient method with penalty for linear output feedforward neural networks

Author keywords

Boundedness; Convergence; Feedforward neural networks; Linear output; Online gradient method; Penalty

Indexed keywords

BOUNDEDNESS; CONVERGENCE; LINEAR OUTPUT; ONLINE GRADIENT METHOD; PENALTY;

EID: 67649085858     PISSN: 1370-4621     EISSN: 1573-773X     Source Type: Journal    
DOI: 10.1007/s11063-009-9104-6     Document Type: Article
Times cited: 13
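
The title and keywords name the technique studied: an online (per-sample) gradient method with a penalty term for a feedforward network whose output unit is linear. The Python sketch below is only an illustration of that general technique under assumed choices (single hidden layer, tanh hidden units, L2 weight-decay penalty, squared error); it is not the paper's specific algorithm, penalty form, or convergence setting.

    import numpy as np

    # Minimal sketch (assumptions as stated in the lead-in): online gradient
    # descent with an L2 penalty for a single-hidden-layer network with a
    # LINEAR output unit, y = v . tanh(W x).
    rng = np.random.default_rng(0)

    def predict(W, v, x):
        """Forward pass: linear output over tanh hidden units."""
        h = np.tanh(W @ x)
        return v @ h, h

    def online_step(W, v, x, t, eta=0.01, lam=1e-3):
        """One online gradient step on the penalized squared error
        E = 0.5*(y - t)**2 + 0.5*lam*(||W||^2 + ||v||^2)."""
        y, h = predict(W, v, x)
        err = y - t
        grad_v = err * h + lam * v                               # dE/dv
        grad_W = np.outer(err * v * (1.0 - h**2), x) + lam * W   # dE/dW
        return W - eta * grad_W, v - eta * grad_v

    # Toy usage: learn y = sum(x) from streaming samples.
    W = 0.1 * rng.standard_normal((8, 3))
    v = 0.1 * rng.standard_normal(8)
    for _ in range(5000):
        x = rng.standard_normal(3)
        W, v = online_step(W, v, x, t=x.sum())

In this kind of iteration the penalty (weight-decay) term discourages the weights from growing without bound, which is related to the boundedness property the title refers to; the paper itself analyzes boundedness and convergence of such a penalized online gradient method.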

References (12)
  • 1 Chen T (1995) Universal approximation to nonlinear operators by neural networks with arbitrary activation functions and its application to dynamical systems. IEEE Trans Neural Netw 6(4):911-917
  • 2 Fine TL, Mukherjee S (1999) Parameter convergence and learning curves for neural networks. Neural Comput 11:747-769
  • 3 Gaivoronski AA (1994) Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods (Part I). Optim Methods Softw 4:117-134
  • 5 Hanson SJ, Pratt LY (1989) Comparing biases for minimal network construction with back-propagation. Neural Inf Process 1:177-185
  • 7 Reed R (1993) Pruning algorithms: a survey. IEEE Trans Neural Netw 4(5):740-747
  • 8 Saito K, Nakano R (2000) Second-order learning algorithm with squared penalty term. Neural Comput 12:709-729
  • 9 Shao H, Wu W, Liu L (2007) Convergence and monotonicity of an online gradient method with penalty for neural networks. WSEAS Trans Math 6(3):469-476
  • 11 White H (1989) Some asymptotic results for learning in single hidden-layer feedforward network models. J Am Stat Assoc 84:1003-1013
  • 12 Wu W, Feng G, Li Z (2005) Convergence of an online gradient method for BP neural networks. IEEE Trans Neural Netw 16(3):533-540


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.