Volume 20, Issue 6, 2009, Pages 1050-1054

Boundedness and convergence of online gradient method with penalty for feedforward neural networks

Author keywords

Boundedness; Convergence; Feedforward neural networks; Online gradient method; Penalty

Indexed keywords

BOUNDEDNESS; CONVERGENCE; GENERALIZATION PERFORMANCE; NETWORK TRAINING; NUMERICAL EXAMPLE; ONLINE GRADIENT METHOD; PENALTY;

EID: 67649385962     PISSN: 1045-9227     EISSN: None     Source Type: Journal
DOI: 10.1109/TNN.2009.2020848     Document Type: Article
Times cited : (64)

References (21)
  • 2
    • D. P. Bertsekas and J. N. Tsitsiklis, "Gradient convergence in gradient methods with errors," SIAM J. Optim., vol. 3, pp. 627-642, 2000.
  • 4
    • T. Evgeniou, M. Pontil, and T. Poggio, "Regularization networks and support vector machines," Adv. Comput. Math., vol. 13, pp. 1-50, 2000.
  • 5
    • T. L. Fine and S. Mukherjee, "Parameter convergence and learning curves for neural networks," Neural Comput., vol. 11, pp. 747-769, 1999.
  • 6
    • A. A. Gaivoronski, "Convergence properties of backpropagation for neural nets via theory of stochastic gradient methods (Part I)," Optim. Methods Software, vol. 4, pp. 117-134, 1994.
  • 7
    • L. Grippo, "Convergent on-line algorithms for supervised learning in neural networks," IEEE Trans. Neural Netw., vol. 11, no. 6, pp. 1284-1299, Nov. 2000.
  • 8
    • S. J. Hanson and L. Y. Pratt, "Comparing biases for minimal network construction with back-propagation," Adv. Neural Inf. Process., vol. 1, pp. 177-185, 1989.
  • 9
    • E. D. Karnin, "A simple procedure for pruning back-propagation trained neural networks," IEEE Trans. Neural Netw., vol. 1, no. 2, pp. 239-242, Jun. 1990.
  • 10
    • L. Ljung, "Analysis of recursive stochastic algorithms," IEEE Trans. Autom. Control, vol. AC-22, no. 4, pp. 551-575, Aug. 1977.
  • 11
    • O. L. Mangasarian and M. V. Solodov, "Serial and parallel backpropagation convergence via nonmonotone perturbed minimization," Optim. Methods Software, vol. 4, pp. 117-134, 1994.
  • 12
    • S. C. Ng, C. C. Cheung, and S. H. Leung, "Magnified gradient function with deterministic weight modification in adaptive learning," IEEE Trans. Neural Netw., vol. 15, no. 6, pp. 1411-1423, Nov. 2004.
  • 13
    • S. J. Nowlan and G. E. Hinton, "Simplifying neural networks by soft weight sharing," Neural Comput., vol. 4, pp. 173-193, 1992.
  • 15
    • R. Reed, "Pruning algorithms: A survey," IEEE Trans. Neural Netw., vol. 4, no. 5, pp. 740-747, Sep. 1993.
  • 16
    • K. Saito and R. Nakano, "Second-order learning algorithm with squared penalty term," Neural Comput., vol. 12, pp. 709-729, 2000.
  • 17
    • H. Shao, W. Wu, and L. Liu, "Convergence and monotonicity of an online gradient method with penalty for neural networks," WSEAS Trans. Math., vol. 6, pp. 469-476, 2007.
  • 18
    • V. Tadic and S. Stankovic, "Learning in neural networks by normalized stochastic gradient algorithm: Local convergence," in Proc. 5th Seminar Neural Netw. Appl. Electr. Eng., Yugoslavia, Sep. 2000, pp. 11-17.
  • 19
    • H. White, "Some asymptotic results for learning in single hidden-layer feedforward network models," J. Amer. Statist. Assoc., vol. 84, pp. 1003-1013, 1989.
  • 20
    • W. Wu, G. Feng, and X. Li, "Training multilayer perceptrons via minimization of sum of ridge functions," Adv. Comput. Math., vol. 17, pp. 331-347, 2002.
  • 21
    • W. Wu, G. Feng, Z. Li, and Y. Xu, "Deterministic convergence of an online gradient method for BP neural networks," IEEE Trans. Neural Netw., vol. 16, no. 3, pp. 533-540, May 2005.


* This information was extracted by KISTI from an analysis of Elsevier's SCOPUS database.