Mathematical and Computer Modelling, Volume 28, Issue 9, 1998, Pages 43-62

Approximation properties of local bases assembled from neural network transfer functions

Author keywords

(Artificial) neural networks; Function approximation; Local bases; Transfer functions

EID: 0032213552     PISSN: 0895-7177     EISSN: None     Source Type: Journal
DOI: 10.1016/S0895-7177(98)00144-7     Document Type: Article
Times cited: 13

References (34)
  • 1. G. Cybenko, Approximation by superposition of a sigmoidal function, Mathematics of Control, Signals, and Systems 2, 303-314 (1989).
  • 2. F. Girosi and T. Poggio, Networks and the best approximation property, Biological Cybernetics 63, 169-176 (1990).
  • 3. K. Hornik, Some new results on neural network approximation, Neural Networks 6, 1069-1072 (1993).
  • 4. K. Hornik, M. Stinchcombe and H. White, Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks, Neural Networks 3, 551-560 (1990).
  • 5. Y. Ito, Representation of functions by superpositions of a step or sigmoidal function and their applications to neural network theory, Neural Networks 4, 385-394 (1991).
  • 6. D. Rumelhart, G.E. Hinton and R.J. Williams, Learning internal representations by error propagation, In Parallel Distributed Processing: Explorations in the Microstructure of Cognition (Edited by D. Rumelhart, J.L. McClelland and the PDP Research Group), Ch. 8, pp. 318-362, MIT Press, Cambridge, MA, (1986).
  • 7. S. Saarinen, R. Bramley and G. Cybenko, Ill-conditioning in neural network training problems, SIAM J. Sci. Comput. 14 (3), 693-714 (1993).
  • 8. T. Poggio and F. Girosi, Regularization algorithms for learning that are equivalent to multilayer networks, Science 247, 978-982 (1990).
  • 9. T. Poggio and F. Girosi, Networks for approximation and learning, Proceedings of the IEEE 78 (9), 1481-1497 (1990).
  • 11. C.M. Bishop, Curvature-driven smoothing: A learning algorithm for feedforward networks, IEEE Transactions on Neural Networks 4 (5), 882-884 (1993).
  • 12. C.M. Bishop, Training with noise is equivalent to Tikhonov regularization, Neural Computation 7, 108-116 (1995).
  • 13. A.R. Webb, Functional approximation by feed-forward networks: A least-squares approach to generalization, IEEE Transactions on Neural Networks 5 (3), 363-371 (1994).
  • 14. R. Reed, R.J. Marks and S. Oh, Similarity of error regularization, sigmoid gain scaling, target smoothing, and training with jitter, IEEE Transactions on Neural Networks 6 (3), 529-538 (1995).
  • 15. B. Igelnik and Y.-H. Pao, Stochastic choice of basis functions in adaptive function approximation and the functional-link net, IEEE Transactions on Neural Networks 6 (6), 1320-1329 (1995).
  • 16. S. Chen, C.F.N. Cowan and P.M. Grant, Orthogonal least squares learning algorithm for radial basis function networks, IEEE Transactions on Neural Networks 2 (2), 302-309 (1991).
  • 17. M. Lehtokangas, J. Saarinen, K. Kaski and P. Huuhtanen, Initializing weights of a multilayer perceptron network by using the orthogonal least squares algorithm, Neural Computation 7, 982-999 (1995).
  • 19. D.D. Cox, Multivariate smoothing spline functions, SIAM J. Numer. Anal. 21 (4), 789-813 (1984).
  • 21. P. Cardaliaguet and G. Euvrard, Approximation of a function and its derivative with a neural network, Neural Networks 5, 207-220 (1992).
  • 22. K. Jetter, Multivariate approximation from the cardinal interpolation point of view, In Approximation Theory VII (Edited by E.W. Cheney, C.K. Chui and L.L. Schumaker), pp. 131-161, (1992).
  • 24. M. Leshno, V.Y. Lin, A. Pinkus and S. Schocken, Multilayer feedforward networks with a nonpolynomial activation function can approximate any function, Neural Networks 6, 861-867 (1993).
  • 25. H.N. Mhaskar, Neural networks for optimal approximation of smooth and analytic functions, Neural Computation 8, 164-177 (1995).
  • 27. R.-Q. Jia and J. Lei, Approximation by multi-integer translates of functions having global support, Journal of Approximation Theory 72, 2-23 (1993).
  • 28. A.R. Barron, Universal approximation bounds for superpositions of a sigmoidal function, IEEE Transactions on Information Theory 39 (3), 930-945 (1993).
  • 29. L.K. Jones, Constructive approximations for neural networks by sigmoidal functions, Proceedings of the IEEE 78 (10), 1586-1589 (1990).
  • 30. L.K. Jones, A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training, The Annals of Statistics 20 (1), 608-613 (1992).
  • 31. F. Girosi and G. Anzellotti, Convergence rates of approximation by translates, Report AI 1288, Artificial Intelligence Laboratory, Massachusetts Institute of Technology, (1992).
  • 32. H.N. Mhaskar, Neural networks for optimal approximation of smooth and analytic functions, Neural Computation 8, 164-177 (1996).
  • 34. A.J. Meade and A.A. Fernandez, Solution of nonlinear ordinary differential equations by feedforward neural networks, Mathl. Comput. Modelling 20 (9), 19-44 (1994).


* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.