Volume 31, Issue 1-4, 2000, Pages 107-123

On the approximation capability of recurrent neural networks

Author keywords

Computational capability; Recurrent neural networks; Sigmoidal networks; Universal approximation

Indexed keywords

APPROXIMATION THEORY; FUNCTIONS; PROBABILITY; VECTORS

EID: 0034069365     PISSN: 0925-2312     EISSN: None     Source Type: Journal
DOI: 10.1016/S0925-2312(99)00174-5     Document Type: Article
Times cited: 103

References (21)
  • 2. A. Eliseeff, H. Paugam-Moisy, Size of multilayer networks for exact learning: analytic approach, in: M.C. Mozer, M.I. Jordan, T. Petsche (Eds.), Advances in Neural Information Processing Systems, Vol. 9, The MIT Press, Cambridge, MA, 1997, pp. 162-168.
  • 3. J.L. Elman, Distributed representations, simple recurrent networks, and grammatical structure, Mach. Learning 7 (1991) 195-225.
  • 6. B. Hammer, On the learnability of recursive data, Math. Control Signals Systems 12 (1999) 62-79.
  • 7. B. Hammer, V. Sperschneider, Neural networks can approximate mappings on structured objects, in: P.P. Wang (Ed.), International Conference of Information Sciences, Vol. 2, 1997, pp. 211-214.
  • 9. K. Hornik, Some new results on neural network approximation, Neural Networks 6 (1993) 1069-1072.
  • 10. K. Hornik, M. Stinchcombe, H. White, Multilayer feedforward networks are universal approximators, Neural Networks 2 (1989) 359-366.
  • 11. J. Kilian, H.T. Siegelmann, The dynamic universality of sigmoidal neural networks, Inform. and Comput. 128 (1996) 48-56.
  • 12. R. Lippmann, Review of neural networks for speech recognition, Neural Comput. 1 (1989) 1-38.
  • 13. W. Maass, P. Orponen, On the effect of analog noise in discrete-time analog computation, Neural Comput. 10 (1998) 1071-1095.
  • 14. M. Mozer, Neural net architectures for temporal sequence processing, in: A. Weigend, N. Gershenfeld (Eds.), Predicting the Future and Understanding the Past, Addison-Wesley, Reading, MA, 1993.
  • 15. K.S. Narendra, K. Parthasarathy, Identification and control of dynamical systems using neural networks, IEEE Trans. Neural Networks 1 (1990) 4-27.
  • 16. S. Schulz, A. Küchler, C. Goller, Some experiments on the applicability of folding architectures to guide theorem proving, in: D.D. Dankel (Ed.), Proceedings of the 10th International FLAIRS Conference, AI Research Society, Florida, 1997, pp. 377-381.
  • 17. H.T. Siegelmann, E.D. Sontag, Analog computation, neural networks, and circuits, Theoret. Comput. Sci. 131 (1994) 331-360.
  • 18
  • 19. E.D. Sontag, Feedforward nets for interpolation and classification, J. Comput. System Sci. 45 (1992) 20-48.
  • 21. R. Williams, D. Zipser, A learning algorithm for continually running fully recurrent neural networks, Neural Comput. 1 (1989) 270-280.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.