IEEE Transactions on Neural Networks, Volume 5, Issue 2, 1994, Pages 185-197

Recurrent Neural Network Training with Feedforward Complexity

Author keywords

[No Author keywords available]

Indexed keywords

ALGORITHMS; COMPUTATIONAL COMPLEXITY; COMPUTER SIMULATION; DATA STRUCTURES; INVERSE PROBLEMS; LEARNING SYSTEMS; MATHEMATICAL OPERATORS; MATHEMATICAL TRANSFORMATIONS; NONLINEAR CONTROL SYSTEMS; OPTIMAL CONTROL SYSTEMS; PARAMETER ESTIMATION; VARIATIONAL TECHNIQUES;

EID: 0028400379     PISSN: 1045-9227     EISSN: 1941-0093     Source Type: Journal
DOI: 10.1109/72.279184     Document Type: Article
Times cited: 27

References (31)
  • 1. D. R. Hush and B. G. Horne, “Progress in supervised neural networks,” IEEE Signal Processing Magazine, vol. 10, no. 1, pp. 8–39, 1993.
  • 2. K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal approximators,” Neural Networks, vol. 2, pp. 359–366, 1989.
  • 4. K. Hornik, M. Stinchcombe, and H. White, “Universal approximation of an unknown mapping and its derivatives using multilayer feedforward networks,” Neural Networks, vol. 3, pp. 551–560, 1990.
  • 5. P. Cardaliaguet and G. Euvrard, “Approximation of a function and its derivative with a neural network,” Neural Networks, vol. 5, pp. 207–220, 1992.
  • 6. A. R. Gallant and H. White, “On learning the derivatives of an unknown mapping with multilayer feedforward networks,” Neural Networks, vol. 5, pp. 129–138, 1992.
  • 8. P. J. Werbos, “Neurocontrol and supervised learning: An overview and evaluation,” in Handbook of Intelligent Control, D. White and D. Sofge, Eds. Van Nostrand, 1992.
  • 9. M. I. Jordan and R. A. Jacobs, “Learning to control an unstable system with forward modeling,” in Neural Information Processing Systems 2, David S. Touretzky, Ed. Morgan Kaufmann, 1990, pp. 324–331.
  • 10. H. Miyamoto, M. Kawato, T. Setoyama, and R. Suzuki, “Feedback-error-learning neural network for trajectory control of a robotic manipulator,” Neural Networks, vol. 1, pp. 251–265, 1988.
  • 11. F. J. Pineda, “Generalization of backpropagation to recurrent and higher order neural networks,” in Neural Information Processing Systems, Dana Z. Anderson, Ed. American Institute of Physics, 1988, pp. 602–611.
  • 13. B. A. Pearlmutter, “Learning state space trajectories in recurrent neural networks,” Neural Computation, vol. 1, no. 2, pp. 263–269, 1989.
  • 14. K. S. Narendra and K. Parthasarathy, “Identification and control of dynamical systems using neural networks,” IEEE Transactions on Neural Networks, vol. 1, no. 1, pp. 4–27, 1990.
  • 15. K. S. Narendra and K. Parthasarathy, “Gradient methods for the optimization of dynamical systems containing neural networks,” IEEE Transactions on Neural Networks, vol. 2, pp. 252–262, 1991.
  • 16. R. J. Williams and D. Zipser, “A learning algorithm for continually running fully recurrent neural networks,” Neural Computation, vol. 1, no. 2, pp. 270–280, 1989.
  • 17. D. Rumelhart and J. McClelland, Parallel Distributed Processing, vol. 1. Cambridge, MA: MIT Press, 1987.
  • 18. P. J. Werbos, “Generalization of backpropagation with application to a recurrent gas market model,” Neural Networks, vol. 1, pp. 339–356, 1988.
  • 19. J. Barhen, N. Toomarian, and S. Gulati, “Application of adjoint operators to neural learning,” Applied Mathematics Letters, vol. 3, no. 3, pp. 13–18, 1990.
  • 20. J. Barhen, N. Toomarian, and S. Gulati, “Adjoint operator algorithms for faster learning in dynamical neural networks,” in Advances in Neural Information Processing Systems 2, David S. Touretzky, Ed. San Mateo, CA: Morgan Kaufmann, 1990, pp. 498–508.
  • 21. N. B. Toomarian and J. Barhen, “Learning a trajectory using adjoint functions and teacher forcing,” Neural Networks, vol. 5, no. 3, pp. 473–484, 1992.
  • 23. T. Kailath, Linear Systems. Englewood Cliffs, NJ: Prentice Hall, 1980.
  • 25. C. L. Giles, G. Z. Sun, H. H. Chen, Y. C. Lee, and D. Chen, “Higher order recurrent networks and grammatical inference,” in Neural Information Processing Systems 2, David S. Touretzky, Ed. San Mateo, CA: Morgan Kaufmann, 1990, pp. 380–387.
  • 26. C. L. Giles, C. B. Miller, D. Chen, H. H. Chen, G. Z. Sun, and Y. C. Lee, “Learning and extracting finite state automata with second-order recurrent neural networks,” Neural Computation, vol. 4, pp. 393–405, 1992.
  • 27. K. J. Lang, A. H. Waibel, and G. E. Hinton, “A time-delay neural network architecture for isolated word recognition,” Neural Networks, vol. 3, pp. 23–43, 1990.
  • 29. S. Grossberg, “Nonlinear neural networks: principles, mechanisms, and architectures,” Neural Networks, vol. 1, no. 1, pp. 17–61, 1988.
  • 30. E. B. Baum and D. Haussler, “What size net gives valid generalization?” Neural Computation, vol. 1, pp. 151–160, 1989.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.