1. Y. Bengio and P. Frasconi. Credit assignment through time: Alternatives to backpropagation. In J. D. Cowan, G. Tesauro, and J. Alspector, editors, Advances in Neural Information Processing Systems 6, pages 75-82. San Mateo, CA: Morgan Kaufmann, 1994.
2. Y. Bengio and P. Frasconi. An input output HMM architecture. In G. Tesauro, D. S. Touretzky, and T. K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 427-434. Cambridge, MA: MIT Press, 1995.
3. Y. Bengio, P. Simard, and P. Frasconi. Learning long-term dependencies with gradient descent is difficult. IEEE Transactions on Neural Networks, 5(2):157-166, 1994.
6. S. E. Fahlman. The recurrent cascade-correlation learning algorithm. In R. P. Lippmann, J. E. Moody, and D. S. Touretzky, editors, Advances in Neural Information Processing Systems 3, pages 190-196. San Mateo, CA: Morgan Kaufmann, 1991.
7. J. Hochreiter. Untersuchungen zu dynamischen neuronalen Netzen. Diploma thesis, Institut für Informatik, Lehrstuhl Prof. Brauer, Technische Universität München, 1991. See www7informatik.tu-muenchen.de/~hochrat.
8. S. Hochreiter and J. Schmidhuber. Long short-term memory. Technical Report FKI-207-95, Fakultät für Informatik, Technische Universität München, 1995. Revised 1996 (see www.idsia.ch/~juergen, www7informatik.tu-muenchen.de/~hochreit).
9. T. Lin, B. G. Horne, P. Tino, and C. L. Giles. Learning long-term dependencies is not as difficult with NARX recurrent neural networks. Technical Report UMIACS-TR-95-78 and CS-TR-3500, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742, 1995.
10. P. Manolios and R. Fanelli. First-order recurrent neural networks and deterministic finite state automata. Neural Computation, 6:1155-1173, 1994.
12. M. C. Mozer. Induction of multiscale temporal structure. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 275-282. San Mateo, CA: Morgan Kaufmann, 1992.
13. B. A. Pearlmutter. Gradient calculations for dynamic recurrent neural networks: A survey. IEEE Transactions on Neural Networks, 6(5):1212-1228, 1995.
14. J. B. Pollack. The induction of dynamical recognizers. Machine Learning, 7:227-252, 1991.
15. A. J. Robinson and F. Fallside. The utility driven dynamic error propagation network. Technical Report CUED/F-INFENG/TR.1, Cambridge University Engineering Department, 1987.
16. J. H. Schmidhuber. Learning complex, extended sequences using the principle of history compression. Neural Computation, 4(2):234-242, 1992.
17. A. W. Smith and D. Zipser. Learning sequential structures with the real-time recurrent learning algorithm. International Journal of Neural Systems, 1(2):125-131, 1989.
18. M. Tomita. Dynamic construction of finite automata from examples using hill-climbing. In Proceedings of the Fourth Annual Cognitive Science Conference, pages 105-108. Ann Arbor, MI, 1982.
19. R. L. Watrous and G. M. Kuhn. Induction of finite-state automata using second-order recurrent networks. In J. E. Moody, S. J. Hanson, and R. P. Lippmann, editors, Advances in Neural Information Processing Systems 4, pages 309-316. San Mateo, CA: Morgan Kaufmann, 1992.
20. R. J. Williams and J. Peng. An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Neural Computation, 4:491-501, 1990.