[2] B. Hammer, "On the approximation capability of recurrent neural networks," Neurocomputing, vol. 31, no. 1-4, pp. 107-123, 2000.
[3] Z. Yi and K. K. Tan, "Multistability of discrete-time recurrent neural networks with unsaturating piecewise linear activation functions," IEEE Trans. Neural Netw., vol. 15, no. 2, pp. 329-336, Mar. 2004.
[4] L. Zhang, Z. Yi, S. L. Zhang, and P. A. Heng, "Activity invariant sets and exponentially stable attractors of linear threshold discrete-time recurrent neural networks," IEEE Trans. Autom. Control, vol. 54, no. 6, pp. 1341-1347, Jun. 2009.
[5] Z. Yi and K. K. Tan, "Global convergence of Lotka-Volterra recurrent neural networks with delays," IEEE Trans. Circuits Syst. I, Reg. Papers, vol. 52, no. 11, pp. 2482-2489, Nov. 2005.
[6] Z. Yi, "Foundations of implementing the competitive layer model by Lotka-Volterra recurrent neural networks," IEEE Trans. Neural Netw., vol. 21, no. 3, pp. 494-507, Mar. 2010.
[7] J. L. Elman, "Finding structure in time," Cognit. Sci., vol. 14, no. 2, pp. 179-211, Mar. 1990.
[8] S. Hochreiter and J. Schmidhuber, "Long short-term memory," Neural Comput., vol. 9, no. 8, pp. 1735-1780, 1997.
[10] K. Cho, B. van Merrienboer, C. Gulcehre, F. Bougares, H. Schwenk, and Y. Bengio, "Learning phrase representations using RNN encoder-decoder for statistical machine translation," 2014. [Online]. Available: https://arxiv.org/abs/1406.1078
[11] I. Sutskever, J. Martens, and G. E. Hinton, "Generating text with recurrent neural networks," in Proc. 28th Int. Conf. Mach. Learn. (ICML), 2011, pp. 1017-1024.
[12] A. Graves, A.-R. Mohamed, and G. E. Hinton, "Speech recognition with deep recurrent neural networks," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), May 2013, pp. 6645-6649.
[13] J. Donahue et al., "Long-term recurrent convolutional networks for visual recognition and description," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Jun. 2015, pp. 2625-2634.
[14] O. Vinyals, A. Toshev, S. Bengio, and D. Erhan, "Show and tell: A neural image caption generator," in Proc. IEEE Conf. Comput. Vis. Pattern Recognit., Boston, MA, USA, Jun. 2015, pp. 3156-3164.
[15] K. Cho, A. Courville, and Y. Bengio, "Describing multimedia content using attention-based encoder-decoder networks," IEEE Trans. Multimedia, vol. 17, no. 11, pp. 1875-1886, Nov. 2015.
[16] Y. Bengio, P. Simard, and P. Frasconi, "Learning long-term dependencies with gradient descent is difficult," IEEE Trans. Neural Netw., vol. 5, no. 2, pp. 157-166, Mar. 1994.
[17] R. Pascanu, T. Mikolov, and Y. Bengio, "On the difficulty of training recurrent neural networks," in Proc. ICML, vol. 28, 2013, pp. 1310-1318.
[18] F. A. Gers, J. Schmidhuber, and F. Cummins, "Learning to forget: Continual prediction with LSTM," Neural Comput., vol. 12, no. 10, pp. 2451-2471, 2000.
[20] R. J. Williams and D. Zipser, "Gradient-based learning algorithms for recurrent networks and their computational complexity," in Backpropagation: Theory, Architectures, and Applications, vol. 1. Hillsdale, NJ, USA: Lawrence Erlbaum Associates, 1995, pp. 433-486.
[21] R. J. Williams and D. Zipser, "A learning algorithm for continually running fully recurrent neural networks," Neural Comput., vol. 1, no. 2, pp. 270-280, 1989.
[22] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, "Learning representations by back-propagating errors," Nature, vol. 323, pp. 533-536, Oct. 1986.
[23] J. Martens and I. Sutskever, "Learning recurrent neural networks with Hessian-free optimization," in Proc. 28th Int. Conf. Mach. Learn. (ICML), 2011, pp. 1033-1040.
[24] Y. Bengio, N. Boulanger-Lewandowski, and R. Pascanu, "Advances in optimizing recurrent networks," in Proc. IEEE Int. Conf. Acoust., Speech Signal Process. (ICASSP), May 2013, pp. 8624-8628.