1. Bastien, F., Lamblin, P., Pascanu, R., Bergstra, J., Goodfellow, I.J., Bergeron, A., Bouchard, N., Bengio, Y.: Theano: new features and speed improvements. In: Deep Learning and Unsupervised Feature Learning NIPS 2012 Workshop (2012)
3. Bengio, Y.: Estimating or propagating gradients through stochastic neurons. Tech. Rep., Université de Montréal (2013). arXiv:1305.2982
4. Bengio, Y.: How auto-encoders could provide credit assignment in deep networks via target propagation. Tech. Rep. (2014). arXiv:1407.7906
7. Bergstra, J., Bengio, Y.: Random search for hyper-parameter optimization. J. Machine Learning Res. 13, 281–305 (2012)
8. Bergstra, J., Breuleux, O., Bastien, F., Lamblin, P., Pascanu, R., Desjardins, G., Turian, J., Warde-Farley, D., Bengio, Y.: Theano: a CPU and GPU math expression compiler. In: Proceedings of the Python for Scientific Computing Conference (SciPy), oral presentation, June 2010
9. Carreira-Perpinan, M., Wang, W.: Distributed optimization of deeply nested systems. In: AISTATS 2014, JMLR W&CP, vol. 33, pp. 10–19 (2014)
10. Erhan, D., Courville, A., Bengio, Y., Vincent, P.: Why does unsupervised pre-training help deep learning? In: JMLR W&CP: Proc. AISTATS 2010, vol. 9, pp. 201–208 (2010)
12. Hinton, G., Deng, L., Dahl, G.E., Mohamed, A., Jaitly, N., Senior, A., Vanhoucke, V., Nguyen, P., Sainath, T., Kingsbury, B.: Deep neural networks for acoustic modeling in speech recognition. IEEE Signal Processing Magazine 29(6), 82–97 (2012)
14. Krizhevsky, A., Sutskever, I., Hinton, G.: ImageNet classification with deep convolutional neural networks. In: NIPS 2012 (2012)
16. LeCun, Y.: Learning processes in an asymmetric threshold network. In: Fogelman-Soulié, F., Bienenstock, E., Weisbuch, G. (eds.) Disordered Systems and Biological Organization, pp. 233–240. Springer-Verlag, Les Houches (1986)
18. Raiko, T., Berglund, M., Alain, G., Dinh, L.: Techniques for learning binary stochastic feedforward neural networks. In: NIPS Deep Learning Workshop 2014 (2014)
19. Sutskever, I., Vinyals, O., Le, Q.V.: Sequence to sequence learning with neural networks. Tech. Rep. (2014). arXiv:1409.3215
20. Szegedy, C., Liu, W., Jia, Y., Sermanet, P., Reed, S., Anguelov, D., Erhan, D., Vanhoucke, V., Rabinovich, A.: Going deeper with convolutions. Tech. Rep. (2014). arXiv:1409.4842
22. Tieleman, T., Hinton, G.: Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning 4 (2012)
23. Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., Manzagol, P.A.: Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. J. Machine Learning Res. 11 (2010)