1. Baldi, P. and Hornik, K. (1989). Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2, 53-58.
2. Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1-127. Also published as a book, Now Publishers, 2009.
3. Bengio, Y., Lamblin, P., Popovici, D., and Larochelle, H. (2007). Greedy layer-wise training of deep networks. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19 (NIPS'06), pages 153-160. MIT Press.
4. Bishop, C. M. (1995). Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1), 108-116.
5. Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527-1554. doi:10.1162/neco.2006.18.7.1527
6. Japkowicz, N., Hanson, S. J., and Gluck, M. A. (2000). Nonlinear autoassociation is not equivalent to PCA. Neural Computation, 12(3), 531-545.
7. Jarrett, K., Kavukcuoglu, K., Ranzato, M., and LeCun, Y. (2009). What is the best multi-stage architecture for object recognition? In Proc. International Conference on Computer Vision (ICCV'09). IEEE.
8. Kavukcuoglu, K., Ranzato, M., Fergus, R., and LeCun, Y. (2009). Learning invariant features through topographic filter maps. In Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR'09). IEEE.
10. Larochelle, H., Erhan, D., Courville, A., Bergstra, J., and Bengio, Y. (2007). An empirical evaluation of deep architectures on problems with many factors of variation. In Z. Ghahramani, editor, Proceedings of the Twenty-fourth International Conference on Machine Learning (ICML'07), pages 473-480. ACM.
11. Lee, H., Ekanadham, C., and Ng, A. (2008). Sparse deep belief net model for visual area V2. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20 (NIPS'07), pages 873-880. MIT Press, Cambridge, MA.
12. Lee, H., Grosse, R., Ranganath, R., and Ng, A. Y. (2009). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In L. Bottou and M. Littman, editors, Proceedings of the Twenty-sixth International Conference on Machine Learning (ICML'09). ACM, Montreal (Qc), Canada.
13. Olshausen, B. A. and Field, D. J. (1997). Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23), 3311-3325. doi:10.1016/S0042-6989(97)00169-7
14. Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by backpropagating errors. Nature, 323, 533-536.
15. Simard, P., Victorri, B., LeCun, Y., and Denker, J. (1992). Tangent prop - A formalism for specifying selected invariances in an adaptive network. In J. E. Moody, S. J. Hanson, and R. Lippmann, editors, Advances in Neural Information Processing Systems 4 (NIPS'91), pages 895-903, San Mateo, CA. Morgan Kaufmann.
16. Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11, 3371-3408.
17. Weston, J., Ratle, F., and Collobert, R. (2008). Deep learning via semi-supervised embedding. In W. W. Cohen, A. McCallum, and S. T. Roweis, editors, Proceedings of the Twenty-fifth International Conference on Machine Learning (ICML'08), pages 1168-1175, New York, NY, USA. ACM.