2011, Pages 833-840

Contractive auto-encoders: Explicit invariance during feature extraction

Author keywords

[No Author keywords available]

Indexed keywords

ACTIVATION LAYER; AUTOENCODERS; CLASSIFICATION ERRORS; DATA SETS; DE-NOISING; FROBENIUS NORM; NONLINEAR MANIFOLDS; PENALTY TERM; PRE-TRAINING; STATE OF THE ART;

EID: 80053460450     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 1408
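The indexed keywords (autoencoders, Frobenius norm, penalty term) point to the paper's central technique: regularizing an auto-encoder's reconstruction loss with the squared Frobenius norm of the encoder's Jacobian. A minimal NumPy sketch of that loss, assuming a single sigmoid encoder layer as in the paper (function and variable names are illustrative, not taken from the paper's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cae_loss(x, W, b, W_dec, c, lam=0.1):
    """Contractive auto-encoder loss for a single input x (illustrative sketch).

    Squared reconstruction error plus lam times the squared Frobenius
    norm of the encoder Jacobian dh/dx, where h = sigmoid(W x + b).
    """
    h = sigmoid(W @ x + b)           # encoder activations
    x_hat = sigmoid(W_dec @ h + c)   # decoder reconstruction
    recon = np.sum((x - x_hat) ** 2)
    # For a sigmoid encoder the Jacobian is J = diag(h * (1 - h)) @ W,
    # so ||J||_F^2 = sum_j (h_j (1 - h_j))^2 * sum_i W_ji^2,
    # which avoids materializing J explicitly.
    contractive = np.sum((h * (1 - h)) ** 2 * np.sum(W ** 2, axis=1))
    return recon + lam * contractive
```

The closed-form penalty is cheap because the sigmoid's Jacobian factors into an elementwise gain times the weight matrix; other activations would need their own derivative in place of `h * (1 - h)`.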

References (17)
  • 1
    • Baldi, P. and Hornik, K. (1989). Neural networks and principal component analysis: Learning from examples without local minima. Neural Networks, 2, 53-58.
  • 2
    • Bengio, Y. (2009). Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2(1), 1-127. Also published as a book by Now Publishers, 2009.
  • 4
    • Bishop, C. M. (1995). Training with noise is equivalent to Tikhonov regularization. Neural Computation, 7(1), 108-116.
  • 5
    • Hinton, G. E., Osindero, S., and Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18(7), 1527-1554. doi:10.1162/neco.2006.18.7.1527
  • 6
    • Japkowicz, N., Hanson, S. J., and Gluck, M. A. (2000). Nonlinear autoassociation is not equivalent to PCA. Neural Computation, 12(3), 531-545.
  • 11
    • Lee, H., Ekanadham, C., and Ng, A. (2008). Sparse deep belief net model for visual area V2. In J. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20 (NIPS'07), pages 873-880. MIT Press, Cambridge, MA.
  • 12
    • Lee, H., Grosse, R., Ranganath, R., and Ng, A. Y. (2009). Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In L. Bottou and M. Littman, editors, Proceedings of the Twenty-sixth International Conference on Machine Learning (ICML'09). ACM, Montreal (Qc), Canada.
  • 13
    • Olshausen, B. A. and Field, D. J. (1997). Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23), 3311-3325. doi:10.1016/S0042-6989(97)00169-7
  • 14
    • Rumelhart, D. E., Hinton, G. E., and Williams, R. J. (1986). Learning representations by backpropagating errors. Nature, 323, 533-536.
  • 15
    • Simard, P., Victorri, B., LeCun, Y., and Denker, J. (1992). Tangent prop - A formalism for specifying selected invariances in an adaptive network. In J. Moody, S. Hanson, and R. Lippmann, editors, Advances in Neural Information Processing Systems 4 (NIPS'91), pages 895-903, San Mateo, CA. Morgan Kaufmann.
  • 16
    • Vincent, P., Larochelle, H., Lajoie, I., Bengio, Y., and Manzagol, P.-A. (2010). Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion. Journal of Machine Learning Research, 11, 3371-3408.


* This record was extracted and analyzed by KISTI from Elsevier's SCOPUS database.