Volume , Issue , 2007, Pages 153-160

Greedy layer-wise training of deep networks

Author keywords

[No Author keywords available]

Indexed keywords

COMPLEXITY THEORY; COMPUTATIONAL ELEMENTS; DEEP BELIEF NETWORKS; DISTRIBUTED REPRESENTATION; GENERATIVE MODEL; GRADIENT-BASED OPTIMIZATION; HIGH-LEVEL ABSTRACTION; LAYER-WISE; LOCAL MINIMUMS; OPTIMIZATION PROBLEMS; UNSUPERVISED TRAINING;

EID: 84864073449     PISSN: 10495258     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited : (4285)

References (16)
  • 2
    • Bengio, Y., Delalleau, O., & Le Roux, N. (2006). The curse of highly variable functions for local kernel machines. In Weiss, Y., Schölkopf, B., & Platt, J. (Eds.), Advances in Neural Information Processing Systems 18, pp. 107-114. MIT Press, Cambridge, MA.
  • 3
    • Bengio, Y., & Le Cun, Y. (2007). Scaling learning algorithms towards AI. In Bottou, L., Chapelle, O., DeCoste, D., & Weston, J. (Eds.), Large Scale Kernel Machines. MIT Press.
  • 5
    • Chen, H., & Murray, A. (2003). A continuous restricted Boltzmann machine with an implementable training algorithm. IEE Proceedings of Vision, Image and Signal Processing, 150(3), 153-158.
  • 6
    • Fahlman, S., & Lebiere, C. (1990). The cascade-correlation learning architecture. In Touretzky, D. (Ed.), Advances in Neural Information Processing Systems 2, pp. 524-532, Denver, CO. Morgan Kaufmann, San Mateo.
  • 8
    • Hinton, G. E., Osindero, S., & Teh, Y. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554.
  • 9
    • Hinton, G. (2002). Training products of experts by minimizing contrastive divergence. Neural Computation, 14(8), 1771-1800.
  • 10
    • Hinton, G., Dayan, P., Frey, B., & Neal, R. (1995). The wake-sleep algorithm for unsupervised neural networks. Science, 268, 1158-1161.
  • 11
    • Hinton, G., & Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507.
  • 12
    • Lengellé, R., & Denoeux, T. (1996). Training MLPs layer by layer using an objective function for internal representations. Neural Networks, 9, 83-97.
  • 13
    • Movellan, J., Mineiro, P., & Williams, R. (2002). A Monte-Carlo EM approach for partially observable diffusion processes: Theory and applications to neural networks. Neural Computation, 14, 1501-1544.
  • 14
    • Tesauro, G. (1992). Practical issues in temporal difference learning. Machine Learning, 8, 257-277.
  • 16
    • Welling, M., Rosen-Zvi, M., & Hinton, G. E. (2005). Exponential family harmoniums with an application to information retrieval. In Advances in Neural Information Processing Systems, Vol. 17. MIT Press, Cambridge, MA.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.