Volume , Issue , 2009, Pages 41-48

Curriculum learning

Author keywords

[No Author keywords available]

Indexed keywords

CONTINUATION METHOD; LOCAL MINIMUMS; MACHINE-LEARNING; NONCONVEX FUNCTIONS; SET-UPS; SPEED OF CONVERGENCE; STOCHASTIC NEURAL NETWORK; TRAINING PROCESS; TRAINING STRATEGY;

EID: 71149116544     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited : (3909)

References (30)
  • 2
    • Bengio, Y. (2009). Learning deep architectures for AI. Foundations & Trends in Mach. Learn., to appear.
  • 6
    • Coleman, T., & Wu, Z. (1994). Parallel continuation-based global optimization for molecular conformation and protein folding (Technical Report). Cornell University, Dept. of Computer Science.
  • 7
    • Collobert, R., & Weston, J. (2008). A unified architecture for natural language processing: Deep neural networks with multitask learning. Int. Conf. Mach. Learn. 2008 (pp. 160-167).
  • 8
    • Derényi, I., Geszti, T., & Györgyi, G. (1994). Generalization in the programed teaching of a perceptron. Physical Review E, 50, 3192-3200.
  • 9
    • Elman, J. L. (1993). Learning and development in neural networks: The importance of starting small. Cognition, 48, 781-799.
  • 10
    • Erhan, D., Manzagol, P.-A., Bengio, Y., Bengio, S., & Vincent, P. (2009). The difficulty of training deep architectures and the effect of unsupervised pre-training. AI & Stat. '2009.
  • 11
    • Freund, Y., & Haussler, D. (1994). Unsupervised learning of distributions on binary vectors using two layer networks (Technical Report UCSC-CRL-94-25). University of California, Santa Cruz.
  • 12
    • Håstad, J., & Goldmann, M. (1991). On the power of small-depth threshold circuits. Computational Complexity, 1, 113-129.
  • 13
    • Hinton, G. E., Osindero, S., & Teh, Y.-W. (2006). A fast learning algorithm for deep belief nets. Neural Computation, 18, 1527-1554.
  • 14
    • Hinton, G. E., & Salakhutdinov, R. (2006). Reducing the dimensionality of data with neural networks. Science, 313, 504-507.
  • 15
    • Krueger, K. A., & Dayan, P. (2009). Flexible shaping: How learning in small steps helps. Cognition, 110, 380-394.
  • 16
    • Larochelle, H., Erhan, D., Courville, A., Bergstra, J., & Bengio, Y. (2007). An empirical evaluation of deep architectures on problems with many factors of variation. Int. Conf. Mach. Learn. (pp. 473-480).
  • 17
    • Peterson, G. B. (2004). A day of great illumination: B. F. Skinner's discovery of shaping. Journal of the Experimental Analysis of Behavior, 82, 317-328.
  • 18
    • Ranzato, M., Boureau, Y., & LeCun, Y. (2008). Sparse feature learning for deep belief networks. Adv. Neural Inf. Proc. Sys. 20 (pp. 1185-1192).
  • 19
    • Ranzato, M., Poultney, C., Chopra, S., & LeCun, Y. (2007). Efficient learning of sparse representations with an energy-based model. Adv. Neural Inf. Proc. Sys. 19 (pp. 1137-1144).
  • 20
    • Rohde, D., & Plaut, D. (1999). Language acquisition in the absence of explicit negative evidence: How important is starting small? Cognition, 72, 67-109.
  • 21
    • Salakhutdinov, R., & Hinton, G. (2007). Learning a nonlinear embedding by preserving class neighbourhood structure. AI & Stat. '2007.
  • 22
    • Salakhutdinov, R., & Hinton, G. (2008). Using Deep Belief Nets to learn covariance kernels for Gaussian processes. Adv. Neural Inf. Proc. Sys. 20 (pp. 1249-1256).
  • 24
    • Sanger, T. D. (1994). Neural network learning control of robot manipulators using gradually increasing task difficulty. IEEE Trans. on Robotics and Automation, 10.
  • 30
    • Wu, Z. (1997). Global continuation for distance geometry problems. SIAM Journal of Optimization, 7, 814-836.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.