
Volume , Issue , 2016, Pages 3988-3996

Learning to learn by gradient descent by gradient descent

Author keywords

[No Author keywords available]

Indexed keywords

LEARNING SYSTEMS;

EID: 85019172761     PISSN: 10495258     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 1973

References (35)
  • 3
    • Y. Bengio, S. Bengio, and J. Cloutier. Learning a synaptic learning rule. Université de Montréal, Département d'informatique et de recherche opérationnelle, 1990.
  • 4
    • F. Bobolas. brain-neurons, 2009. URL https://www.flickr.com/photos/fbobolas/3822222947. Creative Commons Attribution-ShareAlike 2.0 Generic.
  • 9
    • J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12: 2121-2159, 2011.
  • 10
    • L. A. Feldkamp and G. V. Puskorius. A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification. Proceedings of the IEEE, 86(11): 2259-2277, 1998.
  • 17
    • T. Maley. neuron, 2011. URL https://www.flickr.com/photos/taylortotz101/6280077898. Creative Commons Attribution 2.0 Generic.
  • 18
    • J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408-2417, 2015.
  • 20
    • Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k^2). In Soviet Mathematics Doklady, volume 27, pages 372-376, 1983.
  • 21
    • M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In International Conference on Neural Networks, pages 586-591, 1993.
  • 25
    • J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1): 131-139, 1992.
  • 27
    • J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28(1): 105-130, 1997.
  • 29
  • 31
    • T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4: 2, 2012.
  • 32
    • P. Tseng. An incremental gradient (-projection) method with momentum term and adaptive stepsize rule. Journal on Optimization, 8(2): 506-531, 1998.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.