1. F. Bach, R. Jenatton, J. Mairal, and G. Obozinski. Optimization with sparsity-inducing penalties. Foundations and Trends in Machine Learning, 4(1): 1-106, 2012.
2. S. Bengio, Y. Bengio, and J. Cloutier. On the search for new learning rules for ANNs. Neural Processing Letters, 2(4): 26-30, 1995.
4. F. Bobolas. brain-neurons, 2009. URL https://www.flickr.com/photos/fbobolas/3822222947. Creative Commons Attribution-ShareAlike 2.0 Generic.
7. J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei. ImageNet: A large-scale hierarchical image database. In Computer Vision and Pattern Recognition, pages 248-255. IEEE, 2009.
9. J. Duchi, E. Hazan, and Y. Singer. Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12: 2121-2159, 2011.
10. L. A. Feldkamp and G. V. Puskorius. A signal processing framework based on dynamic neural networks with application to problems in adaptation, filtering, and classification. Proceedings of the IEEE, 86(11): 2259-2277, 1998.
17. T. Maley. neuron, 2011. URL https://www.flickr.com/photos/taylortotz101/6280077898. Creative Commons Attribution 2.0 Generic.
18. J. Martens and R. Grosse. Optimizing neural networks with Kronecker-factored approximate curvature. In International Conference on Machine Learning, pages 2408-2417, 2015.
20. Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). In Soviet Mathematics Doklady, volume 27, pages 372-376, 1983.
21. M. Riedmiller and H. Braun. A direct adaptive method for faster backpropagation learning: The RPROP algorithm. In International Conference on Neural Networks, pages 586-591, 1993.
23. A. Santoro, S. Bartunov, M. Botvinick, D. Wierstra, and T. Lillicrap. Meta-learning with memory-augmented neural networks. In International Conference on Machine Learning, 2016.
25. J. Schmidhuber. Learning to control fast-weight memories: An alternative to dynamic recurrent networks. Neural Computation, 4(1): 131-139, 1992.
27. J. Schmidhuber, J. Zhao, and M. Wiering. Shifting inductive bias with success-story algorithm, adaptive Levin search, and incremental self-improvement. Machine Learning, 28(1): 105-130, 1997.
29. R. S. Sutton. Adapting bias by gradient descent: An incremental version of delta-bar-delta. In Association for the Advancement of Artificial Intelligence, pages 171-176, 1992.
31. T. Tieleman and G. Hinton. Lecture 6.5-rmsprop: Divide the gradient by a running average of its recent magnitude. COURSERA: Neural Networks for Machine Learning, 4: 2, 2012.
32. P. Tseng. An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM Journal on Optimization, 8(2): 506-531, 1998.
34. A. S. Younger, P. R. Conwell, and N. E. Cotter. Fixed-weight on-line learning. IEEE Transactions on Neural Networks, 10(2): 272-283, 1999.