[2] N. N. Schraudolph, J. Yu, and S. Günter. A stochastic quasi-Newton method for online convex optimization. In Proc. 11th Intl. Conf. Artificial Intelligence and Statistics (AIstats), San Juan, Puerto Rico, 2007. Society for Artificial Intelligence and Statistics.
[4] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
[6] J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-64, 1997.
[7] N. N. Schraudolph. Fast curvature matrix-vector products for second-order gradient descent. Neural Computation, 14(7):1723-1738, 2002.
[8] K. Azoury and M. K. Warmuth. Relative loss bounds for online density estimation with the exponential family of distributions. Machine Learning, 43(3):211-246, 2001. Special issue on Theoretical Advances in Online Learning, Game Theory and Boosting.
[10] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169-192, 2007.
[11] M. Zinkevich. Online convex programming and generalised infinitesimal gradient ascent. In Proc. Intl. Conf. Machine Learning, pages 928-936, 2003.
[13] J. Blum. Multidimensional stochastic approximation methods. Annals of Mathematical Statistics, 25:737-744, 1954.
[14] H. E. Robbins and D. O. Siegmund. A convergence theorem for non negative almost supermartingales and some applications. In Proc. Sympos. Optimizing Methods in Statistics, pages 233-257, Ohio State Univ., Columbus, Ohio, 1971. Academic Press, New York.
[15] L. Bottou. Online algorithms and stochastic approximations. In D. Saad, editor, Online Learning and Neural Networks. Cambridge University Press, Cambridge, UK, 1998.