[1] A. Agarwal, P. L. Bartlett, P. D. Ravikumar, and M. J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235-3249, 2012.
[2] A. Beck and M. Teboulle. Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31(3):167-175, 2003.
[3] L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In NIPS, pages 161-168, 2008.
[6] R. H. Byrd, G. M. Chin, J. Nocedal, and Y. Wu. Sample size selection in optimization methods for machine learning. Mathematical Programming, 134(1):127-155, 2012.
[7] A. Cotter, O. Shamir, N. Srebro, and K. Sridharan. Better mini-batch algorithms via accelerated gradient methods. In NIPS, pages 1647-1655, 2011.
[8] O. Dekel, R. Gilad-Bachrach, O. Shamir, and L. Xiao. Optimal distributed online prediction using mini-batches. Journal of Machine Learning Research, 13:165-202, 2012.
[10] E. Hazan, A. Agarwal, and S. Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169-192, 2007.
[11] E. Hazan and S. Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. Journal of Machine Learning Research - Proceedings Track, 19:421-436, 2011.
[13] A. Nemirovski, A. Juditsky, G. Lan, and A. Shapiro. Robust stochastic approximation approach to stochastic programming. SIAM Journal on Optimization, 19:1574-1609, 2009.
[15] Y. Nesterov. A method of solving a convex programming problem with convergence rate O(1/k²). Soviet Mathematics Doklady, 27:372-376, 1983.
[17] Y. Nesterov. Excessive gap technique in nonsmooth convex minimization. SIAM Journal on Optimization, 16(1):235-249, 2005.
[18] Y. Nesterov. Smooth minimization of non-smooth functions. Mathematical Programming, 103(1):127-152, 2005.
[19] A. Rakhlin, O. Shamir, and K. Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.
[20] N. L. Roux, M. W. Schmidt, and F. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In NIPS, pages 2672-2680, 2012.
[21] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, pages 807-814, 2007.
[22] S. Shalev-Shwartz and T. Zhang. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567-599, 2013.
[23] O. Shamir and T. Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In ICML, 2013.
[24] L. Zhang, T. Yang, R. Jin, and X. He. O(log T) projections for stochastic optimization of smooth and strongly convex functions. In ICML, 2013.