[1] Peter L. Bartlett, Michael I. Jordan, and Jon D. McAuliffe. Convexity, classification, and risk bounds. Journal of the American Statistical Association, 101(473):138-156, 2006.
[2] Amir Beck and Marc Teboulle. A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1):183-202, 2009.
[3] Léon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In Advances in Neural Information Processing Systems 20, Vancouver, British Columbia, Canada, pages 161-168, 2007.
[5] Hadi Daneshmand, Aurélien Lucchi, and Thomas Hofmann. Starting small - learning with adaptive sample sizes. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, pages 1463-1471, 2016.
[6] Aaron Defazio, Francis R. Bach, and Simon Lacoste-Julien. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in Neural Information Processing Systems 27, Montreal, Quebec, Canada, pages 1646-1654, 2014.
[7] Murat A. Erdogdu and Andrea Montanari. Convergence rates of sub-sampled Newton methods. In Advances in Neural Information Processing Systems 28, Montreal, Quebec, Canada, pages 3052-3060, 2015.
-
-
84984710701
-
Competing with the empirical risk mini-mizer in a single pass
-
Paris, France, July 3-6, 2015
-
Roy Frostig, Rong Ge, Sham M. Kakade, and Aaron Sidford. Competing with the empirical risk mini-mizer in a single pass. In Proceedings of The 28th Conference on Learning Theory, COLT 2015, Paris, France, July 3-6, 2015, pages 728-763, 2015.
-
(2015)
Proceedings of the 28th Conference on Learning Theory, COLT 2015
, pp. 728-763
-
-
Frostig, R.1
Ge, R.2
Kakade, S.M.3
Sidford, A.4
-
[9] Mert Gürbüzbalaban, Asuman Ozdaglar, and Pablo Parrilo. A globally convergent incremental Newton method. Mathematical Programming, 151(1):283-313, 2015.
[10] Rie Johnson and Tong Zhang. Accelerating stochastic gradient descent using predictive variance reduction. In Advances in Neural Information Processing Systems 26, Lake Tahoe, Nevada, United States, pages 315-323, 2013.
[14] Philipp Moritz, Robert Nishihara, and Michael I. Jordan. A linearly-convergent stochastic L-BFGS algorithm. In Proceedings of the 19th International Conference on Artificial Intelligence and Statistics, AISTATS 2016, Cadiz, Spain, pages 249-258, 2016.
[18] Zheng Qu, Peter Richtárik, Martin Takác, and Olivier Fercoq. SDNA: Stochastic dual Newton ascent for empirical risk minimization. In Proceedings of the 33rd International Conference on Machine Learning, ICML 2016, New York City, NY, USA, June 19-24, 2016, pages 1823-1832, 2016.
[20] Nicolas Le Roux, Mark W. Schmidt, and Francis R. Bach. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in Neural Information Processing Systems 25, Lake Tahoe, Nevada, United States, pages 2672-2680, 2012.
[21] Nicol N. Schraudolph, Jin Yu, and Simon Günter. A stochastic quasi-Newton method for online convex optimization. In Proceedings of the Eleventh International Conference on Artificial Intelligence and Statistics, AISTATS 2007, San Juan, Puerto Rico, pages 436-443, 2007.
[22] Shai Shalev-Shwartz, Ohad Shamir, Nathan Srebro, and Karthik Sridharan. Learnability, stability and uniform convergence. The Journal of Machine Learning Research, 11:2635-2670, 2010.
[24] Shai Shalev-Shwartz and Tong Zhang. Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization. Mathematical Programming, 155(1-2):105-145, 2016.
[26] Lin Xiao and Tong Zhang. A proximal stochastic gradient method with progressive variance reduction. SIAM Journal on Optimization, 24(4):2057-2075, 2014.