1. A. BECK AND M. TEBOULLE, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM J. Imaging Sci., 2 (2009), pp. 183-202.
2. D. P. BERTSEKAS, Incremental Gradient, Subgradient, and Proximal Methods for Convex Optimization: A Survey, Report LIDS-P-2848, Laboratory for Information and Decision Systems, MIT, Cambridge, MA, 2010.
3. D. P. BERTSEKAS, Incremental proximal methods for large scale convex optimization, Math. Program. Ser. B, 129 (2011), pp. 163-195.
4. J. A. BLACKARD, D. J. DEAN, AND C. W. ANDERSON, Covertype data set, in UCI Machine Learning Repository, K. Bache and M. Lichman, eds., http://archive.ics.uci.edu/ml (2013).
5. D. BLATT, A. O. HERO, AND H. GAUCHMAN, A convergent incremental gradient method with a constant step size, SIAM J. Optim., 18 (2007), pp. 29-51.
6. R. H. BYRD, G. M. CHIN, J. NOCEDAL, AND Y. WU, Sample size selection in optimization methods for machine learning, Math. Program. Ser. B, 134 (2012), pp. 127-155.
7. G. H.-G. CHEN AND R. T. ROCKAFELLAR, Convergence rates in forward-backward splitting, SIAM J. Optim., 7 (1997), pp. 421-444.
8. J. DUCHI AND Y. SINGER, Efficient online and batch learning using forward backward splitting, J. Mach. Learn. Res., 10 (2009), pp. 2873-2898.
11. M. P. FRIEDLANDER AND M. SCHMIDT, Hybrid deterministic-stochastic methods for data fitting, SIAM J. Sci. Comput., 34 (2012), pp. 1380-1405.
13. T. HASTIE, R. TIBSHIRANI, AND J. FRIEDMAN, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, 2nd ed., Springer, New York, 2009.
14. C. HU, J. T. KWOK, AND W. PAN, Accelerated gradient methods for stochastic optimization and online learning, in Adv. Neural Inf. Process. Syst. 22, MIT Press, Cambridge, MA, 2009, pp. 781-789.
15. R. JOHNSON AND T. ZHANG, Accelerating stochastic gradient descent using predictive variance reduction, in Adv. Neural Inf. Process. Syst. 26, MIT Press, Cambridge, MA, 2013, pp. 315-323.
17. J. LANGFORD, L. LI, AND T. ZHANG, Sparse online learning via truncated gradient, J. Mach. Learn. Res., 10 (2009), pp. 777-801.
19. D. D. LEWIS, Y. YANG, T. ROSE, AND F. LI, RCV1: A new benchmark collection for text categorization research, J. Mach. Learn. Res., 5 (2004), pp. 361-397.
20. P.-L. LIONS AND B. MERCIER, Splitting algorithms for the sum of two nonlinear operators, SIAM J. Numer. Anal., 16 (1979), pp. 964-979.
21. M. MAHDAVI, L. ZHANG, AND R. JIN, Mixed optimization for smooth functions, in Adv. Neural Inf. Process. Syst. 26, MIT Press, Cambridge, MA, 2013, pp. 674-682.
22. D. NEEDELL, N. SREBRO, AND R. WARD, Stochastic Gradient Descent, Weighted Sampling, and the Randomized Kaczmarz Algorithm, preprint, arXiv:1310.5715, 2014.
24. Y. NESTEROV, Efficiency of coordinate descent methods on huge-scale optimization problems, SIAM J. Optim., 22 (2012), pp. 341-362.
25. YU. NESTEROV, Gradient methods for minimizing composite functions, Math. Program. Ser. B, 140 (2013), pp. 125-161.
26. P. RICHTÁRIK AND M. TAKÁČ, Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function, Math. Program., 144 (2014), pp. 1-38.
27. R. T. ROCKAFELLAR, Convex Analysis, Princeton University Press, Princeton, NJ, 1970.
28. N. LE ROUX, M. SCHMIDT, AND F. BACH, A stochastic gradient method with an exponential convergence rate for finite training sets, in Adv. Neural Inf. Process. Syst. 25, MIT Press, Cambridge, MA, 2012, pp. 2672-2680.
29. M. SCHMIDT, N. LE ROUX, AND F. BACH, Minimizing Finite Sums with the Stochastic Average Gradient, Technical report HAL 00860051, INRIA, Paris, 2013.
31. S. SHALEV-SHWARTZ AND T. ZHANG, Stochastic dual coordinate ascent methods for regularized loss minimization, J. Mach. Learn. Res., 14 (2013), pp. 567-599.
32. P. TSENG, A modified forward-backward splitting method for maximal monotone mappings, SIAM J. Control Optim., 38 (2000), pp. 431-446.
33. L. XIAO, Dual averaging methods for regularized stochastic learning and online optimization, J. Mach. Learn. Res., 11 (2010), pp. 2534-2596.
34. L. ZHANG, M. MAHDAVI, AND R. JIN, Linear convergence with condition number independent access of full gradients, in Adv. Neural Inf. Process. Syst. 26, MIT Press, Cambridge, MA, 2013, pp. 980-988.