1. Agarwal, A., Negahban, S.N., Wainwright, M.J.: Fast global convergence rates of gradient methods for high-dimensional statistical recovery. Ann. Statist. 40, 2452–2482 (2012)
2. Anitescu, M.: Degenerate nonlinear programming with a quadratic growth condition. SIAM J. Optim. 10, 1116–1135 (2000)
3. Attouch, H., Bolte, J.: On the convergence of the proximal algorithm for nonsmooth functions involving analytic features. Math. Program. Ser. B 116, 5–16 (2009)
6. Bolte, J., Nguyen, T.P., Peypouquet, J., Suter, B.W.: From error bounds to the complexity of first-order descent methods for convex functions. arXiv:1510.08234 (2015)
8. Dinuzzo, F., Ong, C.S., Gehler, P., Pillonetto, G.: Learning output kernels with block coordinate descent. In: Proceedings of the 28th ICML, pp. 49–56 (2011)
9. Garber, D., Hazan, E.: Faster rates for the Frank-Wolfe method over strongly convex sets. In: Proceedings of the 32nd ICML, pp. 541–549 (2015)
11. Gu, M., Lim, L.-H., Wu, C.J.: ParNes: a rapidly convergent algorithm for accurate recovery of sparse and approximately sparse signals. Numer. Algor. 64, 321–347 (2013)
12. Hanson, M.A.: On sufficiency of the Kuhn-Tucker conditions. J. Math. Anal. Appl. 80, 545–550 (1981)
13. Hoffman, A.J.: On approximate solutions of systems of linear inequalities. J. Res. Nat. Bur. Stand. 49, 263–265 (1952)
14. Hou, K., Zhou, Z., So, A.M.-C., Luo, Z.-Q.: On the linear convergence of the proximal gradient method for trace norm regularization. In: Advances in Neural Information Processing Systems (NIPS), pp. 710–718 (2013)
15. Hush, D., Kelly, P., Scovel, C., Steinwart, I.: QP algorithms with guaranteed accuracy and run time for support vector machines. J. Mach. Learn. Res. 7, 733–769 (2006)
19. Liu, J., Wright, S.J.: Asynchronous stochastic coordinate descent: parallelism and convergence properties. SIAM J. Optim. 25, 351–376 (2015)
20. Liu, J., Wright, S.J., Ré, C., Bittorf, V., Sridhar, S.: An asynchronous parallel stochastic coordinate descent algorithm. arXiv:1311.1873v3 (2014)
21. Łojasiewicz, S.: A topological property of real analytic subsets (in French). Coll. du CNRS, Les équations aux dérivées partielles, vol. 117, pp. 87–89 (1963)
22. Luo, Z.-Q., Tseng, P.: Error bounds and convergence analysis of feasible descent methods: a general approach. Ann. Oper. Res. 46, 157–178 (1993)
24. Meir, R., Rätsch, G.: An introduction to boosting and leveraging. In: Mendelson, S., Smola, A.J. (eds.) Advanced Lectures on Machine Learning. LNCS (LNAI), vol. 2600, pp. 118–183. Springer, Heidelberg (2003). doi:10.1007/3-540-36434-X_4
26. Necoara, I., Clipici, D.: Parallel random coordinate descent method for composite minimization: convergence analysis and error bounds. SIAM J. Optim. 26, 197–226 (2016)
27. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19, 1574–1609 (2009)
29. Nesterov, Y.: Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim. 22, 341–362 (2012)
30. Nutini, J., Schmidt, M., Laradji, I.H., Friedlander, M., Koepke, H.: Coordinate descent converges faster with the Gauss-Southwell rule than random selection. In: Proceedings of the 32nd ICML, pp. 1632–1641 (2015)
31. Polyak, B.T.: Gradient methods for minimizing functionals. Zh. Vychisl. Mat. Mat. Fiz. 3, 643–653 (1963). (in Russian)
33. Roux, N.L., Schmidt, M., Bach, F.R.: A stochastic gradient method with an exponential convergence rate for finite training sets. In: Advances in Neural Information Processing Systems (NIPS), pp. 2672–2680 (2012)
34. Schmidt, M., Roux, N.L., Bach, F.R.: Convergence rates of inexact proximal-gradient methods for convex optimization. In: Advances in Neural Information Processing Systems (NIPS), pp. 1458–1466 (2011)
35. Shalev-Shwartz, S., Zhang, T.: Stochastic dual coordinate ascent methods for regularized loss minimization. J. Mach. Learn. Res. 14, 567–599 (2013)
36. Reddi, S.J., Sra, S., Poczos, B., Smola, A.: Fast incremental method for nonconvex optimization. arXiv:1603.06159 (2016)
37. Reddi, S.J., Hefny, A., Sra, S., Poczos, B., Smola, A.: Stochastic variance reduction for nonconvex optimization. arXiv:1603.06160 (2016)
38. Reddi, S.J., Sra, S., Poczos, B., Smola, A.: Fast stochastic methods for nonsmooth nonconvex optimization. arXiv:1605.06900 (2016)
39. Richtárik, P., Takáč, M.: Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Program. Ser. A 144, 1–38 (2014)
40. Tseng, P.: Approximation accuracy, gradient methods, and error bound for structured convex optimization. Math. Program. Ser. B 125, 263–295 (2010)
41. Tseng, P., Yun, S.: Block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization. J. Optim. Theory Appl. 140, 513–535 (2009)
42. Wang, P.-W., Lin, C.-J.: Iteration complexity of feasible descent methods for convex optimization. J. Mach. Learn. Res. 15, 1523–1548 (2014)
43. Xiao, L., Zhang, T.: A proximal-gradient homotopy method for the sparse least-squares problem. SIAM J. Optim. 23, 1062–1091 (2013)