1. Bertsekas, D.: Incremental least squares methods and the extended Kalman filter. SIAM J. Optim. 6(3), 807–822 (1996)
2. Bertsekas, D.: A new class of incremental gradient methods for least squares problems. SIAM J. Optim. 7(4), 913–926 (1997)
4. Bertsekas, D.: Incremental gradient, subgradient, and proximal methods for convex optimization: a survey. Optim. Mach. Learn. 2010, 1–38 (2011)
6. Blatt, D., Hero, A., Gauchman, H.: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18(1), 29–51 (2007)
7. Bordes, A., Bottou, L., Gallinari, P.: SGD-QN: careful quasi-Newton stochastic gradient descent. J. Mach. Learn. Res. 10, 1737–1754 (2009)
8. Bottou, L.: Large-scale machine learning with stochastic gradient descent. In: Lechevallier, Y., Saporta, G. (eds.) Proceedings of COMPSTAT’2010, pp. 177–186. Physica-Verlag HD, Heidelberg (2010)
9. Bottou, L., Le Cun, Y.: On-line learning for very large data sets. Appl. Stoch. Models Bus. Ind. 21(2), 137–151 (2005)
10. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
11. Byrd, R.H., Hansen, S.L., Nocedal, J., Singer, Y.: A stochastic quasi-Newton method for large-scale optimization. arXiv preprint arXiv:1401.7020 (2014)
12. Cătinaş, E.: Inexact perturbed Newton methods and applications to a class of Krylov solvers. J. Optim. Theory Appl. 108(3), 543–570 (2001)
13. Davidon, W.C.: New least-square algorithms. J. Optim. Theory Appl. 18(2), 187–197 (1976)
14. Defazio, A., Bach, F., Lacoste-Julien, S.: SAGA: a fast incremental gradient method with support for non-strongly convex composite objectives. arXiv preprint arXiv:1407.0202 (2014)
15. Dembo, R., Eisenstat, S., Steihaug, T.: Inexact Newton methods. SIAM J. Numer. Anal. 19(2), 400–408 (1982)
16. Dennis, J.E., Moré, J.J.: A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comput. 28(126), 549–560 (1974)
17. Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12, 2121–2159 (2011)
18. Mairal, J.: Optimization with first-order surrogate functions. In: ICML, Volume 28 of JMLR Proceedings, pp. 783–791, Atlanta, United States (2013)
19. Mangasarian, O.L., Solodov, M.V.: Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. Optim. Methods Softw. 4(2), 103–116 (1994)
21. Moriyama, H., Yamashita, N., Fukushima, M.: The incremental Gauss–Newton algorithm with adaptive stepsize rule. Comput. Optim. Appl. 26(2), 107–141 (2003)
22. Nedić, A., Bertsekas, D.: Convergence rate of incremental subgradient algorithms. In: Uryasev, S., Pardalos, P.M. (eds.) Stochastic Optimization: Algorithms and Applications. Applied Optimization, vol. 54, pp. 223–264. Springer, US (2001)
23. Nedić, A., Ozdaglar, A.: On the rate of convergence of distributed subgradient methods for multi-agent optimization. In: Proceedings of IEEE CDC, pp. 4711–4716 (2007)
24. Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54(1), 48–61 (2009)
26. Ram, S.S., Nedić, A., Veeravalli, V.V.: Stochastic incremental gradient descent for estimation in sensor networks. In: Conference Record of the Forty-First Asilomar Conference on Signals, Systems and Computers (ACSSC 2007), pp. 582–586 (2007)
27. Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22(3), 400–407 (1951)
28. Roux, N.L., Schmidt, M., Bach, F.R.: A stochastic gradient method with an exponential convergence rate for finite training sets. In: Pereira, F., Burges, C.J.C., Bottou, L., Weinberger, K.Q. (eds.) Advances in Neural Information Processing Systems 25, pp. 2663–2671. Curran Associates Inc., NY, USA (2012)
30. Schraudolph, N., Yu, J., Günter, S.: A stochastic quasi-Newton method for online convex optimization. In: Proceedings of the 11th International Conference on Artificial Intelligence and Statistics (AISTATS), pp. 433–440 (2007)
31. Shamir, O., Srebro, N., Zhang, T.: Communication efficient distributed optimization using an approximate Newton-type method. ICML 32(1), 1000–1008 (2014)
32. Sohl-Dickstein, J., Poole, B., Ganguli, S.: Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods. In: Jebara, T., Xing, E.P. (eds.) ICML, pp. 604–612. JMLR Workshop and Conference Proceedings (2014)
33. Solodov, M.V.: Incremental gradient algorithms with stepsizes bounded away from zero. Comput. Optim. Appl. 11(1), 23–35 (1998)
34. Sparks, E.R., Talwalkar, A., Smith, V., Kottalam, J., Xinghao, P., Gonzalez, J., Franklin, M.J., Jordan, M.I., Kraska, T.: MLI: an API for distributed machine learning. In: IEEE 13th International Conference on Data Mining (ICDM), pp. 1187–1192 (2013)
35. Tseng, P.: An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM J. Optim. 8(2), 506–531 (1998)
36. Tseng, P., Yun, S.: Incrementally updated gradient methods for constrained and regularized optimization. J. Optim. Theory Appl. 160(3), 832–853 (2014)