
Mathematical Programming, Volume 151, Issue 1, 2015, Pages 283–313

A globally convergent incremental Newton method

Author keywords

Convex optimization; EKF algorithm; Gauss Newton method; Incremental methods; Newton method; Strong convexity

Indexed keywords

ARTIFICIAL INTELLIGENCE; CONVEX OPTIMIZATION; FUNCTIONS; LEARNING SYSTEMS;

EID: 84937762700     PISSN: 0025-5610     EISSN: 1436-4646     Source Type: Journal
DOI: 10.1007/s10107-015-0897-y     Document Type: Article
Times cited: 44

References (37)
  • 1. Bertsekas, D.: Incremental least squares methods and the extended Kalman filter. SIAM J. Optim. 6(3), 807–822 (1996)
  • 2. Bertsekas, D.: A new class of incremental gradient methods for least squares problems. SIAM J. Optim. 7(4), 913–926 (1997)
  • 4. Bertsekas, D.: Incremental gradient, subgradient, and proximal methods for convex optimization: a survey. Optim. Mach. Learn. 2010, 1–38 (2011)
  • 6. Blatt, D., Hero, A., Gauchman, H.: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18(1), 29–51 (2007)
  • 7. Bordes, A., Bottou, L., Gallinari, P.: SGD-QN: careful quasi-Newton stochastic gradient descent. J. Mach. Learn. Res. 10, 1737–1754 (2009)
  • 8. Bottou, L.: Large-scale machine learning with stochastic gradient descent. In: Lechevallier, Y., Saporta, G. (eds.) Proceedings of COMPSTAT’2010, pp. 177–186. Physica-Verlag HD, Heidelberg (2010)
  • 9. Bottou, L., Le Cun, Y.: On-line learning for very large data sets. Appl. Stoch. Models Bus. Ind. 21(2), 137–151 (2005)
  • 10. Boyd, S., Parikh, N., Chu, E., Peleato, B., Eckstein, J.: Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 3(1), 1–122 (2011)
  • 12. Cătinaş, E.: Inexact perturbed Newton methods and applications to a class of Krylov solvers. J. Optim. Theory Appl. 108(3), 543–570 (2001)
  • 13. Davidon, W.C.: New least-square algorithms. J. Optim. Theory Appl. 18(2), 187–197 (1976)
  • 16. Dennis, J.E., Moré, J.J.: A characterization of superlinear convergence and its application to quasi-Newton methods. Math. Comput. 28(126), 549–560 (1974)
  • 17. Duchi, J., Hazan, E., Singer, Y.: Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 12, 2121–2159 (2011)
  • 18. Mairal, J.: Optimization with first-order surrogate functions. In: ICML, Volume 28 of JMLR Proceedings, pp. 783–791, Atlanta, United States (2013)
  • 19. Mangasarian, O.L., Solodov, M.V.: Serial and parallel backpropagation convergence via nonmonotone perturbed minimization. Optim. Methods Softw. 4(2), 103–116 (1994)
  • 21. Moriyama, H., Yamashita, N., Fukushima, M.: The incremental Gauss–Newton algorithm with adaptive stepsize rule. Comput. Optim. Appl. 26(2), 107–141 (2003)
  • 23. Nedić, A., Ozdaglar, A.: On the rate of convergence of distributed subgradient methods for multi-agent optimization. In: Proceedings of IEEE CDC, pp. 4711–4716 (2007)
  • 24. Nedić, A., Ozdaglar, A.: Distributed subgradient methods for multi-agent optimization. IEEE Trans. Autom. Control 54(1), 48–61 (2009)
  • 27. Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22(3), 400–407 (1951)
  • 31. Shamir, O., Srebro, N., Zhang, T.: Communication efficient distributed optimization using an approximate Newton-type method. ICML 32(1), 1000–1008 (2014)
  • 32. Sohl-Dickstein, J., Poole, B., Ganguli, S.: Fast large-scale optimization by unifying stochastic gradient and quasi-Newton methods. In: Jebara, T., Xing, E.P. (eds.) ICML, pp. 604–612. JMLR Workshop and Conference Proceedings (2014)
  • 33. Solodov, M.V.: Incremental gradient algorithms with stepsizes bounded away from zero. Comput. Optim. Appl. 11(1), 23–35 (1998)
  • 35. Tseng, P.: An incremental gradient(-projection) method with momentum term and adaptive stepsize rule. SIAM J. Optim. 8(2), 506–531 (1998)
  • 36. Tseng, P., Yun, S.: Incrementally updated gradient methods for constrained and regularized optimization. J. Optim. Theory Appl. 160(3), 832–853 (2014)
  • 37


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.