Applied Mathematics and Optimization, Volume 82, Issue 3, 2020, Pages 891-917

Convergence of Stochastic Proximal Gradient Algorithm

Author keywords

Forward backward splitting algorithm; Proximal methods; Stochastic optimization

Indexed keywords

ITERATIVE METHODS; OPTIMIZATION

EID: 85074528968     PISSN: 0095-4616     EISSN: 1432-0606     Source Type: Journal
DOI: 10.1007/s00245-019-09617-7     Document Type: Article
Times cited: 121
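The author keywords name the method the paper analyzes: the stochastic proximal gradient (stochastic forward-backward splitting) iteration x_{k+1} = prox_{gamma_k R}(x_k - gamma_k G_k), where G_k is a stochastic estimate of the gradient of the smooth part of the objective and prox is the proximal operator of the nonsmooth regularizer R. Below is a minimal sketch of that iteration; the lasso objective, the 1/sqrt(k) step-size schedule, and every name in the code are illustrative assumptions, not taken from the paper.

    import numpy as np

    def soft_threshold(v, t):
        # Proximal operator of t * ||.||_1 (componentwise soft-thresholding).
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def stochastic_proximal_gradient(A, b, lam, n_iter=20000, gamma0=1.0, seed=0):
        # Minimize f(x) + R(x) with f(x) = (1/2n)||Ax - b||^2 and R = lam * ||.||_1,
        # using single-sample unbiased gradient estimates of f and the
        # forward-backward update x <- prox_{gamma * R}(x - gamma * grad_est).
        rng = np.random.default_rng(seed)
        n, d = A.shape
        x = np.zeros(d)
        for k in range(1, n_iter + 1):
            i = rng.integers(n)                   # sample one data point uniformly
            grad_est = (A[i] @ x - b[i]) * A[i]   # E[grad_est] = (1/n) A^T (Ax - b)
            gamma = gamma0 / np.sqrt(k)           # diminishing step size (assumed schedule)
            x = soft_threshold(x - gamma * grad_est, gamma * lam)
        return x

    # Toy usage: recover a sparse vector from noisy linear measurements.
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    x_true = np.zeros(50)
    x_true[:5] = 1.0
    b = A @ x_true + 0.01 * rng.standard_normal(200)
    x_hat = stochastic_proximal_gradient(A, b, lam=0.1)

Iterations of this form, with stochastic gradients and diminishing step sizes, are the kind of sequence whose convergence is studied in the paper and in several of the references below (e.g., 15, 40, 43).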

References (52)
  • 2. Bach, F., Jenatton, R., Mairal, J., Obozinski, G.: Structured sparsity through convex optimization. Stat. Sci. 27, 450–468 (2012)
  • 3. Bach, F., Moulines, E.: Non-asymptotic analysis of stochastic approximation algorithms for machine learning. In: Advances in Neural Information Processing Systems, vol. 24 (2011)
  • 6. Bertsekas, D.P.: Incremental gradient, subgradient, and proximal methods for convex optimization: a survey. In: Optimization for Machine Learning, pp. 85–104 (2011)
  • 7. Bertsekas, D.P., Tsitsiklis, J.N.: Gradient convergence in gradient methods with errors. SIAM J. Optim. 10, 627–642 (2000)
  • 8. Bianchi, P., Hachem, W.: Dynamical behavior of a stochastic forward-backward algorithm using random monotone operators. J. Optim. Theory Appl. 171, 90–120 (2016)
  • 9. Blatt, D., Hero, A., Gauchman, H.: A convergent incremental gradient method with a constant step size. SIAM J. Optim. 18, 29–51 (2007)
  • 10. Bottou, L., Curtis, F., Nocedal, J.: Optimization methods for large-scale machine learning. SIAM Rev. 60, 223–311 (2018)
  • 12. Combettes, P.L., Pesquet, J.-C.: Stochastic quasi-Fejér block-coordinate fixed point iterations with random sweeping. SIAM J. Optim. 25, 1221–1248 (2015)
  • 13. Combettes, P.L., Pesquet, J.-C.: Stochastic approximations and perturbations in forward-backward splitting for monotone operators. Pure Appl. Funct. Anal. 1, 13–37 (2016)
  • 15. Combettes, P.L., Wajs, V.R.: Signal recovery by proximal forward-backward splitting. Multiscale Model. Simul. 4, 1168–1200 (2005)
  • 16. De Mol, C., De Vito, E., Rosasco, L.: Elastic-net regularization in learning theory. J. Complex. 25, 201–230 (2009)
  • 17. Defazio, A., Bach, F., Lacoste-Julien, S.: SAGA: a fast incremental gradient method with support for non-strongly convex composite objectives. Adv. Neural Inf. Process. Syst. 27, 1646–1654 (2014)
  • 18. Duchi, J., Singer, Y.: Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res. 10, 2899–2934 (2009)
  • 19. Ermol'ev, Yu.M.: On the method of generalized stochastic gradients and quasi-Fejér sequences. Cybernetics 5, 208–220 (1969)
  • 21. Ghadimi, S., Lan, G.: Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, I: a generic algorithmic framework. SIAM J. Optim. 22, 1469–1492 (2012)
  • 22. Ghadimi, S., Lan, G.: Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization, II: shrinking procedures and optimal algorithms. SIAM J. Optim. 23, 2061–2089 (2013)
  • 23. Hazan, E., Kale, S.: Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. J. Mach. Learn. Res. 15, 2489–2512 (2014)
  • 24. Jofré, A., Thompson, P.: On variance reduction for stochastic smooth convex optimization with multiplicative noise. Math. Program. Ser. B, published online, pp. 1–40 (2018)
  • 25. Juditsky, A., Nesterov, Y.: Deterministic and stochastic primal-dual subgradient methods for uniformly convex minimization. Stoch. Syst. 4, 44–80 (2014)
  • 26. Kiefer, J., Wolfowitz, J.: Stochastic estimation of the maximum of a regression function. Ann. Math. Stat. 23, 462–466 (1952)
  • 28. Kwok, J.T., Hu, C., Pan, W.: Accelerated gradient methods for stochastic optimization and online learning. Adv. Neural Inf. Process. Syst. 22, 781–789 (2009)
  • 29. Lan, G.: An optimal method for stochastic composite optimization. Math. Program. 133, 365–397 (2012)
  • 31. Lin, Q., Chen, X., Peña, J.: A sparsity preserving stochastic gradient methods for sparse regression. Comput. Optim. Appl. 58, 455–482 (2014)
  • 33. Nekrylova, Z.V.: The rate of convergence of the stochastic gradient method. Cybernetics 11, 218–222 (1975)
  • 34. Nemirovski, A., Juditsky, A., Lan, G., Shapiro, A.: Robust stochastic approximation approach to stochastic programming. SIAM J. Optim. 19, 1574–1609 (2008)
  • 37. Polyak, B.T., Juditsky, A.B.: Acceleration of stochastic approximation by averaging. SIAM J. Control Optim. 30, 838–855 (1992)
  • 39. Robbins, H., Monro, S.: A stochastic approximation method. Ann. Math. Stat. 22, 400–407 (1951)
  • 40. Rosasco, L., Villa, S., Vu, B.C.: Stochastic forward-backward splitting for monotone inclusions. J. Optim. Theory Appl. 169, 388–406 (2016)
  • 41. Rosasco, L., Villa, S., Vu, B.C.: A first-order stochastic primal-dual algorithm with correction step. Numer. Funct. Anal. Optim. 38, 602–626 (2017)
  • 42. Schmidt, M., Le Roux, N., Bach, F.: Minimizing finite sums with the stochastic average gradient. Math. Program. Ser. B 162, 83–112 (2017)
  • 43. Schmidt, M.W., Le Roux, N., Bach, F.: Convergence rates of inexact proximal-gradient methods for convex optimization. Adv. Neural Inf. Process. Syst. 24, 1458–1466 (2011)
  • 47. Shamir, O., Zhang, T.: Stochastic gradient descent for non-smooth optimization: convergence results and optimal averaging schemes. In: Proceedings of the 30th International Conference on Machine Learning, pp. 71–79 (2013)
  • 49. Villa, S., Rosasco, L., Mosci, S., Verri, A.: Proximal methods for the latent group lasso penalty. Comput. Optim. Appl. 58, 381–407 (2014)
  • 50. Villa, S., Salzo, S., Baldassarre, L., Verri, A.: Accelerated and inexact forward-backward algorithms. SIAM J. Optim. 23, 1607–1633 (2013)
  • 51. Xiao, L.: Dual averaging methods for regularized stochastic learning and online optimization. J. Mach. Learn. Res. 11, 2543–2596 (2010)
  • 52. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. J. R. Stat. Soc. Ser. B 67, 301–320 (2005)


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.