Mathematical Programming, Volume 155, Issue 1–2, 2016, Pages 105–145

Accelerated proximal stochastic dual coordinate ascent for regularized loss minimization

Author keywords

90C06; 90C15; 90C25

Indexed keywords

ARTIFICIAL INTELLIGENCE; LEARNING SYSTEMS; OPTIMIZATION; REGRESSION ANALYSIS; STOCHASTIC SYSTEMS;

EID: 84953283129     PISSN: 0025-5610     EISSN: 1436-4646     Source Type: Journal    
DOI: 10.1007/s10107-014-0839-0     Document Type: Article
Times cited: 165

References (31)
  • 2
    • Beck, A., Teboulle, M.: A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM J. Imaging Sci. 2(1), 183–202 (2009)
  • 3
    • Collins, M., Globerson, A., Koo, T., Carreras, X., Bartlett, P.: Exponentiated gradient algorithms for conditional random fields and max-margin Markov networks. J. Mach. Learn. Res. 9, 1775–1822 (2008)
  • 5
    • Crammer, K., Singer, Y.: On the algorithmic implementation of multiclass kernel-based vector machines. J. Mach. Learn. Res. 2, 265–292 (2001)
  • 6
    • d’Aspremont, A.: Smooth optimization with approximate gradient. SIAM J. Optim. 19(3), 1171–1183 (2008)
  • 7
    • Devolder, O., Glineur, F., Nesterov, Y.: First-order methods of smooth convex optimization with inexact oracle. Math. Program. 146(1–2), 37–75 (2014)
  • 8
    • Duchi, J., Singer, Y.: Efficient online and batch learning using forward backward splitting. J. Mach. Learn. Res. 10, 2899–2934 (2009)
  • 11
    • Fercoq, O., Richtárik, P.: Accelerated, parallel and proximal coordinate descent. Technical report, arXiv:1312.5799 (2013)
  • 12
    • Ghadimi, S., Lan, G.: Optimal stochastic approximation algorithms for strongly convex stochastic composite optimization I: a generic algorithmic framework. SIAM J. Optim. 22(4), 1469–1492 (2012)
  • 13
  • 15
    • Langford, J., Li, L., Zhang, T.: Sparse online learning via truncated gradient. In: NIPS, pp. 905–912 (2009)
  • 17
    • Nesterov, Y.: Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim. 22(2), 341–362 (2012)
  • 18
    • Nesterov, Y.: Smooth minimization of non-smooth functions. Math. Program. 103(1), 127–152 (2005)
  • 19
    • Nesterov, Y.: Gradient methods for minimizing composite objective function. Math. Program. 140, 125–161 (2013)
  • 20
    • Richtárik, P., Takáč, M.: Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function. Math. Program. 144(1–2), 1–38 (2014)
  • 21
    • Schmidt, M., Roux, N.L., Bach, F.: Convergence rates of inexact proximal-gradient methods for convex optimization. Technical report, arXiv:1109.2415 (2011)
  • 22
    • Shalev-Shwartz, S., Tewari, A.: Stochastic methods for ℓ1-regularized loss minimization. J. Mach. Learn. Res. 12, 1865–1892 (2011)
  • 24
    • Shalev-Shwartz, S., Tewari, A.: Stochastic methods for ℓ1-regularized loss minimization. In: ICML, p. 117 (2009)
  • 26
    • Shalev-Shwartz, S., Zhang, T.: Stochastic dual coordinate ascent methods for regularized loss minimization. J. Mach. Learn. Res. 14, 567–599 (2013)
  • 27
    • Shalev-Shwartz, S., Singer, Y., Srebro, N.: Pegasos: primal estimated sub-GrAdient SOlver for SVM. In: ICML, pp. 807–814 (2007)
  • 28
    • Shalev-Shwartz, S., Srebro, N., Zhang, T.: Trading accuracy for sparsity in optimization problems with sparsity constraints. SIAM J. Optim. 20(6), 2807–2832 (2010)
  • 30
    • Xiao, L.: Dual averaging method for regularized stochastic learning and online optimization. J. Mach. Learn. Res. 11, 2543–2596 (2010)
  • 31
    • Zhang, T.: On the dual formulation of regularized linear systems. Mach. Learn. 46, 91–129 (2002)


* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.