Volume 9851 LNAI, 2016, Pages 665-680

On the convergence of a family of robust losses for stochastic gradient descent

Author keywords

[No Author keywords available]

Indexed keywords

ARTIFICIAL INTELLIGENCE; STOCHASTIC SYSTEMS; TIME VARYING NETWORKS

EID: 84988566042     ISSN: 0302-9743     E-ISSN: 1611-3349     Source Type: Book Series
DOI: 10.1007/978-3-319-46128-1_42     Document Type: Conference Paper
Times cited : (23)

References (19)
  • 3
    • Nesterov, Y.: Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM J. Optim. 22(2), 341–362 (2012)
  • 7
    • Shalev-Shwartz, S., Singer, Y., Srebro, N., Cotter, A.: Pegasos: primal estimated sub-gradient solver for SVM. Math. Program. 127(1), 3–30 (2011)
  • 12
    • Ghadimi, S., Lan, G.-H.: Stochastic first- and zeroth-order methods for nonconvex stochastic programming. SIAM J. Optim. 23(4), 2341–2368 (2013)
  • 13
    • Ghadimi, S., Lan, G.-H.: Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Math. Program. 156, 59–99 (2015)
  • 14
    • Wang, L., Jia, H.-D., Li, J.: Training robust support vector machine with smooth ramp loss in the primal space. Neurocomputing 71(13), 3020–3025 (2008)
  • 17
  • 18
    • Agarwal, A., Negahban, S., Wainwright, M.J.: Fast global convergence of gradient methods for high-dimensional statistical recovery. Ann. Stat. 40(5), 2452–2482 (2012)
  • 19
    • Loh, P.-L., Wainwright, M.J.: Regularized M-estimators with nonconvexity: statistical and algorithmic theory for local optima. J. Mach. Learn. Res. 16, 559–616 (2015)


* This information was extracted and analyzed by KISTI from Elsevier's SCOPUS database.