Volume 1, 2015, Pages 362-370

DiSCO: Distributed optimization for self-concordant empirical loss

Author keywords

[No Author keywords available]

Indexed keywords

ALGORITHMS; ARTIFICIAL INTELLIGENCE; CONJUGATE GRADIENT METHOD; ITERATIVE METHODS; LEARNING SYSTEMS; NEWTON-RAPHSON METHOD; OPTIMIZATION; REGRESSION ANALYSIS;

EID: 84969505806     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 248

References (27)
  • 1. Agarwal, A. and Duchi, J. C. Distributed delayed stochastic optimization. In Advances in NIPS, pp. 873-881, 2011.
  • 6. Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Foundations and Trends in Machine Learning, 3(1): 1-122, 2010.
  • 8. Dean, J. and Ghemawat, S. MapReduce: Simplified data processing on large clusters. Communications of the ACM, 51(1): 107-113, 2008.
  • 9. Defazio, A., Bach, F., and Lacoste-Julien, S. SAGA: A fast incremental gradient method with support for non-strongly convex composite objectives. In Advances in NIPS 27, pp. 1646-1654, 2014.
  • 11. Deng, W. and Yin, W. On the global and linear convergence of the generalized alternating direction method of multipliers. CAAM Technical Report 12-14, Rice University, 2012.
  • 12. Duchi, J. C., Agarwal, A., and Wainwright, M. J. Dual averaging for distributed optimization: Convergence analysis and network scaling. IEEE Transactions on Automatic Control, 57(3): 592-606, 2012.
  • 13. Golub, G. H. and Van Loan, C. F. Matrix Computations. The Johns Hopkins University Press, Baltimore, MD, third edition, 1996.
  • 15. Le Roux, N., Schmidt, M., and Bach, F. A stochastic gradient method with an exponential convergence rate for finite training sets. In Advances in NIPS 25, pp. 2672-2680, 2012.
  • 18. Nesterov, Y. Gradient methods for minimizing composite functions. Mathematical Programming, Ser. B, 140: 125-161, 2013.
  • 21. Recht, B., Re, C., Wright, S. J., and Niu, F. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in NIPS, pp. 693-701, 2011.
  • 24. Shamir, O., Srebro, N., and Zhang, T. Communication-efficient distributed optimization using an approximate Newton-type method. In Proceedings of ICML, JMLR: W&CP volume 32, 2014.
  • 25. Zhang, Y., Wainwright, M. J., and Duchi, J. C. Communication-efficient algorithms for statistical optimization. In Advances in NIPS, pp. 1502-1510, 2012.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS DB.