
Volume 3, 2015, Pages 1973-1982

Adding vs. averaging in distributed primal-dual optimization

Author keywords

[No Author keywords available]

Indexed keywords


EID: 84970028927     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 132

References (34)
  • 1
    • EID: 84893754489
    • Balcan, M.-F., Blum, A., Fine, S., and Mansour, Y. Distributed Learning, Communication Complexity and Privacy. In COLT, pp. 26.1-26.22, 2012.
  • 2
    • EID: 80051762104
    • Boyd, S., Parikh, N., Chu, E., Peleato, B., and Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Foundations and Trends in Machine Learning, 3(1): 1-122, 2011.
  • 3
    • EID: 84898955869
    • Duchi, J. C., Jordan, M. I., and McMahan, H. B. Estimation, Optimization, and Parallelism when Data is Sparse. In NIPS, 2013.
  • 6
    • EID: 77953526250
    • Forero, P. A., Cano, A., and Giannakis, G. B. Consensus-Based Distributed Support Vector Machines. JMLR, 11: 1663-1707, 2010.
  • 8
    • EID: 84969769860
    • Lee, C.-P. and Roth, D. Distributed Box-Constrained Quadratic Optimization for Dual Linear SVM. In ICML, 2015.
  • 10
    • EID: 84919932688
    • Liu, J., Wright, S. J., Ré, C., Bittorf, V., and Sridhar, S. An Asynchronous Parallel Stochastic Coordinate Descent Algorithm. In ICML, 2014.
  • 12
    • EID: 80052652249
    • Mann, G., McDonald, R., Mohri, M., Silberman, N., and Walker, D. D. Efficient Large-Scale Distributed Training of Conditional Maximum Entropy Models. In NIPS, 2009.
  • 15
    • EID: 85162467517
    • Niu, F., Recht, B., Ré, C., and Wright, S. J. Hogwild!: A Lock-Free Approach to Parallelizing Stochastic Gradient Descent. In NIPS, 2011.
  • 16
  • 20
    • EID: 84897116612
    • Richtárik, P. and Takáč, M. Iteration Complexity of Randomized Block-Coordinate Descent Methods for Minimizing a Composite Function. Mathematical Programming, 144(1-2): 1-38, April 2014.
  • 21
    • EID: 84947110023
    • Richtárik, P. and Takáč, M. Parallel Coordinate Descent Methods for Big Data Optimization. Mathematical Programming, pp. 1-52, 2015.
  • 22
    • EID: 84899021802
    • Shalev-Shwartz, S. and Zhang, T. Accelerated Mini-Batch Stochastic Dual Coordinate Ascent. In NIPS, 2013a.
  • 24
    • EID: 84875134236
    • Shalev-Shwartz, S. and Zhang, T. Stochastic Dual Coordinate Ascent Methods for Regularized Loss Minimization. JMLR, 14: 567-599, 2013c.
  • 25
    • EID: 84946691162
    • Shamir, O. and Srebro, N. Distributed Stochastic Optimization and Learning. In Allerton, 2014.
  • 26
    • EID: 84919935839
    • Shamir, O., Srebro, N., and Zhang, T. Communication-Efficient Distributed Optimization Using an Approximate Newton-Type Method. In ICML, 2014.
  • 28
    • EID: 84898970556
    • Yang, T. Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent. In NIPS, 2013.
  • 30
    • EID: 84863266107
    • Yu, H.-F., Hsieh, C.-J., Chang, K.-W., and Lin, C.-J. Large Linear Classification When Data Cannot Fit in Memory. TKDD, 5(4): 1-23, 2012.
  • 32
    • EID: 84969505806
    • Zhang, Y. and Lin, X. DiSCO: Distributed Optimization for Self-Concordant Empirical Loss. In ICML, pp. 362-370, 2015.
  • 33
    • EID: 84890032023
    • Zhang, Y., Duchi, J. C., and Wainwright, M. J. Communication-Efficient Algorithms for Statistical Optimization. JMLR, 14: 3321-3363, 2013.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.