Journal of Machine Learning Research, Volume 15, 2014, Pages 2489-2512

Beyond the regret minimization barrier: Optimal algorithms for stochastic strongly-convex optimization

Author keywords

Convex optimization; Online Learning; Regret minimization; Stochastic gradient descent

Indexed keywords

CONVEX OPTIMIZATION; OPTIMIZATION; STOCHASTIC MODELS; STOCHASTIC SYSTEMS

EID: 84907359690     PISSN: 1532-4435     EISSN: 1533-7928     Source Type: Journal
DOI: None     Document Type: Article
Times cited: 293

References (22)
  • 1. Jacob Abernethy, Alekh Agarwal, Peter L. Bartlett, and Alexander Rakhlin. A stochastic view of optimal regret through minimax duality. In COLT, 2009.
  • 2. Alekh Agarwal, Peter L. Bartlett, Pradeep D. Ravikumar, and Martin J. Wainwright. Information-theoretic lower bounds on the oracle complexity of stochastic convex optimization. IEEE Transactions on Information Theory, 58(5):3235-3249, 2012.
  • 3. Peter L. Bartlett, Elad Hazan, and Alexander Rakhlin. Adaptive online gradient descent. In NIPS, 2007.
  • 4. Dimitri P. Bertsekas. Nonlinear Programming. Athena Scientific, 2nd edition, September 1999. ISBN 1886529000.
  • 6. Leon Bottou and Olivier Bousquet. The tradeoffs of large scale learning. In NIPS, 2007.
  • 7. Lev M. Bregman. The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming. USSR Computational Mathematics and Mathematical Physics, 7:200-217, 1967.
  • 11. Elad Hazan and Satyen Kale. Beyond the regret minimization barrier: an optimal algorithm for stochastic strongly-convex optimization. In COLT, 2011.
  • 12. Elad Hazan, Amit Agarwal, and Satyen Kale. Logarithmic regret algorithms for online convex optimization. Machine Learning, 69(2-3):169-192, 2007.
  • 14. Guanghui Lan, Arkadi Nemirovski, and Alexander Shapiro. Validation analysis of mirror descent stochastic approximation method. Mathematical Programming, 134(2):425-458, 2012.
  • 16. Erik Ordentlich and Thomas M. Cover. The cost of achieving the best portfolio in hindsight. Mathematics of Operations Research, 23:960-982, November 1998.
  • 17. Alexander Rakhlin, Ohad Shamir, and Karthik Sridharan. Making gradient descent optimal for strongly convex stochastic optimization. In ICML, 2012.
  • 18. Shai Shalev-Shwartz and Nathan Srebro. SVM optimization: inverse dependence on training set size. In ICML, 2008.
  • 20. Ohad Shamir and Tong Zhang. Stochastic gradient descent for non-smooth optimization: Convergence results and optimal averaging schemes. In ICML, 2013.
  • 21. Eiji Takimoto and Manfred K. Warmuth. The minimax strategy for Gaussian density estimation. In COLT, 2000.
  • 22. Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, 2003.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.