Volume , Issue , 2009, Pages 929-936

Stochastic methods for ℓ1 regularized loss minimization

Author keywords

[No Author keywords available]

Indexed keywords

DATA SETS; DETERMINISTIC APPROACH; LOSS MINIMIZATION; RUN-TIME ANALYSIS; STOCHASTIC METHODS; TRAINING EXAMPLE; WEIGHT VECTOR

EID: 71149119963     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited: 91

References (25)
  • 1
    • Beck, A., & Teboulle, M. (2003). Mirror descent and nonlinear projected subgradient methods for convex optimization. Operations Research Letters, 31, 167-175.
  • 9
    • Frank, M., & Wolfe, P. (1956). An algorithm for quadratic programming. Naval Research Logistics Quarterly, 3, 95-110.
  • 10
    • Friedman, J., Hastie, T., & Tibshirani, R. (2008). Regularized paths for generalized linear models via coordinate descent (Technical Report). Department of Statistics, Stanford University.
  • 11
    • Genkin, A., Lewis, D., & Madigan, D. (2007). Large-scale Bayesian logistic regression for text categorization. Technometrics, 49, 291-304.
  • 12
    • Gentile, C. (2003). The robustness of the p-norm algorithms. Machine Learning, 53, 265-299.
  • 13
    • Grove, A. J., Littlestone, N., & Schuurmans, D. (2001). General convergence results for linear discriminant updates. Machine Learning, 43, 173-210.
  • 14
    • Kivinen, J., & Warmuth, M. (1997). Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132, 1-64.
  • 17
    • Littlestone, N. (1988). Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm. Machine Learning, 2, 285-318.
  • 18
    • Luo, Z., & Tseng, P. (1992). On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72, 7-35.
  • 21
    • Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58, 267-288.
  • 22
    • Tseng, P., & Yun, S. (2009). A block-coordinate gradient descent method for linearly constrained nonsmooth separable optimization. Journal of Optimization Theory and Applications, 140, 513-535.
  • 23
    • Wu, T. T., & Lange, K. (2008). Coordinate descent algorithms for lasso penalized regression. Annals of Applied Statistics, 2, 224-244.
  • 24
    • Zhang, T. (2003). Sequential greedy approximation for certain convex optimization problems. IEEE Transactions on Information Theory, 49, 682-691.
  • 25
    • Zhang, T., & Oles, F. J. (2001). Text categorization based on regularized linear classification methods. Information Retrieval, 4, 5-31.


* This information was analyzed and extracted by KISTI from Elsevier's SCOPUS database.