



Volume 3, 2014, Pages 1937-1945

Least squares revisited: Scalable approaches for multi-class prediction

Author keywords

[No Author keywords available]

Indexed keywords

ARTIFICIAL INTELLIGENCE; ITERATIVE METHODS; LEARNING SYSTEMS;

EID: 84919949624     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited : (13)

References (31)
  • 1. Agarwal, A. Selective sampling algorithms for cost-sensitive multiclass prediction. In ICML, 2013.
  • 4. Bottou, L. and Bousquet, O. The tradeoffs of large scale learning. In NIPS, 2008.
  • 5. Bronshtein, E. M. ε-entropy of convex sets and functions. Siberian Mathematical Journal, 17(3):393-398, 1976.
  • 6. Byrd, R. H., Chin, G. M., Neveitt, W., and Nocedal, J. On the use of stochastic Hessian information in optimization methods for machine learning. SIAM Journal on Optimization, 21(3):977-995, 2011.
  • 7. Chapelle, O. Training a support vector machine in the primal. Neural Computation, 19(5):1155-1178, 2007.
  • 9. Collins, M. and Koo, T. Discriminative reranking for natural language parsing. In ICML, 2000.
  • 12. Friedman, J. H. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29(5):1189-1232, 2001.
  • 15. Jebara, T. and Choromanska, A. Majorization for CRFs and latent likelihoods. In NIPS, 2012.
  • 16. Kakade, S. M., Kalai, A., Kanade, V., and Shamir, O. Efficient learning of generalized linear and single index models with isotonic regression. In NIPS, 2011.
  • 17. Kalai, A. T. and Sastry, R. The Isotron algorithm: High-dimensional isotonic regression. In COLT, 2009.
  • 19. Matsushima, S., Vishwanathan, S. V. N., and Smola, A. J. Linear support vector machines via dual cached loops. In KDD, 2012.
  • 22. Nesterov, Y. Efficiency of coordinate descent methods on huge-scale optimization problems. SIAM Journal on Optimization, 22(2):341-362, 2012.
  • 23. Platt, J. C. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In Advances in Large Margin Classifiers, pp. 61-74. MIT Press, 1999.
  • 25. Recht, B., Ré, C., Wright, S. J., and Niu, F. Hogwild!: A lock-free approach to parallelizing stochastic gradient descent. In NIPS, pp. 693-701, 2011.
  • 27. Rockafellar, R. T. Characterization of the subdifferentials of convex functions. Pacific Journal of Mathematics, 17:497-510, 1966.
  • 28. Roux, N. L., Schmidt, M., and Bach, F. A stochastic gradient method with an exponential convergence rate for finite training sets. In NIPS, pp. 2672-2680, 2012.
  • 30. Shalev-Shwartz, S. and Zhang, T. Stochastic dual coordinate ascent methods for regularized loss minimization. Journal of Machine Learning Research, 14:567-599, 2013.
  • 31. Yu, H.-F., Hsieh, C.-J., Chang, K.-W., and Lin, C.-J. Large linear classification when data cannot fit in memory. TKDD, 5(4), 2012.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.