2012, Pages 177-185

Linear support vector machines via dual cached loops

Author keywords

coordinate descent; optimization; support vector machines

Indexed keywords

COORDINATE DESCENT; DATA EXPANSION; DIFFERENT SPEED; LINEAR SUPPORT VECTOR MACHINES; LINEAR SVM; ON THE FLIES; ORDERS OF MAGNITUDE; STORAGE SUBSYSTEMS; TRADE OFF; WINNING WORK;

EID: 84866046574     PISSN: None     EISSN: None     Source Type: Conference Proceeding    
DOI: 10.1145/2339530.2339559     Document Type: Conference Paper
Times cited : (18)
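The author keywords above (coordinate descent, support vector machines) point to the paper's core technique: dual coordinate descent for linear SVMs, as in reference 18 below. The following is a minimal illustrative sketch of that general technique, not the paper's cached-loop solver; the function name and the toy data are invented for illustration.

```python
import numpy as np

def dual_cd_linear_svm(X, y, C=1.0, epochs=20, seed=0):
    """Dual coordinate descent for an L1-loss (hinge) linear SVM.

    Solves min_a 0.5*a^T Q a - e^T a subject to 0 <= a_i <= C,
    where Q_ij = y_i y_j x_i.x_j, while maintaining the primal
    vector w = sum_i a_i y_i x_i so each update costs O(#features).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)
    w = np.zeros(d)
    Qii = np.einsum("ij,ij->i", X, X)      # diagonal of Q (y_i^2 = 1)
    for _ in range(epochs):
        for i in rng.permutation(n):       # randomized sweep order
            if Qii[i] == 0.0:
                continue
            G = y[i] * w.dot(X[i]) - 1.0   # partial gradient wrt alpha_i
            new_ai = min(max(alpha[i] - G / Qii[i], 0.0), C)
            if new_ai != alpha[i]:
                w += (new_ai - alpha[i]) * y[i] * X[i]
                alpha[i] = new_ai
    return w

# Tiny linearly separable example
X = np.array([[2.0, 2.0], [1.5, 2.5], [-2.0, -2.0], [-2.5, -1.0]])
y = np.array([1.0, 1.0, -1.0, -1.0])
w = dual_cd_linear_svm(X, y, C=1.0)
preds = np.sign(X.dot(w))
```

The one-variable subproblem has a closed-form solution (a Newton step clipped to the box [0, C]), which is why no step size is needed; caching schemes such as the one this paper proposes mainly change *which* coordinates i are visited and how their data is kept in memory.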

References (40)
  • 4 A. Bordes, S. Ertekin, J. Weston, and L. Bottou. Fast kernel classifiers with online and active learning. JMLR, 6:1579-1619, September 2005.
  • 6 L. Bottou and O. Bousquet. The tradeoffs of large scale learning. In NIPS 20, 2007.
  • 7 O. Bousquet and A. Elisseeff. Algorithmic stability and generalization performance. In NIPS 12, pages 196-202, 2001.
  • 9 E. Candes and T. Tao. Decoding by linear programming. IEEE Trans. Information Theory, 51(12):4203-4215, 2005.
  • 11 K. W. Chang and D. Roth. Selective block minimization for faster convergence of limited memory large-scale linear models. In KDD, pages 699-707, 2011.
  • 13 C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20(3):273-297, 1995.
  • 16 D. L. Donoho, A. Maleki, and A. Montanari. Message-passing algorithms for compressed sensing. PNAS, 106, 2009.
  • 17 R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871-1874, August 2008.
  • 18 C. J. Hsieh, K. W. Chang, C. J. Lin, S. S. Keerthi, and S. Sundararajan. A dual coordinate descent method for large-scale linear SVM. In ICML, pages 408-415, 2008.
  • 19 T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169-184, 1999.
  • 20 T. Joachims. Training linear SVMs in linear time. In KDD, 2006.
  • 21 J. Kivinen and M. K. Warmuth. Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132(1):1-64, January 1997.
  • 22 J. Kivinen, M. K. Warmuth, and B. Hassibi. The p-norm generalization of the LMS algorithm for adaptive filtering. IEEE Trans. Signal Processing, 54(5):1782-1793, May 2006.
  • 24 Z. Q. Luo and P. Tseng. On the convergence of the coordinate descent method for convex differentiable minimization. Journal of Optimization Theory and Applications, 72(1):7-35, 1992.
  • 25 G. Mann, R. McDonald, M. Mohri, N. Silberman, and D. Walker. Efficient large-scale distributed training of conditional maximum entropy models. In NIPS 22, pages 1231-1239, 2009.
  • 26 N. Murata, S. Yoshizawa, and S. Amari. Network information criterion - determining the number of hidden units for artificial neural network models. IEEE Trans. Neural Networks, 5:865-872, 1994.
  • 27 E. Osuna and F. Girosi. Reducing the run-time complexity in support vector regression. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 271-284, 1999.
  • 28 J. Platt. Fast training of support vector machines using sequential minimal optimization. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 185-208, 1999.
  • 30 S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In ICML, 2007.
  • 32 A. J. Smola and S. Narayanamurthy. An architecture for parallel topic models. In VLDB, 2010.
  • 33 S. Sonnenburg and V. Franc. COFFIN: A computational framework for linear SVMs. In ICML, 2010.
  • 35 C. H. Teo, S. V. N. Vishwanathan, A. J. Smola, and Q. V. Le. Bundle methods for regularized risk minimization. JMLR, 11:311-365, January 2010.
  • 38 H. F. Yu, C. J. Hsieh, K. W. Chang, and C. J. Lin. Large linear classification when data cannot fit in memory. In KDD, pages 833-842, 2010.
  • 39 L. Zanni, T. Serafini, and G. Zanghirati. Parallel software for training large scale support vector machines on multiprocessor systems. JMLR, 7:1467-1492, July 2006.
  • 40 M. Zinkevich, A. Smola, M. Weimer, and L. Li. Parallelized stochastic gradient descent. In NIPS 23, pages 2595-2603, 2010.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.