Volume 4, Issue January, 2014, Pages 3689-3697

Fast prediction for large-scale kernel machines

Author keywords

[No Author keywords available]

Indexed keywords

ALGORITHMS; CLASSIFICATION (OF INFORMATION); ERRORS; INFORMATION SCIENCE; REGRESSION ANALYSIS;

EID: 84937824211     PISSN: 1049-5258     EISSN: None     Source Type: Conference Proceeding    
DOI: None     Document Type: Conference Paper
Times cited : (48)

References (29)
  • 1. Y.-W. Chang, C.-J. Hsieh, K.-W. Chang, M. Ringgaard, and C.-J. Lin. Training and testing low-degree polynomial data mappings via linear SVM. JMLR, 11:1471-1490, 2010.
  • 2. M. Cossalter, R. Yan, and L. Zheng. Adaptive kernel approximation for large-scale non-linear SVM prediction. In ICML, 2011.
  • 3. A. Cotter, S. Shalev-Shwartz, and N. Srebro. Learning optimally sparse support vector machines. In ICML, 2013.
  • 4. P. Drineas, R. Kannan, and M. W. Mahoney. Fast Monte Carlo algorithms for matrices III: Computing a compressed approximate matrix decomposition. SIAM J. Comput., 36(1):184-206, 2006.
  • 5. P. Drineas and M. W. Mahoney. On the Nyström method for approximating a Gram matrix for improved kernel-based learning. JMLR, 6:2153-2175, 2005.
  • 6. R.-E. Fan, K.-W. Chang, C.-J. Hsieh, X.-R. Wang, and C.-J. Lin. LIBLINEAR: A library for large linear classification. JMLR, 9:1871-1874, 2008.
  • 7. C.-J. Hsieh, I. S. Dhillon, P. Ravikumar, and A. Banerjee. A divide-and-conquer method for sparse inverse covariance estimation. In NIPS, 2012.
  • 8. C.-J. Hsieh, S. Si, and I. S. Dhillon. A divide-and-conquer solver for kernel support vector machines. In ICML, 2014.
  • 9. T. Joachims and C.-N. Yu. Sparse kernel SVMs via cutting-plane training. Machine Learning, 76(2):179-193, 2009.
  • 10. C. Jose, P. Goyal, P. Aggrwal, and M. Varma. Local deep kernel learning for efficient non-linear SVM prediction. In ICML, 2013.
  • 12. P. Kar and H. Karnick. Random feature maps for dot product kernels. In AISTATS, 2012.
  • 13. S. S. Keerthi, O. Chapelle, and D. DeCoste. Building support vector machines with reduced classifier complexity. JMLR, 7:1493-1515, 2006.
  • 15. L. Ladicky and P. H. S. Torr. Locally linear support vector machines. In ICML, 2011.
  • 16. Q. V. Le, T. Sarlos, and A. J. Smola. Fastfood - approximating kernel expansions in loglinear time. In ICML, 2013.
  • 17. Y.-J. Lee and O. L. Mangasarian. RSVM: Reduced support vector machines. In SDM, 2001.
  • 18. S. Maji, A. C. Berg, and J. Malik. Efficient classification for additive kernel SVMs. IEEE PAMI, 35(1), 2013.
  • 19. M. Nandan, P. R. Khargonekar, and S. S. Talathi. Fast SVM training using approximate extreme points. JMLR, 15:59-98, 2014.
  • 20. D. Pavlov, D. Chudova, and P. Smyth. Towards scalable support vector machines using squashing. In KDD, pages 295-299, 2000.
  • 21. A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, pages 1177-1184, 2007.
  • 22. B. Schölkopf, P. Knirsch, A. J. Smola, and C. J. C. Burges. Fast approximation of support vector kernel expansions, and an interpretation of clustering as approximation in feature spaces. In Mustererkennung 1998 - 20. DAGM-Symposium, Informatik aktuell, pages 124-132, Berlin, 1998. Springer.
  • 23. S. Si, C.-J. Hsieh, and I. S. Dhillon. Memory efficient kernel approximation. In ICML, 2014.
  • 24. I. Tsang, J. Kwok, and P. Cheung. Core vector machines: Fast SVM training on very large data sets. JMLR, 6:363-392, 2005.
  • 25. P.-W. Wang and C.-J. Lin. Iteration complexity of feasible descent methods for convex optimization. JMLR, 15:1523-1548, 2014.
  • 26. S. Wang and Z. Zhang. Improving CUR matrix decomposition and the Nyström approximation via adaptive sampling. JMLR, 14:2729-2769, 2013.
  • 27. C. K. I. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In T. Leen, T. Dietterich, and V. Tresp, editors, NIPS, 2001.
  • 28. K. Zhang and J. T. Kwok. Clustered Nyström method for large scale manifold learning and dimension reduction. Trans. Neur. Netw., 21(10):1576-1587, 2010.
  • 29. K. Zhang, I. W. Tsang, and J. T. Kwok. Improved Nyström low rank approximation and error analysis. In ICML, 2008.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.