Volume 30, 2013, Pages 185-209

Sharp analysis of low-rank kernel matrix approximations

Author keywords

[No Author keywords available]

Indexed keywords

ALGORITHMS; REGRESSION ANALYSIS

EID: 84898034803     PISSN: 1532-4435     EISSN: 1533-7928     Source Type: Journal
DOI: None     Document Type: Conference Paper
Times cited: 220

References (53)
  • 1. S. Arlot and F. Bach. Data-driven calibration of linear estimators with minimal penalties. In Adv. NIPS, 2009.
  • 2. F. Bach. Self-concordant analysis for logistic regression. Electronic Journal of Statistics, 4: 384-414, 2010.
  • 3. F. Bach and M. I. Jordan. Predictive low-rank decomposition for kernel methods. In Proc. ICML, 2005.
  • 4. G. Blanchard, P. Massart, R. Vert, and L. Zwald. Kernel projection machine: a new tool for pattern recognition. In Adv. NIPS, 2004.
  • 5. G. Blanchard, O. Bousquet, and P. Massart. Statistical performance of support vector machines. The Annals of Statistics, 36(2): 489-531, 2008.
  • 7. C. Boutsidis, M. W. Mahoney, and P. Drineas. An improved approximation algorithm for the column subset selection problem. In Proc. SODA, 2009.
  • 8. A. Caponnetto and E. De Vito. Optimal rates for the regularized least-squares algorithm. Found. Comput. Math., 7(3): 331-368, 2007.
  • 9. O. Chapelle. Training a support vector machine in the primal. Neural Computation, 19(5): 1155-1178, 2007.
  • 10. C. Cortes, M. Mohri, and A. Talwalkar. On the impact of kernel approximation on learning accuracy. In Proc. AISTATS, 2010.
  • 11. O. Dekel, S. Shalev-Shwartz, and Y. Singer. The Forgetron: a kernel-based perceptron on a fixed budget. In Adv. NIPS, 2005.
  • 14. S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2: 243-264, 2001.
  • 20. W. Hoeffding. Probability inequalities for sums of bounded random variables. Journal of the American Statistical Association, 58(301): 13-30, 1963.
  • 21. D. Hsu, S. M. Kakade, and T. Zhang. An analysis of random design linear regression. In Proc. COLT, 2011.
  • 22. D. Hsu, S. M. Kakade, and T. Zhang. Tail inequalities for sums of random matrices that depend on the intrinsic dimension. Electronic Communications in Probability, 17(14): 1-13, 2012.
  • 23. R. Jin, T. Yang, M. Mahdavi, Y.-F. Li, and Z.-H. Zhou. Improved bound for the Nyström's method and its application to kernel classification. Technical Report 1111.2262v2, arXiv, 2011.
  • 24. T. Joachims, T. Finley, and C.-N. Yu. Cutting-plane training of structural SVMs. Machine Learning, 77(1): 27-59, 2009.
  • 28. N. D. Lawrence, M. Seeger, and R. Herbrich. Fast sparse Gaussian process methods: the informative vector machine. In Adv. NIPS, 2002.
  • 31. O. A. Maillard and R. Munos. Compressed least-squares regression. In Adv. NIPS, 2009.
  • 32. F. Orabona, J. Keshet, and B. Caputo. The Projectron: a bounded kernel-based perceptron. In Proc. ICML, 2008.
  • 33. J. C. Platt. Fast training of support vector machines using sequential minimal optimization. In Advances in Kernel Methods, pages 185-208. MIT Press, 1999.
  • 34. A. Rahimi and B. Recht. Random features for large-scale kernel machines. In Adv. NIPS, 2007.
  • 37. S. Sabato, N. Srebro, and N. Tishby. Tight sample complexity of large-margin learning. In Adv. NIPS, 2010.
  • 40. S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: primal estimated sub-gradient solver for SVM. In Proc. ICML, 2007.
  • 42. A. J. Smola and B. Schölkopf. Sparse greedy matrix approximation for machine learning. In Proc. ICML, 2000.
  • 43. P. Speckman. Spline smoothing and optimal rates of convergence in nonparametric regression models. The Annals of Statistics, 13(3): 970-983, 1985.
  • 44. I. Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In Proc. COLT, 2009.
  • 45. A. Talwalkar and A. Rostamizadeh. Matrix coherence and the Nyström method. In Proc. UAI, 2010.
  • 46. P. Tarrès and Y. Yao. Online learning as stochastic approximation of regularization paths. Technical Report 1103.5538, arXiv, 2011.
  • 47. J. A. Tropp. Improved analysis of the subsampled randomized Hadamard transform. Adv. Adapt. Data Anal., 3(1-2): 115-126, 2011.
  • 48. J. A. Tropp. User-friendly tail bounds for sums of random matrices. Foundations of Computational Mathematics, 12(4): 389-434, 2012.
  • 50. Z. Wang, K. Crammer, and S. Vucetic. Breaking the curse of kernelization: budgeted stochastic gradient descent for large-scale SVM training. Journal of Machine Learning Research, 13: 3103-3131, 2012.
  • 51. C. Williams and M. Seeger. Using the Nyström method to speed up kernel machines. In Adv. NIPS, 2001.
  • 52. T. Yang, Y.-F. Li, M. Mahdavi, R. Jin, and Z.-H. Zhou. Nyström method vs. random Fourier features: a theoretical and empirical comparison. In Adv. NIPS, 2012.
  • 53. J. Zhang, M. Marszałek, S. Lazebnik, and C. Schmid. Local features and kernels for classification of texture and object categories: a comprehensive study. International Journal of Computer Vision, 73(2): 213-238, 2007.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.