[2] Alex J. Smola and Bernhard Schölkopf. Sparse Greedy Matrix Approximation for Machine Learning. In ICML, pages 911-918. Morgan Kaufmann, 2000.
[3] C. Williams and M. Seeger. Using the Nyström Method to Speed Up Kernel Machines. In NIPS, pages 682-688. MIT Press, 2000.
[4] Ali Rahimi and Benjamin Recht. Random Features for Large-Scale Kernel Machines. In NIPS, pages 1177-1184. Curran Associates, Inc., 2007.
[5] J. Yang, V. Sindhwani, H. Avron, and M. W. Mahoney. Quasi-Monte Carlo Feature Maps for Shift-Invariant Kernels. In ICML, volume 32 of JMLR Proceedings, pages 485-493. JMLR.org, 2014.
[6] Quoc V. Le, Tamás Sarlós, and Alexander J. Smola. Fastfood - Computing Hilbert Space Expansions in Loglinear Time. In ICML, volume 28 of JMLR Proceedings, pages 244-252. JMLR.org, 2013.
[7] Si Si, Cho-Jui Hsieh, and Inderjit S. Dhillon. Memory Efficient Kernel Approximation. In ICML, volume 32 of JMLR Proceedings, pages 701-709. JMLR.org, 2014.
[8] Yuchen Zhang, John C. Duchi, and Martin J. Wainwright. Divide and Conquer Kernel Ridge Regression. In COLT, volume 30 of JMLR Proceedings, pages 592-617. JMLR.org, 2013.
[9] S. Kumar, M. Mohri, and A. Talwalkar. Ensemble Nyström Method. In NIPS, pages 1060-1068, 2009.
[10] Mu Li, James T. Kwok, and Bao-Liang Lu. Making Large-Scale Nyström Approximation Possible. In ICML, pages 631-638. Omnipress, 2010.
[11] Kai Zhang, Ivor W. Tsang, and James T. Kwok. Improved Nyström Low-Rank Approximation and Error Analysis. In ICML, pages 1232-1239. ACM, 2008.
[12] Bo Dai, Bo Xie, Niao He, Yingyu Liang, Anant Raj, Maria-Florina Balcan, and Le Song. Scalable Kernel Methods via Doubly Stochastic Gradients. In NIPS, pages 3041-3049, 2014.
[13] Petros Drineas and Michael W. Mahoney. On the Nyström Method for Approximating a Gram Matrix for Improved Kernel-Based Learning. JMLR, 6:2153-2175, December 2005.
[15] Shusen Wang and Zhihua Zhang. Improving CUR Matrix Decomposition and the Nyström Approximation via Adaptive Sampling. JMLR, 14(1):2729-2769, 2013.
[16] Petros Drineas, Malik Magdon-Ismail, Michael W. Mahoney, and David P. Woodruff. Fast Approximation of Matrix Coherence and Statistical Leverage. JMLR, 13:3475-3506, 2012.
[17] Michael B. Cohen, Yin Tat Lee, Cameron Musco, Christopher Musco, Richard Peng, and Aaron Sidford. Uniform Sampling for Matrix Approximation. In ITCS, pages 181-190. ACM, 2015.
[18] Shusen Wang and Zhihua Zhang. Efficient Algorithms and Error Analysis for the Modified Nyström Method. In AISTATS, volume 33 of JMLR Proceedings, pages 996-1004. JMLR.org, 2014.
[19] S. Kumar, M. Mohri, and A. Talwalkar. Sampling Methods for the Nyström Method. JMLR, 13(1):981-1006, 2012.
[20] Corinna Cortes, Mehryar Mohri, and Ameet Talwalkar. On the Impact of Kernel Approximation on Learning Accuracy. In AISTATS, volume 9 of JMLR Proceedings, pages 113-120. JMLR.org, 2010.
[21] R. Jin, T. Yang, M. Mahdavi, Y. Li, and Z. Zhou. Improved Bounds for the Nyström Method with Application to Kernel Classification. IEEE Transactions on Information Theory, 59(10), October 2013.
[22] Tianbao Yang, Yu-Feng Li, Mehrdad Mahdavi, Rong Jin, and Zhi-Hua Zhou. Nyström Method vs Random Fourier Features: A Theoretical and Empirical Comparison. In NIPS, pages 485-493, 2012.
[23] Francis Bach. Sharp Analysis of Low-Rank Kernel Matrix Approximations. In COLT, volume 30 of JMLR Proceedings, 2013.
[26] Andrea Caponnetto and Ernesto De Vito. Optimal Rates for the Regularized Least-Squares Algorithm. Foundations of Computational Mathematics, 7(3):331-368, 2007.
[27] L. Lo Gerfo, Lorenzo Rosasco, Francesca Odone, Ernesto De Vito, and Alessandro Verri. Spectral Algorithms for Supervised Learning. Neural Computation, 20(7):1873-1897, 2008.
[28] I. Steinwart, D. Hush, and C. Scovel. Optimal Rates for Regularized Least Squares Regression. In COLT, 2009.
[30] F. Bauer, S. Pereverzev, and L. Rosasco. On Regularization Algorithms in Learning Theory. Journal of Complexity, 23(1):52-72, 2007.
[31] A. Caponnetto and Yuan Yao. Adaptive Rates for Regularization Operators in Learning Theory. Analysis and Applications, 8, 2010.
[33] Alessandro Rudi, Guillermo D. Canas, and Lorenzo Rosasco. On the Sample Complexity of Subspace Learning. In NIPS, pages 2067-2075, 2013.