Volume 6, Issue 2, 2006, Pages 171-192

Learning rates of least-square regularized regression

Author keywords

Covering number; Learning theory; Regularization error; Regularization scheme; Reproducing kernel Hilbert space

Indexed keywords

COVERING NUMBERS; LEARNING THEORY; REGULARIZATION ERRORS; REGULARIZATION SCHEMES; REPRODUCING KERNEL HILBERT SPACES

EID: 33744772341     PISSN: 1615-3375     EISSN: 1615-3383     Source Type: Journal
DOI: 10.1007/s10208-004-0155-9     Document Type: Article
Times cited: 239

References (32)
  • 1. S. Andonova, Generalization bounds and complexities based on sparsity and clustering for convex combinations of functions from random classes, J. Mach. Learn. Res. 6 (2005), 307-340.
  • 3. N. Aronszajn, Theory of reproducing kernels, Trans. Amer. Math. Soc. 68 (1950), 337-404.
  • 4. A. R. Barron, Complexity regularization with applications to artificial neural networks, in Nonparametric Functional Estimation (G. Roussas, ed.), Kluwer Academic, Dordrecht, 1990, pp. 561-576.
  • 5. P. L. Bartlett, The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network, IEEE Trans. Inform. Theory 44 (1998), 525-536.
  • 7. D. R. Chen, Q. Wu, Y. Ying, and D.-X. Zhou, Support vector machine soft margin classifiers: Error analysis, J. Mach. Learn. Res. 5 (2004), 1143-1175.
  • 8. F. Cucker and S. Smale, On the mathematical foundations of learning, Bull. Amer. Math. Soc. 39 (2001), 1-49.
  • 9. F. Cucker and S. Smale, Best choices for regularization parameters in learning theory: On the bias-variance problem, Found. Comput. Math. 2 (2002), 413-428.
  • 11. E. De Vito, A. Caponnetto, and L. Rosasco, Model selection for regularized least-squares algorithm in learning theory, Found. Comput. Math. 5 (2005), 59-85.
  • 12. T. Evgeniou, M. Pontil, and T. Poggio, Regularization networks and support vector machines, Adv. Comput. Math. 13 (2000), 1-50.
  • 13. V. Koltchinskii and D. Panchenko, Rademacher processes and bounding the risk of function learning, in High Dimensional Probability II (E. Giné, D. M. Mason, and J. A. Wellner, eds.), Birkhäuser, Boston, 2000, pp. 443-459.
  • 14. W. S. Lee, P. Bartlett, and R. Williamson, The importance of convexity in learning with least square loss, IEEE Trans. Inform. Theory 44 (1998), 1974-1980.
  • 15. G. Lugosi and N. Vayatis, On the Bayes-risk consistency of regularized boosting methods, Ann. Statist. 32 (2004), 30-55.
  • 17. S. Smale and D.-X. Zhou, Estimating the approximation error in learning theory, Anal. Appl. 1 (2003), 17-41.
  • 18. S. Smale and D.-X. Zhou, Shannon sampling and function reconstruction from point values, Bull. Amer. Math. Soc. 41 (2004), 279-305.
  • 26. Q. Wu and D.-X. Zhou, SVM soft margin classifiers: Linear programming versus quadratic programming, Neural Comput. 17 (2005), 1160-1187.
  • 28. T. Zhang, Leave-one-out bounds for kernel methods, Neural Comput. 15 (2003), 1397-1437.
  • 29. D.-X. Zhou, The covering number in learning theory, J. Complexity 18 (2002), 739-767.
  • 30. D.-X. Zhou, Capacity of reproducing kernel spaces in learning theory, IEEE Trans. Inform. Theory 49 (2003), 1743-1752.
  • 32. D.-X. Zhou and K. Jetter, Approximation with polynomial kernels and SVM classifiers, Adv. Comput. Math., in press.


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.