Volume 247, 2009, Pages 113-154

Regularization and suboptimal solutions in learning from data

Author keywords

Accuracy of suboptimal solutions; Ill posedness; Inverse problems; Regularization techniques; Weight decay

EID: 70350231420     PISSN: 1860-949X     EISSN: None     Source Type: Book Series
DOI: 10.1007/978-3-642-04003-0_6     Document Type: Article
Times cited: 3

References (99)
  • 2. Alessandri, A., Sanguineti, M., Maggiore, M.: Optimization-based learning with bounded error for feedforward neural networks. IEEE Trans. on Neural Networks 13, 261-273 (2002)
  • 3. Aronszajn, N.: Theory of reproducing kernels. Trans. of AMS 68, 337-404 (1950)
  • 5. Barron, A.R.: Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. on Information Theory 39, 930-945 (1993)
  • 6. Bartlett, P.L.: The sample complexity of pattern classification with neural networks: The size of the weights is more important than the size of the network. IEEE Trans. on Information Theory 44, 525-536 (1998)
  • 14. Burger, M., Engl, H.: Training neural networks with noisy data as an ill-posed problem. Advances in Computational Mathematics 13, 335-354 (2000)
  • 15. Burger, M., Neubauer, A.: Analysis of Tikhonov regularization for function approximation by neural networks. Neural Networks 16, 79-90 (2002)
  • 17. Corradi, V., White, H.: Regularized neural networks: Some convergence rate results. Neural Computation 7, 1225-1244 (1995)
  • 20. Cucker, F., Smale, S.: On the mathematical foundations of learning. Bulletin of AMS 39, 1-49 (2001)
  • 21. Cucker, F., Smale, S.: Best choices for regularization parameters in learning theory: On the bias-variance problem. Foundations of Computational Mathematics 2, 413-428 (2002)
  • 25. De Mol, C., De Vito, E., Rosasco, L.: Elastic-net regularization in learning theory. J. of Complexity 25, 201-230 (2009)
  • 26. De Vito, E., Caponnetto, A., Rosasco, L.: Discretization error analysis for Tikhonov regularization in learning theory. Analysis and Applications 4, 81-99 (2006)
  • 28. Dontchev, A.L.: Perturbations, Approximations and Sensitivity Analysis of Optimal Control Systems. LNCIS, vol. 52. Springer, Heidelberg (1983)
  • 35. Girosi, F.: Regularization theory, radial basis functions and networks. In: Cherkassky, V., Friedman, J.H., Wechsler, H. (eds.) From Statistics to Neural Networks. Theory and Pattern Recognition Applications. NATO ASI Series F, Computer and Systems Sciences, pp. 166-187. Springer, Berlin (1994)
  • 36. Girosi, F.: An equivalence between sparse approximation and support vector machines. Neural Computation 10, 1455-1480 (1998)
  • 37. Girosi, F., Jones, M., Poggio, T.: Regularization theory and neural networks architectures. Neural Computation 7, 219-269 (1995)
  • 38. Gnecco, G., Sanguineti, M.: Regularization techniques and suboptimal solutions to optimization problems in learning from data. Neural Computation (to appear)
  • 39. Gnecco, G., Sanguineti, M.: The weight-decay technique in learning from data: An optimization point of view. Computational Management Science 6, 53-79 (2009)
  • 41. Graepel, T., Herbrich, R.: From margin to sparsity. In: Advances in Neural Information Processing Systems, vol. 13, pp. 210-216 (2001)
  • 42. Gribonval, R., Vandergheynst, P.: On the exponential convergence of matching pursuits in quasi-incoherent dictionaries. IEEE Trans. on Information Theory 52, 255-261 (2006)
  • 43. Grippo, L.: Convergent on-line algorithms for supervised learning in neural networks. IEEE Trans. on Neural Networks 11, 1284-1299 (2000)
  • 45. Gupta, A., Lam, M.: The weight decay backpropagation for generalizations with missing values. Annals of Operations Research 78, 165-187 (1998)
  • 46. Gupta, A., Lam, M.: Weight decay backpropagation for noisy data. Neural Networks 11, 1127-1138 (1998)
  • 47. Hadamard, J.: Sur les problèmes aux dérivées partielles et leur signification physique. Bull. Univ. Princeton 13, 49-52 (1902)
  • 49. Haykin, S.: Neural Networks: A Comprehensive Foundation. Macmillan, New York (1994)
  • 50. Hofinger, A.: Nonlinear function approximation: Computing smooth solutions with an adaptive greedy algorithm. J. of Approximation Theory 143, 159-175 (2006)
  • 51. Hofinger, A., Pillichshammer, F.: Learning a function from noisy samples at a finite sparse set of points. J. of Approximation Theory (to appear), doi:10.1016/j.jat.2008.11.003
  • 54. Jones, L.K.: A simple lemma on greedy approximation in Hilbert space and convergence rates for projection pursuit regression and neural network training. Annals of Statistics 20, 608-613 (1992)
  • 55. Kůrková, V.: Dimension-independent rates of approximation by neural networks. In: Warwick, K., Kárný, M. (eds.) Computer-Intensive Methods in Control and Signal Processing. The Curse of Dimensionality, pp. 261-270. Birkhäuser, Boston (1997)
  • 56. Kůrková, V.: High-dimensional approximation and optimization by neural networks. In: Suykens, J., et al. (eds.) Advances in Learning Theory: Methods, Models and Applications, ch. 4, pp. 69-88. IOS Press, Amsterdam (2003)
  • 57. Kůrková, V.: Learning from data as an inverse problem. In: Antoch, J. (ed.) Proc. in Computational Statistics, COMPSTAT 2004, pp. 1377-1384. Physica-Verlag/Springer, Heidelberg (2004)
  • 58. Kůrková, V., Sanguineti, M.: Bounds on rates of variable-basis and neural-network approximation. IEEE Trans. on Information Theory 47, 2659-2665 (2001)
  • 59. Kůrková, V., Sanguineti, M.: Comparison of worst case errors in linear and neural network approximation. IEEE Trans. on Information Theory 48, 264-275 (2002)
  • 60. Kůrková, V., Sanguineti, M.: Error estimates for approximate optimization by the extended Ritz method. SIAM J. on Optimization 15, 261-287 (2005)
  • 61. Kůrková, V., Sanguineti, M.: Learning with generalization capability by kernel methods of bounded complexity. J. of Complexity 21, 350-367 (2005)
  • 62. Kůrková, V., Sanguineti, M.: Approximate minimization of the regularized expected error over kernel models. Mathematics of Operations Research 33, 747-756 (2008)
  • 63. Kůrková, V., Sanguineti, M.: Geometric upper bounds on rates of variable-basis approximation. IEEE Trans. on Information Theory 54, 5681-5688 (2008)
  • 64. Kůrková, V., Savický, P., Hlaváčková, K.: Representations and rates of approximation of real-valued Boolean functions by neural networks. Neural Networks 11, 651-659 (1998)
  • 65. Kainen, P.C., Kůrková, V., Sanguineti, M.: Complexity of Gaussian radial-basis networks approximating smooth functions. J. of Complexity 25, 63-74 (2009)
  • 66. Kimeldorf, G.S., Wahba, G.: A correspondence between Bayesian estimation on stochastic processes and smoothing by splines. Annals of Mathematical Statistics 41, 495-502 (1970)
  • 68. Krogh, A., Hertz, J.A.: A simple weight decay can improve generalization. In: Advances in Neural Information Processing Systems, vol. 4, pp. 950-957. Morgan Kaufmann, San Francisco (1992)
  • 69. Levitin, E.S., Polyak, B.T.: Convergence of minimizing sequences in conditional extremum problems. Dokl. Akad. Nauk SSSR 168, 764-767 (1966)
  • 70. Littlestone, N., Warmuth, M.: Relating data compression and learnability. Tech. rep., University of California, Santa Cruz (1986)
  • 72. Miller, K.: Least squares methods for ill-posed problems with a prescribed bound. SIAM J. of Mathematical Analysis 1, 52-74 (1970)
  • 76. Parzen, E.: An approach to time series analysis. Annals of Mathematical Statistics 32, 951-989 (1961)
  • 77. Pisier, G.: Remarques sur un résultat non publié de B. Maurey. In: Séminaire d'Analyse Fonctionnelle 1980-1981, vol. I(12). École Polytechnique, Centre de Mathématiques, Palaiseau, France (1981)
  • 78. Poggio, T., Girosi, F.: Networks for approximation and learning. Proc. of the IEEE 78, 1481-1497 (1990)
  • 79. Poggio, T., Smale, S.: The mathematics of learning: Dealing with data. Notices of the AMS 50, 536-544 (2003)
  • 81. Schölkopf, B., Herbrich, R., Smola, A.J., Williamson, R.C.: A generalized representer theorem. In: Helmbold, D.P., Williamson, B. (eds.) COLT 2001. LNCS (LNAI), vol. 2111, pp. 416-424. Springer, Heidelberg (2001)
  • 83. Schoenberg, I.J.: Metric spaces and completely monotone functions. Annals of Mathematics 39, 811-841 (1938)
  • 88. Treadgold, N.K., Gedeon, T.D.: Simulated annealing and weight decay in adaptive learning: The SARPROP algorithm. IEEE Trans. on Neural Networks 9, 662-668 (1998)
  • 89. Tropp, J.A.: Greed is good: Algorithmic results for sparse approximation. IEEE Trans. on Information Theory 50, 2231-2242 (2004)
  • 92. Vasin, V.V.: Relationship of several variational methods for the approximate solution of ill-posed problems. Mathematical Notes 7, 161-165 (1970)
  • 93. Vishkin, U.: Deterministic sampling - A new technique for fast pattern matching. SIAM J. on Computing 20, 22-40 (1991)
  • 94. Vladimirov, A.A., Nesterov, Y.E., Chekanov, Y.N.: On uniformly convex functionals. Vestnik Moskovskogo Universiteta, Seriya 15 - Vychislitel'naya Matematika i Kibernetika 3, 12-23 (1978); English translation: Moscow University Computational Mathematics and Cybernetics, 10-21 (1979)
  • 95. Wahba, G.: Spline Models for Observational Data. Series in Applied Mathematics, vol. 59. SIAM, Philadelphia (1990)
  • 96. Yin, G.: Rates of convergence for a class of global stochastic optimization algorithms. SIAM J. on Optimization 10, 99-120 (1999)
  • 97. Zhang, T.: Approximation bounds for some sparse kernel regression algorithms. Neural Computation 14, 3013-3042 (2002)
  • 98. Zhang, T.: Sequential greedy approximation for certain convex optimization problems. IEEE Trans. on Information Theory 49(3), 682-691 (2003)
  • 99. Zou, H., Hastie, T.: Regularization and variable selection via the elastic net. J. of the Royal Statistical Society B 67, 301-320 (2005)


* This information was extracted by KISTI through analysis of Elsevier's SCOPUS database.