References

1. B. Boser, I. Guyon, and V. Vapnik. A training algorithm for optimal margin classifiers. In Proc. COLT, pages 144-152, 1992.

2. O. Bousquet and A. Elisseeff. Algorithmic stability and generalization performance. In Proc. NIPS, pages 196-202, 2000.

3. L. Breiman. Random forests. Machine Learning, 45(1):5-32, 2001.

4. L. Breiman. Bagging predictors. Machine Learning, 24(2):123-140, 1996.

7. F. Cucker and S. Smale. Best choices for regularization parameters in learning theory: On the bias-variance problem. Foundations of Computational Mathematics, 2(4):413-428, 2003.

9. Y. Freund and R. E. Schapire. A decision-theoretic generalization of on-line learning and an application to boosting. In European Conference on Computational Learning Theory, pages 23-37, 1995.

10. J. H. Friedman. Greedy function approximation: A gradient boosting machine. Technical report, Dept. of Statistics, Stanford University, 1999a.

11. J. H. Friedman. Stochastic gradient boosting. Technical report, Dept. of Statistics, Stanford University, 1999b.

12. A. Hoerl and R. Kennard. Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1):55-67, 1970.

13. V. Ng and L. Breiman. Random forest variable selection. In I. Guyon, S. Gunn, M. Nikravesh, and L. Zadeh, editors, Feature Extraction, Foundations and Applications. Springer, New York, 2005. This volume.

15. T. Poggio, R. Rifkin, S. Mukherjee, and A. Rakhlin. Bagging regularizes. CBCL Paper 214, Massachusetts Institute of Technology, Cambridge, MA, February 2002. AI Memo #2002-003.

19. A. N. Tikhonov and V. Y. Arsenin. Solutions of Ill-posed Problems. W. H. Winston, Washington, D.C., 1977.

20. K. Torkkola. Feature extraction by non-parametric mutual information maximization. Journal of Machine Learning Research, 3:1415-1438, March 2003.